<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Kubernetes Blog</title>
    <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/</link>
    <description>The Kubernetes blog is used by the project to communicate new features, community reports, and any news that might be relevant to the Kubernetes community.</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>zh-cn</language>
    <image>
      <url>https://raw.githubusercontent.com/kubernetes/kubernetes/master/logo/logo.png</url>
      <title>The Kubernetes project logo</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/</link>
    </image>
    
    <atom:link href="https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/feed.xml" rel="self" type="application/rss+xml" />
    
    
    <item>
      <title>Kubernetes v1.31：kubeadm v1beta4</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2024/08/23/kubernetes-1-31-kubeadm-v1beta4/</link>
      <pubDate>Fri, 23 Aug 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2024/08/23/kubernetes-1-31-kubeadm-v1beta4/</guid>
      <description>
        
        
        &lt;!--
layout: blog
title: &#39;Kubernetes v1.31: kubeadm v1beta4&#39;
date: 2024-08-23
slug: kubernetes-1-31-kubeadm-v1beta4
author: &gt;
   Paco Xu (DaoCloud)
--&gt;
&lt;!--
As part of the Kubernetes v1.31 release, [`kubeadm`](/docs/reference/setup-tools/kubeadm/) is
adopting a new ([v1beta4](/docs/reference/config-api/kubeadm-config.v1beta4/)) version of
its configuration file format. Configuration in the previous v1beta3 format is now formally
deprecated, which means it&#39;s supported but you should migrate to v1beta4 and stop using
the deprecated format.
Support for v1beta3 configuration will be removed after a minimum of 3 Kubernetes minor releases.
--&gt;
&lt;p&gt;作为 Kubernetes v1.31 发布的一部分，&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/&#34;&gt;&lt;code&gt;kubeadm&lt;/code&gt;&lt;/a&gt;
采用了全新版本（&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/reference/config-api/kubeadm-config.v1beta4/&#34;&gt;v1beta4&lt;/a&gt;）的配置文件格式。
之前 v1beta3 格式的配置现已正式弃用，这意味着尽管之前的格式仍然受支持，但你应迁移到 v1beta4 并停止使用已弃用的格式。
对 v1beta3 配置的支持将在至少 3 次 Kubernetes 次要版本发布后被移除。&lt;/p&gt;
&lt;!--
In this article, I&#39;ll walk you through key changes;
I&#39;ll explain about the kubeadm v1beta4 configuration format,
and how to migrate from v1beta3 to v1beta4.

You can read the reference for the v1beta4 configuration format:
[kubeadm Configuration (v1beta4)](/docs/reference/config-api/kubeadm-config.v1beta4/).
--&gt;
&lt;p&gt;在本文中，我将介绍关键的变更；我将解释 kubeadm v1beta4 配置格式，以及如何从 v1beta3 迁移到 v1beta4。&lt;/p&gt;
&lt;p&gt;你可以参阅 v1beta4 配置格式的参考文档：
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/reference/config-api/kubeadm-config.v1beta4/&#34;&gt;kubeadm 配置 (v1beta4)&lt;/a&gt;。&lt;/p&gt;
&lt;!--
### A list of changes since v1beta3

This version improves on the [v1beta3](/docs/reference/config-api/kubeadm-config.v1beta3/)
format by fixing some minor issues and adding a few new fields.

To put it simply,
--&gt;
&lt;h3 id=&#34;自-v1beta3-以来的变更列表&#34;&gt;自 v1beta3 以来的变更列表&lt;/h3&gt;
&lt;p&gt;此版本通过修复一些小问题并添加一些新字段来改进
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/reference/config-api/kubeadm-config.v1beta3/&#34;&gt;v1beta3&lt;/a&gt; 格式。&lt;/p&gt;
&lt;p&gt;简而言之：&lt;/p&gt;
&lt;!--
- Two new configuration elements: ResetConfiguration and UpgradeConfiguration
- For InitConfiguration and JoinConfiguration, `dryRun` mode and `nodeRegistration.imagePullSerial` are supported
- For ClusterConfiguration, there are new fields including `certificateValidityPeriod`,
`caCertificateValidityPeriod`, `encryptionAlgorithm`, `dns.disabled` and `proxy.disabled`.
- Support `extraEnvs` for all control plane components
- `extraArgs` changed from a map to structured extra arguments for duplicates
- Add a `timeouts` structure for init, join, upgrade and reset.
--&gt;
&lt;ul&gt;
&lt;li&gt;增加了两个新的配置元素：ResetConfiguration 和 UpgradeConfiguration&lt;/li&gt;
&lt;li&gt;对于 InitConfiguration 和 JoinConfiguration，支持 &lt;code&gt;dryRun&lt;/code&gt; 模式和 &lt;code&gt;nodeRegistration.imagePullSerial&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;对于 ClusterConfiguration，新增字段包括 &lt;code&gt;certificateValidityPeriod&lt;/code&gt;、&lt;code&gt;caCertificateValidityPeriod&lt;/code&gt;、
&lt;code&gt;encryptionAlgorithm&lt;/code&gt;、&lt;code&gt;dns.disabled&lt;/code&gt; 和 &lt;code&gt;proxy.disabled&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;所有控制平面组件支持 &lt;code&gt;extraEnvs&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;extraArgs&lt;/code&gt; 从映射变更为支持重复的结构化额外参数&lt;/li&gt;
&lt;li&gt;为 init、join、upgrade 和 reset 添加了 &lt;code&gt;timeouts&lt;/code&gt; 结构&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
For details, you can see the [official document](/docs/reference/config-api/kubeadm-config.v1beta4/) below:

- Support custom environment variables in control plane components under `ClusterConfiguration`.
Use `apiServer.extraEnvs`, `controllerManager.extraEnvs`, `scheduler.extraEnvs`, `etcd.local.extraEnvs`.
- The ResetConfiguration API type is now supported in v1beta4. Users are able to reset a node by passing
a `--config` file to `kubeadm reset`.
- `dryRun` mode is now configurable in InitConfiguration and JoinConfiguration.
--&gt;
&lt;p&gt;有关细节请参阅以下&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/reference/config-api/kubeadm-config.v1beta4/&#34;&gt;官方文档&lt;/a&gt;：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;在 &lt;code&gt;ClusterConfiguration&lt;/code&gt; 下支持控制平面组件的自定义环境变量。
可以使用 &lt;code&gt;apiServer.extraEnvs&lt;/code&gt;、&lt;code&gt;controllerManager.extraEnvs&lt;/code&gt;、&lt;code&gt;scheduler.extraEnvs&lt;/code&gt;、&lt;code&gt;etcd.local.extraEnvs&lt;/code&gt;。&lt;/li&gt;
&lt;li&gt;ResetConfiguration API 类型现在在 v1beta4 中得到支持。用户可以通过将 &lt;code&gt;--config&lt;/code&gt; 文件传递给 &lt;code&gt;kubeadm reset&lt;/code&gt; 来重置节点。&lt;/li&gt;
&lt;li&gt;&lt;code&gt;dryRun&lt;/code&gt; 模式现在在 InitConfiguration 和 JoinConfiguration 中可配置。&lt;/li&gt;
&lt;/ul&gt;
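&lt;p&gt;作为示意，下面的最小 v1beta4 片段为 kube-apiserver 设置了一个自定义环境变量（其中的变量名和取值仅用于演示）：&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  extraEnvs:
    - name: HTTP_PROXY
      value: http://proxy.example.com:8080
&lt;/code&gt;&lt;/pre&gt;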
&lt;!--
- Replace the existing string/string extra argument maps with structured extra arguments that support duplicates.
 The change applies to `ClusterConfiguration` - `apiServer.extraArgs`, `controllerManager.extraArgs`,
 `scheduler.extraArgs`, `etcd.local.extraArgs`. Also to `nodeRegistrationOptions.kubeletExtraArgs`.
- Added `ClusterConfiguration.encryptionAlgorithm` that can be used to set the asymmetric encryption
 algorithm used for this cluster&#39;s keys and certificates. Can be one of &#34;RSA-2048&#34; (default), &#34;RSA-3072&#34;,
  &#34;RSA-4096&#34; or &#34;ECDSA-P256&#34;.
- Added `ClusterConfiguration.dns.disabled` and `ClusterConfiguration.proxy.disabled` that can be used
  to disable the CoreDNS and kube-proxy addons during cluster initialization.
  Skipping the related addon phases during cluster creation will set the same fields to `true`.
--&gt;
&lt;ul&gt;
&lt;li&gt;用支持重复的结构化额外参数替换现有的 string/string 额外参数映射。
此变更适用于 &lt;code&gt;ClusterConfiguration&lt;/code&gt; - &lt;code&gt;apiServer.extraArgs&lt;/code&gt;、&lt;code&gt;controllerManager.extraArgs&lt;/code&gt;、
&lt;code&gt;scheduler.extraArgs&lt;/code&gt;、&lt;code&gt;etcd.local.extraArgs&lt;/code&gt;。也适用于 &lt;code&gt;nodeRegistrationOptions.kubeletExtraArgs&lt;/code&gt;。&lt;/li&gt;
&lt;li&gt;添加了 &lt;code&gt;ClusterConfiguration.encryptionAlgorithm&lt;/code&gt;，可用于设置此集群的密钥和证书所使用的非对称加密算法。
可以是 &amp;quot;RSA-2048&amp;quot;（默认）、&amp;quot;RSA-3072&amp;quot;、&amp;quot;RSA-4096&amp;quot; 或 &amp;quot;ECDSA-P256&amp;quot; 之一。&lt;/li&gt;
&lt;li&gt;添加了 &lt;code&gt;ClusterConfiguration.dns.disabled&lt;/code&gt; 和 &lt;code&gt;ClusterConfiguration.proxy.disabled&lt;/code&gt;，
可用于在集群初始化期间禁用 CoreDNS 和 kube-proxy 插件。
在集群创建期间跳过相关插件阶段将把相同的字段设置为 &lt;code&gt;true&lt;/code&gt;。&lt;/li&gt;
&lt;/ul&gt;
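&lt;p&gt;作为示意，下面的片段展示了 v1beta4 中结构化的 &lt;code&gt;extraArgs&lt;/code&gt;（name/value 列表，允许同一参数出现多次）以及 &lt;code&gt;encryptionAlgorithm&lt;/code&gt; 的写法（其中的 issuer 地址仅为演示用的假设值）：&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
encryptionAlgorithm: ECDSA-P256
apiServer:
  extraArgs:
    # 列表形式允许重复指定可多次出现的参数
    - name: service-account-issuer
      value: https://issuer.example.com
    - name: service-account-issuer
      value: https://alt-issuer.example.com
&lt;/code&gt;&lt;/pre&gt;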
&lt;!--
- Added the `nodeRegistration.imagePullSerial` field in `InitConfiguration` and `JoinConfiguration`,
  which can be used to control if kubeadm pulls images serially or in parallel.
- The UpgradeConfiguration kubeadm API is now supported in v1beta4 when passing `--config` to
  `kubeadm upgrade` subcommands.
  For upgrade subcommands, the usage of component configuration for kubelet and kube-proxy, as well as
  InitConfiguration and ClusterConfiguration, is now deprecated and will be ignored when passing `--config`.
- Added a `timeouts` structure to `InitConfiguration`, `JoinConfiguration`, `ResetConfiguration` and
  `UpgradeConfiguration` that can be used to configure various timeouts.
  The `ClusterConfiguration.timeoutForControlPlane` field is replaced by `timeouts.controlPlaneComponentHealthCheck`.
  The `JoinConfiguration.discovery.timeout` is replaced by `timeouts.discovery`.
--&gt;
&lt;ul&gt;
&lt;li&gt;在 &lt;code&gt;InitConfiguration&lt;/code&gt; 和 &lt;code&gt;JoinConfiguration&lt;/code&gt; 中添加了 &lt;code&gt;nodeRegistration.imagePullSerial&lt;/code&gt; 字段，
可用于控制 kubeadm 是顺序拉取镜像还是并行拉取镜像。&lt;/li&gt;
&lt;li&gt;当将 &lt;code&gt;--config&lt;/code&gt; 传递给 &lt;code&gt;kubeadm upgrade&lt;/code&gt; 子命令时，现已在 v1beta4 中支持 UpgradeConfiguration kubeadm API。
对于升级子命令，kubelet 和 kube-proxy 的组件配置以及 InitConfiguration 和 ClusterConfiguration 的用法现已弃用，
并将在传递 &lt;code&gt;--config&lt;/code&gt; 时被忽略。&lt;/li&gt;
&lt;li&gt;在 &lt;code&gt;InitConfiguration&lt;/code&gt;、&lt;code&gt;JoinConfiguration&lt;/code&gt;、&lt;code&gt;ResetConfiguration&lt;/code&gt; 和 &lt;code&gt;UpgradeConfiguration&lt;/code&gt;
中添加了 &lt;code&gt;timeouts&lt;/code&gt; 结构，可用于配置各种超时。
&lt;code&gt;ClusterConfiguration.timeoutForControlPlane&lt;/code&gt; 字段被 &lt;code&gt;timeouts.controlPlaneComponentHealthCheck&lt;/code&gt; 替换。
&lt;code&gt;JoinConfiguration.discovery.timeout&lt;/code&gt; 被 &lt;code&gt;timeouts.discovery&lt;/code&gt; 替换。&lt;/li&gt;
&lt;/ul&gt;
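&lt;p&gt;例如，可以按如下方式配置上文提到的两个超时字段（示意性片段，取值仅作演示）：&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
timeouts:
  # 替代 ClusterConfiguration.timeoutForControlPlane
  controlPlaneComponentHealthCheck: 4m0s
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: JoinConfiguration
timeouts:
  # 替代 JoinConfiguration.discovery.timeout
  discovery: 5m0s
&lt;/code&gt;&lt;/pre&gt;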
&lt;!--
- Added a `certificateValidityPeriod` and `caCertificateValidityPeriod` fields to `ClusterConfiguration`.
  These fields can be used to control the validity period of certificates generated by kubeadm during
  sub-commands such as `init`, `join`, `upgrade` and `certs`.
  Default values continue to be 1 year for non-CA certificates and 10 years for CA certificates.
  Also note that only non-CA certificates are renewable by `kubeadm certs renew`.

These changes simplify the configuration of tools that use kubeadm
and improve the extensibility of kubeadm itself.
--&gt;
&lt;ul&gt;
&lt;li&gt;向 &lt;code&gt;ClusterConfiguration&lt;/code&gt; 添加了 &lt;code&gt;certificateValidityPeriod&lt;/code&gt; 和 &lt;code&gt;caCertificateValidityPeriod&lt;/code&gt; 字段。
这些字段可用于控制 kubeadm 在 &lt;code&gt;init&lt;/code&gt;、&lt;code&gt;join&lt;/code&gt;、&lt;code&gt;upgrade&lt;/code&gt; 和 &lt;code&gt;certs&lt;/code&gt; 等子命令中生成的证书的有效期。
默认值保持不变：非 CA 证书为 1 年，CA 证书为 10 年。另请注意，只有非 CA 证书可以通过 &lt;code&gt;kubeadm certs renew&lt;/code&gt; 进行续期。&lt;/li&gt;
&lt;/ul&gt;
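&lt;p&gt;作为示意，可以像下面这样显式写出与默认值等价的有效期（取值仅作演示）：&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
certificateValidityPeriod: 8760h0m0s    # 1 年（非 CA 证书）
caCertificateValidityPeriod: 87600h0m0s # 10 年（CA 证书）
&lt;/code&gt;&lt;/pre&gt;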
&lt;p&gt;这些变更简化了使用 kubeadm 的工具的配置，并提高了 kubeadm 本身的可扩展性。&lt;/p&gt;
&lt;!--
### How to migrate v1beta3 configuration to v1beta4?

If your configuration is not using the latest version, it is recommended that you migrate using
the [kubeadm config migrate](/docs/reference/setup-tools/kubeadm/kubeadm-config/#cmd-config-migrate) command.

This command reads an existing configuration file that uses the old format, and writes a new
file that uses the current format.
--&gt;
&lt;h3 id=&#34;如何将-v1beta3-配置迁移到-v1beta4&#34;&gt;如何将 v1beta3 配置迁移到 v1beta4？&lt;/h3&gt;
&lt;p&gt;如果你的配置未使用最新版本，建议你使用
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-config/#cmd-config-migrate&#34;&gt;kubeadm config migrate&lt;/a&gt;
命令进行迁移。&lt;/p&gt;
&lt;p&gt;此命令读取使用旧格式的现有配置文件，并写入一个使用当前格式的新文件。&lt;/p&gt;
&lt;!--
#### Example {#example-kubeadm-config-migrate}

Using kubeadm v1.31, run `kubeadm config migrate --old-config old-v1beta3.yaml --new-config new-v1beta4.yaml`

## How do I get involved?

Huge thanks to all the contributors who helped with the design, implementation,
and review of this feature:
--&gt;
&lt;h4 id=&#34;example-kubeadm-config-migrate&#34;&gt;示例&lt;/h4&gt;
&lt;p&gt;使用 kubeadm v1.31，运行 &lt;code&gt;kubeadm config migrate --old-config old-v1beta3.yaml --new-config new-v1beta4.yaml&lt;/code&gt;&lt;/p&gt;
&lt;h2 id=&#34;我该如何参与&#34;&gt;我该如何参与？&lt;/h2&gt;
&lt;p&gt;衷心感谢在此特性的设计、实现和评审中提供帮助的所有贡献者：&lt;/p&gt;
&lt;!--
- Lubomir I. Ivanov ([neolit123](https://github.com/neolit123))
- Dave Chen ([chendave](https://github.com/chendave))
- Paco Xu ([pacoxu](https://github.com/pacoxu))
- Sata Qiu ([sataqiu](https://github.com/sataqiu))
- Baofa Fan ([carlory](https://github.com/carlory))
- Calvin Chen ([calvin0327](https://github.com/calvin0327))
- Ruquan Zhao ([ruquanzhao](https://github.com/ruquanzhao))
--&gt;
&lt;ul&gt;
&lt;li&gt;Lubomir I. Ivanov (&lt;a href=&#34;https://github.com/neolit123&#34;&gt;neolit123&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Dave Chen (&lt;a href=&#34;https://github.com/chendave&#34;&gt;chendave&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Paco Xu (&lt;a href=&#34;https://github.com/pacoxu&#34;&gt;pacoxu&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Sata Qiu (&lt;a href=&#34;https://github.com/sataqiu&#34;&gt;sataqiu&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Baofa Fan (&lt;a href=&#34;https://github.com/carlory&#34;&gt;carlory&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Calvin Chen (&lt;a href=&#34;https://github.com/calvin0327&#34;&gt;calvin0327&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Ruquan Zhao (&lt;a href=&#34;https://github.com/ruquanzhao&#34;&gt;ruquanzhao&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
For those interested in getting involved in future discussions on kubeadm configuration,
you can reach out to the kubeadm maintainers or [SIG-cluster-lifecycle](https://github.com/kubernetes/community/blob/master/sig-cluster-lifecycle/README.md) by several means:

- v1beta4 related items are tracked in [kubeadm issue #2890](https://github.com/kubernetes/kubeadm/issues/2890).
- Slack: [#kubeadm](https://kubernetes.slack.com/messages/kubeadm) or [#sig-cluster-lifecycle](https://kubernetes.slack.com/messages/sig-cluster-lifecycle)
- [Mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-lifecycle)
--&gt;
&lt;p&gt;如果你有兴趣参与 kubeadm 配置的后续讨论，可以通过多种方式与 kubeadm 或
&lt;a href=&#34;https://github.com/kubernetes/community/blob/master/sig-cluster-lifecycle/README.md&#34;&gt;SIG-cluster-lifecycle&lt;/a&gt; 联系：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;v1beta4 相关事项在 &lt;a href=&#34;https://github.com/kubernetes/kubeadm/issues/2890&#34;&gt;kubeadm issue #2890&lt;/a&gt; 中跟踪。&lt;/li&gt;
&lt;li&gt;Slack: &lt;a href=&#34;https://kubernetes.slack.com/messages/kubeadm&#34;&gt;#kubeadm&lt;/a&gt; 或
&lt;a href=&#34;https://kubernetes.slack.com/messages/sig-cluster-lifecycle&#34;&gt;#sig-cluster-lifecycle&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-lifecycle&#34;&gt;邮件列表&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes 1.31：通过 VolumeAttributesClass 修改卷进阶至 Beta</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2024/08/15/kubernetes-1-31-volume-attributes-class/</link>
      <pubDate>Thu, 15 Aug 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2024/08/15/kubernetes-1-31-volume-attributes-class/</guid>
      <description>
        
        
        &lt;!--
layout: blog
title: &#34;Kubernetes 1.31: VolumeAttributesClass for Volume Modification Beta&#34;
date: 2024-08-15
slug: kubernetes-1-31-volume-attributes-class
author: &gt;
  Sunny Song (Google)
  Matthew Cary (Google)
--&gt;
&lt;!--
Volumes in Kubernetes have been described by two attributes: their storage class, and
their capacity. The storage class is an immutable property of the volume, while the
capacity can be changed dynamically with [volume
resize](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims).

This complicates vertical scaling of workloads with volumes. While cloud providers and
storage vendors often offer volumes which allow specifying IO quality of service
(Performance) parameters like IOPS or throughput and tuning them as workloads operate,
Kubernetes has no API which allows changing them.
--&gt;
&lt;p&gt;在 Kubernetes 中，卷由两个属性描述：存储类和容量。存储类是卷的不可变属性，
而容量可以通过&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims&#34;&gt;卷调整大小&lt;/a&gt;进行动态变更。&lt;/p&gt;
&lt;p&gt;这使得使用卷的工作负载的垂直扩缩容变得复杂。
虽然云厂商和存储供应商通常提供了一些允许指定诸如 IOPS 或吞吐量等 IO
服务质量（性能）参数的卷，并允许在工作负载运行期间调整这些参数，但 Kubernetes
没有提供用来更改这些参数的 API。&lt;/p&gt;
&lt;!--
We are pleased to announce that the [VolumeAttributesClass
KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/3751-volume-attributes-class/README.md),
alpha since Kubernetes 1.29, will be beta in 1.31. This provides a generic,
Kubernetes-native API for modifying volume parameters like provisioned IO.
--&gt;
&lt;p&gt;我们很高兴地宣布，自 Kubernetes 1.29 起以 Alpha 引入的
&lt;a href=&#34;https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/3751-volume-attributes-class/README.md&#34;&gt;VolumeAttributesClass KEP&lt;/a&gt;
将在 1.31 中进入 Beta 阶段。这一机制提供了一个通用的、Kubernetes 原生的 API，
可用来修改诸如所提供的 IO 能力这类卷参数。&lt;/p&gt;
&lt;!--
Like all new volume features in Kubernetes, this API is implemented via the [container
storage interface (CSI)](https://kubernetes-csi.github.io/docs/). In addition to the
VolumeAttributesClass feature gate, your provisioner-specific CSI driver must support the
new ModifyVolume API which is the CSI side of this feature.

See the [full
documentation](https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/)
for all details. Here we show the common workflow.
--&gt;
&lt;p&gt;类似于 Kubernetes 中所有新的卷特性，此 API 是通过&lt;a href=&#34;https://kubernetes-csi.github.io/docs/&#34;&gt;容器存储接口（CSI）&lt;/a&gt;实现的。
除了 VolumeAttributesClass 特性门控外，特定于制备器的 CSI 驱动还必须支持此特性在
CSI 一侧的全新的 ModifyVolume API。&lt;/p&gt;
&lt;p&gt;有关细节请参阅&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/concepts/storage/volume-attributes-classes/&#34;&gt;完整文档&lt;/a&gt;。
在这里，我们展示了常见的工作流程。&lt;/p&gt;
&lt;!--
### Dynamically modifying volume attributes.

A `VolumeAttributesClass` is a cluster-scoped resource that specifies provisioner-specific
attributes. These are created by the cluster administrator in the same way as storage
classes. For example, a series of gold, silver and bronze volume attribute classes can be
created for volumes with greater or lesser amounts of provisioned IO.
--&gt;
&lt;h3 id=&#34;dynamically-modifying-volume-attributes&#34;&gt;动态修改卷属性&lt;/h3&gt;
&lt;p&gt;&lt;code&gt;VolumeAttributesClass&lt;/code&gt; 是一个集群范围的资源，用来指定特定于制备器的属性。
这些类由集群管理员创建，创建方式与存储类相同。
例如，你可以为卷创建一系列金、银和铜级别的卷属性类，以区隔不同级别的 IO 能力。&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;storage.k8s.io/v1beta1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;VolumeAttributesClass&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;silver&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;driverName&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;your-csi-driver&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;parameters&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;provisioned-iops&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;500&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;provisioned-throughput&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;50MiB/s&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#00f;font-weight:bold&#34;&gt;---&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;storage.k8s.io/v1beta1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;VolumeAttributesClass&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;gold&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;driverName&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;your-csi-driver&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;parameters&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;provisioned-iops&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;10000&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;provisioned-throughput&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;500MiB/s&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
An attribute class is added to a PVC in much the same way as a storage class.
--&gt;
&lt;p&gt;为 PVC 添加卷属性类的方式与添加存储类的方式大致相同。&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;PersistentVolumeClaim&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;test-pv-claim&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;storageClassName&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;any-storage-class&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;volumeAttributesClassName&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;silver&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;accessModes&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- ReadWriteOnce&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;resources&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;requests&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;storage&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;64Gi&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
Unlike a storage class, the volume attributes class can be changed:
--&gt;
&lt;p&gt;与存储类不同，卷属性类可以被更改：&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;kubectl patch pvc test-pv-claim -p &lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;{&amp;#34;spec&amp;#34;: {&amp;#34;volumeAttributesClassName&amp;#34;: &amp;#34;gold&amp;#34;}}&amp;#39;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
Kubernetes will work with the CSI driver to update the attributes of the
volume. The status of the PVC will track the current and desired attributes
class. The PV resource will also be updated with the new volume attributes class
which will be set to the currently active attributes of the PV.
--&gt;
&lt;p&gt;Kubernetes 将与 CSI 驱动协作来更新卷的属性。
PVC 的状态将跟踪当前和所需的属性类。
PV 资源也会随之更新，其中的卷属性类将被设置为 PV 当前实际生效的属性类。&lt;/p&gt;
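&lt;p&gt;例如，在修改进行期间，PVC 的 &lt;code&gt;status&lt;/code&gt; 大致会呈现如下形态（示意性片段，具体字段以完整文档为准）：&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;status:
  # 当前已生效的卷属性类
  currentVolumeAttributesClassName: silver
  # 正在向其迁移的目标卷属性类及修改进度
  modifyVolumeStatus:
    targetVolumeAttributesClassName: gold
    status: InProgress
&lt;/code&gt;&lt;/pre&gt;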
&lt;!--
### Limitations with the beta

As a beta feature, there are still some features which are planned for GA but not yet
present. The largest is quota support, see the
[KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/3751-volume-attributes-class/README.md)
and discussion in
[sig-storage](https://github.com/kubernetes/community/tree/master/sig-storage) for details.

See the [Kubernetes CSI driver
list](https://kubernetes-csi.github.io/docs/drivers.html) for up-to-date
information of support for this feature in CSI drivers.
--&gt;
&lt;h3 id=&#34;limitations-with-the-beta&#34;&gt;Beta 阶段的限制&lt;/h3&gt;
&lt;p&gt;作为一个 Beta 特性，仍有一些特性计划在 GA 阶段推出，但尚未实现。最大的限制是配额支持，详见
&lt;a href=&#34;https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/3751-volume-attributes-class/README.md&#34;&gt;KEP&lt;/a&gt;
和 &lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-storage&#34;&gt;sig-storage&lt;/a&gt; 中的讨论。&lt;/p&gt;
&lt;p&gt;有关此特性在 CSI 驱动中的最新支持信息，请参阅 &lt;a href=&#34;https://kubernetes-csi.github.io/docs/drivers.html&#34;&gt;Kubernetes CSI 驱动列表&lt;/a&gt;。&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>向 Client-Go 引入特性门控：增强灵活性和控制力</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2024/08/12/feature-gates-in-client-go/</link>
      <pubDate>Mon, 12 Aug 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2024/08/12/feature-gates-in-client-go/</guid>
      <description>
        
        
        &lt;!--
layout: blog
title: &#39;Introducing Feature Gates to Client-Go: Enhancing Flexibility and Control&#39;
date: 2024-08-12
slug: feature-gates-in-client-go
author: &gt;
 Ben Luddy (Red Hat),
 Lukasz Szaszkiewicz (Red Hat)
--&gt;
&lt;!--
Kubernetes components use on-off switches called _feature gates_ to manage the risk of adding a new feature.
The feature gate mechanism is what enables incremental graduation of a feature through the stages Alpha, Beta, and GA.
--&gt;
&lt;p&gt;Kubernetes 组件使用称为“特性门控（Feature Gates）”的开关来管理添加新特性的风险，
特性门控机制使特性能够通过 Alpha、Beta 和 GA 阶段逐步升级。&lt;/p&gt;
&lt;!--
Kubernetes components, such as kube-controller-manager and kube-scheduler, use the client-go library to interact with the API. 
The same library is used across the Kubernetes ecosystem to build controllers, tools, webhooks, and more. client-go now includes 
its own feature gating mechanism, giving developers and cluster administrators more control over how they adopt client features.
--&gt;
&lt;p&gt;Kubernetes 组件（例如 kube-controller-manager 和 kube-scheduler）使用 client-go 库与 API 交互，
整个 Kubernetes 生态系统使用相同的库来构建控制器、工具、webhook 等。
client-go 现在包含自己的特性门控机制，使开发人员和集群管理员能够更好地控制如何使用客户端特性。&lt;/p&gt;
&lt;!--
To learn more about feature gates in Kubernetes, visit [Feature Gates](/docs/reference/command-line-tools-reference/feature-gates/).
--&gt;
&lt;p&gt;要了解有关 Kubernetes 中特性门控的更多信息，请参阅&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/reference/command-line-tools-reference/feature-gates/&#34;&gt;特性门控&lt;/a&gt;。&lt;/p&gt;
&lt;!--
## Motivation

In the absence of client-go feature gates, each new feature separated feature availability from enablement in its own way, if at all. 
Some features were enabled by updating to a newer version of client-go. Others needed to be actively configured in each program that used them. 
A few were configurable at runtime using environment variables. Consuming a feature-gated functionality exposed by the kube-apiserver sometimes 
required a client-side fallback mechanism to remain compatible with servers that don’t support the functionality due to their age or configuration. 
In cases where issues were discovered in these fallback mechanisms, mitigation required updating to a fixed version of client-go or rolling back.
--&gt;
&lt;h2 id=&#34;动机&#34;&gt;动机&lt;/h2&gt;
&lt;p&gt;在没有 client-go 特性门控的情况下，每个新特性都以各自的方式（如果有的话）将特性的可用性与启用分离。
某些特性通过更新到较新版本的 client-go 来启用，其他特性则需要在每个使用它们的程序中主动配置，
还有一些可以在运行时通过环境变量配置。使用 kube-apiserver 公开的受特性门控管控的功能时，
有时需要客户端回退机制，以便与因版本较旧或配置不同而不支持该功能的服务器保持兼容。
如果在这些回退机制中发现问题，则需要更新到修复了该问题的 client-go 版本或回滚才能缓解。&lt;/p&gt;
&lt;!--
None of these approaches offer good support for enabling a feature by default in some, but not all, programs that consume client-go. 
Instead of enabling a new feature at first only for a single component, a change in the default setting immediately affects the default 
for all Kubernetes components, which broadens the blast radius significantly.
--&gt;
&lt;p&gt;这些方法都无法很好地支持只在部分（而非全部）使用 client-go 的程序中默认启用某个特性。
默认设置的变更无法先只对单个组件启用新特性，而是会立即影响所有 Kubernetes 组件的默认值，从而显著扩大影响范围。&lt;/p&gt;
&lt;!--
## Feature gates in client-go

To address these challenges, substantial client-go features will be phased in using the new feature gate mechanism. 
It will allow developers and users to enable or disable features in a way that will be familiar to anyone who has experience 
with feature gates  in the Kubernetes components.
--&gt;
&lt;h2 id=&#34;client-go-中的特性门控&#34;&gt;client-go 中的特性门控&lt;/h2&gt;
&lt;p&gt;为了应对这些挑战，重要的 client-go 特性将通过新的特性门控机制分阶段引入。
这一机制将允许开发人员和用户以类似 Kubernetes 组件特性门控的管理方式启用或禁用特性。&lt;/p&gt;
&lt;!--
Out of the box, simply by using a recent version of client-go, this offers several benefits.

For people who use software built with client-go:
--&gt;
&lt;p&gt;无需额外配置，只要使用较新版本的 client-go，就能获得以下好处。&lt;/p&gt;
&lt;p&gt;对于使用通过 client-go 构建的软件的用户：&lt;/p&gt;
&lt;!--
* Early adopters can enable a default-off client-go feature on a per-process basis.
* Misbehaving features can be disabled without building a new binary.
* The state of all known client-go feature gates is logged, allowing users to inspect it.
--&gt;
&lt;ul&gt;
&lt;li&gt;早期采用者可以针对各个进程分别启用默认关闭的 client-go 特性。&lt;/li&gt;
&lt;li&gt;无需构建新的二进制文件即可禁用行为不当的特性。&lt;/li&gt;
&lt;li&gt;所有已知的 client-go 特性门控的状态都会被记录到日志中，允许用户检查。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
For people who develop software built with client-go:

* By default, client-go feature gate overrides are read from environment variables. 
  If a bug is found in a client-go feature, users will be able to disable it without waiting for a new release.
* Developers can replace the default environment-variable-based overrides in a program to change defaults, 
  read overrides from another source, or disable runtime overrides completely. 
  The Kubernetes components use this customizability to integrate client-go feature gates with 
  the existing `--feature-gates` command-line flag, feature enablement metrics, and logging.
--&gt;
&lt;p&gt;对于开发使用 client-go 构建的软件的人员：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;默认情况下，client-go 特性门控覆盖是从环境变量中读取的。
如果在 client-go 特性中发现错误，用户将能够禁用它，而无需等待新版本发布。&lt;/li&gt;
&lt;li&gt;开发人员可以在程序中替换默认的基于环境变量的覆盖机制，以更改默认值、从其他来源读取覆盖值，或完全禁用运行时覆盖。
Kubernetes 组件使用这种可定制性将 client-go 特性门控与现有的 &lt;code&gt;--feature-gates&lt;/code&gt; 命令行标志、特性启用指标和日志记录集成在一起。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
## Overriding client-go feature gates

**Note**: This describes the default method for overriding client-go feature gates at runtime. 
It can be disabled or customized by the developer of a particular program. 
In Kubernetes components, client-go feature gate overrides are controlled by the `--feature-gates` flag.

Features of client-go can be enabled or disabled by setting environment variables prefixed with `KUBE_FEATURE`. 
For example, to enable a feature named `MyFeature`, set the environment variable as follows:
--&gt;
&lt;h2 id=&#34;覆盖-client-go-特性门控&#34;&gt;覆盖 client-go 特性门控&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;注意&lt;/strong&gt;：这描述了在运行时覆盖 client-go 特性门控的默认方法，它可以由特定程序的开发人员禁用或自定义。
在 Kubernetes 组件中，client-go 特性门控覆盖由 &lt;code&gt;--feature-gates&lt;/code&gt; 标志控制。&lt;/p&gt;
&lt;p&gt;可以通过设置以 &lt;code&gt;KUBE_FEATURE&lt;/code&gt; 为前缀的环境变量来启用或禁用 client-go 的特性。
例如，要启用名为 &lt;code&gt;MyFeature&lt;/code&gt; 的特性，请按如下方式设置环境变量：&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b8860b&#34;&gt;KUBE_FEATURE_MyFeature&lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;=&lt;/span&gt;&lt;span style=&#34;color:#a2f&#34;&gt;true&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
To disable the feature, set the environment variable to `false`:
--&gt;
&lt;p&gt;要禁用特性，可将环境变量设置为 &lt;code&gt;false&lt;/code&gt;：&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b8860b&#34;&gt;KUBE_FEATURE_MyFeature&lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;=&lt;/span&gt;&lt;span style=&#34;color:#a2f&#34;&gt;false&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
**Note**: Environment variables are case-sensitive on some operating systems. 
Therefore, `KUBE_FEATURE_MyFeature` and `KUBE_FEATURE_MYFEATURE` would be considered two different variables.
--&gt;
&lt;p&gt;&lt;strong&gt;注意&lt;/strong&gt;：在某些操作系统上，环境变量区分大小写。
因此，&lt;code&gt;KUBE_FEATURE_MyFeature&lt;/code&gt; 和 &lt;code&gt;KUBE_FEATURE_MYFEATURE&lt;/code&gt; 将被视为两个不同的变量。&lt;/p&gt;
&lt;!--
## Customizing client-go feature gates

The default environment-variable based mechanism for feature gate overrides can be sufficient for many programs in the Kubernetes ecosystem, 
and requires no special integration. Programs that require different behavior can replace it with their own custom feature gate provider. 
This allows a program to do things like force-disable a feature that is known to work poorly, 
read feature gates directly from a remote configuration service, or accept feature gate overrides through command-line options.
--&gt;
&lt;h2 id=&#34;自定义-client-go-特性门控&#34;&gt;自定义 client-go 特性门控&lt;/h2&gt;
&lt;p&gt;基于环境变量的默认特性门控覆盖机制足以满足 Kubernetes 生态系统中许多程序的需求，无需特殊集成。
需要不同行为的程序可以用自己的自定义特性门控提供程序替换它。
这允许程序执行诸如强制禁用已知运行不良的特性、直接从远程配置服务读取特性门控或通过命令行选项接受特性门控覆盖等操作。&lt;/p&gt;
&lt;!--
The Kubernetes components replace client-go’s default feature gate provider with a shim to the existing Kubernetes feature gate provider. 
For all practical purposes, client-go feature gates are treated the same as other Kubernetes 
feature gates: they are wired to the `--feature-gates` command-line flag, included in feature enablement metrics, and logged on startup.
--&gt;
&lt;p&gt;Kubernetes 组件将 client-go 的默认特性门控提供程序替换为现有 Kubernetes 特性门控提供程序的转换层。
在所有实际应用场合中，client-go 特性门控与其他 Kubernetes 特性门控的处理方式相同：
它们连接到 &lt;code&gt;--feature-gates&lt;/code&gt; 命令行标志，包含在特性启用指标中，并在启动时记录。&lt;/p&gt;
&lt;!--
To replace the default feature gate provider, implement the Gates interface and call ReplaceFeatureGates 
at package initialization time, as in this simple example:
--&gt;
&lt;p&gt;要替换默认的特性门控提供程序，请实现 &lt;code&gt;Gates&lt;/code&gt; 接口并在包初始化时调用 &lt;code&gt;ReplaceFeatureGates&lt;/code&gt;，如下面的简单示例所示：&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-go&#34; data-lang=&#34;go&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;import&lt;/span&gt; (
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt; &lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;k8s.io/client-go/features&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;)
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;type&lt;/span&gt; AlwaysEnabledGates &lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;struct&lt;/span&gt;{}
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;func&lt;/span&gt; (AlwaysEnabledGates) &lt;span style=&#34;color:#00a000&#34;&gt;Enabled&lt;/span&gt;(features.Feature) &lt;span style=&#34;color:#0b0;font-weight:bold&#34;&gt;bool&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt; &lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;return&lt;/span&gt; &lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;true&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;}
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;func&lt;/span&gt; &lt;span style=&#34;color:#00a000&#34;&gt;init&lt;/span&gt;() {
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt; features.&lt;span style=&#34;color:#00a000&#34;&gt;ReplaceFeatureGates&lt;/span&gt;(AlwaysEnabledGates{})
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;}
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
Implementations that need the complete list of defined client-go features can get it by implementing the Registry interface 
and calling `AddFeaturesToExistingFeatureGates`. 
For a complete example, refer to [the usage within Kubernetes](https://github.com/kubernetes/kubernetes/blob/64ba17c605a41700f7f4c4e27dca3684b593b2b9/pkg/features/kube_features.go#L990-L997).
--&gt;
&lt;p&gt;如果某个实现需要获取已定义的 client-go 特性的完整列表，可以通过实现 &lt;code&gt;Registry&lt;/code&gt; 接口并调用 &lt;code&gt;AddFeaturesToExistingFeatureGates&lt;/code&gt; 来获取。
完整示例请参考
&lt;a href=&#34;https://github.com/kubernetes/kubernetes/blob/64ba17c605a41700f7f4c4e27dca3684b593b2b9/pkg/features/kube_features.go#L990-L997&#34;&gt;Kubernetes 内部使用&lt;/a&gt;。&lt;/p&gt;
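&lt;p&gt;下面是一个简化的 Go 示意，演示“实现 Registry 接口以收集已定义特性”这一模式。其中的类型定义是为示例而假设的简化版本，真实的接口与签名以 &lt;code&gt;k8s.io/client-go/features&lt;/code&gt; 中的定义为准：&lt;/p&gt;

```go
package main

import "fmt"

// 以下类型是为演示而假设的简化定义；真实定义见 k8s.io/client-go/features。
type Feature string

type FeatureSpec struct {
	Default    bool   // 默认是否启用
	PreRelease string // 例如 Alpha、Beta 或 GA
}

// Registry 接收已定义的 client-go 特性列表。
type Registry interface {
	Add(map[Feature]FeatureSpec) error
}

// captureRegistry 记录收到的全部特性，供程序自行接线
// （例如接入自己的 --feature-gates 标志或配置系统）。
type captureRegistry struct {
	features map[Feature]FeatureSpec
}

func (r *captureRegistry) Add(specs map[Feature]FeatureSpec) error {
	for name, spec := range specs {
		r.features[name] = spec
	}
	return nil
}

func main() {
	r := &captureRegistry{features: map[Feature]FeatureSpec{}}
	// 真实程序中由 features.AddFeaturesToExistingFeatureGates(r) 调用 r.Add；
	// 这里手工模拟一次调用。
	_ = r.Add(map[Feature]FeatureSpec{
		"MyFeature": {Default: false, PreRelease: "Beta"},
	})
	fmt.Println(len(r.features)) // 输出：1
}
```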
&lt;!--
## Summary

With the introduction of feature gates in client-go v1.30, rolling out a new client-go feature has become safer and easier. 
Users and developers can control the pace of their own adoption of client-go features. 
The work of Kubernetes contributors is streamlined by having a common mechanism for graduating features that span both sides of the Kubernetes API boundary.
--&gt;
&lt;h2 id=&#34;总结&#34;&gt;总结&lt;/h2&gt;
&lt;p&gt;随着 client-go v1.30 中特性门控的引入，推出新的 client-go 特性变得更加安全、简单。
用户和开发人员可以控制自己采用 client-go 特性的步伐。
通过为跨越 Kubernetes API 边界两侧的特性提供通用的毕业（graduation）机制，Kubernetes 贡献者的工作得以简化。&lt;/p&gt;
&lt;!--
Special shoutout to [@sttts](https://github.com/sttts) and [@deads2k](https://github.com/deads2k) for their help in shaping this feature.
--&gt;
&lt;p&gt;特别感谢 &lt;a href=&#34;https://github.com/sttts&#34;&gt;@sttts&lt;/a&gt; 和 &lt;a href=&#34;https://github.com/deads2k&#34;&gt;@deads2k&lt;/a&gt; 对此特性提供的帮助。&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>聚焦 SIG API Machinery</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2024/08/07/sig-api-machinery-spotlight-2024/</link>
      <pubDate>Wed, 07 Aug 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2024/08/07/sig-api-machinery-spotlight-2024/</guid>
      <description>
        
        
        &lt;!--
layout: blog
title: &#34;Spotlight on SIG API Machinery&#34;
slug: sig-api-machinery-spotlight-2024
canonicalUrl: https://www.kubernetes.dev/blog/2024/08/07/sig-api-machinery-spotlight-2024
date: 2024-08-07
author: &#34;Frederico Muñoz (SAS Institute)&#34;
--&gt;
&lt;!--
We recently talked with [Federico Bongiovanni](https://github.com/fedebongio) (Google) and [David
Eads](https://github.com/deads2k) (Red Hat), Chairs of SIG API Machinery, to know a bit more about
this Kubernetes Special Interest Group.
--&gt;
&lt;p&gt;我们最近与 SIG API Machinery 的主席
&lt;a href=&#34;https://github.com/fedebongio&#34;&gt;Federico Bongiovanni&lt;/a&gt;（Google）和
&lt;a href=&#34;https://github.com/deads2k&#34;&gt;David Eads&lt;/a&gt;（Red Hat）进行了访谈，
了解一些有关这个 Kubernetes 特别兴趣小组的信息。&lt;/p&gt;
&lt;!--
## Introductions

**Frederico (FSM): Hello, and thank you for your time. To start with, could you tell us about
yourselves and how you got involved in Kubernetes?**
--&gt;
&lt;h2 id=&#34;introductions&#34;&gt;介绍  &lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Frederico (FSM)：你好，感谢你们抽时间参与访谈。首先，能否介绍一下你们自己，以及你们是如何参与到 Kubernetes 中的？&lt;/strong&gt;&lt;/p&gt;
&lt;!--
**David**: I started working on
[OpenShift](https://www.redhat.com/en/technologies/cloud-computing/openshift) (the Red Hat
distribution of Kubernetes) in the fall of 2014 and got involved pretty quickly in API Machinery. My
first PRs were fixing kube-apiserver error messages and from there I branched out to `kubectl`
(_kubeconfigs_ are my fault!), `auth` ([RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) and `*Review` APIs are ports
from OpenShift), `apps` (_workqueues_ and _sharedinformers_ for example). Don’t tell the others,
but API Machinery is still my favorite :)
--&gt;
&lt;p&gt;&lt;strong&gt;David&lt;/strong&gt;：我在 2014 年秋天开始在
&lt;a href=&#34;https://www.redhat.com/zh/technologies/cloud-computing/openshift&#34;&gt;OpenShift&lt;/a&gt;
（Red Hat 的 Kubernetes 发行版）工作，很快就参与到 API Machinery 的工作中。
我最初的几个 PR 是修复 kube-apiserver 的错误消息，之后逐渐扩展到 &lt;code&gt;kubectl&lt;/code&gt;（&lt;em&gt;kubeconfigs&lt;/em&gt; 都怪我！），
&lt;code&gt;auth&lt;/code&gt;（&lt;a href=&#34;https://kubernetes.io/zh-cn/docs/reference/access-authn-authz/rbac/&#34;&gt;RBAC&lt;/a&gt;
和 &lt;code&gt;*Review&lt;/code&gt; API 是从 OpenShift 移植过来的），&lt;code&gt;apps&lt;/code&gt;（例如 &lt;em&gt;workqueues&lt;/em&gt; 和 &lt;em&gt;sharedinformers&lt;/em&gt;）。
别告诉别人，但 API Machinery 仍然是我的最爱 :)&lt;/p&gt;
&lt;!--
**Federico**: I was not as early in Kubernetes as David, but now it&#39;s been more than six years. At
my previous company we were starting to use Kubernetes for our own products, and when I came across
the opportunity to work directly with Kubernetes I left everything and boarded the ship (no pun
intended). I joined Google and Kubernetes in early 2018, and have been involved since.
--&gt;
&lt;p&gt;&lt;strong&gt;Federico&lt;/strong&gt;：我加入 Kubernetes 没有 David 那么早，但现在也已经超过六年了。
在我之前的公司，我们开始为自己的产品使用 Kubernetes，当我有机会直接参与 Kubernetes 的工作时，
我放下了一切，登上了这艘船（无意双关）。我在 2018 年初加入 Google 从事 Kubernetes 的相关工作，
从那时起一直参与其中。&lt;/p&gt;
&lt;!--
## SIG Machinery&#39;s scope

**FSM: It only takes a quick look at the SIG API Machinery charter to see that it has quite a
significant scope, nothing less than the Kubernetes control plane. Could you describe this scope in
your own words?**
--&gt;
&lt;h2 id=&#34;sig-machinerys-scope&#34;&gt;SIG Machinery 的范围  &lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;FSM：只需快速浏览一下 SIG API Machinery 的章程，就可以看到它的范围相当广泛，
不亚于 Kubernetes 的控制平面。你能用自己的话描述一下这个范围吗？&lt;/strong&gt;&lt;/p&gt;
&lt;!--
**David**: We own the `kube-apiserver` and how to efficiently use it. On the backend, that includes
its contract with backend storage and how it allows API schema evolution over time.  On the
frontend, that includes schema best practices, serialization, client patterns, and controller
patterns on top of all of it.

**Federico**: Kubernetes has a lot of different components, but the control plane has a really
critical mission: it&#39;s your communication layer with the cluster and also owns all the extensibility
mechanisms that make Kubernetes so powerful. We can&#39;t make mistakes like a regression, or an
incompatible change, because the blast radius is huge.
--&gt;
&lt;p&gt;&lt;strong&gt;David&lt;/strong&gt;：我们全权负责 &lt;code&gt;kube-apiserver&lt;/code&gt; 以及如何高效地使用它。
在后端，这包括它与后端存储的契约以及如何让 API 模式随时间演变。
在前端，这包括模式的最佳实践、序列化、客户端模式以及在其之上的控制器模式。&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Federico&lt;/strong&gt;：Kubernetes 有很多不同的组件，但控制平面有一个非常关键的任务：
它是你与集群的通信层，同时也拥有所有使 Kubernetes 如此强大的可扩展机制。
我们不能犯像回归或不兼容变更这样的错误，因为影响范围太大了。&lt;/p&gt;
&lt;!--
**FSM: Given this breadth, how do you manage the different aspects of it?**

**Federico**: We try to organize the large amount of work into smaller areas. The working groups and
subprojects are part of it. Different people on the SIG have their own areas of expertise, and if
everything fails, we are really lucky to have people like David, Joe, and Stefan who really are &#34;all
terrain&#34;, in a way that keeps impressing me even after all these years.  But on the other hand this
is the reason why we need more people to help us carry the quality and excellence of Kubernetes from
release to release.
--&gt;
&lt;p&gt;&lt;strong&gt;FSM：鉴于这个广度，你们如何管理它的不同方面？&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Federico&lt;/strong&gt;：我们尝试将大量工作划分为较小的领域。工作组和子项目是其中的一部分。
SIG 中的各位贡献者有各自的专长领域，如果其他办法都不奏效，我们很幸运有像 David、Joe 和 Stefan 这样真正“全能”的人，
即使这么多年过去，他们仍不断让我感到惊叹。但另一方面，
这也是为什么我们需要更多人来帮助我们在一个又一个版本中保持 Kubernetes 的质量和卓越。&lt;/p&gt;
&lt;!--
## An evolving collaboration model

**FSM: Was the existing model always like this, or did it evolve with time - and if so, what would
you consider the main changes and the reason behind them?**

**David**: API Machinery has evolved over time both growing and contracting in scope.  When trying
to satisfy client access patterns it’s very easy to add scope both in terms of features and applying
them.
--&gt;
&lt;h2 id=&#34;an-evolving-collaboration-model&#34;&gt;不断演变的协作模式  &lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;FSM：现有的模式一直如此，还是随着时间的推移演变而来？如果是后者，你认为主要的变化及其背后的原因是什么？&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;David&lt;/strong&gt;：API Machinery 随着时间的推移不断演变，范围既有扩大也有收缩。
在尝试满足客户端访问模式时，无论是特性本身还是特性的应用，都很容易扩大范围。&lt;/p&gt;
&lt;!--
A good example of growing scope is the way that we identified a need to reduce memory utilization by
clients writing controllers and developed shared informers.  In developing shared informers and the
controller patterns use them (workqueues, error handling, and listers), we greatly reduced memory
utilization and eliminated many expensive lists.  The downside: we grew a new set of capability to
support and effectively took ownership of that area from sig-apps.
--&gt;
&lt;p&gt;范围扩大的一个好例子是：我们发现需要降低编写控制器的客户端的内存使用量，因此开发了共享通知器。
在开发共享通知器以及使用它们的控制器模式（工作队列、错误处理和列举器）的过程中，
我们大大降低了内存使用量，并消除了许多代价高昂的 list 操作。
缺点是：我们多了一整套需要支持的新权能，并实际上从 sig-apps 接管了该领域的所有权。&lt;/p&gt;
&lt;!--
For an example of more shared ownership: building out cooperative resource management (the goal of
server-side apply), `kubectl` expanded to take ownership of leveraging the server-side apply
capability.  The transition isn’t yet complete, but [SIG
CLI](https://github.com/kubernetes/community/tree/master/sig-cli) manages that usage and owns it.
--&gt;
&lt;p&gt;关于更多共享所有权的一个例子：在构建协作式资源管理（服务器端应用的目标）时，
&lt;code&gt;kubectl&lt;/code&gt; 扩展了职责，负责利用服务器端应用这一权能。这个过渡尚未完成，
但 &lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-cli&#34;&gt;SIG CLI&lt;/a&gt; 管理并拥有这部分用法。&lt;/p&gt;
&lt;!--
**FSM: And for the boundary between approaches, do you have any guidelines?**

**David**: I think much depends on the impact. If the impact is local in immediate effect, we advise
other SIGs and let them move at their own pace.  If the impact is global in immediate effect without
a natural incentive, we’ve found a need to press for adoption directly.

**FSM: Still on that note, SIG Architecture has an API Governance subproject, is it mostly
independent from SIG API Machinery or are there important connection points?**
--&gt;
&lt;p&gt;&lt;strong&gt;FSM：那么对于这些方式之间的界限，你们有什么指导方针吗？&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;David&lt;/strong&gt;：我认为这在很大程度上取决于影响。如果直接影响是局部的，
我们会向其他 SIG 提出建议，让他们按自己的节奏推进。
如果直接影响是全局的，而且缺乏自然的采纳动力，我们发现就需要直接推动采纳。&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;FSM：继续这个话题，SIG Architecture 有一个 API Governance 子项目，
它与 SIG API Machinery 是大体独立，还是存在重要的连接点？&lt;/strong&gt;&lt;/p&gt;
&lt;!--
**David**: The projects have similar sounding names and carry some impacts on each other, but have
different missions and scopes.  API Machinery owns the how and API Governance owns the what.  API
conventions, the API approval process, and the final say on individual k8s.io APIs belong to API
Governance.  API Machinery owns the REST semantics and non-API specific behaviors.

**Federico**: I really like how David put it: *&#34;API Machinery owns the how and API Governance owns
the what&#34;*: we don&#39;t own the actual APIs, but the actual APIs live through us.
--&gt;
&lt;p&gt;&lt;strong&gt;David&lt;/strong&gt;：这些项目有相似的名称并对彼此产生一些影响，但有不同的使命和范围。
API Machinery 负责“如何做”，而 API Governance 负责“做什么”。
API 约定、API 审批过程以及对单个 k8s.io API 的最终决定权属于 API Governance。
API Machinery 负责 REST 语义和非 API 特定行为。&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Federico&lt;/strong&gt;：我真的很喜欢 David 的说法：
&lt;strong&gt;“API Machinery 负责‘如何做’，而 API Governance 负责‘做什么’”&lt;/strong&gt;：
我们并未拥有实际的 API，但实际的 API 依靠我们存在。&lt;/p&gt;
&lt;!--
## The challenges of Kubernetes popularity

**FSM: With the growth in Kubernetes adoption we have certainly seen increased demands from the
Control Plane: how is this felt and how does it influence the work of the SIG?**

**David**: It’s had a massive influence on API Machinery.  Over the years we have often responded to
and many times enabled the evolutionary stages of Kubernetes.  As the central orchestration hub of
nearly all capability on Kubernetes clusters, we both lead and follow the community.  In broad
strokes I see a few evolution stages for API Machinery over the years, with constantly high
activity.
--&gt;
&lt;h2 id=&#34;the-challenge-of-kubernetes-popularity&#34;&gt;Kubernetes 受欢迎的挑战  &lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;FSM：随着 Kubernetes 的采用率上升，我们肯定看到了对控制平面的需求增加：你们对这点的感受如何，它如何影响 SIG 的工作？&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;David&lt;/strong&gt;：这对 API Machinery 产生了巨大的影响。多年来，我们经常响应并多次促进了 Kubernetes 的发展阶段。
作为几乎所有 Kubernetes 集群上权能的集中编排中心，我们既领导又跟随社区。
从广义上讲，我看到多年来 API Machinery 经历了一些发展阶段，活跃度一直很高。&lt;/p&gt;
&lt;!--
1. **Finding purpose**: `pre-1.0` up until `v1.3` (up to our first 1000+ nodes/namespaces) or
   so. This time was characterized by rapid change.  We went through five different versions of our
   schemas and rose to meet the need.  We optimized for quick, in-tree API evolution (sometimes to
   the detriment of longer term goals), and defined patterns for the first time.

2. **Scaling to meet the need**: `v1.3-1.9` (up to shared informers in controllers) or so.  When we
   started trying to meet customer needs as we gained adoption, we found severe scale limitations in
   terms of CPU and memory. This was where we broadened API machinery to include access patterns, but
   were still heavily focused on in-tree types.  We built the watch cache, protobuf serialization,
   and shared caches.
--&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;寻找目标&lt;/strong&gt;：从 &lt;code&gt;pre-1.0&lt;/code&gt; 到 &lt;code&gt;v1.3&lt;/code&gt; 左右（直到我们首次支持 1000+ 节点/命名空间）。
这段时间以快速变化为特征。我们经历了五个不同版本的模式，并迎难而上满足了需求。
我们针对快速的树内 API 演变进行了优化（有时以牺牲长期目标为代价），并首次定义了各种模式。&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;满足需求的扩展&lt;/strong&gt;：&lt;code&gt;v1.3-1.9&lt;/code&gt; 左右（直到控制器中的共享通知器）。
随着采用率的提升，当我们开始尝试满足客户需求时，我们发现在 CPU 和内存方面存在严重的规模限制。
正是在这一阶段，我们将 API Machinery 的范围扩大到包含访问模式，但仍主要关注树内类型。
我们构建了 watch 缓存、protobuf 序列化和共享缓存。&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;!--
3. **Fostering the ecosystem**: `v1.8-1.21` (up to CRD v1) or so.  This was when we designed and wrote
   CRDs (the considered replacement for third-party-resources), the immediate needs we knew were
   coming (admission webhooks), and evolution to best practices we knew we needed (API schemas).
   This enabled an explosion of early adopters willing to work very carefully within the constraints
   to enable their use-cases for servicing pods.  The adoption was very fast, sometimes outpacing
   our capability, and creating new problems.
--&gt;
&lt;ol start=&#34;3&#34;&gt;
&lt;li&gt;&lt;strong&gt;培育生态系统&lt;/strong&gt;：&lt;code&gt;v1.8-1.21&lt;/code&gt; 左右（直到 CRD v1）。在这一阶段，我们设计并实现了 CRD（作为经过深思熟虑的 third-party-resources 替代品），
满足了我们预见即将到来的即时需求（准入 Webhook），并朝着我们认定所需的最佳实践演进（API 模式）。
这促成了早期采用者的爆发式增长，他们愿意在各种约束下非常谨慎地工作，以便实现面向 Pod 服务的用例。
采用速度非常快，有时超出了我们的承受能力，并带来了新的问题。&lt;/li&gt;
&lt;/ol&gt;
&lt;!--
4. **Simplifying deployments**: `v1.22+`.  In the relatively recent past, we’ve been responding to
   pressures or running kube clusters at scale with large numbers of sometimes-conflicting ecosystem
   projects using our extensions mechanisms.  Lots of effort is now going into making platform
   extensions easier to write and safer to manage by people who don&#39;t hold PhDs in kubernetes.  This
   started with things like server-side-apply and continues today with features like webhook match
   conditions and validating admission policies.
--&gt;
&lt;ol start=&#34;4&#34;&gt;
&lt;li&gt;&lt;strong&gt;简化部署&lt;/strong&gt;：&lt;code&gt;v1.22+&lt;/code&gt;。近段时间以来，我们一直在响应大规模运行 Kubernetes 集群所带来的压力，
这些集群中运行着大量使用我们扩展机制、且有时相互冲突的生态系统项目。
如今我们投入了大量精力，让不精通 Kubernetes 的人也能更容易地编写平台扩展，并更安全地进行管理。
这始于服务器端应用等特性，并在如今的 Webhook 匹配条件和验证准入策略等特性中得以延续。&lt;/li&gt;
&lt;/ol&gt;
&lt;!--
Work in API Machinery has a broad impact across the project and the ecosystem.  It’s an exciting
area to work for those able to make a significant time investment on a long time horizon.

## The road ahead

**FSM: With those different evolutionary stages in mind, what would you pinpoint as the top
priorities for the SIG at this time?**
--&gt;
&lt;p&gt;API Machinery 的工作对整个项目和生态系统有广泛的影响。
对于那些能够长期投入大量时间的人来说，这是一个令人兴奋的工作领域。&lt;/p&gt;
&lt;h2 id=&#34;the-road-ahead&#34;&gt;未来发展  &lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;FSM：考虑到这些不同的发展阶段，你能说说这个 SIG 的当前首要任务是什么吗？&lt;/strong&gt;&lt;/p&gt;
&lt;!--
**David:** **Reliability, efficiency, and capability** in roughly that order.

With the increased usage of our `kube-apiserver` and extensions mechanisms, we find that our first
set of extensions mechanisms, while fairly complete in terms of capability, carry significant risks
in terms of potential mis-use with large blast radius.  To mitigate these risks, we’re investing in
features that reduce the blast radius for accidents (webhook match conditions) and which provide
alternative mechanisms with lower risk profiles for most actions (validating admission policy).
--&gt;
&lt;p&gt;&lt;strong&gt;David&lt;/strong&gt;：大致的顺序为&lt;strong&gt;可靠性、效率和权能&lt;/strong&gt;。&lt;/p&gt;
&lt;p&gt;随着 &lt;code&gt;kube-apiserver&lt;/code&gt; 和扩展机制的使用增加，我们发现第一套扩展机制虽然在权能方面相当完整，
但存在被误用的重大潜在风险，且影响范围很大。为了减轻这些风险，我们正在投入于能缩小事故影响范围的特性
（Webhook 匹配条件），以及能为大多数操作提供风险更低的替代机制的特性（验证准入策略）。&lt;/p&gt;
&lt;!--
At the same time, the increased usage has made us more aware of scaling limitations that we can
improve both server and client-side.  Efforts here include more efficient serialization (CBOR),
reduced etcd load (consistent reads from cache), and reduced peak memory usage (streaming lists).

And finally, the increased usage has highlighted some long existing
gaps that we’re closing.  Things like field selectors for CRDs which
the [Batch Working Group](https://github.com/kubernetes/community/blob/master/wg-batch/README.md)
is eager to leverage and will eventually form the basis for a new way
to prevent trampoline pod attacks from exploited nodes.
--&gt;
&lt;p&gt;同时，使用量的增加使我们更加意识到那些可以在服务器端和客户端同时改进的扩缩限制。
这里的努力包括更高效的序列化（CBOR），减少 etcd 负载（从缓存中一致读取）和减少峰值内存使用量（流式列表）。&lt;/p&gt;
&lt;p&gt;最后，使用量的增加凸显了一些长期存在、我们正在设法填补的差距，例如针对 CRD 的字段选择算符。
&lt;a href=&#34;https://github.com/kubernetes/community/blob/master/wg-batch/README.md&#34;&gt;Batch Working Group&lt;/a&gt;
迫切希望利用这些选择算符，而它们最终还将成为一种新方法的基础，用于防止从被攻陷的节点发起“蹦床式”Pod 攻击。&lt;/p&gt;
&lt;!--
## Joining the fun

**FSM: For anyone wanting to start contributing, what&#39;s your suggestions?**

**Federico**: SIG API Machinery is not an exception to the Kubernetes motto: **Chop Wood and Carry
Water**. There are multiple weekly meetings that are open to everybody, and there is always more
work to be done than people to do it.
--&gt;
&lt;h2 id=&#34;joining-the-fun&#34;&gt;加入有趣的我们  &lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;FSM：如果有人想要开始贡献，你有什么建议？&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Federico&lt;/strong&gt;：SIG API Machinery 也不例外地遵循 Kubernetes 的格言：&lt;strong&gt;砍柴挑水（踏实做事）&lt;/strong&gt;。
每周有多个例会向所有人开放，要做的工作总是多于能做事的人。&lt;/p&gt;
&lt;!--
I acknowledge that API Machinery is not easy, and the ramp up will be steep. The bar is high,
because of the reasons we&#39;ve been discussing: we carry a huge responsibility. But of course with
passion and perseverance many people have ramped up through the years, and we hope more will come.

In terms of concrete opportunities, there is the SIG meeting every two weeks. Everyone is welcome to
attend and listen, see what the group talks about, see what&#39;s going on in this release, etc.
--&gt;
&lt;p&gt;我承认 API Machinery 并不容易，上手的过程会比较陡峭。门槛之所以高，正是出于我们前面讨论过的原因：我们肩负着巨大的责任。
当然，凭借激情和毅力，多年来已有许多人成功上手，我们希望未来有更多人加入。&lt;/p&gt;
&lt;p&gt;具体的机会方面，每两周有一次 SIG 会议。欢迎所有人参会和听会，了解小组在讨论什么，了解这个版本中发生了什么等等。&lt;/p&gt;
&lt;!--
Also two times a week, Tuesday and Thursday, we have the public Bug Triage, where we go through
everything new from the last meeting. We&#39;ve been keeping this practice for more than 7 years
now. It&#39;s a great opportunity to volunteer to review code, fix bugs, improve documentation,
etc. Tuesday&#39;s it&#39;s at 1 PM (PST) and Thursday is on an EMEA friendly time (9:30 AM PST).  We are
always looking to improve, and we hope to be able to provide more concrete opportunities to join and
participate in the future.
--&gt;
&lt;p&gt;此外，每周两次，周二和周四，我们有公开的 Bug 分类管理会，在会上我们会讨论上次会议以来的所有新内容。
我们已经保持这种做法 7 年多了。这是一个很好的机会，你可以志愿审查代码、修复 Bug、改进文档等。
周二的会议在下午 1 点（PST），周四是在 EMEA 友好时间（上午 9:30 PST）。
我们总是在寻找改进的机会，希望能够在未来提供更多具体的参与机会。&lt;/p&gt;
&lt;!--
**FSM: Excellent, thank you! Any final comments you would like to share with our readers?**

**Federico**: As I mentioned, the first steps might be hard, but the reward is also larger. Working
on API Machinery is working on an area of huge impact (millions of users?), and your contributions
will have a direct outcome in the way that Kubernetes works and the way that it&#39;s used. For me
that&#39;s enough reward and motivation!
--&gt;
&lt;p&gt;&lt;strong&gt;FSM：太好了，谢谢！你们还有什么想与我们的读者分享吗？&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Federico&lt;/strong&gt;：正如我提到的，第一步可能较难，但回报也更大。
参与 API Machinery 的工作就是在加入一个影响巨大（百万用户？）的领域，
你的贡献将直接影响 Kubernetes 的工作方式和使用方式。对我来说，这已经足够作为回报和动力了！&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes v1.31 中的移除和主要变更</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2024/07/19/kubernetes-1-31-upcoming-changes/</link>
      <pubDate>Fri, 19 Jul 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2024/07/19/kubernetes-1-31-upcoming-changes/</guid>
      <description>
        
        
        &lt;!--
layout: blog
title: &#39;Kubernetes Removals and Major Changes In v1.31&#39;
date: 2024-07-19
slug: kubernetes-1-31-upcoming-changes
author: &gt;
  Abigail McCarthy,
  Edith Puclla,
  Matteo Bianchi,
  Rashan Smith,
  Yigit Demirbas 
--&gt;
&lt;!--
As Kubernetes develops and matures, features may be deprecated, removed, or replaced with better ones for the project&#39;s overall health. 
This article outlines some planned changes for the Kubernetes v1.31 release that the release team feels you should be aware of for the continued maintenance of your Kubernetes environment. 
The information listed below is based on the current status of the v1.31 release. 
It may change before the actual release date. 
--&gt;
&lt;p&gt;随着 Kubernetes 的发展和成熟，为了项目的整体健康，某些特性可能会被弃用、删除或替换为更好的特性。
本文阐述了 Kubernetes v1.31 版本的一些更改计划，发行团队认为你应当了解这些更改，
以便持续维护 Kubernetes 环境。
下面列出的信息基于 v1.31 版本的当前状态；这些状态可能会在实际发布日期之前发生变化。&lt;/p&gt;
&lt;!--
## The Kubernetes API removal and deprecation process
The Kubernetes project has a well-documented [deprecation policy](/docs/reference/using-api/deprecation-policy/) for features. 
This policy states that stable APIs may only be deprecated when a newer, stable version of that API is available and that APIs have a minimum lifetime for each stability level.
A deprecated API has been marked for removal in a future Kubernetes release. 
It will continue to function until removal (at least one year from the deprecation), but usage will display a warning. 
Removed APIs are no longer available in the current version, so you must migrate to using the replacement.
--&gt;
&lt;h2 id=&#34;kubernetes-api-删除和弃用流程&#34;&gt;Kubernetes API 删除和弃用流程&lt;/h2&gt;
&lt;p&gt;Kubernetes 项目针对其功能特性有一个详细说明的&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/reference/using-api/deprecation-policy/&#34;&gt;弃用策略&lt;/a&gt;。
此策略规定，只有当某稳定 API 的更新、稳定版本可用时，才可以弃用该 API，并且 API
的各个稳定性级别都有对应的生命周期下限。
已弃用的 API 标记为在未来的 Kubernetes 版本中删除，
这类 API 将继续发挥作用，直至被删除（从弃用起至少一年），但使用时会显示警告。
已删除的 API 在当前版本中不再可用，因此你必须迁移为使用替代 API。&lt;/p&gt;
&lt;!--
* Generally available (GA) or stable API versions may be marked as deprecated but must not be removed within a major version of Kubernetes.

* Beta or pre-release API versions must be supported for 3 releases after the deprecation.

* Alpha or experimental API versions may be removed in any release without prior deprecation notice.
--&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;正式发布的（GA）或稳定的 API 版本可被标记为已弃用，但不得在同一 Kubernetes 主版本内删除。&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Beta 或预发布 API 版本在被弃用后，必须继续支持 3 个发布版本。&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Alpha 或实验性 API 版本可以在任何版本中删除，不必提前通知。&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
Whether an API is removed because a feature graduated from beta to stable or because that API did not succeed, all removals comply with this deprecation policy. 
Whenever an API is removed, migration options are communicated in the [documentation](/docs/reference/using-api/deprecation-guide/).
--&gt;
&lt;p&gt;无论 API 是因为某个特性从 Beta 版升级到稳定版，还是因为此 API 未成功而被删除，所有删除都将符合此弃用策略。
每当删除 API 时，迁移选项都会在&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/reference/using-api/deprecation-guide/&#34;&gt;文档&lt;/a&gt;中传达。&lt;/p&gt;
&lt;!--
## A note about SHA-1 signature support

In [go1.18](https://go.dev/doc/go1.18#sha1) (released in March 2022), the crypto/x509 library started to reject certificates signed with a SHA-1 hash function. 
While SHA-1 is established to be unsafe and publicly trusted Certificate Authorities have not issued SHA-1 certificates since 2015, there might still be cases in the context of Kubernetes where user-provided certificates are signed using a SHA-1 hash function through private authorities with them being used for Aggregated API Servers or webhooks. 
If you have relied on SHA-1 based certificates, you must explicitly opt back into its support by setting `GODEBUG=x509sha1=1` in your environment.
--&gt;
&lt;h2 id=&#34;关于-sha-1-签名支持的说明&#34;&gt;关于 SHA-1 签名支持的说明&lt;/h2&gt;
&lt;p&gt;在 &lt;a href=&#34;https://go.dev/doc/go1.18#sha1&#34;&gt;go1.18&lt;/a&gt;（2022 年 3 月发布）中，crypto/x509
库开始拒绝使用 SHA-1 哈希函数签名的证书。
虽然 SHA-1 被确定为不安全，并且公众信任的证书颁发机构自 2015 年以来就没有颁发过 SHA-1 证书，
但在 Kubernetes 环境中，仍可能存在用户提供的证书由私有证书颁发机构使用 SHA-1 哈希函数签名的情况，
这些证书用于聚合 API 服务器或 Webhook。
如果你依赖基于 SHA-1 的证书，则必须通过在环境中设置 &lt;code&gt;GODEBUG=x509sha1=1&lt;/code&gt; 以明确选择重新支持这种证书。&lt;/p&gt;
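&lt;p&gt;作为参考，下面是一个假设性的示例，展示如何在由 systemd 管理的 kube-apiserver 上通过环境变量临时重新启用 SHA-1 证书支持（其中的文件路径和单元名仅为示意，请结合你自己的部署方式调整）：&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# /etc/systemd/system/kube-apiserver.service.d/10-sha1.conf（示意路径）
[Service]
Environment="GODEBUG=x509sha1=1"
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;请注意，这只是迁移期间的权宜之计，根本的解决办法仍是停止使用 SHA-1 证书。&lt;/p&gt;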
&lt;!--
Given Go&#39;s [compatibility policy for GODEBUGs](https://go.dev/blog/compat), the `x509sha1` GODEBUG and the support for SHA-1 certificates will [fully go away in go1.24](https://tip.golang.org/doc/go1.23) which will be released in the first half of 2025. 
If you rely on SHA-1 certificates, please start moving off them.

Please see [Kubernetes issue #125689](https://github.com/kubernetes/kubernetes/issues/125689) to get a better idea of timelines around the support for SHA-1 going away, when Kubernetes releases plans to adopt go1.24, and for more details on how to detect usage of SHA-1 certificates via metrics and audit logging. 
--&gt;
&lt;p&gt;鉴于 Go 的 &lt;a href=&#34;https://go.dev/blog/compat&#34;&gt;GODEBUG 兼容性策略&lt;/a&gt;，&lt;code&gt;x509sha1&lt;/code&gt; GODEBUG
和对 SHA-1 证书的支持将 &lt;a href=&#34;https://tip.golang.org/doc/go1.23&#34;&gt;在 2025 年上半年发布的 go1.24&lt;/a&gt;
中完全消失。
如果你依赖 SHA-1 证书，请开始放弃使用它们。&lt;/p&gt;
&lt;p&gt;请参阅 &lt;a href=&#34;https://github.com/kubernetes/kubernetes/issues/125689&#34;&gt;Kubernetes 问题 #125689&lt;/a&gt;，
以更好地了解移除 SHA-1 支持的时间表、Kubernetes 各发布版本采用 go1.24 的计划，
以及如何通过指标和审计日志检测 SHA-1 证书使用情况的更多详细信息。&lt;/p&gt;
&lt;!--
## Deprecations and removals in Kubernetes 1.31

### Deprecation of `status.nodeInfo.kubeProxyVersion` field for Nodes ([KEP 4004](https://github.com/kubernetes/enhancements/issues/4004))
--&gt;
&lt;h2 id=&#34;kubernetes-1-31-中的弃用和删除&#34;&gt;Kubernetes 1.31 中的弃用和删除&lt;/h2&gt;
&lt;h3 id=&#34;弃用节点的-status-nodeinfo-kubeproxyversion-字段-kep-4004-https-github-com-kubernetes-enhancements-issues-4004&#34;&gt;弃用节点的 &lt;code&gt;status.nodeInfo.kubeProxyVersion&lt;/code&gt; 字段（&lt;a href=&#34;https://github.com/kubernetes/enhancements/issues/4004&#34;&gt;KEP 4004&lt;/a&gt;）&lt;/h3&gt;
&lt;!--
The `.status.nodeInfo.kubeProxyVersion` field of Nodes is being deprecated in Kubernetes v1.31,and will be removed in a later release.
It&#39;s being deprecated because the value of this field wasn&#39;t (and isn&#39;t) accurate.
This field is set by the kubelet, which does not have reliable information about the kube-proxy version or whether kube-proxy is running.

The `DisableNodeKubeProxyVersion` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) will be set to `true` in by default in v1.31 and the kubelet will no longer attempt to set the `.status.kubeProxyVersion` field for its associated Node.
--&gt;
&lt;p&gt;Node 的 &lt;code&gt;.status.nodeInfo.kubeProxyVersion&lt;/code&gt; 字段在 Kubernetes v1.31 中被弃用，
并将在后续版本中删除。该字段被弃用是因为其取值过去和现在都不准确。
该字段由 kubelet 设置，而 kubelet 没有关于 kube-proxy 版本或 kube-proxy 是否正在运行的可靠信息。&lt;/p&gt;
&lt;p&gt;在 v1.31 中，&lt;code&gt;DisableNodeKubeProxyVersion&lt;/code&gt;
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/reference/command-line-tools-reference/feature-gates/&#34;&gt;特性门控&lt;/a&gt;将默认设置为 &lt;code&gt;true&lt;/code&gt;，
并且 kubelet 将不再尝试为其关联的 Node 设置 &lt;code&gt;.status.kubeProxyVersion&lt;/code&gt; 字段。&lt;/p&gt;
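&lt;p&gt;如果你在迁移期间仍需要此字段，可以临时将该特性门控显式设置回 &lt;code&gt;false&lt;/code&gt;。下面是一个假设性的 kubelet 命令行示意（标志的具体传递方式取决于你的部署工具）：&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# 示意：显式关闭该特性门控，kubelet 将继续设置 .status.nodeInfo.kubeProxyVersion
# 注意：该字段的取值依然不可靠，此做法只能延缓而非避免迁移
kubelet --feature-gates=DisableNodeKubeProxyVersion=false ...
&lt;/code&gt;&lt;/pre&gt;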
&lt;!--
### Removal of all in-tree integrations with cloud providers

As highlighted in a [previous article](/blog/2024/05/20/completing-cloud-provider-migration/), the last remaining in-tree support for cloud provider integration will be removed as part of the v1.31 release.
This doesn&#39;t mean you can&#39;t integrate with a cloud provider, however you now **must** use the
recommended approach using an external integration. Some integrations are part of the Kubernetes
project and others are third party software.
--&gt;
&lt;h3 id=&#34;删除所有云驱动的树内集成组件&#34;&gt;删除所有云驱动的树内集成组件&lt;/h3&gt;
&lt;p&gt;正如&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2024/05/20/completing-cloud-provider-migration/&#34;&gt;之前一篇文章&lt;/a&gt;中所强调的，
v1.31 版本将删除云驱动集成的树内支持的最后剩余部分。
这并不意味着你无法与某云驱动集成，只是你现在&lt;strong&gt;必须&lt;/strong&gt;使用推荐的外部集成方法。
一些集成组件是 Kubernetes 项目的一部分，其余集成组件则是第三方软件。&lt;/p&gt;
&lt;!--
This milestone marks the completion of the externalization process for all cloud providers&#39; integrations from the Kubernetes core ([KEP-2395](https://github.com/kubernetes/enhancements/blob/master/keps/sig-cloud-provider/2395-removing-in-tree-cloud-providers/README.md)), a process started with Kubernetes v1.26. 
This change helps Kubernetes to get closer to being a truly vendor-neutral platform.

For further details on the cloud provider integrations, read our [v1.29 Cloud Provider Integrations feature blog](/blog/2023/12/14/cloud-provider-integration-changes/). 
For additional context about the in-tree code removal, we invite you to check the ([v1.29 deprecation blog](/blog/2023/11/16/kubernetes-1-29-upcoming-changes/#removal-of-in-tree-integrations-with-cloud-providers-kep-2395-https-kep-k8s-io-2395)).

The latter blog also contains useful information for users who need to migrate to version v1.29 and later.
--&gt;
&lt;p&gt;这一里程碑标志着将所有云驱动集成组件从 Kubernetes 核心外部化的过程已经完成
（&lt;a href=&#34;https://github.com/kubernetes/enhancements/blob/master/keps/sig-cloud-provider/2395-removing-in-tree-cloud-providers/README.md&#34;&gt;KEP-2395&lt;/a&gt;），
该过程从 Kubernetes v1.26 开始。
这一变化有助于 Kubernetes 进一步成为真正的供应商中立平台。&lt;/p&gt;
&lt;p&gt;有关云驱动集成的更多详细信息，请阅读我们的 &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/12/14/cloud-provider-integration-changes/&#34;&gt;v1.29 云驱动集成特性的博客&lt;/a&gt;。
有关树内代码删除的更多背景信息，请阅读
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2023/11/16/kubernetes-1-29-upcoming-changes/#removal-of-in-tree-integrations-with-cloud-providers-kep-2395-https-kep-k8s-io-2395&#34;&gt;v1.29 弃用博客&lt;/a&gt;。&lt;/p&gt;
&lt;p&gt;后一篇博客还包含了对需要迁移到 v1.29 及更高版本的用户有用的信息。&lt;/p&gt;
&lt;!--
### Removal of kubelet `--keep-terminated-pod-volumes` command line flag

The kubelet flag `--keep-terminated-pod-volumes`, which was deprecated in 2017, will be removed as
part of the v1.31 release.

You can find more details in the pull request [#122082](https://github.com/kubernetes/kubernetes/pull/122082).
--&gt;
&lt;h3 id=&#34;删除-kubelet-keep-terminated-pod-volumes-命令行标志&#34;&gt;删除 kubelet &lt;code&gt;--keep-terminated-pod-volumes&lt;/code&gt; 命令行标志&lt;/h3&gt;
&lt;p&gt;kubelet 标志 &lt;code&gt;--keep-terminated-pod-volumes&lt;/code&gt; 已于 2017 年弃用，将在 v1.31 版本中被删除。&lt;/p&gt;
&lt;p&gt;你可以在拉取请求 &lt;a href=&#34;https://github.com/kubernetes/kubernetes/pull/122082&#34;&gt;#122082&lt;/a&gt;
中找到更多详细信息。&lt;/p&gt;
&lt;!--
### Removal of CephFS volume plugin 

[CephFS volume plugin](/docs/concepts/storage/volumes/#cephfs) was removed in this release and the `cephfs` volume type became non-functional. 

It is recommended that you use the [CephFS CSI driver](https://github.com/ceph/ceph-csi/) as a third-party storage driver instead. If you were using the CephFS volume plugin before upgrading the cluster version to v1.31, you must re-deploy your application to use the new driver.

CephFS volume plugin was formally marked as deprecated in v1.28.
--&gt;
&lt;h3 id=&#34;删除-cephfs-卷插件&#34;&gt;删除 CephFS 卷插件&lt;/h3&gt;
&lt;p&gt;&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/concepts/storage/volumes/#cephfs&#34;&gt;CephFS 卷插件&lt;/a&gt;已在此版本中删除，
并且 &lt;code&gt;cephfs&lt;/code&gt; 卷类型已无法使用。&lt;/p&gt;
&lt;p&gt;建议你改用 &lt;a href=&#34;https://github.com/ceph/ceph-csi/&#34;&gt;CephFS CSI 驱动程序&lt;/a&gt; 作为第三方存储驱动程序。
如果你在将集群版本升级到 v1.31 之前在使用 CephFS 卷插件，则必须重新部署应用才能使用新驱动。&lt;/p&gt;
&lt;p&gt;CephFS 卷插件在 v1.28 中正式标记为已弃用。&lt;/p&gt;
&lt;!--
### Removal of Ceph RBD volume plugin

The v1.31 release will remove the [Ceph RBD volume plugin](/docs/concepts/storage/volumes/#rbd) and its CSI migration support, making the `rbd` volume type non-functional.

It&#39;s recommended that you use the [RBD CSI driver](https://github.com/ceph/ceph-csi/) in your clusters instead. 
If you were using Ceph RBD volume plugin before upgrading the cluster version to v1.31, you must re-deploy your application to use the new driver.

The Ceph RBD volume plugin was formally marked as deprecated in v1.28.
--&gt;
&lt;h3 id=&#34;删除-ceph-rbd-卷插件&#34;&gt;删除 Ceph RBD 卷插件&lt;/h3&gt;
&lt;p&gt;v1.31 版本将删除 &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/concepts/storage/volumes/#rbd&#34;&gt;Ceph RBD 卷插件&lt;/a&gt;及其 CSI 迁移支持，
&lt;code&gt;rbd&lt;/code&gt; 卷类型将无法继续使用。&lt;/p&gt;
&lt;p&gt;建议你在集群中使用 &lt;a href=&#34;https://github.com/ceph/ceph-csi/&#34;&gt;RBD CSI 驱动&lt;/a&gt;。
如果你在将集群版本升级到 v1.31 之前在使用 Ceph RBD 卷插件，则必须重新部署应用以使用新驱动。&lt;/p&gt;
&lt;p&gt;Ceph RBD 卷插件在 v1.28 中正式标记为已弃用。&lt;/p&gt;
&lt;!--
### Deprecation of non-CSI volume limit plugins in kube-scheduler

The v1.31 release will deprecate all non-CSI volume limit scheduler plugins, and will remove some
already deprected plugins from the [default plugins](/docs/reference/scheduling/config/), including:
--&gt;
&lt;h3 id=&#34;kube-scheduler-中非-csi-卷限制插件的弃用&#34;&gt;kube-scheduler 中非 CSI 卷限制插件的弃用&lt;/h3&gt;
&lt;p&gt;v1.31 版本将弃用所有非 CSI 卷限制调度程序插件，
并将从&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/reference/scheduling/config/&#34;&gt;默认插件&lt;/a&gt;中删除一些已弃用的插件，包括：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;AzureDiskLimits&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;CinderLimits&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;EBSLimits&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;GCEPDLimits&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
It&#39;s recommended that you use the `NodeVolumeLimits` plugin instead because it can handle the same functionality as the removed plugins since those volume types have been migrated to CSI. 
Please replace the deprecated plugins with the `NodeVolumeLimits` plugin if you explicitly use them in the [scheduler config](/docs/reference/scheduling/config/). 
The `AzureDiskLimits`, `CinderLimits`, `EBSLimits`, and `GCEPDLimits` plugins will be removed in a future release.

These plugins will be removed from the default scheduler plugins list as they have been deprecated since Kubernetes v1.14.
--&gt;
&lt;p&gt;建议你改用 &lt;code&gt;NodeVolumeLimits&lt;/code&gt; 插件：由于这些卷类型已迁移到 CSI，该插件能够提供与被删除插件相同的功能。
如果你在&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/reference/scheduling/config/&#34;&gt;调度器配置&lt;/a&gt;中显式使用已弃用的插件，
请用 &lt;code&gt;NodeVolumeLimits&lt;/code&gt; 插件替换它们。
&lt;code&gt;AzureDiskLimits&lt;/code&gt;、&lt;code&gt;CinderLimits&lt;/code&gt;、&lt;code&gt;EBSLimits&lt;/code&gt; 和 &lt;code&gt;GCEPDLimits&lt;/code&gt; 插件将在未来的版本中被删除。&lt;/p&gt;
&lt;p&gt;这些插件将从默认调度程序插件列表中删除，因为它们自 Kubernetes v1.14 以来已被弃用。&lt;/p&gt;
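&lt;p&gt;下面是一个假设性的 KubeSchedulerConfiguration 片段，展示如何在调度器配置中显式禁用这些已弃用的插件并确保启用 &lt;code&gt;NodeVolumeLimits&lt;/code&gt;（字段布局基于 &lt;code&gt;kubescheduler.config.k8s.io/v1&lt;/code&gt;，请以你所用版本的参考文档为准）：&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: default-scheduler
    plugins:
      filter:
        disabled:
          - name: AzureDiskLimits
          - name: CinderLimits
          - name: EBSLimits
          - name: GCEPDLimits
        enabled:
          - name: NodeVolumeLimits
&lt;/code&gt;&lt;/pre&gt;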
&lt;!--
## Looking ahead
The official list of API removals planned for [Kubernetes v1.32](/docs/reference/using-api/deprecation-guide/#v1-32) include:

* The `flowcontrol.apiserver.k8s.io/v1beta3` API version of FlowSchema and PriorityLevelConfiguration will be removed. 
To prepare for this, you can edit your existing manifests and rewrite client software to use the `flowcontrol.apiserver.k8s.io/v1 API` version, available since v1.29. 
All existing persisted objects are accessible via the new API. Notable changes in flowcontrol.apiserver.k8s.io/v1beta3 include that the PriorityLevelConfiguration `spec.limited.nominalConcurrencyShares` field only defaults to 30 when unspecified, and an explicit value of 0 is not changed to 30.

For more information, please refer to the [API deprecation guide](/docs/reference/using-api/deprecation-guide/#v1-32).
--&gt;
&lt;h2 id=&#34;展望未来&#34;&gt;展望未来&lt;/h2&gt;
&lt;p&gt;官方计划在 &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/reference/using-api/deprecation-guide/#v1-32&#34;&gt;Kubernetes v1.32&lt;/a&gt; 中删除的 API 包括：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;FlowSchema 和 PriorityLevelConfiguration 的 &lt;code&gt;flowcontrol.apiserver.k8s.io/v1beta3&lt;/code&gt; API 版本将被删除。
为了做好准备，你可以编辑现有清单并重写客户端软件，以使用自 v1.29 起可用的 &lt;code&gt;flowcontrol.apiserver.k8s.io/v1&lt;/code&gt; API 版本。
所有现有的持久化对象都可以通过新 API 访问。&lt;code&gt;flowcontrol.apiserver.k8s.io/v1beta3&lt;/code&gt; 中值得注意的变化包括：PriorityLevelConfiguration 的
&lt;code&gt;spec.limited.nominalConcurrencyShares&lt;/code&gt; 字段仅在未指定时才默认为 30，且显式设置为 0 时不会被更改为 30。&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;有关更多信息，请参阅 &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/using-api/deprecation-guide/#v1-32&#34;&gt;API 弃用指南&lt;/a&gt;。&lt;/p&gt;
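&lt;p&gt;迁移通常只需更新清单中的 &lt;code&gt;apiVersion&lt;/code&gt;。下面是一个假设性的 PriorityLevelConfiguration 示意（对象名和字段取值仅为示例）：&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: flowcontrol.apiserver.k8s.io/v1   # 原为 flowcontrol.apiserver.k8s.io/v1beta3
kind: PriorityLevelConfiguration
metadata:
  name: example-priority-level
spec:
  type: Limited
  limited:
    # 该字段仅在未指定时才默认为 30；显式设置的 0 会被保留
    nominalConcurrencyShares: 30
    limitResponse:
      type: Reject
&lt;/code&gt;&lt;/pre&gt;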
&lt;!--
## Want to know more?
The Kubernetes release notes announce deprecations. 
We will formally announce the deprecations in [Kubernetes v1.31](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.31.md#deprecation) as part of the CHANGELOG for that release.

You can see the announcements of pending deprecations in the release notes for:
--&gt;
&lt;h2 id=&#34;想要了解更多&#34;&gt;想要了解更多？&lt;/h2&gt;
&lt;p&gt;Kubernetes 发行说明中会宣布弃用信息。
我们将在 &lt;a href=&#34;https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.31.md#deprecation&#34;&gt;Kubernetes v1.31&lt;/a&gt;
中正式宣布弃用信息，作为该版本的 CHANGELOG 的一部分。&lt;/p&gt;
&lt;p&gt;你可以在以下版本的发行说明中看到已宣布的弃用信息：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.30.md#deprecation&#34;&gt;Kubernetes v1.30&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.29.md#deprecation&#34;&gt;Kubernetes v1.29&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.28.md#deprecation&#34;&gt;Kubernetes v1.28&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.27.md#deprecation&#34;&gt;Kubernetes v1.27&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes 的十年</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2024/06/06/10-years-of-kubernetes/</link>
      <pubDate>Thu, 06 Jun 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2024/06/06/10-years-of-kubernetes/</guid>
      <description>
        
        
        &lt;!--
layout: blog
title: &#34;10 Years of Kubernetes&#34;
date: 2024-06-06
slug: 10-years-of-kubernetes
author: &gt;
  [Bob Killen](https://github.com/mybobbytables) (CNCF),
  [Chris Short](https://github.com/chris-short) (AWS),
  [Frederico Muñoz](https://github.com/fsmunoz) (SAS),
  [Kaslin Fields](https://github.com/kaslin) (Google),
  [Tim Bannister](https://github.com/sftim) (The Scale Factory),
  and every contributor across the globe
--&gt;
&lt;!--
![KCSEU 2024 group photo](kcseu2024.jpg)

Ten (10) years ago, on June 6th, 2014, the
[first commit](https://github.com/kubernetes/kubernetes/commit/2c4b3a562ce34cddc3f8218a2c4d11c7310e6d56)
of Kubernetes was pushed to GitHub. That first commit with 250 files and 47,501 lines of go, bash
and markdown kicked off the project we have today. Who could have predicted that 10 years later,
Kubernetes would grow to become one of the largest Open Source projects to date with over
[88,000 contributors](https://k8s.devstats.cncf.io/d/24/overall-project-statistics?orgId=1) from
more than [8,000 companies](https://www.cncf.io/reports/kubernetes-project-journey-report/), across
44 countries.
--&gt;
&lt;p&gt;&lt;img src=&#34;kcseu2024.jpg&#34; alt=&#34;KCSEU 2024 团体照片&#34;&gt;&lt;/p&gt;
&lt;p&gt;十年前的 2014 年 6 月 6 日，Kubernetes
的&lt;a href=&#34;https://github.com/kubernetes/kubernetes/commit/2c4b3a562ce34cddc3f8218a2c4d11c7310e6d56&#34;&gt;第一次提交&lt;/a&gt;被推送到 GitHub。
第一次提交包含了 250 个文件和 47,501 行的 Go、Bash 和 Markdown 代码，
开启了我们今天所拥有的项目。谁能预料到 10 年后，Kubernetes 会成长为迄今为止最大的开源项目之一，
拥有来自 44 个国家、超过 &lt;a href=&#34;https://www.cncf.io/reports/kubernetes-project-journey-report/&#34;&gt;8,000 家公司&lt;/a&gt;的
&lt;a href=&#34;https://k8s.devstats.cncf.io/d/24/overall-project-statistics?orgId=1&#34;&gt;88,000 名贡献者&lt;/a&gt;。&lt;/p&gt;
&lt;img src=&#34;kcscn2019.jpg&#34; alt=&#34;KCSCN 2019&#34; class=&#34;left&#34; style=&#34;max-width: 20em; margin: 1em&#34; &gt;
&lt;!--
This milestone isn&#39;t just for Kubernetes but for the Cloud Native ecosystem that blossomed from
it. There are close to [200 projects](https://all.devstats.cncf.io/d/18/overall-project-statistics-table?orgId=1)
within the CNCF itself, with contributions from
[240,000+ individual contributors](https://all.devstats.cncf.io/d/18/overall-project-statistics-table?orgId=1) and
thousands more in the greater ecosystem. Kubernetes would not be where it is today without them, the
[7M+ Developers](https://www.cncf.io/blog/2022/05/18/slashdata-cloud-native-continues-to-grow-with-more-than-7-million-developers-worldwide/),
and the even larger user community that have all helped shape the ecosystem that it is today.
--&gt;
&lt;p&gt;这一里程碑不仅属于 Kubernetes，也属于由此蓬勃发展的云原生生态系统。
在 CNCF 本身就有近 &lt;a href=&#34;https://all.devstats.cncf.io/d/18/overall-project-statistics-table?orgId=1&#34;&gt;200 个项目&lt;/a&gt;，汇聚了
&lt;a href=&#34;https://all.devstats.cncf.io/d/18/overall-project-statistics-table?orgId=1&#34;&gt;240,000 多名个人贡献者&lt;/a&gt;的贡献，
更大的生态系统中还有数千名贡献者。
如果没有 &lt;a href=&#34;https://www.cncf.io/blog/2022/05/18/slashdata-cloud-native-continues-to-grow-with-more-than-7-million-developers-worldwide/&#34;&gt;700 多万开发者&lt;/a&gt;和更庞大的用户社区，
Kubernetes 就不会达到今天的成就，他们一起帮助塑造了今天的生态系统。&lt;/p&gt;
&lt;!--
## Kubernetes&#39; beginnings - a converging of technologies

The ideas underlying Kubernetes started well before the first commit, or even the first prototype
([which came about in 2013](/blog/2018/07/20/the-history-of-kubernetes-the-community-behind-it/)).
In the early 2000s, Moore&#39;s Law was well in effect. Computing hardware was becoming more and more
powerful at an incredibly fast rate. Correspondingly, applications were growing more and more
complex. This combination of hardware commoditization and application complexity pointed to a need
to further abstract software from hardware, and solutions started to emerge.
--&gt;
&lt;h2 id=&#34;kubernetes-的起源-技术的融合&#34;&gt;Kubernetes 的起源 - 技术的融合&lt;/h2&gt;
&lt;p&gt;Kubernetes 背后的理念早在第一次提交之前就已存在，
甚至早于第一个原型（&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2018/07/20/the-history-of-kubernetes-the-community-behind-it/&#34;&gt;在 2013 年问世&lt;/a&gt;）。
在 21 世纪初，摩尔定律正充分发挥效力。计算硬件正以惊人的速度变得越来越强大。
相应地，应用程序也变得越来越复杂。硬件商品化与应用程序复杂性的结合表明，需要进一步将软件从硬件中抽象出来，
于是解决方案开始出现。&lt;/p&gt;
&lt;!--
Like many companies at the time, Google was scaling rapidly, and its engineers were interested in
the idea of creating a form of isolation in the Linux kernel. Google engineer Rohit Seth described
the concept in an [email in 2006](https://lwn.net/Articles/199643/):
--&gt;
&lt;p&gt;像当时的许多公司一样，Google 正在快速扩张，其工程师对在 Linux 内核中创建一种隔离形式的想法很感兴趣。
Google 工程师 Rohit Seth 在 &lt;a href=&#34;https://lwn.net/Articles/199643/&#34;&gt;2006 年的一封电子邮件&lt;/a&gt;中描述了这个概念：&lt;/p&gt;
&lt;!--
&gt; We use the term container to indicate a structure against which we track and charge utilization of
system resources like memory, tasks, etc. for a Workload.
--&gt;
&lt;blockquote&gt;
&lt;p&gt;我们使用术语 “容器” 来表示一种结构，通过该结构我们可以对负载的系统资源（如内存、任务等）利用情况进行跟踪和计费。&lt;/p&gt;
&lt;/blockquote&gt;
&lt;img src=&#34;future.png&#34; alt=&#34;The future of Linux containers&#34; class=&#34;right&#34; style=&#34;max-width: 20em; margin: 1em&#34;&gt;
&lt;!--
In March of 2013, a 5-minute lightning talk called
[&#34;The future of Linux Containers,&#34; presented by Solomon Hykes at PyCon](https://youtu.be/wW9CAH9nSLs?si=VtK_VFQHymOT7BIB),
introduced an upcoming open source tool called &#34;Docker&#34; for creating and using Linux
Containers. Docker introduced a level of usability to Linux Containers that made them accessible to
more users than ever before, and the popularity of Docker, and thus of Linux Containers,
skyrocketed. With Docker making the abstraction of Linux Containers accessible to all, running
applications in much more portable and repeatable ways was suddenly possible, but the question of
scale remained.
--&gt;
&lt;p&gt;2013 年 3 月，&lt;a href=&#34;https://youtu.be/wW9CAH9nSLs?si=VtK_VFQHymOT7BIB&#34;&gt;Solomon Hykes 在 PyCon 上发表了一场名为 “Linux 容器的未来” 的闪电演讲&lt;/a&gt;，
在这场 5 分钟的演讲中，他介绍了一款名为 “Docker” 的即将推出的开源工具，用于创建和使用 Linux 容器。Docker
提升了 Linux 容器的可用性，使其比以往更容易被更多用户使用，Docker 乃至
Linux 容器的流行度因此飙升。随着 Docker 使 Linux 容器的抽象对所有人可用，
以更便于移植且可重复的方式运行应用突然成为可能，但大规模运行的问题仍然存在。&lt;/p&gt;
&lt;!--
Google&#39;s Borg system for managing application orchestration at scale had adopted Linux containers as
they were developed in the mid-2000s. Since then, the company had also started working on a new
version of the system called &#34;Omega.&#34; Engineers at Google who were familiar with the Borg and Omega
systems saw the popularity of containerization driven by Docker. They recognized not only the need
for an open source container orchestration system but its &#34;inevitability,&#34; as described by Brendan
Burns in this [blog post](/blog/2018/07/20/the-history-of-kubernetes-the-community-behind-it/). That
realization in the fall of 2013 inspired a small team to start working on a project that would later
become **Kubernetes**. That team included Joe Beda, Brendan Burns, Craig McLuckie, Ville Aikas, Tim
Hockin, Dawn Chen, Brian Grant, and Daniel Smith.
--&gt;
&lt;p&gt;Google 用来管理大规模应用编排的 Borg 系统在 2000 年代中期采用当时所开发的 Linux 容器技术。
此后，该公司还开始研发该系统的一个新版本，名为 “Omega”。
熟悉 Borg 和 Omega 系统的 Google 工程师们看到了 Docker 所推动的容器化技术的流行。
他们意识到对一个开源的容器编排系统的需求，而且意识到这一系统的“必然性”，正如
Brendan Burns 在这篇&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2018/07/20/the-history-of-kubernetes-the-community-behind-it/&#34;&gt;博文&lt;/a&gt;中所描述的。
这一认识在 2013 年秋天激发了一个小团队开始着手一个后来成为 &lt;strong&gt;Kubernetes&lt;/strong&gt;
的项目。该团队包括 Joe Beda、Brendan Burns、Craig McLuckie、Ville Aikas、Tim Hockin、Dawn Chen、Brian Grant
和 Daniel Smith。&lt;/p&gt;
&lt;!--
## A decade of Kubernetes

&lt;img src=&#34;kubeconeu2017.jpg&#34; alt=&#34;KubeCon EU 2017&#34; class=&#34;left&#34; style=&#34;max-width: 20em; margin: 1em&#34;&gt;

Kubernetes&#39; history begins with that historic commit on June 6th, 2014, and the subsequent
announcement of the project in a June 10th
[keynote by Google engineer Eric Brewer at DockerCon 2014](https://youtu.be/YrxnVKZeqK8?si=Q_wYBFn7dsS9H3k3)
(and its corresponding [Google blog](https://cloudplatform.googleblog.com/2014/06/an-update-on-container-support-on-google-cloud-platform.html)).
--&gt;
&lt;h2 id=&#34;kubernetes-十年回顾&#34;&gt;Kubernetes 十年回顾&lt;/h2&gt;
&lt;img src=&#34;kubeconeu2017.jpg&#34; alt=&#34;KubeCon EU 2017&#34; class=&#34;left&#34; style=&#34;max-width: 20em; margin: 1em&#34;&gt;
&lt;p&gt;Kubernetes 的历史始于 2014 年 6 月 6 日的那次历史性提交。随后，
&lt;a href=&#34;https://youtu.be/YrxnVKZeqK8?si=Q_wYBFn7dsS9H3k3&#34;&gt;Google 工程师 Eric Brewer 于 2014 年 6 月 10 日在 DockerCon 2014
上的主题演讲&lt;/a&gt;（及其&lt;a href=&#34;https://cloudplatform.googleblog.com/2014/06/an-update-on-container-support-on-google-cloud-platform.html&#34;&gt;相应的 Google 博客&lt;/a&gt;）中宣布了该项目。&lt;/p&gt;
&lt;!--
Over the next year, a small community of
[contributors, largely from Google and Red Hat](https://k8s.devstats.cncf.io/d/9/companies-table?orgId=1&amp;var-period_name=Before%20joining%20CNCF&amp;var-metric=contributors),
worked hard on the project, culminating in a [version 1.0 release on July 21st, 2015](https://cloudplatform.googleblog.com/2015/07/Kubernetes-V1-Released.html).
Alongside 1.0, Google announced that Kubernetes would be donated to a newly formed branch of the
Linux Foundation called the
[Cloud Native Computing Foundation (CNCF)](https://www.cncf.io/announcements/2015/06/21/new-cloud-native-computing-foundation-to-drive-alignment-among-container-technologies/).
--&gt;
&lt;p&gt;在接下来的一年里，一个由&lt;a href=&#34;https://k8s.devstats.cncf.io/d/9/companies-table?orgId=1&amp;amp;var-period_name=Before%20joining%20CNCF&amp;amp;var-metric=contributors&#34;&gt;主要来自 Google 和 Red Hat 等公司的贡献者&lt;/a&gt;组成的小型社区为该项目付出了辛勤的努力，最终在
2015 年 7 月 21 日发布了 &lt;a href=&#34;https://cloudplatform.googleblog.com/2015/07/Kubernetes-V1-Released.html&#34;&gt;1.0 版本&lt;/a&gt;。
在发布 1.0 版本的同时，Google 宣布将 Kubernetes 捐赠给 Linux 基金会下的一个新成立的分支，
即&lt;a href=&#34;https://www.cncf.io/announcements/2015/06/21/new-cloud-native-computing-foundation-to-drive-alignment-among-container-technologies/&#34;&gt;云原生计算基金会 (Cloud Native Computing Foundation，CNCF)&lt;/a&gt;。&lt;/p&gt;
&lt;!--
Despite reaching 1.0, the Kubernetes project was still very challenging to use and
understand. Kubernetes contributor Kelsey Hightower took special note of the project&#39;s shortcomings
in ease of use and on July 7, 2016, he pushed the
[first commit of his famed &#34;Kubernetes the Hard Way&#34; guide](https://github.com/kelseyhightower/kubernetes-the-hard-way/commit/9d7ace8b186f6ebd2e93e08265f3530ec2fba81c).
--&gt;
&lt;p&gt;尽管发布了 1.0 版本，Kubernetes 项目仍然很难使用和理解。Kubernetes
贡献者 Kelsey Hightower 特别注意到了该项目在易用性方面的不足，并于 2016 年 7 月 7 日推送了他著名的
“Kubernetes the Hard Way” 指南的&lt;a href=&#34;https://github.com/kelseyhightower/kubernetes-the-hard-way/commit/9d7ace8b186f6ebd2e93e08265f3530ec2fba81c&#34;&gt;第一次提交&lt;/a&gt;。&lt;/p&gt;
&lt;!--
The project has changed enormously since its original 1.0 release; experiencing a number of big wins
such as
[Custom Resource Definitions (CRD) going GA in 1.16](/blog/2019/09/18/kubernetes-1-16-release-announcement/)
or [full dual stack support launching in 1.23](/blog/2021/12/08/dual-stack-networking-ga/) and
community &#34;lessons learned&#34; from the [removal of widely used beta APIs in 1.22](/blog/2021/07/14/upcoming-changes-in-kubernetes-1-22/)
or the deprecation of [Dockershim](/blog/2020/12/02/dockershim-faq/
--&gt;
&lt;p&gt;自从最初的 1.0 版本发布以来，项目经历了巨大的变化，取得了许多重大的成就，例如
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2019/09/18/kubernetes-1-16-release-announcement/&#34;&gt;在 1.16 版本中正式发布的 Custom Resource Definitions（CRD）&lt;/a&gt;，
或者&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2021/12/08/dual-stack-networking-ga/&#34;&gt;在 1.23 版本中推出的全面双栈支持&lt;/a&gt;，以及社区从
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2021/07/14/upcoming-changes-in-kubernetes-1-22/&#34;&gt;1.22 版本中移除广泛使用的 Beta API&lt;/a&gt;
和&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2020/12/02/dockershim-faq/&#34;&gt;弃用 Dockershim&lt;/a&gt; 中吸取的“教训”。&lt;/p&gt;
&lt;!--
Some notable updates, milestones and events since 1.0 include:

* December 2016 - [Kubernetes 1.5](/blog/2016/12/kubernetes-1-5-supporting-production-workloads/) introduces runtime pluggability with initial CRI support and alpha Windows node support. OpenAPI also appears for the first time, paving the way for clients to be able to discover extension APIs.
  * This release also introduced StatefulSets and PodDisruptionBudgets in Beta.
--&gt;
&lt;p&gt;自 1.0 版本以来的一些值得注意的更新、里程碑和事件包括：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;2016 年 12 月 - &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2016/12/kubernetes-1-5-supporting-production-workloads/&#34;&gt;Kubernetes 1.5&lt;/a&gt;
引入了运行时可插拔性，初步支持 CRI 和 Alpha 版 Windows 节点支持。
OpenAPI 也首次出现，为客户端能够发现扩展 API 铺平了道路。
&lt;ul&gt;
&lt;li&gt;此版本还引入了 Beta 版的 StatefulSet 和 PodDisruptionBudget。&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
* April 2017 — [Introduction of Role-Based Access Controls or RBAC](/blog/2017/04/rbac-support-in-kubernetes/).
* June 2017 — In [Kubernetes 1.7](/blog/2017/06/kubernetes-1-7-security-hardening-stateful-application-extensibility-updates/), ThirdPartyResources or &#34;TPRs&#34; are replaced with CustomResourceDefinitions (CRDs).
* December 2017 — [Kubernetes 1.9](/blog/2017/12/kubernetes-19-workloads-expanded-ecosystem/) sees the Workloads API becoming GA (Generally Available). The release blog states: _&#34;Deployment and ReplicaSet, two of the most commonly used objects in Kubernetes, are now stabilized after more than a year of real-world use and feedback.&#34;_
--&gt;
&lt;ul&gt;
&lt;li&gt;2017 年 4 月 — &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2017/04/rbac-support-in-kubernetes/&#34;&gt;引入基于角色的访问控制（RBAC）&lt;/a&gt;。&lt;/li&gt;
&lt;li&gt;2017 年 6 月 — 在 &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2017/06/kubernetes-1-7-security-hardening-stateful-application-extensibility-updates/&#34;&gt;Kubernetes 1.7&lt;/a&gt;
中，ThirdPartyResources 或 &amp;quot;TPRs&amp;quot; 被 CustomResourceDefinitions（CRD）取代。&lt;/li&gt;
&lt;li&gt;2017 年 12 月 — &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2017/12/kubernetes-19-workloads-expanded-ecosystem/&#34;&gt;Kubernetes 1.9&lt;/a&gt; 中，
工作负载 API 成为 GA（正式可用）。发布博客中指出：“Deployment 和 ReplicaSet 是 Kubernetes 中最常用的两个对象，
在经过一年多的实际使用和反馈后，现在已经稳定下来。”&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
* December 2018 — In 1.13, the Container Storage Interface (CSI) reaches GA, kubeadm tool for bootstrapping minimum viable clusters reaches GA, and CoreDNS becomes the default DNS server.
* September 2019 — [Custom Resource Definitions go GA](/blog/2019/09/18/kubernetes-1-16-release-announcement/) in Kubernetes 1.16.
* August 2020 — [Kubernetes 1.19](/blog/2020/08/31/kubernetes-1-19-feature-one-year-support/) increases the support window for releases to 1 year.
* December 2020 — [Dockershim is deprecated](/blog/2020/12/18/kubernetes-1.20-pod-impersonation-short-lived-volumes-in-csi/)  in 1.20
--&gt;
&lt;ul&gt;
&lt;li&gt;2018 年 12 月 — 在 1.13 版本中，容器存储接口（CSI）达到 GA，用于引导最小可用集群的 kubeadm 工具达到 GA，并且 CoreDNS 成为默认的 DNS 服务器。&lt;/li&gt;
&lt;li&gt;2019 年 9 月 — &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2019/09/18/kubernetes-1-16-release-announcement/&#34;&gt;自定义资源定义（Custom Resource Definition）在 Kubernetes 1.16 中正式发布&lt;/a&gt;。&lt;/li&gt;
&lt;li&gt;2020 年 8 月 — &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2020/08/31/kubernetes-1-19-feature-one-year-support/&#34;&gt;Kubernetes 1.19&lt;/a&gt; 将发布支持窗口增加到 1 年。&lt;/li&gt;
&lt;li&gt;2020 年 12 月 — &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2020/12/18/kubernetes-1.20-pod-impersonation-short-lived-volumes-in-csi/&#34;&gt;Dockershim 在 1.20 版本中被弃用&lt;/a&gt;。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
* April 2021 — the [Kubernetes release cadence changes](/blog/2021/07/20/new-kubernetes-release-cadence/#:~:text=On%20April%2023%2C%202021%2C%20the,Kubernetes%20community&#39;s%20contributors%20and%20maintainers.) from 4 releases per year to 3 releases per year.
* July 2021 — Widely used beta APIs are [removed](/blog/2021/07/14/upcoming-changes-in-kubernetes-1-22/)  in Kubernetes 1.22.
* May 2022 — Kubernetes 1.24 sees  [beta APIs become disabled by default](/blog/2022/05/03/kubernetes-1-24-release-announcement/) to reduce upgrade conflicts and removal of [Dockershim](/dockershim), leading to [widespread user confusion](https://www.youtube.com/watch?v=a03Hh1kd6KE) (we&#39;ve since [improved our communication!](https://github.com/kubernetes/community/tree/master/communication/contributor-comms))
* December 2022 — In 1.26, there was a significant batch and  [Job API overhaul](/blog/2022/12/29/scalable-job-tracking-ga/) that paved the way for better support for AI  /ML / batch workloads.
--&gt;
&lt;ul&gt;
&lt;li&gt;2021 年 4 月 - &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2021/07/20/new-kubernetes-release-cadence/#:~:text=On%20April%2023%2C%202021%2C%20the,Kubernetes%20community&#39;s%20contributors%20and%20maintainers.&#34;&gt;Kubernetes 发布节奏变更&lt;/a&gt;，从每年发布 4 个版本变为每年发布 3 个版本。&lt;/li&gt;
&lt;li&gt;2021 年 7 月 - 在 Kubernetes 1.22 中&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2021/07/14/upcoming-changes-in-kubernetes-1-22/&#34;&gt;移除了广泛使用的 Beta API&lt;/a&gt;。&lt;/li&gt;
&lt;li&gt;2022 年 5 月 - 在 Kubernetes 1.24 中，&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2022/05/03/kubernetes-1-24-release-announcement/&#34;&gt;Beta API 默认被禁用&lt;/a&gt;，
以减少升级冲突，并移除了 &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/dockershim&#34;&gt;Dockershim&lt;/a&gt;，导致&lt;a href=&#34;https://www.youtube.com/watch?v=a03Hh1kd6KE&#34;&gt;用户普遍感到困惑&lt;/a&gt;
（我们已经&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/communication/contributor-comms&#34;&gt;改进了我们的沟通方式！&lt;/a&gt;）&lt;/li&gt;
&lt;li&gt;2022 年 12 月 - 在 1.26 版本中，进行了重大的&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2022/12/29/scalable-job-tracking-ga/&#34;&gt;批处理和作业 API 改进&lt;/a&gt;，
为更好地支持 AI/ML/批处理工作负载铺平了道路。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
**PS:** Curious to see how far the project has come for yourself? Check out this [tutorial for spinning up a Kubernetes 1.0 cluster](https://github.com/spurin/kubernetes-v1.0-lab) created by community members Carlos Santana, Amim Moises Salum Knabben, and James Spurin.
--&gt;
&lt;p&gt;&lt;strong&gt;附言:&lt;/strong&gt; 想亲自体会一下这个项目的进展么？可以查看由社区成员 Carlos Santana、Amim Moises Salum Knabben 和 James Spurin
创建的 &lt;a href=&#34;https://github.com/spurin/kubernetes-v1.0-lab&#34;&gt;Kubernetes 1.0 集群搭建教程&lt;/a&gt;。&lt;/p&gt;
&lt;hr&gt;
&lt;!--
Kubernetes offers more extension points than we can count. Originally designed to work with Docker
and only Docker, now you can plug in any container runtime that adheres to the CRI standard. There
are other similar interfaces: CSI for storage and CNI for networking. And that&#39;s far from all you
can do. In the last decade, whole new patterns have emerged, such as using

[Custom Resource Definitions](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
(CRDs) to support third-party controllers - now a huge part of the Kubernetes ecosystem.
--&gt;
&lt;p&gt;Kubernetes 提供的扩展点多得数不胜数。最初设计用于与 Docker 一起工作，现在你可以插入任何符合
CRI 标准的容器运行时。还有其他类似的接口：用于存储的 CSI 和用于网络的 CNI。
而且这还远远不是全部。在过去的十年中，出现了全新的模式，例如使用&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/extend-kubernetes/api-extension/custom-resources/&#34;&gt;自定义资源定义&lt;/a&gt;（CRD）
来支持第三方控制器 - 这现在是 Kubernetes 生态系统的重要组成部分。&lt;/p&gt;
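上文提到的 CRD 扩展模式可以用一个最小的清单来示意（下例中的组名 example.com 与资源类别 Widget 均为假设的示例，并非任何真实项目的 API）：

```yaml
# 一个最小的 CustomResourceDefinition 示例（组名与资源名为假设的示例）
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # 名称必须为 <复数名>.<组名>
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                size:
                  type: integer
```

将此类清单应用到集群后，API 服务器便可以像处理内置资源一样处理 widgets.example.com 资源，而其实际行为由第三方控制器实现。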
&lt;!--
The community building the project has also expanded immensely over the last decade. Using
[DevStats](https://k8s.devstats.cncf.io/d/24/overall-project-statistics?orgId=1), we can see the
incredible volume of contribution over the last decade that has made Kubernetes the
[second-largest open source project in the world](https://www.cncf.io/reports/kubernetes-project-journey-report/):

* **88,474** contributors
* **15,121** code committers
* **4,228,347** contributions
* **158,530** issues
* **311,787** pull requests
--&gt;
&lt;p&gt;在过去十年间，参与构建该项目的社区也得到了巨大的扩展。通过使用
&lt;a href=&#34;https://k8s.devstats.cncf.io/d/24/overall-project-statistics?orgId=1&#34;&gt;DevStats&lt;/a&gt;，我们可以看到过去十年中令人难以置信的贡献量，这使得
Kubernetes 成为了&lt;a href=&#34;https://www.cncf.io/reports/kubernetes-project-journey-report/&#34;&gt;全球第二大开源项目&lt;/a&gt;：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;88,474&lt;/strong&gt; 位贡献者&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;15,121&lt;/strong&gt; 位代码提交者&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;4,228,347&lt;/strong&gt; 次贡献&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;158,530&lt;/strong&gt; 个问题&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;311,787&lt;/strong&gt; 个拉取请求&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
## Kubernetes today

&lt;img src=&#34;welcome.jpg&#34; alt=&#34;KubeCon NA 2023&#34; class=&#34;left&#34; style=&#34;max-width: 20em; margin: 1em&#34;&gt;

Since its early days, the project has seen enormous growth in technical capability, usage, and
contribution. The project is still actively working to improve and better serve its users.
--&gt;
&lt;h2 id=&#34;kubernetes-现状&#34;&gt;Kubernetes 现状&lt;/h2&gt;
&lt;img src=&#34;welcome.jpg&#34; alt=&#34;KubeCon NA 2023&#34; class=&#34;left&#34; style=&#34;max-width: 20em; margin: 1em&#34;&gt;
&lt;p&gt;自项目初期以来，项目在技术能力、使用率和贡献方面取得了巨大的增长。
项目仍在积极努力改进并更好地为用户服务。&lt;/p&gt;
&lt;!--
In the upcoming 1.31 release, the project will celebrate the culmination of an important long-term
project: the removal of in-tree cloud provider code. In this
[largest migration in Kubernetes history](/blog/2024/05/20/completing-cloud-provider-migration/),
roughly 1.5 million lines of code have been removed, reducing the binary sizes of core components
by approximately 40%. In the project&#39;s early days, it was clear that extensibility would be key to
success. However, it wasn&#39;t always clear how that extensibility should be achieved. This migration
removes a variety of vendor-specific capabilities from the core Kubernetes code
base. Vendor-specific capabilities can now be better served by other pluggable extensibility
features or patterns, such as
[Custom Resource Definitions (CRDs)](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
or API standards like the [Gateway API](https://gateway-api.sigs.k8s.io/).
Kubernetes also faces new challenges in serving its vast user base, and the community is adapting
accordingly. One example of this is the migration of image hosting to the new, community-owned
registry.k8s.io. The egress bandwidth and costs of providing pre-compiled binary images for user
consumption have become immense. This new registry change enables the community to continue
providing these convenient images in more cost- and performance-efficient ways. Make sure you check
out the [blog post](/blog/2022/11/28/registry-k8s-io-faster-cheaper-ga/) and
update any automation you have to use registry.k8s.io!
--&gt;
&lt;p&gt;在即将发布的 1.31 版本中，该项目将迎来一项重要的长期工作的完成：移除树内云驱动代码。在这个
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2024/05/20/completing-cloud-provider-migration/&#34;&gt;Kubernetes 历史上最大的迁移&lt;/a&gt;中，大约删除了
150 万行代码，将核心组件的二进制文件大小减小了约 40%。在项目早期，很明显可扩展性是成功的关键。
然而，如何实现这种可扩展性并不总是很清楚。此次迁移从核心 Kubernetes 代码库中删除了各种特定于供应商的功能。
现在，特定于供应商的功能可以通过其他可插拔的扩展功能或模式更好地提供，例如&lt;a href=&#34;https://kubernetes.io/zh-cn/docs/concepts/extend-kubernetes/api-extension/custom-resources/&#34;&gt;自定义资源定义（CRD）&lt;/a&gt;
或 &lt;a href=&#34;https://gateway-api.sigs.k8s.io/&#34;&gt;Gateway API&lt;/a&gt; 等 API 标准。
Kubernetes 在为其庞大的用户群体提供服务时也面临着新的挑战，社区正在相应地进行调整。其中一个例子是将镜像托管迁移到新的、由社区拥有的
registry.k8s.io。为用户提供预编译二进制镜像的出口带宽和成本已经变得非常巨大。这一新的仓库变更使社区能够以更具成本效益和性能高效的方式继续提供这些便利的镜像。
请务必查看&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2022/11/28/registry-k8s-io-faster-cheaper-ga/&#34;&gt;此博客文章&lt;/a&gt;，并更新你的所有自动化设施，使其使用 registry.k8s.io！&lt;/p&gt;
&lt;!--
## The future of Kubernetes

&lt;img src=&#34;lts.jpg&#34; alt=&#34;&#34; class=&#34;right&#34; width=&#34;300px&#34; style=&#34;max-width: 20em; margin: 1em&#34;&gt;

A decade in, the future of Kubernetes still looks bright. The community is prioritizing changes that
both improve the user experiences, and enhance the sustainability of the project. The world of
application development continues to evolve, and Kubernetes is poised to change along with it.
--&gt;
&lt;h2 id=&#34;kubernetes-的未来&#34;&gt;Kubernetes 的未来&lt;/h2&gt;
&lt;img src=&#34;lts.jpg&#34; alt=&#34;&#34; class=&#34;right&#34; width=&#34;300px&#34; style=&#34;max-width: 20em; margin: 1em&#34;&gt;
&lt;p&gt;十年过去了，Kubernetes 的未来依然光明。社区正在优先考虑改进用户体验和增强项目可持续性的变革。
应用程序开发的世界不断演变，Kubernetes 正准备随之变化。&lt;/p&gt;
&lt;!--
In 2024, the advent of AI changed a once-niche workload type into one of prominent
importance. Distributed computing and workload scheduling has always gone hand-in-hand with the
resource-intensive needs of Artificial Intelligence, Machine Learning, and High Performance
Computing workloads. Contributors are paying close attention to the needs of newly developed
workloads and how Kubernetes can best serve them. The new
[Serving Working Group](https://github.com/kubernetes/community/tree/master/wg-serving) is one
example of how the community is organizing to address these workloads&#39; needs. It&#39;s likely that the
next few years will see improvements to Kubernetes&#39; ability to manage various types of hardware, and
its ability to manage the scheduling of large batch-style workloads which are run across hardware in
chunks.
--&gt;
&lt;p&gt;2024 年，人工智能的兴起使一种曾经小众的工作负载类型变得举足轻重。
分布式计算和工作负载调度一直与人工智能、机器学习和高性能计算工作负载的资源密集需求密切相关。
贡献者们密切关注新开发的工作负载的需求以及 Kubernetes 如何为它们提供最佳服务。新成立的
&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/wg-serving&#34;&gt;Serving 工作组&lt;/a&gt;
就是社区组织来解决这些工作负载需求的一个例子。未来几年可能会看到
Kubernetes 在管理各种类型的硬件以及管理跨硬件运行的大型批处理工作负载的调度能力方面的改进。&lt;/p&gt;
&lt;!--
The ecosystem around Kubernetes will continue to grow and evolve. In the future, initiatives to
maintain the sustainability of the project, like the migration of in-tree vendor code and the
registry change, will be ever more important.
--&gt;
&lt;p&gt;围绕 Kubernetes 的生态系统将继续发展壮大。未来，为了保持项目的可持续性，
像树内供应商代码的迁移和镜像仓库变更这样的举措将变得愈发重要。&lt;/p&gt;
&lt;!--
The next 10 years of Kubernetes will be guided by its users and the ecosystem, but most of all, by
the people who contribute to it. The community remains open to new contributors. You can find more
information about contributing in our New Contributor Course at
[https://k8s.dev/docs/onboarding](https://k8s.dev/docs/onboarding).

We look forward to building the future of Kubernetes with you!



&lt;figure&gt;
    &lt;img src=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2024/06/06/10-years-of-kubernetes/kcsna2023.jpg&#34;
         alt=&#34;KCSNA 2023&#34;/&gt; 
&lt;/figure&gt;
--&gt;
&lt;p&gt;Kubernetes 的未来 10 年将由其用户和生态系统引领，但最重要的是，由为其做出贡献的人引领。
社区对新贡献者持开放态度。你可以在我们的新贡献者课程
&lt;a href=&#34;https://k8s.dev/docs/onboarding&#34;&gt;https://k8s.dev/docs/onboarding&lt;/a&gt; 中找到更多有关贡献的信息。&lt;/p&gt;
&lt;p&gt;我们期待与你一起构建 Kubernetes 的未来！&lt;/p&gt;


&lt;figure&gt;
    &lt;img src=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2024/06/06/10-years-of-kubernetes/kcsna2023.jpg&#34;
         alt=&#34;KCSNA 2023&#34;/&gt; 
&lt;/figure&gt;

      </description>
    </item>
    
    <item>
      <title>完成 Kubernetes 史上最大规模迁移</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2024/05/20/completing-cloud-provider-migration/</link>
      <pubDate>Mon, 20 May 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2024/05/20/completing-cloud-provider-migration/</guid>
      <description>
        
        
        &lt;!--
layout: blog
title: &#39;Completing the largest migration in Kubernetes history&#39;
date: 2024-05-20
slug: completing-cloud-provider-migration
author: &gt;
  Andrew Sy Kim (Google),
  Michelle Au (Google),
  Walter Fender (Google),
  Michael McCune (Red Hat)
--&gt;
&lt;!--
Since as early as Kubernetes v1.7, the Kubernetes project has pursued the ambitious goal of removing built-in cloud provider integrations ([KEP-2395](https://github.com/kubernetes/enhancements/blob/master/keps/sig-cloud-provider/2395-removing-in-tree-cloud-providers/README.md)).
While these integrations were instrumental in Kubernetes&#39; early development and growth, their removal was driven by two key factors:
the growing complexity of maintaining native support for every cloud provider across millions of lines of Go code, and the desire to establish
Kubernetes as a truly vendor-neutral platform.
--&gt;
&lt;p&gt;早自 Kubernetes v1.7 起，Kubernetes 项目就开始追求移除内置云驱动集成这一宏伟目标
（&lt;a href=&#34;https://github.com/kubernetes/enhancements/blob/master/keps/sig-cloud-provider/2395-removing-in-tree-cloud-providers/README.md&#34;&gt;KEP-2395&lt;/a&gt;）。
虽然这些集成对于 Kubernetes 的早期发展和增长发挥了重要作用，但它们的移除是由两个关键因素驱动的：
为各云驱动维护数百万行 Go 代码的原生支持所带来的日趋增长的复杂度，以及将 Kubernetes 打造为真正供应商中立的平台的愿景。&lt;/p&gt;
&lt;!--
After many releases, we&#39;re thrilled to announce that all cloud provider integrations have been successfully migrated from the core Kubernetes repository to external plugins.
In addition to achieving our initial objectives, we&#39;ve also significantly streamlined Kubernetes by removing roughly 1.5 million lines of code and reducing the binary sizes of core components by approximately 40%.
--&gt;
&lt;p&gt;历经很多发布版本之后，我们很高兴地宣布所有云驱动集成组件已被成功地从核心 Kubernetes 仓库迁移到外部插件中。
除了实现我们最初的目标之外，我们还通过删除大约 150 万行代码，将核心组件的可执行文件大小减少了大约 40%，
极大简化了 Kubernetes。&lt;/p&gt;
&lt;!--
This migration was a complex and long-running effort due to the numerous impacted components and the critical code paths that relied on the built-in integrations for the
five initial cloud providers: Google Cloud, AWS, Azure, OpenStack, and vSphere. To successfully complete this migration, we had to build four new subsystems from the ground up:
--&gt;
&lt;p&gt;由于受影响的组件众多，而且关键代码路径依赖于五个初始云驱动（Google Cloud、AWS、Azure、OpenStack 和 vSphere）
的内置集成，因此此次迁移是一项复杂且耗时的工作。
为了成功完成此迁移，我们必须从头开始构建四个新的子系统：&lt;/p&gt;
&lt;!--
1. **Cloud controller manager** ([KEP-2392](https://github.com/kubernetes/enhancements/blob/master/keps/sig-cloud-provider/2392-cloud-controller-manager/README.md))
1. **API server network proxy** ([KEP-1281](https://github.com/kubernetes/enhancements/tree/master/keps/sig-api-machinery/1281-network-proxy))
1. **kubelet credential provider plugins** ([KEP-2133](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2133-kubelet-credential-providers))
1. **Storage migration to use [CSI](https://github.com/container-storage-interface/spec?tab=readme-ov-file#container-storage-interface-csi-specification-)** ([KEP-625](https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/625-csi-migration/README.md))
--&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;云控制器管理器（Cloud controller manager）&lt;/strong&gt;（&lt;a href=&#34;https://github.com/kubernetes/enhancements/blob/master/keps/sig-cloud-provider/2392-cloud-controller-manager/README.md&#34;&gt;KEP-2392&lt;/a&gt;）&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;API 服务器网络代理&lt;/strong&gt;（&lt;a href=&#34;https://github.com/kubernetes/enhancements/tree/master/keps/sig-api-machinery/1281-network-proxy&#34;&gt;KEP-1281&lt;/a&gt;）&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;kubelet 凭证提供程序插件&lt;/strong&gt;（&lt;a href=&#34;https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2133-kubelet-credential-providers&#34;&gt;KEP-2133&lt;/a&gt;）&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;存储迁移以使用 &lt;a href=&#34;https://github.com/container-storage-interface/spec?tab=readme-ov-file#container-storage-interface-csi-specification-&#34;&gt;CSI&lt;/a&gt;&lt;/strong&gt;（&lt;a href=&#34;https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/625-csi-migration/README.md&#34;&gt;KEP-625&lt;/a&gt;）&lt;/li&gt;
&lt;/ol&gt;
&lt;!--
Each subsystem was critical to achieve full feature parity with built-in capabilities and required several releases to bring each subsystem to GA-level maturity with a safe and
reliable migration path. More on each subsystem below.
--&gt;
&lt;p&gt;要实现与内置功能的完全特性对等，每个子系统都至关重要，
并且需要多个发布版本的迭代才能使每个子系统达到 GA 级别的成熟度，并具备安全可靠的迁移路径。
下面详细介绍每个子系统。&lt;/p&gt;
&lt;!--
### Cloud controller manager

The cloud controller manager was the first external component introduced in this effort, replacing functionality within the kube-controller-manager and kubelet that directly interacted with cloud APIs.
This essential component is responsible for initializing nodes by applying metadata labels that indicate the cloud region and zone a Node is running on, as well as IP addresses that are only known to the cloud provider.
Additionally, it runs the service controller, which is responsible for provisioning cloud load balancers for Services of type LoadBalancer.
--&gt;
&lt;h3 id=&#34;云控制器管理器&#34;&gt;云控制器管理器&lt;/h3&gt;
&lt;p&gt;云控制器管理器是这项工作中引入的第一个外部组件，取代了 kube-controller-manager 和 kubelet 中直接与云 API 交互的功能。
这个基本组件负责通过施加元数据标签来初始化节点。所施加的元数据标签标示节点运行所在的云区域和可用区，
以及只有云驱动知道的 IP 地址。
此外，它还运行服务控制器，该控制器负责为 LoadBalancer 类型的 Service 配置云负载均衡器。&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/images/docs/components-of-kubernetes.svg&#34; alt=&#34;Kubernetes 组件&#34;&gt;&lt;/p&gt;
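以上文的服务控制器为例：当用户创建 LoadBalancer 类型的 Service 时，云控制器管理器会调用相应的云 API 置备外部负载均衡器。下面是一个示意清单（名称和端口均为假设的示例）：

```yaml
# LoadBalancer 类型的 Service 示例（名称与端口为假设的示例）
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer   # 由云控制器管理器中的服务控制器置备云负载均衡器
  selector:
    app: my-app        # 将流量转发到带有此标签的 Pod
  ports:
    - port: 80         # 负载均衡器对外暴露的端口
      targetPort: 8080 # Pod 上的目标端口
```

置备完成后，云驱动分配的外部地址会回填到该 Service 的 status.loadBalancer 字段中。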
&lt;!--
To learn more, read [Cloud Controller Manager](/docs/concepts/architecture/cloud-controller/) in the Kubernetes documentation.
--&gt;
&lt;p&gt;要进一步了解相关信息，请阅读 Kubernetes 文档中的&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/concepts/architecture/cloud-controller/&#34;&gt;云控制器管理器&lt;/a&gt;。&lt;/p&gt;
&lt;!--
### API server network proxy

The API Server Network Proxy project, initiated in 2018 in collaboration with SIG API Machinery, aimed to replace the SSH tunneler functionality within the kube-apiserver.
This tunneler had been used to securely proxy traffic between the Kubernetes control plane and nodes, but it heavily relied on provider-specific implementation details embedded in the kube-apiserver to establish these SSH tunnels.
--&gt;
&lt;h3 id=&#34;api-服务器网络代理&#34;&gt;API 服务器网络代理&lt;/h3&gt;
&lt;p&gt;API 服务器网络代理项目于 2018 年与 SIG API Machinery 合作启动，旨在取代 kube-apiserver 中的 SSH 隧道功能。
该隧道器原用于安全地代理 Kubernetes 控制平面和节点之间的流量，但它重度依赖于
kube-apiserver 中所嵌入的、特定于提供商的实现细节来建立这些 SSH 隧道。&lt;/p&gt;
&lt;!--
Now, the API Server Network Proxy is a GA-level extension point within the kube-apiserver. It offers a generic proxying mechanism that can route traffic from the API server to nodes through a secure proxy,
eliminating the need for the API server to have any knowledge of the specific cloud provider it is running on. This project also introduced the Konnectivity project, which has seen growing adoption in production environments.
--&gt;
&lt;p&gt;现在，API 服务器网络代理已成为 kube-apiserver 中 GA 级别的扩展点。
它提供了一种通用代理机制，可以通过一个安全的代理将流量从 API 服务器路由到节点，
从而使 API 服务器无需了解其运行所在的特定云驱动。
此项目还引入了 Konnectivity 项目，该项目在生产环境中的采用日益增多。&lt;/p&gt;
&lt;!--
You can learn more about the API Server Network Proxy from its [README](https://github.com/kubernetes-sigs/apiserver-network-proxy#readme).
--&gt;
&lt;p&gt;你可以在其 &lt;a href=&#34;https://github.com/kubernetes-sigs/apiserver-network-proxy#readme&#34;&gt;README&lt;/a&gt;
中了解有关 API 服务器网络代理的更多信息。&lt;/p&gt;
&lt;!--
### Credential provider plugins for the kubelet

The Kubelet credential provider plugin was developed to replace the kubelet&#39;s built-in functionality for dynamically fetching credentials for image registries hosted on Google Cloud, AWS, or Azure.
The legacy capability was convenient as it allowed the kubelet to seamlessly retrieve short-lived tokens for pulling images from GCR, ECR, or ACR. However, like other areas of Kubernetes, supporting
this required the kubelet to have specific knowledge of different cloud environments and APIs.
--&gt;
&lt;h3 id=&#34;kubelet-的凭据提供程序插件&#34;&gt;kubelet 的凭据提供程序插件&lt;/h3&gt;
&lt;p&gt;kubelet 凭据提供程序插件的开发是为了取代 kubelet 中内置的、为托管在
Google Cloud、AWS 或 Azure 上的镜像仓库动态获取凭据的功能。
原有功能很方便，因为它允许 kubelet 无缝地获取短期令牌，以便从 GCR、ECR 或 ACR 拉取镜像。
然而，与 Kubernetes 的其他领域一样，支持这一点需要 kubelet 具备不同云环境和 API 的特定知识。&lt;/p&gt;
&lt;!--
Introduced in 2019, the credential provider plugin mechanism offers a generic extension point for the kubelet to execute plugin binaries that dynamically provide credentials for images hosted on various clouds.
This extensibility expands the kubelet&#39;s capabilities to fetch short-lived tokens beyond the initial three cloud providers.
--&gt;
&lt;p&gt;凭据提供程序插件机制于 2019 年推出，为 kubelet 提供了一个通用扩展点，用于执行插件可执行文件，
进而为各种云上托管的镜像动态提供凭据。
这种可扩展性使 kubelet 获取短期令牌的能力不再局限于最初的三个云驱动。&lt;/p&gt;
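作为示意，kubelet 可通过 --image-credential-provider-config 标志指向类似下面的配置文件（其中的插件名称与镜像匹配模式均为假设的示例）：

```yaml
# kubelet 凭据提供程序配置示意（插件名称与匹配模式为假设的示例）
apiVersion: kubelet.config.k8s.io/v1
kind: CredentialProviderConfig
providers:
  - name: example-credential-provider  # 插件可执行文件的名称
    matchImages:
      - "*.registry.example.com"       # 需要此插件提供凭据的镜像前缀
    defaultCacheDuration: "12h"        # 凭据的缓存时长
    apiVersion: credentialprovider.kubelet.k8s.io/v1
```

当要拉取的镜像与 matchImages 匹配时，kubelet 会执行该插件并使用其返回的短期凭据完成拉取。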
&lt;!--
To learn more, read [kubelet credential provider for authenticated image pulls](/docs/concepts/containers/images/#kubelet-credential-provider).
--&gt;
&lt;p&gt;要了解更多信息，请阅读&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/concepts/containers/images/#kubelet-credential-provider&#34;&gt;用于认证镜像拉取的 kubelet 凭据提供程序&lt;/a&gt;。&lt;/p&gt;
&lt;!--
### Storage plugin migration from in-tree to CSI

The Container Storage Interface (CSI) is a control plane standard for managing block and file storage systems in Kubernetes and other container orchestrators that went GA in 1.13.
It was designed to replace the in-tree volume plugins built directly into Kubernetes with drivers that can run as Pods within the Kubernetes cluster.
These drivers communicate with kube-controller-manager storage controllers via the Kubernetes API, and with kubelet through a local gRPC endpoint.
Now there are over 100 CSI drivers available across all major cloud and storage vendors, making stateful workloads in Kubernetes a reality.
--&gt;
&lt;h3 id=&#34;存储插件从树内迁移到-csi&#34;&gt;存储插件从树内迁移到 CSI&lt;/h3&gt;
&lt;p&gt;容器存储接口（Container Storage Interface，CSI）是一种控制平面标准，用于管理 Kubernetes
和其他容器编排系统中的块和文件存储系统，已在 1.13 中进入正式发布状态。
它的设计目标是用可在 Kubernetes 集群中 Pod 内运行的驱动程序替换直接内置于 Kubernetes 中的树内卷插件。
这些驱动程序通过 Kubernetes API 与 kube-controller-manager 存储控制器通信，并通过本地 gRPC 端点与 kubelet 进行通信。
现在，所有主要云和存储供应商一起提供了 100 多个 CSI 驱动，使 Kubernetes 中运行有状态工作负载成为现实。&lt;/p&gt;
&lt;!--
However, a major challenge remained on how to handle all the existing users of in-tree volume APIs. To retain API backwards compatibility,
we built an API translation layer into our controllers that will convert the in-tree volume API into the equivalent CSI API. This allowed us to redirect all storage operations to the CSI driver,
paving the way for us to remove the code for the built-in volume plugins without removing the API.
--&gt;
&lt;p&gt;然而，如何处理树内卷 API 的所有现有用户仍然是一个重大挑战。
为了保持 API 向后兼容性，我们在控制器中构建了一个 API 转换层，把树内卷 API 转换为等效的 CSI API。
这使我们能够将所有存储操作重定向到 CSI 驱动程序，为我们在不删除 API 的情况下删除内置卷插件的代码铺平了道路。&lt;/p&gt;
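作为示意，下面的 StorageClass 仍使用树内制备器名称；在启用了 CSI 迁移之后，转换层会把针对它的存储操作重定向到等效的 CSI 驱动（此处以 GCE PD 插件为例，仅作说明）：

```yaml
# 使用树内制备器名称的 StorageClass：
# 启用 CSI 迁移后，存储操作会被转换层重定向到等效的 CSI 驱动
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/gce-pd  # 树内插件名，迁移后由 pd.csi.storage.gke.io 处理
parameters:
  type: pd-standard
```

正因为有这一转换层，现有的 PersistentVolume 和 StorageClass 对象无需修改即可继续工作。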
&lt;!--
You can learn more about In-tree Storage migration in [Kubernetes In-Tree to CSI Volume Migration Moves to Beta](https://kubernetes.io/blog/2019/12/09/kubernetes-1-17-feature-csi-migration-beta/).
--&gt;
&lt;p&gt;你可以在 &lt;a href=&#34;https://kubernetes.io/blog/2019/12/09/kubernetes-1-17-feature-csi-migration-beta/&#34;&gt;Kubernetes 树内卷到 CSI 卷的迁移进入 Beta 阶段&lt;/a&gt;中了解有关树内存储迁移的更多信息。&lt;/p&gt;
&lt;!--
## What&#39;s next?

This migration has been the primary focus for SIG Cloud Provider over the past few years. With this significant milestone achieved, we will be shifting our efforts towards exploring new
and innovative ways for Kubernetes to better integrate with cloud providers, leveraging the external subsystems we&#39;ve built over the years. This includes making Kubernetes smarter in
hybrid environments where nodes in the cluster can run on both public and private clouds, as well as providing better tools and frameworks for developers of external providers to simplify and streamline their integration efforts.
--&gt;
&lt;h2 id=&#34;下一步是什么&#34;&gt;下一步是什么？&lt;/h2&gt;
&lt;p&gt;过去几年，这一迁移工程一直是 SIG Cloud Provider 的主要关注点。
随着这一重要里程碑的实现，我们将把努力转向探索新的创新方法，让 Kubernetes 更好地与云驱动集成，利用我们多年来构建的外部子系统。
这包括使 Kubernetes 在混合环境中变得更加智能，其集群中的节点可以运行在公共云和私有云上，
以及为外部驱动的开发人员提供更好的工具和框架，以简化他们的集成工作，提高效率。&lt;/p&gt;
&lt;!--
With all the new features, tools, and frameworks being planned, SIG Cloud Provider is not forgetting about the other side of the equation: testing. Another area of focus for the SIG&#39;s future activities is the improvement of
cloud controller testing to include more providers. The ultimate goal of this effort being to create a testing framework that will include as many providers as possible so that we give the Kubernetes community the highest
levels of confidence about their Kubernetes environments.
--&gt;
&lt;p&gt;在规划所有这些新特性、工具和框架的同时，SIG Cloud Provider 并没有忘记另一项同样重要的工作：测试。
SIG 未来活动的另一个重点领域是改进云控制器测试以涵盖更多的驱动。
这项工作的最终目标是创建一个包含尽可能多驱动的测试框架，以便我们让 Kubernetes 社区对其 Kubernetes 环境充满信心。&lt;/p&gt;
&lt;!--
If you&#39;re using a version of Kubernetes older than v1.29 and haven&#39;t migrated to an external cloud provider yet, we recommend checking out our previous blog post [Kubernetes 1.29: Cloud Provider Integrations Are Now Separate Components](/blog/2023/12/14/cloud-provider-integration-changes/).
It provides detailed information on the changes we&#39;ve made and offers guidance on how to migrate to an external provider.
Starting in v1.31, in-tree cloud providers will be permanently disabled and removed from core Kubernetes components.
--&gt;
&lt;p&gt;如果你使用的 Kubernetes 版本早于 v1.29 并且尚未迁移到外部云驱动，我们建议你查阅我们之前的博客文章
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/12/14/cloud-provider-integration-changes/&#34;&gt;Kubernetes 1.29：云驱动集成现在是单独的组件&lt;/a&gt;。
该博客包含与我们所作的变更相关的详细信息，并提供了有关如何迁移到外部驱动的指导。
从 v1.31 开始，树内云驱动将被永久禁用并从核心 Kubernetes 组件中删除。&lt;/p&gt;
&lt;!--
If you’re interested in contributing, come join our [bi-weekly SIG meetings](https://github.com/kubernetes/community/tree/master/sig-cloud-provider#meetings)!
--&gt;
&lt;p&gt;如果你有兴趣做出贡献，请参加我们的&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-cloud-provider#meetings&#34;&gt;每两周一次的 SIG 会议&lt;/a&gt;!&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Gateway API v1.1：服务网格、GRPCRoute 和更多变化</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2024/05/09/gateway-api-v1-1/</link>
      <pubDate>Thu, 09 May 2024 09:00:00 -0800</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2024/05/09/gateway-api-v1-1/</guid>
      <description>
        
        
        &lt;!--
layout: blog
title: &#34;Gateway API v1.1: Service mesh, GRPCRoute, and a whole lot more&#34;
date: 2024-05-09T09:00:00-08:00
slug: gateway-api-v1-1
author: &gt;
  [Richard Belleville](https://github.com/gnossen) (Google),
  [Frank Budinsky](https://github.com/frankbu) (IBM),
  [Arko Dasgupta](https://github.com/arkodg) (Tetrate),
  [Flynn](https://github.com/kflynn) (Buoyant),
  [Candace Holman](https://github.com/candita) (Red Hat),
  [John Howard](https://github.com/howardjohn) (Solo.io),
  [Christine Kim](https://github.com/xtineskim) (Isovalent),
  [Mattia Lavacca](https://github.com/mlavacca) (Kong),
  [Keith Mattix](https://github.com/keithmattix) (Microsoft),
  [Mike Morris](https://github.com/mikemorris) (Microsoft),
  [Rob Scott](https://github.com/robscott) (Google),
  [Grant Spence](https://github.com/gcs278) (Red Hat),
  [Shane Utt](https://github.com/shaneutt) (Kong),
  [Gina Yeh](https://github.com/ginayeh) (Google),
  and other review and release note contributors
--&gt;
&lt;p&gt;&lt;img src=&#34;gateway-api-logo.svg&#34; alt=&#34;Gateway API logo&#34;&gt;&lt;/p&gt;
&lt;!--
Following the GA release of Gateway API last October, Kubernetes
SIG Network is pleased to announce the v1.1 release of
[Gateway API](https://gateway-api.sigs.k8s.io/). In this release, several features are graduating to
_Standard Channel_ (GA), notably including support for service mesh and
GRPCRoute. We&#39;re also introducing some new experimental features, including
session persistence and client certificate verification.
--&gt;
&lt;p&gt;继去年十月正式发布 Gateway API 之后，Kubernetes SIG Network 现在又很高兴地宣布
&lt;a href=&#34;https://gateway-api.sigs.k8s.io/&#34;&gt;Gateway API&lt;/a&gt; v1.1 版本发布。
在本次发布中，有几个特性进阶至&lt;strong&gt;标准渠道&lt;/strong&gt;（GA），其中特别值得关注的是对服务网格和 GRPCRoute 的支持。
我们还引入了一些新的实验性特性，包括会话持久性和客户端证书验证。&lt;/p&gt;
&lt;!--
## What&#39;s new

### Graduation to Standard
--&gt;
&lt;h2 id=&#34;whats-new&#34;&gt;新内容  &lt;/h2&gt;
&lt;h3 id=&#34;graduation-to-standard&#34;&gt;进阶至标准渠道  &lt;/h3&gt;
&lt;!--
This release includes the graduation to Standard of four eagerly awaited features.
This means they are no longer experimental concepts; inclusion in the Standard
release channel denotes a high level of confidence in the API surface and
provides guarantees of backward compatibility. Of course, as with any other
Kubernetes API, Standard Channel features can continue to evolve with
backward-compatible additions over time, and we certainly expect further
refinements and improvements to these new features in the future.
For more information on how all of this works, refer to the
[Gateway API Versioning Policy](https://gateway-api.sigs.k8s.io/concepts/versioning/).
--&gt;
&lt;p&gt;本次发布有四个备受期待的特性进阶至标准渠道。这意味着它们不再是实验性的概念；
纳入标准发布渠道表明社区对这些 API 接口有高度的信心，并为其提供向后兼容的保证。
当然，与所有其他 Kubernetes API 一样，标准渠道的特性可以随着时间的推移通过向后兼容的方式演进，
我们当然期待未来对这些新特性有进一步的优化和改进。
有关细节请参阅 &lt;a href=&#34;https://gateway-api.sigs.k8s.io/concepts/versioning/&#34;&gt;Gateway API 版本控制政策&lt;/a&gt;。&lt;/p&gt;
&lt;!--
#### [Service Mesh Support](https://gateway-api.sigs.k8s.io/mesh/)

Service mesh support in Gateway API allows service mesh users to use the same
API to manage ingress traffic and mesh traffic, reusing the same policy and
routing interfaces. In Gateway API v1.1, routes (such as HTTPRoute) can now have
a Service as a `parentRef`, to control how traffic to specific services behaves.
For more information, read the
[Gateway API service mesh documentation](https://gateway-api.sigs.k8s.io/mesh/)
or see the
[list of Gateway API implementations](https://gateway-api.sigs.k8s.io/implementations/#service-mesh-implementation-status).
--&gt;
&lt;h4 id=&#34;服务网格支持-https-gateway-api-sigs-k8s-io-mesh&#34;&gt;&lt;a href=&#34;https://gateway-api.sigs.k8s.io/mesh/&#34;&gt;服务网格支持&lt;/a&gt;&lt;/h4&gt;
&lt;p&gt;在 Gateway API 中支持服务网格意味着允许服务网格用户使用相同的 API 来管理 Ingress 流量和网格流量，
能够重用相同的策略和路由接口。在 Gateway API v1.1 中，路由（如 HTTPRoute）现在可以将一个 Service 作为 &lt;code&gt;parentRef&lt;/code&gt;，
以控制到特定服务的流量行为。有关细节请查阅
&lt;a href=&#34;https://gateway-api.sigs.k8s.io/mesh/&#34;&gt;Gateway API 服务网格文档&lt;/a&gt;或
&lt;a href=&#34;https://gateway-api.sigs.k8s.io/implementations/#service-mesh-implementation-status&#34;&gt;Gateway API 实现列表&lt;/a&gt;。&lt;/p&gt;
&lt;!--
As an example, one could do a canary deployment of a workload deep in an
application&#39;s call graph with an HTTPRoute as follows:
--&gt;
&lt;p&gt;例如，你可以使用如下 HTTPRoute，对应用调用图深处的某个工作负载进行金丝雀部署：&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;gateway.networking.k8s.io/v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;HTTPRoute&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;color-canary&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;namespace&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;faces&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;parentRefs&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;color&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Service&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;group&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;port&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;80&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;rules&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;backendRefs&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;color&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;port&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;80&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;weight&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;50&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;color2&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;port&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;80&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;weight&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;50&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
This would split traffic sent to the `color` Service in the `faces` namespace
50/50 between the original `color` Service and the `color2` Service, using a
portable configuration that&#39;s easy to move from one mesh to another.
--&gt;
&lt;p&gt;通过使用一种便于从一个网格迁移到另一个网格的可移植配置，
此 HTTPRoute 对象将把发送到 &lt;code&gt;faces&lt;/code&gt; 命名空间中的 &lt;code&gt;color&lt;/code&gt; Service 的流量按 50/50
拆分到原始的 &lt;code&gt;color&lt;/code&gt; Service 和 &lt;code&gt;color2&lt;/code&gt; Service 上。&lt;/p&gt;
&lt;!--
#### [GRPCRoute](https://gateway-api.sigs.k8s.io/guides/grpc-routing/)

If you are already using the experimental version of GRPCRoute, we recommend holding
off on upgrading to the standard channel version of GRPCRoute until the
controllers you&#39;re using have been updated to support GRPCRoute v1. Until then,
it is safe to upgrade to the experimental channel version of GRPCRoute in v1.1
that includes both v1alpha2 and v1 API versions.
--&gt;
&lt;h4 id=&#34;grpcroute-https-gateway-api-sigs-k8s-io-guides-grpc-routing&#34;&gt;&lt;a href=&#34;https://gateway-api.sigs.k8s.io/guides/grpc-routing/&#34;&gt;GRPCRoute&lt;/a&gt;&lt;/h4&gt;
&lt;p&gt;如果你已经在使用实验性版本的 GRPCRoute，我们建议你暂缓升级到标准渠道版本的 GRPCRoute，
直到你所使用的控制器已更新为支持 GRPCRoute v1。
在此之前，你可以安全地升级到 v1.1 中实验性渠道版本的 GRPCRoute，该版本同时包含 v1alpha2 和 v1 两个 API 版本。&lt;/p&gt;
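&lt;p&gt;作为参考，下面是一个示意性的 v1 版本 GRPCRoute 清单（其中的 Gateway 名称、gRPC 服务与后端 Service 均为假设值），展示其基本结构：&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;apiVersion: gateway.networking.k8s.io/v1
kind: GRPCRoute
metadata:
  name: user-grpc-route
spec:
  parentRefs:
  - name: example-gateway
  rules:
  - matches:
    - method:
        service: com.example.User
        method: Login
    backendRefs:
    - name: user-service
      port: 50051
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;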
&lt;!--
#### [ParentReference Port](https://gateway-api.sigs.k8s.io/reference/spec/#gateway.networking.k8s.io%2fv1.ParentReference)

The `port` field was added to ParentReference, allowing you to attach resources
to Gateway Listeners, Services, or other parent resources
(depending on the implementation). Binding to a port also allows you to attach
to multiple Listeners at once.
--&gt;
&lt;h4 id=&#34;parentreference-端口-https-gateway-api-sigs-k8s-io-reference-spec-gateway-networking-k8s-io-2fv1-parentreference&#34;&gt;&lt;a href=&#34;https://gateway-api.sigs.k8s.io/reference/spec/#gateway.networking.k8s.io%2fv1.ParentReference&#34;&gt;ParentReference 端口&lt;/a&gt;&lt;/h4&gt;
&lt;p&gt;&lt;code&gt;port&lt;/code&gt; 字段已被添加到 ParentReference 中，
允许你将资源挂接到 Gateway 监听器、Service 或其他父资源（取决于实现）。
绑定到某个端口还允许你一次挂接到多个监听器。&lt;/p&gt;
&lt;!--
For example, you can attach an HTTPRoute to one or more specific Listeners of a
Gateway as specified by the Listener `port`, instead of the Listener `name` field.

For more information, see
[Attaching to Gateways](https://gateway-api.sigs.k8s.io/api-types/httproute/#attaching-to-gateways).
--&gt;
&lt;p&gt;例如，你可以将 HTTPRoute 挂接到由监听器 &lt;code&gt;port&lt;/code&gt; 而不是监听器 &lt;code&gt;name&lt;/code&gt; 字段所指定的一个或多个特定监听器。&lt;/p&gt;
&lt;p&gt;有关细节请参阅&lt;a href=&#34;https://gateway-api.sigs.k8s.io/api-types/httproute/#attaching-to-gateways&#34;&gt;挂接到 Gateways&lt;/a&gt;。&lt;/p&gt;
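&lt;p&gt;下面是一个示意性的清单片段（其中的 Gateway 名称、端口和后端 Service 均为假设值），展示如何通过监听器 &lt;code&gt;port&lt;/code&gt; 而不是 &lt;code&gt;name&lt;/code&gt; 来挂接 HTTPRoute：&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: example-route
spec:
  parentRefs:
  - name: example-gateway
    port: 443   # 挂接到该 Gateway 上所有使用 443 端口的监听器
  rules:
  - backendRefs:
    - name: example-svc
      port: 8080
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;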
&lt;!--
#### [Conformance Profiles and Reports](https://gateway-api.sigs.k8s.io/concepts/conformance/#conformance-profiles)

The conformance report API has been expanded with the `mode` field (intended to
specify the working mode of the implementation), and the `gatewayAPIChannel`
(standard or experimental). The `gatewayAPIVersion` and `gatewayAPIChannel` are
now filled in automatically by the suite machinery, along with a brief
description of the testing outcome. The Reports have been reorganized in a more
structured way, and the implementations can now add information on how the tests
have been run and provide reproduction steps.
--&gt;
&lt;h4 id=&#34;合规性配置文件和报告-https-gateway-api-sigs-k8s-io-concepts-conformance-conformance-profiles&#34;&gt;&lt;a href=&#34;https://gateway-api.sigs.k8s.io/concepts/conformance/#conformance-profiles&#34;&gt;合规性配置文件和报告&lt;/a&gt;&lt;/h4&gt;
&lt;p&gt;合规性报告 API 得到了扩展，新增了 &lt;code&gt;mode&lt;/code&gt; 字段（用于指定实现的工作模式）和 &lt;code&gt;gatewayAPIChannel&lt;/code&gt; 字段（标准或实验性）。
&lt;code&gt;gatewayAPIVersion&lt;/code&gt; 和 &lt;code&gt;gatewayAPIChannel&lt;/code&gt; 现在由套件机制自动填充，并附有测试结果的简要描述。
这些报告已通过更加结构化的方式进行重新组织，现在实现可以添加测试是如何运行的有关信息，还能提供复现步骤。&lt;/p&gt;
&lt;!--
### New additions to Experimental channel

#### [Gateway Client Certificate Verification](https://gateway-api.sigs.k8s.io/geps/gep-91/)

Gateways can now configure client cert verification for each Gateway Listener by
introducing a new `frontendValidation` field within `tls`. This field
supports configuring a list of CA Certificates that can be used as a trust
anchor to validate the certificates presented by the client.
--&gt;
&lt;h3 id=&#34;实验性渠道的新增内容&#34;&gt;实验性渠道的新增内容&lt;/h3&gt;
&lt;h4 id=&#34;gateway-客户端证书验证-https-gateway-api-sigs-k8s-io-geps-gep-91&#34;&gt;&lt;a href=&#34;https://gateway-api.sigs.k8s.io/geps/gep-91/&#34;&gt;Gateway 客户端证书验证&lt;/a&gt;&lt;/h4&gt;
&lt;p&gt;Gateway 现在可以通过在 &lt;code&gt;tls&lt;/code&gt; 内引入的新字段 &lt;code&gt;frontendValidation&lt;/code&gt; 来为每个
Gateway 监听器配置客户端证书验证。此字段支持配置可用作信任锚的 CA 证书列表，以验证客户端呈现的证书。&lt;/p&gt;
&lt;!--
The following example shows how the CACertificate stored in
the `foo-example-com-ca-cert` ConfigMap can be used to validate the certificates
presented by clients connecting to the `foo-https` Gateway Listener.
--&gt;
&lt;p&gt;以下示例显示了如何使用存储在 &lt;code&gt;foo-example-com-ca-cert&lt;/code&gt; ConfigMap 中的 CACertificate
来验证连接到 &lt;code&gt;foo-https&lt;/code&gt; Gateway 监听器的客户端所呈现的证书。&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;gateway.networking.k8s.io/v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Gateway&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;client-validation-basic&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;gatewayClassName&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;acme-lb&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;listeners&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;foo-https&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;protocol&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;HTTPS&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;port&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;443&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;hostname&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;foo.example.com&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;tls&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;certificateRefs&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Secret&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;group&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;foo-example-com-cert&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;frontendValidation&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;caCertificateRefs&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;ConfigMap&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;group&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;foo-example-com-ca-cert&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
#### [Session Persistence and BackendLBPolicy](https://gateway-api.sigs.k8s.io/geps/gep-1619/)

[Session Persistence](https://gateway-api.sigs.k8s.io/reference/spec/#gateway.networking.k8s.io%2fv1.SessionPersistence)
is being introduced to Gateway API via a new policy
([BackendLBPolicy](https://gateway-api.sigs.k8s.io/reference/spec/#gateway.networking.k8s.io/v1alpha2.BackendLBPolicy))
for Service-level configuration and as fields within HTTPRoute
and GRPCRoute for route-level configuration. The BackendLBPolicy and route-level
APIs provide the same session persistence configuration, including session
timeouts, session name, session type, and cookie lifetime type.
--&gt;
&lt;h4 id=&#34;会话持久性和-backendlbpolicy-https-gateway-api-sigs-k8s-io-geps-gep-1619&#34;&gt;&lt;a href=&#34;https://gateway-api.sigs.k8s.io/geps/gep-1619/&#34;&gt;会话持久性和 BackendLBPolicy&lt;/a&gt;&lt;/h4&gt;
&lt;p&gt;&lt;a href=&#34;https://gateway-api.sigs.k8s.io/reference/spec/#gateway.networking.k8s.io%2fv1.SessionPersistence&#34;&gt;会话持久性&lt;/a&gt;
通过新的策略（&lt;a href=&#34;https://gateway-api.sigs.k8s.io/reference/spec/#gateway.networking.k8s.io/v1alpha2.BackendLBPolicy&#34;&gt;BackendLBPolicy&lt;/a&gt;）
引入到 Gateway API 中用于服务级配置，在 HTTPRoute 和 GRPCRoute 内以字段的形式用于路由级配置。
BackendLBPolicy 和路由级 API 提供相同的会话持久性配置，包括会话超时、会话名称、会话类型和 cookie 生命周期类型。&lt;/p&gt;
&lt;!--
Below is an example configuration of `BackendLBPolicy` that enables cookie-based
session persistence for the `foo` service. It sets the session name to
`foo-session`, defines absolute and idle timeouts, and configures the cookie to
be a session cookie:
--&gt;
&lt;p&gt;以下是 &lt;code&gt;BackendLBPolicy&lt;/code&gt; 的示例配置，为 &lt;code&gt;foo&lt;/code&gt; 服务启用基于 Cookie 的会话持久性。
它将会话名称设置为 &lt;code&gt;foo-session&lt;/code&gt;，定义绝对超时时间和空闲超时时间，并将 Cookie 配置为会话 Cookie：&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;gateway.networking.k8s.io/v1alpha2&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;BackendLBPolicy&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;lb-policy&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;namespace&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;foo-ns&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;targetRefs&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;group&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;core&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Service&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;foo&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;sessionPersistence&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;sessionName&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;foo-session&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;absoluteTimeout&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;1h&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;idleTimeout&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;30m&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;type&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Cookie&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;cookieConfig&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;lifetimeType&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Session&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Everything else

#### [TLS Terminology Clarifications](https://gateway-api.sigs.k8s.io/geps/gep-2907/)
--&gt;
&lt;h3 id=&#34;其他更新&#34;&gt;其他更新&lt;/h3&gt;
&lt;h4 id=&#34;tls-术语阐述-https-gateway-api-sigs-k8s-io-geps-gep-2907&#34;&gt;&lt;a href=&#34;https://gateway-api.sigs.k8s.io/geps/gep-2907/&#34;&gt;TLS 术语阐述&lt;/a&gt;&lt;/h4&gt;
&lt;!--
As part of a broader goal of making our TLS terminology more consistent
throughout the API, we&#39;ve introduced some breaking changes to BackendTLSPolicy.
This has resulted in a new API version (v1alpha3) and will require any existing
implementations of this policy to properly handle the version upgrade, e.g.
by backing up data and uninstalling the v1alpha2 version before installing this
newer version.

Any references to v1alpha2 BackendTLSPolicy fields will need to be updated to
v1alpha3. Specific changes to fields include:
--&gt;
&lt;p&gt;作为让 TLS 术语在整个 API 中更加一致这一更大目标的一部分，
我们对 BackendTLSPolicy 引入了一些破坏性变更。
这带来了新的 API 版本（v1alpha3），该策略的所有现有实现都需要正确处理版本升级，
例如先备份数据并卸载 v1alpha2 版本，再安装这个新版本。&lt;/p&gt;
&lt;p&gt;所有对 v1alpha2 BackendTLSPolicy 字段的引用都需要更新为 v1alpha3。具体的字段变更包括：&lt;/p&gt;
&lt;!--
- `targetRef` becomes `targetRefs` to allow a BackendTLSPolicy to attach to
  multiple targets
- `tls` becomes `validation`
- `tls.caCertRefs` becomes `validation.caCertificateRefs`
- `tls.wellKnownCACerts` becomes `validation.wellKnownCACertificates`
--&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;targetRef&lt;/code&gt; 变为 &lt;code&gt;targetRefs&lt;/code&gt; 以允许 BackendTLSPolicy 挂接到多个目标&lt;/li&gt;
&lt;li&gt;&lt;code&gt;tls&lt;/code&gt; 变为 &lt;code&gt;validation&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;tls.caCertRefs&lt;/code&gt; 变为 &lt;code&gt;validation.caCertificateRefs&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;tls.wellKnownCACerts&lt;/code&gt; 变为 &lt;code&gt;validation.wellKnownCACertificates&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
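&lt;p&gt;下面是一个示意性的 v1alpha3 BackendTLSPolicy 清单（其中的名称均为假设值），展示重命名后的字段布局：&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;apiVersion: gateway.networking.k8s.io/v1alpha3
kind: BackendTLSPolicy
metadata:
  name: example-backend-tls
spec:
  targetRefs:          # 原 targetRef，现在是列表
  - group: &#34;&#34;
    kind: Service
    name: example-svc
  validation:          # 原 tls
    caCertificateRefs: # 原 tls.caCertRefs
    - group: &#34;&#34;
      kind: ConfigMap
      name: example-ca-cert
    hostname: example.com
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;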
&lt;!--
For a full list of the changes included in this release, please refer to the
[v1.1.0 release notes](https://github.com/kubernetes-sigs/gateway-api/releases/tag/v1.1.0).
--&gt;
&lt;p&gt;有关本次发布包含的完整变更列表，请参阅
&lt;a href=&#34;https://github.com/kubernetes-sigs/gateway-api/releases/tag/v1.1.0&#34;&gt;v1.1.0 发布说明&lt;/a&gt;。&lt;/p&gt;
&lt;!--
## Gateway API background

The idea of Gateway API was initially [proposed](https://youtu.be/Ne9UJL6irXY?si=wgtC9w8PMB5ZHil2)
at the 2019 KubeCon San Diego as the next generation
of Ingress API. Since then, an incredible community has formed to develop what
has likely become the
[most collaborative API in Kubernetes history](https://www.youtube.com/watch?v=V3Vu_FWb4l4).
Over 200 people have contributed to this API so far, and that number continues to grow.
--&gt;
&lt;h2 id=&#34;gateway-api-background&#34;&gt;Gateway API 背景  &lt;/h2&gt;
&lt;p&gt;Gateway API 的想法最初是在 2019 年 KubeCon San Diego 上作为下一代 Ingress API
&lt;a href=&#34;https://youtu.be/Ne9UJL6irXY?si=wgtC9w8PMB5ZHil2&#34;&gt;提出的&lt;/a&gt;。
从那时起，一个令人瞩目的社区逐渐形成，共同开发出了可能成为
&lt;a href=&#34;https://www.youtube.com/watch?v=V3Vu_FWb4l4&#34;&gt;Kubernetes 历史上最具合作精神的 API&lt;/a&gt;。
到目前为止，已有超过 200 人为该 API 做过贡献，而且这一数字还在不断攀升。&lt;/p&gt;
&lt;!--
The maintainers would like to thank _everyone_ who&#39;s contributed to Gateway API, whether in the
form of commits to the repo, discussion, ideas, or general support. We literally
couldn&#39;t have gotten this far without the support of this dedicated and active
community.
--&gt;
&lt;p&gt;维护者们要感谢为 Gateway API 做出贡献的&lt;strong&gt;每一个人&lt;/strong&gt;，
无论是向仓库提交代码、参与讨论、提出想法，还是给予一般性的支持，我们都在此表示诚挚的感谢。
没有这个专注且活跃的社区的支持，我们不可能走到这一步。&lt;/p&gt;
&lt;!--
## Try it out

Unlike other Kubernetes APIs, you don&#39;t need to upgrade to the latest version of
Kubernetes to get the latest version of Gateway API. As long as you&#39;re running
Kubernetes 1.26 or later, you&#39;ll be able to get up and running with this
version of Gateway API.
--&gt;
&lt;h2 id=&#34;try-it-out&#34;&gt;试用一下  &lt;/h2&gt;
&lt;p&gt;与其他 Kubernetes API 不同，你不需要升级到最新版本的 Kubernetes 即可获得最新版本的 Gateway API。
只要你运行的是 Kubernetes 1.26 或更高版本，你就可以使用这个版本的 Gateway API。&lt;/p&gt;
&lt;!--
To try out the API, follow our [Getting Started Guide](https://gateway-api.sigs.k8s.io/guides/).

## Get involved

There are lots of opportunities to get involved and help define the future of
Kubernetes routing APIs for both ingress and service mesh.
--&gt;
&lt;p&gt;要试用此 API，请参阅&lt;a href=&#34;https://gateway-api.sigs.k8s.io/guides/&#34;&gt;入门指南&lt;/a&gt;。&lt;/p&gt;
&lt;h2 id=&#34;get-involved&#34;&gt;参与进来  &lt;/h2&gt;
&lt;p&gt;你有很多机会可以参与进来并帮助为 Ingress 和服务网格定义 Kubernetes 路由 API 的未来。&lt;/p&gt;
&lt;!--
* Check out the [user guides](https://gateway-api.sigs.k8s.io/guides) to see what use-cases can be addressed.
* Try out one of the [existing Gateway controllers](https://gateway-api.sigs.k8s.io/implementations/).
* Or [join us in the community](https://gateway-api.sigs.k8s.io/contributing/)
  and help us build the future of Gateway API together!
--&gt;
&lt;ul&gt;
&lt;li&gt;查阅&lt;a href=&#34;https://gateway-api.sigs.k8s.io/guides&#34;&gt;用户指南&lt;/a&gt;以了解可以解决哪些用例。&lt;/li&gt;
&lt;li&gt;试用其中一个&lt;a href=&#34;https://gateway-api.sigs.k8s.io/implementations/&#34;&gt;现有的 Gateway 控制器&lt;/a&gt;。&lt;/li&gt;
&lt;li&gt;或者&lt;a href=&#34;https://gateway-api.sigs.k8s.io/contributing/&#34;&gt;加入我们的社区&lt;/a&gt;，帮助我们一起构建 Gateway API 的未来！&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
## Related Kubernetes blog articles

* [New Experimental Features in Gateway API v1.0](/blog/2023/11/28/gateway-api-ga/)
  11/2023
* [Gateway API v1.0: GA Release](/blog/2023/10/31/gateway-api-ga/)
  10/2023
* [Introducing ingress2gateway; Simplifying Upgrades to Gateway API](/blog/2023/10/25/introducing-ingress2gateway/)
  10/2023
* [Gateway API v0.8.0: Introducing Service Mesh Support](/blog/2023/08/29/gateway-api-v0-8/)
  08/2023
--&gt;
&lt;h2 id=&#34;related-kubernetes-blog-articles&#34;&gt;相关的 Kubernetes 博文  &lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;2023 年 11 月 &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2023/11/28/gateway-api-ga/&#34;&gt;Gateway API v1.0 中的新实验性特性&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;2023 年 10 月 &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/10/31/gateway-api-ga/&#34;&gt;Gateway API v1.0：正式发布（GA）&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;2023 年 10 月 &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2023/10/25/introducing-ingress2gateway/&#34;&gt;介绍 ingress2gateway；简化 Gateway API 升级&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;2023 年 8 月 &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/08/29/gateway-api-v0-8/&#34;&gt;Gateway API v0.8.0：引入服务网格支持&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes 1.30：防止未经授权的卷模式转换进阶到 GA</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2024/04/30/prevent-unauthorized-volume-mode-conversion-ga/</link>
      <pubDate>Tue, 30 Apr 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2024/04/30/prevent-unauthorized-volume-mode-conversion-ga/</guid>
      <description>
        
        
        &lt;!--
layout: blog
title: &#34;Kubernetes 1.30: Preventing unauthorized volume mode conversion moves to GA&#34;
date: 2024-04-30
slug: prevent-unauthorized-volume-mode-conversion-ga
author: &gt;
  Raunak Pradip Shah (Mirantis)
--&gt;
&lt;p&gt;&lt;strong&gt;作者:&lt;/strong&gt; Raunak Pradip Shah (Mirantis)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者:&lt;/strong&gt; Xin Li (DaoCloud)&lt;/p&gt;
&lt;!--
With the release of Kubernetes 1.30, the feature to prevent the modification of the volume mode
of a [PersistentVolumeClaim](/docs/concepts/storage/persistent-volumes/) that was created from
an existing VolumeSnapshot in a Kubernetes cluster, has moved to GA!
--&gt;
&lt;p&gt;随着 Kubernetes 1.30 的发布，防止修改从 Kubernetes 集群中现有
VolumeSnapshot 创建的 &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/concepts/storage/persistent-volumes/&#34;&gt;PersistentVolumeClaim&lt;/a&gt;
的卷模式的特性已被升级至 GA！&lt;/p&gt;
&lt;!--
## The problem

The [Volume Mode](/docs/concepts/storage/persistent-volumes/#volume-mode) of a PersistentVolumeClaim 
refers to whether the underlying volume on the storage device is formatted into a filesystem or
presented as a raw block device to the Pod that uses it.

Users can leverage the VolumeSnapshot feature, which has been stable since Kubernetes v1.20,
to create a PersistentVolumeClaim (shortened as PVC) from an existing VolumeSnapshot in
the Kubernetes cluster. The PVC spec includes a dataSource field, which can point to an
existing VolumeSnapshot instance.
Visit [Create a PersistentVolumeClaim from a Volume Snapshot](/docs/concepts/storage/persistent-volumes/#create-persistent-volume-claim-from-volume-snapshot) 
for more details on how to create a PVC from an existing VolumeSnapshot in a Kubernetes cluster.
--&gt;
&lt;h2 id=&#34;问题&#34;&gt;问题&lt;/h2&gt;
&lt;p&gt;PersistentVolumeClaim 的&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/concepts/storage/persistent-volumes/#volume-mode&#34;&gt;卷模式&lt;/a&gt;
是指存储设备上的底层卷是被格式化为某文件系统还是作为原始块设备呈现给使用它的 Pod。&lt;/p&gt;
&lt;p&gt;用户可以利用自 Kubernetes v1.20 以来一直稳定的 VolumeSnapshot 特性，基于
Kubernetes 集群中现有的 VolumeSnapshot 创建 PersistentVolumeClaim（简称 PVC）。
PVC 规约中包括一个 &lt;code&gt;dataSource&lt;/code&gt; 字段，它可以指向现有的 VolumeSnapshot 实例。
有关如何基于 Kubernetes 集群中现有 VolumeSnapshot 创建 PVC 的更多详细信息，
请访问&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/concepts/storage/persistent-volumes/#create-persistent-volume-claim-from-volume-snapshot&#34;&gt;使用卷快照创建 PersistentVolumeClaim&lt;/a&gt;。&lt;/p&gt;
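&lt;p&gt;例如，下面是一个示意性的 PVC 清单，其 &lt;code&gt;dataSource&lt;/code&gt; 指向现有的 VolumeSnapshot（其中的名称与 StorageClass 均为假设示例）：&lt;/p&gt;

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-pvc            # 假设的示例名称
spec:
  storageClassName: csi-hostpath-sc
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem        # 新建 PVC 的卷模式
  resources:
    requests:
      storage: 1Gi
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: my-snapshot           # 指向现有的 VolumeSnapshot
```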
&lt;!--
When leveraging the above capability, there is no logic that validates whether the mode of the
original volume, whose snapshot was taken, matches the mode of the newly created volume.

This presents a security gap that allows malicious users to potentially exploit an
as-yet-unknown vulnerability in the host operating system.

There is a valid use case to allow some users to perform such conversions. Typically, storage backup
vendors convert the volume mode during the course of a backup operation, to retrieve changed blocks 
for greater efficiency of operations. This prevents Kubernetes from blocking the operation completely
and presents a challenge in distinguishing trusted users from malicious ones.
--&gt;
&lt;p&gt;在使用上述能力时，系统并不会校验被制作快照的原始卷的模式是否与新创建的卷的模式匹配。&lt;/p&gt;
&lt;p&gt;这构成了一个安全缺口，恶意用户可能借此利用主机操作系统中尚未被发现的漏洞。&lt;/p&gt;
&lt;p&gt;有一个合法的场景允许某些用户执行此类转换。
通常，存储备份供应商会在备份操作过程中转换卷模式，通过检索已被更改的块来提高操作效率。
这使得 Kubernetes 无法完全阻止此类操作，但给区分可信用户和恶意用户带来了挑战。&lt;/p&gt;
&lt;!--
## Preventing unauthorized users from converting the volume mode

In this context, an authorized user is one who has access rights to perform **update**
or **patch** operations on VolumeSnapshotContents, which is a cluster-level resource.  
It is up to the cluster administrator to provide these rights only to trusted users
or applications, like backup vendors.
Users apart from such authorized ones will never be allowed to modify the volume mode
of a PVC when it is being created from a VolumeSnapshot.
--&gt;
&lt;h2 id=&#34;防止未经授权的用户转换卷模式&#34;&gt;防止未经授权的用户转换卷模式&lt;/h2&gt;
&lt;p&gt;在此上下文中，授权用户是有权对 VolumeSnapshotContents（集群级资源）执行
&lt;strong&gt;update&lt;/strong&gt; 或 &lt;strong&gt;patch&lt;/strong&gt; 操作的用户。
集群管理员应仅向受信任的用户或应用程序（例如备份供应商）赋予这些权限。
当从 VolumeSnapshot 创建 PVC 时，除了此类授权用户之外的用户将永远不会被允许修改 PVC 的卷模式。&lt;/p&gt;
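&lt;p&gt;例如，集群管理员可以使用与下面类似的 RBAC 规则，只向受信任的备份应用授予这类权限（其中的名称为假设示例）：&lt;/p&gt;

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: snapshotcontent-annotator   # 假设的示例名称
rules:
- apiGroups: ["snapshot.storage.k8s.io"]
  resources: ["volumesnapshotcontents"]
  verbs: ["get", "list", "update", "patch"]
```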
&lt;!--
To convert the volume mode, an authorized user must do the following:

1. Identify the VolumeSnapshot that is to be used as the data source for a newly
   created PVC in the given namespace.
2. Identify the VolumeSnapshotContent bound to the above VolumeSnapshot.
--&gt;
&lt;p&gt;要转换卷模式，授权用户必须执行以下操作：&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;确定要在给定命名空间中用作新建 PVC 数据源的 VolumeSnapshot。&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;识别与上述 VolumeSnapshot 绑定的 VolumeSnapshotContent。&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;kubectl describe volumesnapshot -n &amp;lt;namespace&amp;gt; &amp;lt;name&amp;gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;!--
3. Add the annotation [`snapshot.storage.kubernetes.io/allow-volume-mode-change: &#34;true&#34;`](/docs/reference/labels-annotations-taints/#snapshot-storage-kubernetes-io-allowvolumemodechange)
   to the above VolumeSnapshotContent. The VolumeSnapshotContent annotations must include one similar to the following manifest fragment:
--&gt;
&lt;ol start=&#34;3&#34;&gt;
&lt;li&gt;
&lt;p&gt;在 VolumeSnapshotContent 上添加 &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/reference/labels-annotations-taints/#snapshot-storage-kubernetes-io-allowvolumemodechange&#34;&gt;&lt;code&gt;snapshot.storage.kubernetes.io/allow-volume-mode-change: &amp;quot;true&amp;quot;&lt;/code&gt;&lt;/a&gt;
注解。VolumeSnapshotContent 的注解中必须包含与以下清单片段类似的内容：&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;VolumeSnapshotContent&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;annotations&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;snapshot.storage.kubernetes.io/allow-volume-mode-change&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;true&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#00f;font-weight:bold&#34;&gt;...&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;!--
**Note**: For pre-provisioned VolumeSnapshotContents, you must take an extra
step of setting `spec.sourceVolumeMode` field to either `Filesystem` or `Block`,
depending on the mode of the volume from which this snapshot was taken.

An example is shown below:
--&gt;
&lt;p&gt;&lt;strong&gt;注意&lt;/strong&gt;：对于预配置的 VolumeSnapshotContents，你必须执行额外的步骤，将
&lt;code&gt;spec.sourceVolumeMode&lt;/code&gt; 字段设置为 &lt;code&gt;Filesystem&lt;/code&gt; 或 &lt;code&gt;Block&lt;/code&gt;，
具体取决于用来制作此快照的卷的模式。&lt;/p&gt;
&lt;p&gt;一个例子如下所示：&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;snapshot.storage.k8s.io/v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;VolumeSnapshotContent&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;annotations&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;snapshot.storage.kubernetes.io/allow-volume-mode-change&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;true&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&amp;lt;volume-snapshot-content-name&amp;gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;deletionPolicy&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Delete&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;driver&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;hostpath.csi.k8s.io&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;source&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;snapshotHandle&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&amp;lt;snapshot-handle&amp;gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;sourceVolumeMode&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Filesystem&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;volumeSnapshotRef&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&amp;lt;volume-snapshot-name&amp;gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;namespace&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&amp;lt;namespace&amp;gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
Repeat steps 1 to 3 for all VolumeSnapshotContents whose volume mode needs to be
converted during a backup or restore operation. This can be done either via software
with credentials of an authorized user or manually by the authorized user(s).

If the annotation shown above is present on a VolumeSnapshotContent object,
Kubernetes will not prevent the volume mode from being converted.
Users should keep this in mind before they attempt to add the annotation
to any VolumeSnapshotContent.
--&gt;
&lt;p&gt;对备份或恢复操作期间需要转换卷模式的所有 VolumeSnapshotContent 重复步骤 1 至 3。
这可以通过具有授权用户凭据的软件来完成，也可以由授权用户手动完成。&lt;/p&gt;
&lt;p&gt;如果 VolumeSnapshotContent 对象上存在上面显示的注解，Kubernetes 将不会阻止卷模式转换。
用户在尝试将注解添加到任何 VolumeSnapshotContent 之前应记住这一点。&lt;/p&gt;
&lt;!--
## Action required

The `prevent-volume-mode-conversion` feature flag is enabled by default in the 
external-provisioner `v4.0.0` and external-snapshotter `v7.0.0`. Volume mode change
will be rejected when creating a PVC from a VolumeSnapshot unless the steps
described above have been performed.
--&gt;
&lt;h2 id=&#34;需要采取的行动&#34;&gt;需要采取的行动&lt;/h2&gt;
&lt;p&gt;默认情况下，在 external-provisioner &lt;code&gt;v4.0.0&lt;/code&gt; 和 external-snapshotter &lt;code&gt;v7.0.0&lt;/code&gt;
中启用 &lt;code&gt;prevent-volume-mode-conversion&lt;/code&gt; 特性标志。
基于 VolumeSnapshot 来创建 PVC 时，卷模式更改将被拒绝，除非已执行上述步骤。&lt;/p&gt;
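&lt;p&gt;作为参考，下面是一个示意性的容器参数片段，展示如何在部署 external-provisioner 时显式设置该特性标志（此处的 Deployment 片段与标志写法均为假设示例，实际的标志名称与默认值请以各 sidecar 的发布说明和文档为准）：&lt;/p&gt;

```yaml
# 假设性示例：CSI 驱动 Deployment 中 external-provisioner 容器的部分定义
containers:
- name: csi-provisioner
  image: registry.k8s.io/sig-storage/csi-provisioner:v4.0.0
  args:
  - "--csi-address=$(ADDRESS)"
  # 自 v4.0.0 起该特性默认开启；如有需要也可显式设置
  - "--prevent-volume-mode-conversion=true"
```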
&lt;!--
## What&#39;s next

To determine which CSI external sidecar versions support this feature, please head
over to the [CSI docs page](https://kubernetes-csi.github.io/docs/).
For any queries or issues, join [Kubernetes on Slack](https://slack.k8s.io/) and
create a thread in the #csi or #sig-storage channel. Alternately, create an issue in the
CSI external-snapshotter [repository](https://github.com/kubernetes-csi/external-snapshotter).
--&gt;
&lt;h2 id=&#34;接下来&#34;&gt;接下来&lt;/h2&gt;
&lt;p&gt;要确定哪些 CSI 外部 sidecar 版本支持此功能，请前往 &lt;a href=&#34;https://kubernetes-csi.github.io/docs/&#34;&gt;CSI 文档页面&lt;/a&gt;。
对于任何疑问或问题，请加入 &lt;a href=&#34;https://slack.k8s.io/&#34;&gt;Slack 上的 Kubernetes&lt;/a&gt; 并在 #csi 或 #sig-storage 频道中发起讨论。
或者，在 CSI external-snapshotter &lt;a href=&#34;https://github.com/kubernetes-csi/external-snapshotter&#34;&gt;仓库&lt;/a&gt;中提交 issue。&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes 1.30：结构化身份认证配置进阶至 Beta</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2024/04/25/structured-authentication-moves-to-beta/</link>
      <pubDate>Thu, 25 Apr 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2024/04/25/structured-authentication-moves-to-beta/</guid>
      <description>
        
        
        &lt;!--
layout: blog
title: &#34;Kubernetes 1.30: Structured Authentication Configuration Moves to Beta&#34;
date: 2024-04-25
slug: structured-authentication-moves-to-beta
author: &gt;
  [Anish Ramasekar](https://github.com/aramase) (Microsoft)
--&gt;
&lt;!--
With Kubernetes 1.30, we (SIG Auth) are moving Structured Authentication Configuration to beta.

Today&#39;s article is about _authentication_: finding out who&#39;s performing a task, and checking
that they are who they say they are. Check back in tomorrow to find about what&#39;s new in
Kubernetes v1.30 around _authorization_ (deciding what someone can and can&#39;t access).
--&gt;
&lt;p&gt;在 Kubernetes 1.30 中，我们（SIG Auth）将结构化身份认证配置（Structured Authentication Configuration）进阶至 Beta。&lt;/p&gt;
&lt;p&gt;今天的文章是关于&lt;strong&gt;身份认证&lt;/strong&gt;：确定谁在执行任务，并核实其身份是否属实。
请明天回来了解 Kubernetes v1.30 中关于&lt;strong&gt;鉴权&lt;/strong&gt;（决定某人能访问什么、不能访问什么）的新内容。&lt;/p&gt;
&lt;!--
## Motivation
Kubernetes has had a long-standing need for a more flexible and extensible
authentication system. The current system, while powerful, has some limitations
that make it difficult to use in certain scenarios. For example, it is not
possible to use multiple authenticators of the same type (e.g., multiple JWT
authenticators) or to change the configuration without restarting the API server. The
Structured Authentication Configuration feature is the first step towards
addressing these limitations and providing a more flexible and extensible way
to configure authentication in Kubernetes.
--&gt;
&lt;h2 id=&#34;motivation&#34;&gt;动机  &lt;/h2&gt;
&lt;p&gt;Kubernetes 长期以来都需要一个更灵活、更好扩展的身份认证系统。
当前的系统虽然强大，但有一些限制，使其难以用在某些场景下。
例如，不可能同时使用多个相同类型的认证组件（例如，多个 JWT 认证组件），
也不可能在不重启 API 服务器的情况下更改身份认证配置。
结构化身份认证配置特性是解决这些限制并提供一种更灵活、更好扩展的方式来配置 Kubernetes 中身份认证的第一步。&lt;/p&gt;
&lt;!--
## What is structured authentication configuration?
Kubernetes v1.30 builds on the experimental support for configuring authentication based on
a file, which was added as alpha in Kubernetes v1.29. At this beta stage, Kubernetes only supports configuring JWT
authenticators, which serve as the next iteration of the existing OIDC
authenticator. JWT authenticator is an authenticator to
authenticate Kubernetes users using JWT compliant tokens. The authenticator
will attempt to parse a raw ID token, verify it&#39;s been signed by the configured 
issuer.
--&gt;
&lt;h2 id=&#34;what-is-structured-authentication-configuration&#34;&gt;什么是结构化身份认证配置？  &lt;/h2&gt;
&lt;p&gt;Kubernetes v1.30 在基于文件来配置身份认证的实验性支持之上继续构建，该支持是在 Kubernetes v1.29 中新增的 Alpha 特性。
在此 Beta 阶段，Kubernetes 仅支持配置 JWT 认证组件，这是现有 OIDC 认证组件的下一次迭代。
JWT 认证组件使用符合 JWT 标准的令牌对 Kubernetes 用户进行身份认证。
此认证组件将尝试解析原始 ID 令牌，验证其是否由配置的签发方签名。&lt;/p&gt;
&lt;!--
The Kubernetes project added configuration from a file so that it can provide more
flexibility than using command line options (which continue to work, and are still supported).
Supporting a configuration file also makes it easy to deliver further improvements in upcoming
releases.
--&gt;
&lt;p&gt;Kubernetes 项目新增了基于文件的配置，以便提供比使用命令行选项（命令行依然有效，仍受支持）更灵活的方式。
对配置文件的支持还使得在即将发布的版本中更容易提供更多改进措施。&lt;/p&gt;
&lt;!--
### Benefits of structured authentication configuration
Here&#39;s why using a configuration file to configure cluster authentication is a benefit:
--&gt;
&lt;h3 id=&#34;benefits-of-structured-authentication-configuration&#34;&gt;结构化身份认证配置的好处  &lt;/h3&gt;
&lt;p&gt;以下是使用配置文件来配置集群身份认证的好处：&lt;/p&gt;
&lt;!--
1. **Multiple JWT authenticators**: You can configure multiple JWT authenticators
   simultaneously. This allows you to use multiple identity providers (e.g.,
   Okta, Keycloak, GitLab) without needing to use an intermediary like Dex
   that handles multiplexing between multiple identity providers.
2. **Dynamic configuration**: You can change the configuration without
   restarting the API server. This allows you to add, remove, or modify
   authenticators without disrupting the API server.
--&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;多个 JWT 认证组件&lt;/strong&gt;：你可以同时配置多个 JWT 认证组件。
这允许你使用多个身份提供程序（例如 Okta、Keycloak、GitLab）而无需使用像
Dex 这样的中间程序来处理多个身份提供程序之间的多路复用。&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;动态配置&lt;/strong&gt;：你可以在不重启 API 服务器的情况下更改配置。
这允许你添加、移除或修改认证组件而不会中断 API 服务器。&lt;/li&gt;
&lt;/ol&gt;
&lt;!--
3. **Any JWT-compliant token**: You can use any JWT-compliant token for
   authentication. This allows you to use tokens from any identity provider that
   supports JWT. The minimum valid JWT payload must contain the claims documented 
   in [structured authentication configuration](/docs/reference/access-authn-authz/authentication/#using-authentication-configuration)
   page in the Kubernetes documentation.
4. **CEL (Common Expression Language) support**: You can use [CEL](/docs/reference/using-api/cel/) 
   to determine whether the token&#39;s claims match the user&#39;s attributes in Kubernetes (e.g.,
   username, group). This allows you to use complex logic to determine whether a
   token is valid.
--&gt;
&lt;ol start=&#34;3&#34;&gt;
&lt;li&gt;&lt;strong&gt;任何符合 JWT 标准的令牌&lt;/strong&gt;：你可以使用任何符合 JWT 标准的令牌进行身份认证。
这允许你使用任何支持 JWT 的身份提供程序的令牌。最小有效的 JWT 载荷必须包含 Kubernetes
文档中&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/reference/access-authn-authz/authentication/#using-authentication-configuration&#34;&gt;结构化身份认证配置&lt;/a&gt;页面中记录的申领。&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;CEL（通用表达式语言）支持&lt;/strong&gt;：你可以使用 &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/reference/using-api/cel/&#34;&gt;CEL&lt;/a&gt;
来确定令牌的申领是否与 Kubernetes 中用户的属性（例如用户名、组）匹配。
这允许你使用复杂逻辑来确定令牌是否有效。&lt;/li&gt;
&lt;/ol&gt;
&lt;!--
5. **Multiple audiences**: You can configure multiple audiences for a single
   authenticator. This allows you to use the same authenticator for multiple
   audiences, such as using a different OAuth client for `kubectl` and dashboard.
6. **Using identity providers that don&#39;t support OpenID connect discovery**: You
   can use identity providers that don&#39;t support [OpenID Connect 
   discovery](https://openid.net/specs/openid-connect-discovery-1_0.html). The only
   requirement is to host the discovery document at a different location than the
   issuer (such as locally in the cluster) and specify the `issuer.discoveryURL` in
   the configuration file.
--&gt;
&lt;ol start=&#34;5&#34;&gt;
&lt;li&gt;&lt;strong&gt;多个受众群体&lt;/strong&gt;：你可以为单个认证组件配置多个受众群体。
这允许你为多个受众群体使用相同的认证组件，例如为 &lt;code&gt;kubectl&lt;/code&gt; 和仪表板使用不同的 OAuth 客户端。&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;使用不支持 OpenID Connect 发现的身份提供程序&lt;/strong&gt;：你可以使用不支持
&lt;a href=&#34;https://openid.net/specs/openid-connect-discovery-1_0.html&#34;&gt;OpenID Connect 发现&lt;/a&gt;的身份提供程序。
唯一的要求是将发现文档托管到与签发方不同的位置（例如在集群中本地），并在配置文件中指定 &lt;code&gt;issuer.discoveryURL&lt;/code&gt;。&lt;/li&gt;
&lt;/ol&gt;
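&lt;p&gt;针对上述第 6 点，下面是一个示意性的配置片段，展示如何通过 &lt;code&gt;issuer.discoveryURL&lt;/code&gt; 指定与签发方不同位置的发现文档（其中的 URL 与受众名称均为假设示例，字段含义请以官方文档为准）：&lt;/p&gt;

```yaml
apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthenticationConfiguration
jwt:
- issuer:
    # 令牌中 iss 申领所声明的签发方
    url: https://issuer.example.com
    # 发现文档实际托管的位置（例如集群本地），与签发方 URL 不同
    discoveryURL: https://discovery.example.com/.well-known/openid-configuration
    audiences:
    - my-app
```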
&lt;!--
## How to use Structured Authentication Configuration
To use structured authentication configuration, you specify
the path to the authentication configuration using the `--authentication-config`
command line argument in the API server. The configuration file is a YAML file
that specifies the authenticators and their configuration. Here is an example
configuration file that configures two JWT authenticators:
--&gt;
&lt;h2 id=&#34;how-to-use-structured-authentication-configuration&#34;&gt;如何使用结构化身份认证配置  &lt;/h2&gt;
&lt;p&gt;要使用结构化身份认证配置，你可以使用 &lt;code&gt;--authentication-config&lt;/code&gt; 命令行参数在
API 服务器中指定身份认证配置的路径。此配置文件是一个 YAML 文件，指定认证组件及其配置。
以下是一个配置两个 JWT 认证组件的示例配置文件：&lt;/p&gt;
&lt;!--
# Someone with a valid token from either of these issuers could authenticate
# against this cluster.
# second authenticator that exposes the discovery document at a different location
# than the issuer
--&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;apiserver.config.k8s.io/v1beta1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;AuthenticationConfiguration&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# 如果某人具有这些 issuer 之一签发的有效令牌，则此人可以在集群上进行身份认证&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;jwt&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;issuer&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;url&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;https://issuer1.example.com&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;audiences&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- audience1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- audience2&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;audienceMatchPolicy&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;MatchAny&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;claimValidationRules&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;claims.hd == &amp;#34;example.com&amp;#34;&amp;#39;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;message&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;the hosted domain name must be example.com&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;claimMappings&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;username&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;claims.username&amp;#39;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;groups&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;claims.groups&amp;#39;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;uid&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;claims.uid&amp;#39;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;extra&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;key&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;example.com/tenant&amp;#39;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;claims.tenant&amp;#39;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;userValidationRules&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;!user.username.startsWith(&amp;#39;system:&amp;#39;)&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;message&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;username cannot use reserved system: prefix&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# 第二个认证组件将发现文档公布于与签发方不同的位置&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;issuer&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;url&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;https://issuer2.example.com&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;discoveryURL&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;https://discovery.example.com/.well-known/openid-configuration&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;audiences&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- audience3&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- audience4&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;audienceMatchPolicy&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;MatchAny&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;claimValidationRules&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;claims.hd == &amp;#34;example.com&amp;#34;&amp;#39;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;message&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;the hosted domain name must be example.com&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;claimMappings&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;username&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;claims.username&amp;#39;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;groups&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;claims.groups&amp;#39;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;uid&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;claims.uid&amp;#39;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;extra&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;key&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;example.com/tenant&amp;#39;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;claims.tenant&amp;#39;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;userValidationRules&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;!user.username.startsWith(&amp;#39;system:&amp;#39;)&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;message&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;username cannot use reserved system: prefix&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
## Migration from command line arguments to configuration file
The Structured Authentication Configuration feature is designed to be
backwards-compatible with the existing approach, based on command line options, for 
configuring the JWT authenticator. This means that you can continue to use the existing
command-line options to configure the JWT authenticator. However, we (Kubernetes SIG Auth) 
recommend migrating to the new configuration file-based approach, as it provides more
flexibility and extensibility.
--&gt;
&lt;h2 id=&#34;migration-from-command-line-arguments-to-configuration-file&#34;&gt;从命令行参数迁移到配置文件  &lt;/h2&gt;
&lt;p&gt;结构化身份认证配置特性旨在与基于命令行选项配置 JWT 认证组件的现有方法向后兼容。
这意味着你可以继续使用现有的命令行选项来配置 JWT 认证组件。
但是，我们（Kubernetes SIG Auth）建议迁移到新的基于配置文件的方法，因为这种方法更灵活、更具可扩展性。&lt;/p&gt;


&lt;div class=&#34;alert alert-primary&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;alert-heading&#34;&gt;Note&lt;/h4&gt;

    &lt;!--
If you specify `--authentication-config` along with any of the `--oidc-*` command line arguments, this is
a misconfiguration. In this situation, the API server reports an error and then immediately exits.

If you want to switch to using structured authentication configuration, you have to remove the `--oidc-*`
command line arguments, and use the configuration file instead.
--&gt;
&lt;p&gt;如果你同时指定 &lt;code&gt;--authentication-config&lt;/code&gt; 和任何 &lt;code&gt;--oidc-*&lt;/code&gt; 命令行参数，这是一种错误的配置。
在这种情况下，API 服务器会报告错误，然后立即退出。&lt;/p&gt;
&lt;p&gt;如果你想切换到使用结构化身份认证配置，你必须移除 &lt;code&gt;--oidc-*&lt;/code&gt; 命令行参数，并改为使用配置文件。&lt;/p&gt;


&lt;/div&gt;

&lt;!--
Here is an example of how to migrate from the command-line flags to the
configuration file:

### Command-line arguments
--&gt;
&lt;p&gt;以下是如何从命令行标志迁移到配置文件的示例：&lt;/p&gt;
&lt;h3 id=&#34;command-line-arguments&#34;&gt;命令行参数  &lt;/h3&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-bash&#34; data-lang=&#34;bash&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;--oidc-issuer-url&lt;span style=&#34;color:#666&#34;&gt;=&lt;/span&gt;https://issuer.example.com
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;--oidc-client-id&lt;span style=&#34;color:#666&#34;&gt;=&lt;/span&gt;example-client-id
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;--oidc-username-claim&lt;span style=&#34;color:#666&#34;&gt;=&lt;/span&gt;username
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;--oidc-groups-claim&lt;span style=&#34;color:#666&#34;&gt;=&lt;/span&gt;groups
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;--oidc-username-prefix&lt;span style=&#34;color:#666&#34;&gt;=&lt;/span&gt;oidc:
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;--oidc-groups-prefix&lt;span style=&#34;color:#666&#34;&gt;=&lt;/span&gt;oidc:
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;--oidc-required-claim&lt;span style=&#34;color:#666&#34;&gt;=&lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;hd=example.com&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;--oidc-required-claim&lt;span style=&#34;color:#666&#34;&gt;=&lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;admin=true&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;--oidc-ca-file&lt;span style=&#34;color:#666&#34;&gt;=&lt;/span&gt;/path/to/ca.pem
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
There is no equivalent in the configuration file for the `--oidc-signing-algs`. 
For Kubernetes v1.30, the authenticator supports all the asymmetric algorithms listed in
[`oidc.go`](https://github.com/kubernetes/kubernetes/blob/b4935d910dcf256288694391ef675acfbdb8e7a3/staging/src/k8s.io/apiserver/plugin/pkg/authenticator/token/oidc/oidc.go#L222-L233).

### Configuration file
--&gt;
&lt;p&gt;在配置文件中没有与 &lt;code&gt;--oidc-signing-algs&lt;/code&gt; 相对应的配置项。
对于 Kubernetes v1.30，认证组件支持在
&lt;a href=&#34;https://github.com/kubernetes/kubernetes/blob/b4935d910dcf256288694391ef675acfbdb8e7a3/staging/src/k8s.io/apiserver/plugin/pkg/authenticator/token/oidc/oidc.go#L222-L233&#34;&gt;&lt;code&gt;oidc.go&lt;/code&gt;&lt;/a&gt;
中列出的所有非对称算法。&lt;/p&gt;
&lt;h3 id=&#34;configuration-file&#34;&gt;配置文件  &lt;/h3&gt;
&lt;!--
certificateAuthority: &lt;value is the content of file /path/to/ca.pem&gt;
--&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;apiserver.config.k8s.io/v1beta1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;AuthenticationConfiguration&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;jwt&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;issuer&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;url&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;https://issuer.example.com&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;audiences&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- example-client-id&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;certificateAuthority&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&amp;lt;取值是 /path/to/ca.pem 文件的内容&amp;gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;claimMappings&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;username&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;claim&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;username&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;prefix&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;oidc:&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;groups&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;claim&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;groups&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;prefix&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;oidc:&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;claimValidationRules&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;claim&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;hd&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;requiredValue&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;example.com&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;claim&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;admin&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;requiredValue&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;true&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
## What&#39;s next?
For Kubernetes v1.31, we expect the feature to stay in beta while we get more
feedback. In the coming releases, we want to investigate:
- Making distributed claims work via CEL expressions.
- Egress selector configuration support for calls to `issuer.url` and
  `issuer.discoveryURL`.
--&gt;
&lt;h2 id=&#34;whats-next&#34;&gt;下一步是什么？  &lt;/h2&gt;
&lt;p&gt;对于 Kubernetes v1.31，我们预计该特性将保持 Beta 状态，以便收集更多反馈意见。
在即将发布的版本中，我们希望研究以下内容：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;使分布式申领（distributed claims）能够通过 CEL 表达式正常工作。&lt;/li&gt;
&lt;li&gt;对 &lt;code&gt;issuer.url&lt;/code&gt; 和 &lt;code&gt;issuer.discoveryURL&lt;/code&gt; 的调用提供 Egress 选择算符配置支持。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
You can learn more about this feature on the [structured authentication
configuration](/docs/reference/access-authn-authz/authentication/#using-authentication-configuration)
page in the Kubernetes documentation. You can also follow along on the
[KEP-3331](https://kep.k8s.io/3331) to track progress across the coming
Kubernetes releases.
--&gt;
&lt;p&gt;你可以在 Kubernetes
文档的&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/reference/access-authn-authz/authentication/#using-authentication-configuration&#34;&gt;结构化身份认证配置&lt;/a&gt;页面上了解关于此特性的更多信息。
你还可以通过 &lt;a href=&#34;https://kep.k8s.io/3331&#34;&gt;KEP-3331&lt;/a&gt; 跟踪未来 Kubernetes 版本中的进展。&lt;/p&gt;
&lt;!--
## Try it out
In this post, I have covered the benefits the Structured Authentication
Configuration feature brings in Kubernetes v1.30. To use this feature, you must specify the path to the
authentication configuration using the `--authentication-config` command line
argument. From Kubernetes v1.30, the feature is in beta and enabled by default.
If you want to keep using command line arguments instead of a configuration file,
those will continue to work as-is.
--&gt;
&lt;h2 id=&#34;try-it-out&#34;&gt;试用一下  &lt;/h2&gt;
&lt;p&gt;在本文中，我介绍了结构化身份认证配置特性在 Kubernetes v1.30 中带来的好处。
要使用此特性，你必须使用 &lt;code&gt;--authentication-config&lt;/code&gt; 命令行参数指定身份认证配置的路径。
从 Kubernetes v1.30 开始，此特性处于 Beta 阶段并默认启用。
如果你希望继续使用命令行参数而不是配置文件，这些参数将继续按原样工作。&lt;/p&gt;
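&lt;p&gt;下面给出一个启用结构化身份认证配置的启动参数示意片段。
其中 &lt;code&gt;/etc/kubernetes/auth-config.yaml&lt;/code&gt; 是一个假设的路径，
实际路径取决于你的集群部署方式，且该文件必须放在 kube-apiserver 可读取的位置：&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-bash&#34; data-lang=&#34;bash&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# 用配置文件取代 --oidc-* 参数（二者不能同时使用）&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;kube-apiserver --authentication-config&lt;span style=&#34;color:#666&#34;&gt;=&lt;/span&gt;/etc/kubernetes/auth-config.yaml
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;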
&lt;!--
We would love to hear your feedback on this feature. Please reach out to us on the
[#sig-auth-authenticators-dev](https://kubernetes.slack.com/archives/C04UMAUC4UA)
channel on Kubernetes Slack (for an invitation, visit [https://slack.k8s.io/](https://slack.k8s.io/)).
--&gt;
&lt;p&gt;我们很高兴听取你对此特性的反馈意见。请在 Kubernetes Slack 上的
&lt;a href=&#34;https://kubernetes.slack.com/archives/C04UMAUC4UA&#34;&gt;#sig-auth-authenticators-dev&lt;/a&gt;
频道与我们联系（若要获取邀请，请访问 &lt;a href=&#34;https://slack.k8s.io/&#34;&gt;https://slack.k8s.io/&lt;/a&gt;）。&lt;/p&gt;
&lt;!--
## How to get involved
If you are interested in getting involved in the development of this feature,
share feedback, or participate in any other ongoing SIG Auth projects, please
reach out on the [#sig-auth](https://kubernetes.slack.com/archives/C0EN96KUY)
channel on Kubernetes Slack.
--&gt;
&lt;h2 id=&#34;how-to-get-involved&#34;&gt;如何参与  &lt;/h2&gt;
&lt;p&gt;如果你有兴趣参与此特性的开发、分享反馈意见或参与任何其他 SIG Auth 项目，
请在 Kubernetes Slack 上的 &lt;a href=&#34;https://kubernetes.slack.com/archives/C0EN96KUY&#34;&gt;#sig-auth&lt;/a&gt; 频道联系我们。&lt;/p&gt;
&lt;!--
You are also welcome to join the bi-weekly [SIG Auth
meetings](https://github.com/kubernetes/community/blob/master/sig-auth/README.md#meetings)
held every-other Wednesday.
--&gt;
&lt;p&gt;我们也欢迎你参加每隔一周的星期三举行的 &lt;a href=&#34;https://github.com/kubernetes/community/blob/master/sig-auth/README.md#meetings&#34;&gt;SIG Auth 双周会议&lt;/a&gt;。&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes 1.30：验证准入策略 ValidatingAdmissionPolicy 正式发布</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2024/04/24/validating-admission-policy-ga/</link>
      <pubDate>Wed, 24 Apr 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2024/04/24/validating-admission-policy-ga/</guid>
      <description>
        
        
        &lt;!--
layout: blog
title: &#34;Kubernetes 1.30: Validating Admission Policy Is Generally Available&#34;
slug: validating-admission-policy-ga
date: 2024-04-24
author: &gt;
  Jiahui Feng (Google)
--&gt;
&lt;!--
On behalf of the Kubernetes project, I am excited to announce that ValidatingAdmissionPolicy has reached
**general availability**
as part of Kubernetes 1.30 release. If you have not yet read about this new declarative alternative to
validating admission webhooks, it may be interesting to read our
[previous post](/blog/2022/12/20/validating-admission-policies-alpha/) about the new feature.
If you have already heard about ValidatingAdmissionPolicies and you are eager to try them out,
there is no better time to do it than now.

Let&#39;s have a taste of a ValidatingAdmissionPolicy, by replacing a simple webhook.
--&gt;
&lt;p&gt;我代表 Kubernetes 项目，很高兴地宣布 ValidatingAdmissionPolicy 已经作为 Kubernetes 1.30 版本的一部分&lt;strong&gt;正式发布&lt;/strong&gt;。
如果你还不了解这个用于替代验证准入 Webhook 的全新声明式方案，
请参阅有关这个新特性的&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2022/12/20/validating-admission-policies-alpha/&#34;&gt;上一篇博文&lt;/a&gt;。
如果你已经对 ValidatingAdmissionPolicy 有所了解并且想要尝试一下，那么现在是最好的时机。&lt;/p&gt;
&lt;p&gt;让我们替换一个简单的 Webhook，体验一下 ValidatingAdmissionPolicy。&lt;/p&gt;
&lt;!--
## Example admission webhook
First, let&#39;s take a look at an example of a simple webhook. Here is an excerpt from a webhook that
enforces `runAsNonRoot`, `readOnlyRootFilesystem`, `allowPrivilegeEscalation`, and `privileged` to be set to the least permissive values.
--&gt;
&lt;h2 id=&#34;准入-webhook-示例&#34;&gt;准入 Webhook 示例&lt;/h2&gt;
&lt;p&gt;首先，让我们看一个简单 Webhook 的示例。以下是一个强制将
&lt;code&gt;runAsNonRoot&lt;/code&gt;、&lt;code&gt;readOnlyRootFilesystem&lt;/code&gt;、&lt;code&gt;allowPrivilegeEscalation&lt;/code&gt; 和 &lt;code&gt;privileged&lt;/code&gt; 设置为最低权限值的 Webhook 代码片段。&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-go&#34; data-lang=&#34;go&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;func&lt;/span&gt; &lt;span style=&#34;color:#00a000&#34;&gt;verifyDeployment&lt;/span&gt;(deploy &lt;span style=&#34;color:#666&#34;&gt;*&lt;/span&gt;appsv1.Deployment) &lt;span style=&#34;color:#0b0;font-weight:bold&#34;&gt;error&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;	&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;var&lt;/span&gt; errs []&lt;span style=&#34;color:#0b0;font-weight:bold&#34;&gt;error&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;	&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;for&lt;/span&gt; i, c &lt;span style=&#34;color:#666&#34;&gt;:=&lt;/span&gt; &lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;range&lt;/span&gt; deploy.Spec.Template.Spec.Containers {
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;		&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;if&lt;/span&gt; c.Name &lt;span style=&#34;color:#666&#34;&gt;==&lt;/span&gt; &lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;&amp;#34;&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;			&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;return&lt;/span&gt; fmt.&lt;span style=&#34;color:#00a000&#34;&gt;Errorf&lt;/span&gt;(&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;container %d has no name&amp;#34;&lt;/span&gt;, i)
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;		}
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;		&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;if&lt;/span&gt; c.SecurityContext &lt;span style=&#34;color:#666&#34;&gt;==&lt;/span&gt; &lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;nil&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;			errs = &lt;span style=&#34;color:#a2f&#34;&gt;append&lt;/span&gt;(errs, fmt.&lt;span style=&#34;color:#00a000&#34;&gt;Errorf&lt;/span&gt;(&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;container %q does not have SecurityContext&amp;#34;&lt;/span&gt;, c.Name))
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;		}
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;		&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;if&lt;/span&gt; c.SecurityContext.RunAsNonRoot &lt;span style=&#34;color:#666&#34;&gt;==&lt;/span&gt; &lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;nil&lt;/span&gt; &lt;span style=&#34;color:#666&#34;&gt;||&lt;/span&gt; !&lt;span style=&#34;color:#666&#34;&gt;*&lt;/span&gt;c.SecurityContext.RunAsNonRoot {
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;			errs = &lt;span style=&#34;color:#a2f&#34;&gt;append&lt;/span&gt;(errs, fmt.&lt;span style=&#34;color:#00a000&#34;&gt;Errorf&lt;/span&gt;(&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;container %q must set RunAsNonRoot to true in its SecurityContext&amp;#34;&lt;/span&gt;, c.Name))
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;		}
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;		&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;if&lt;/span&gt; c.SecurityContext.ReadOnlyRootFilesystem &lt;span style=&#34;color:#666&#34;&gt;==&lt;/span&gt; &lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;nil&lt;/span&gt; &lt;span style=&#34;color:#666&#34;&gt;||&lt;/span&gt; !&lt;span style=&#34;color:#666&#34;&gt;*&lt;/span&gt;c.SecurityContext.ReadOnlyRootFilesystem {
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;			errs = &lt;span style=&#34;color:#a2f&#34;&gt;append&lt;/span&gt;(errs, fmt.&lt;span style=&#34;color:#00a000&#34;&gt;Errorf&lt;/span&gt;(&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;container %q must set ReadOnlyRootFilesystem to true in its SecurityContext&amp;#34;&lt;/span&gt;, c.Name))
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;		}
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;		&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;if&lt;/span&gt; c.SecurityContext.AllowPrivilegeEscalation &lt;span style=&#34;color:#666&#34;&gt;!=&lt;/span&gt; &lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;nil&lt;/span&gt; &lt;span style=&#34;color:#666&#34;&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span style=&#34;color:#666&#34;&gt;*&lt;/span&gt;c.SecurityContext.AllowPrivilegeEscalation {
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;			errs = &lt;span style=&#34;color:#a2f&#34;&gt;append&lt;/span&gt;(errs, fmt.&lt;span style=&#34;color:#00a000&#34;&gt;Errorf&lt;/span&gt;(&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;container %q must NOT set AllowPrivilegeEscalation to true in its SecurityContext&amp;#34;&lt;/span&gt;, c.Name))
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;		}
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;		&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;if&lt;/span&gt; c.SecurityContext.Privileged &lt;span style=&#34;color:#666&#34;&gt;!=&lt;/span&gt; &lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;nil&lt;/span&gt; &lt;span style=&#34;color:#666&#34;&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span style=&#34;color:#666&#34;&gt;*&lt;/span&gt;c.SecurityContext.Privileged {
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;			errs = &lt;span style=&#34;color:#a2f&#34;&gt;append&lt;/span&gt;(errs, fmt.&lt;span style=&#34;color:#00a000&#34;&gt;Errorf&lt;/span&gt;(&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;container %q must NOT set Privileged to true in its SecurityContext&amp;#34;&lt;/span&gt;, c.Name))
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;		}
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;	}
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;	&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;return&lt;/span&gt; errors.&lt;span style=&#34;color:#00a000&#34;&gt;NewAggregate&lt;/span&gt;(errs)
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;}
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
Check out [What are admission webhooks?](/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-admission-webhooks)
Or, see the [full code](webhook.go) of this webhook to follow along with this walkthrough. 

## The policy
Now let&#39;s try to recreate the validation faithfully with a ValidatingAdmissionPolicy.
--&gt;
&lt;p&gt;查阅&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-admission-webhooks&#34;&gt;什么是准入 Webhook？&lt;/a&gt;，
或者查看这个 Webhook 的&lt;a href=&#34;webhook.go&#34;&gt;完整代码&lt;/a&gt;以便更好地理解下述演示。&lt;/p&gt;
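&lt;p&gt;（编者补充的示意代码，非博客原文）下面用本地定义的简化结构体模拟上述校验逻辑；其中的类型与字段均为演示假设，并非 k8s 官方 API 类型。要点是：当 &lt;code&gt;SecurityContext&lt;/code&gt; 为 nil 时必须跳过后续检查，否则解引用会触发空指针 panic：&lt;/p&gt;

```go
package main

import (
	"errors"
	"fmt"
)

// 简化的演示类型（假设，仅为说明校验逻辑，非 appsv1 官方类型）
type SecurityContext struct {
	RunAsNonRoot *bool
}

type Container struct {
	Name            string
	SecurityContext *SecurityContext
}

// verify 模拟 verifyDeployment 对容器列表的检查
func verify(containers []Container) error {
	var errs []error
	for _, c := range containers {
		if c.SecurityContext == nil {
			errs = append(errs, fmt.Errorf("container %q does not have SecurityContext", c.Name))
			// 必须跳过后续检查，避免对 nil 的 SecurityContext 解引用
			continue
		}
		if c.SecurityContext.RunAsNonRoot == nil || !*c.SecurityContext.RunAsNonRoot {
			errs = append(errs, fmt.Errorf("container %q must set RunAsNonRoot to true", c.Name))
		}
	}
	// errors.Join（Go 1.20+）在 errs 为空时返回 nil
	return errors.Join(errs...)
}

func main() {
	t := true
	fmt.Println(verify([]Container{
		{Name: "a"}, // 缺少 SecurityContext，应报错
		{Name: "b", SecurityContext: &SecurityContext{RunAsNonRoot: &t}},
	}))
}
```

&lt;p&gt;博客原文使用 k8s.io/apimachinery 的 &lt;code&gt;errors.NewAggregate&lt;/code&gt; 聚合错误，这里为保持示例自包含改用标准库 &lt;code&gt;errors.Join&lt;/code&gt;。&lt;/p&gt;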
&lt;h2 id=&#34;策略&#34;&gt;策略&lt;/h2&gt;
&lt;p&gt;现在，让我们尝试使用 ValidatingAdmissionPolicy 来忠实地重新创建验证。&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;admissionregistration.k8s.io/v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;ValidatingAdmissionPolicy&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;pod-security.policy.example.com&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;failurePolicy&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Fail&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;matchConstraints&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;resourceRules&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiGroups&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;   &lt;/span&gt;[&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;apps&amp;#34;&lt;/span&gt;]&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersions&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;[&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;v1&amp;#34;&lt;/span&gt;]&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;operations&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;[&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;CREATE&amp;#34;&lt;/span&gt;,&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;UPDATE&amp;#34;&lt;/span&gt;]&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;resources&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;   &lt;/span&gt;[&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;deployments&amp;#34;&lt;/span&gt;]&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;validations&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;object.spec.template.spec.containers.all(c, has(c.securityContext) &amp;amp;&amp;amp; has(c.securityContext.runAsNonRoot) &amp;amp;&amp;amp; c.securityContext.runAsNonRoot)&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;message&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;all containers must set runAsNonRoot to true&amp;#39;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;object.spec.template.spec.containers.all(c, has(c.securityContext) &amp;amp;&amp;amp; has(c.securityContext.readOnlyRootFilesystem) &amp;amp;&amp;amp; c.securityContext.readOnlyRootFilesystem)&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;message&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;all containers must set readOnlyRootFilesystem to true&amp;#39;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;object.spec.template.spec.containers.all(c, !has(c.securityContext) || !has(c.securityContext.allowPrivilegeEscalation) || !c.securityContext.allowPrivilegeEscalation)&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;message&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;all containers must NOT set allowPrivilegeEscalation to true&amp;#39;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;object.spec.template.spec.containers.all(c, !has(c.securityContext) || !has(c.securityContext.Privileged) || !c.securityContext.Privileged)&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;message&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;all containers must NOT set privileged to true&amp;#39;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
Create the policy with `kubectl`. Great, no complaints so far. But let&#39;s get the policy object back and take a look at its status.
--&gt;
&lt;p&gt;使用 &lt;code&gt;kubectl&lt;/code&gt; 创建策略。很好，到目前为止没有任何问题。接下来我们获取该策略对象并查看其状态。&lt;/p&gt;
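&lt;p&gt;（编者补充，示例命令）可将上述清单保存为 policy.yaml（文件名为演示假设）后创建策略：&lt;/p&gt;

```shell
# 文件名 policy.yaml 为示例假设
kubectl create -f policy.yaml
```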
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;kubectl get -oyaml validatingadmissionpolicies/pod-security.policy.example.com
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;status&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;typeChecking&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expressionWarnings&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;fieldRef&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;spec.validations[3].expression&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;warning&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;|&lt;span style=&#34;color:#b44;font-style:italic&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44;font-style:italic&#34;&gt;          apps/v1, Kind=Deployment: ERROR: &amp;lt;input&amp;gt;:1:76: undefined field &amp;#39;Privileged&amp;#39;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44;font-style:italic&#34;&gt;           | object.spec.template.spec.containers.all(c, !has(c.securityContext) || !has(c.securityContext.Privileged) || !c.securityContext.Privileged)
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44;font-style:italic&#34;&gt;           | ...........................................................................^
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44;font-style:italic&#34;&gt;          ERROR: &amp;lt;input&amp;gt;:1:128: undefined field &amp;#39;Privileged&amp;#39;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44;font-style:italic&#34;&gt;           | object.spec.template.spec.containers.all(c, !has(c.securityContext) || !has(c.securityContext.Privileged) || !c.securityContext.Privileged)
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44;font-style:italic&#34;&gt;           | ...............................................................................................................................^&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;          
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
The policy was checked against its matched type, which is `apps/v1.Deployment`.
Looking at the `fieldRef`, the problem was with the expression at index 3 (indices start at 0).
The expression in question accessed an undefined `Privileged` field.
Ahh, looks like it was a copy-and-paste error. The field name should be in lowercase.
--&gt;
&lt;p&gt;系统根据所匹配的类型 &lt;code&gt;apps/v1.Deployment&lt;/code&gt; 对策略执行了检查。
查看 &lt;code&gt;fieldRef&lt;/code&gt; 可知，问题出在索引为 3 的表达式上（索引从 0 开始）。
有问题的表达式访问了一个未定义的 &lt;code&gt;Privileged&lt;/code&gt; 字段。
噢，看起来是一个复制粘贴错误。字段名应该是小写的。&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;admissionregistration.k8s.io/v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;ValidatingAdmissionPolicy&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;pod-security.policy.example.com&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;failurePolicy&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Fail&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;matchConstraints&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;resourceRules&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiGroups&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;   &lt;/span&gt;[&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;apps&amp;#34;&lt;/span&gt;]&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersions&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;[&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;v1&amp;#34;&lt;/span&gt;]&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;operations&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;[&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;CREATE&amp;#34;&lt;/span&gt;,&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;UPDATE&amp;#34;&lt;/span&gt;]&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;resources&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;   &lt;/span&gt;[&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;deployments&amp;#34;&lt;/span&gt;]&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;validations&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;object.spec.template.spec.containers.all(c, has(c.securityContext) &amp;amp;&amp;amp; has(c.securityContext.runAsNonRoot) &amp;amp;&amp;amp; c.securityContext.runAsNonRoot)&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;message&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;all containers must set runAsNonRoot to true&amp;#39;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;object.spec.template.spec.containers.all(c, has(c.securityContext) &amp;amp;&amp;amp; has(c.securityContext.readOnlyRootFilesystem) &amp;amp;&amp;amp; c.securityContext.readOnlyRootFilesystem)&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;message&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;all containers must set readOnlyRootFilesystem to true&amp;#39;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;object.spec.template.spec.containers.all(c, !has(c.securityContext) || !has(c.securityContext.allowPrivilegeEscalation) || !c.securityContext.allowPrivilegeEscalation)&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;message&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;all containers must NOT set allowPrivilegeEscalation to true&amp;#39;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;object.spec.template.spec.containers.all(c, !has(c.securityContext) || !has(c.securityContext.privileged) || !c.securityContext.privileged)&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;message&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;all containers must NOT set privileged to true&amp;#39;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
Check its status again, and you should see all warnings cleared.

Next, let&#39;s create a namespace for our tests.
--&gt;
&lt;p&gt;再次检查状态，你应该看到所有警告都已被清除。&lt;/p&gt;
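&lt;p&gt;（编者补充，示例命令）可以用 jsonpath 只查看类型检查相关的状态字段：&lt;/p&gt;

```shell
# 仅输出 status.typeChecking（警告清除后其中不应再出现 expressionWarnings）
kubectl get validatingadmissionpolicy pod-security.policy.example.com \
  -o jsonpath='{.status.typeChecking}'
```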
&lt;p&gt;接下来，我们创建一个命名空间进行测试。&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;kubectl create namespace policy-test
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
Then, I bind the policy to the namespace. But at this point, I set the action to `Warn`
so that the policy prints out [warnings](/blog/2020/09/03/warnings/) instead of rejecting the requests.
This is especially useful to collect results from all expressions during development and automated testing.
--&gt;
&lt;p&gt;接下来，我将策略绑定到命名空间。但此时我将动作设置为 &lt;code&gt;Warn&lt;/code&gt;，
这样此策略将打印出&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2020/09/03/warnings/&#34;&gt;警告&lt;/a&gt;而不是拒绝请求。
这对于在开发和自动化测试期间收集所有表达式的结果非常有用。&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;admissionregistration.k8s.io/v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;ValidatingAdmissionPolicyBinding&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;pod-security.policy-binding.example.com&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;policyName&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;pod-security.policy.example.com&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;validationActions&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;[&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;Warn&amp;#34;&lt;/span&gt;]&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;matchResources&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;namespaceSelector&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;matchLabels&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;kubernetes.io/metadata.name&amp;#34;: &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;policy-test&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
Test out the policy enforcement.
--&gt;
&lt;p&gt;测试一下策略的执行效果。&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;kubectl create -n policy-test -f- &lt;span style=&#34;color:#b44&#34;&gt;&amp;lt;&amp;lt;EOF
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;apiVersion: apps/v1
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;kind: Deployment
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;metadata:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;  labels:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;    app: nginx
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;  name: nginx
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;spec:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;  selector:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;    matchLabels:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;      app: nginx
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;  template:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;    metadata:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;      labels:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;        app: nginx
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;    spec:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;      containers:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;      - image: nginx
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;        name: nginx
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;        securityContext:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;          privileged: true
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;          allowPrivilegeEscalation: true
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;EOF&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-text&#34; data-lang=&#34;text&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;Warning: Validation failed for ValidatingAdmissionPolicy &amp;#39;pod-security.policy.example.com&amp;#39; with binding &amp;#39;pod-security.policy-binding.example.com&amp;#39;: all containers must set runAsNonRoot to true
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;Warning: Validation failed for ValidatingAdmissionPolicy &amp;#39;pod-security.policy.example.com&amp;#39; with binding &amp;#39;pod-security.policy-binding.example.com&amp;#39;: all containers must set readOnlyRootFilesystem to true
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;Warning: Validation failed for ValidatingAdmissionPolicy &amp;#39;pod-security.policy.example.com&amp;#39; with binding &amp;#39;pod-security.policy-binding.example.com&amp;#39;: all containers must NOT set allowPrivilegeEscalation to true
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;Warning: Validation failed for ValidatingAdmissionPolicy &amp;#39;pod-security.policy.example.com&amp;#39; with binding &amp;#39;pod-security.policy-binding.example.com&amp;#39;: all containers must NOT set privileged to true
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;Error from server: error when creating &amp;#34;STDIN&amp;#34;: admission webhook &amp;#34;webhook.example.com&amp;#34; denied the request: [container &amp;#34;nginx&amp;#34; must set RunAsNonRoot to true in its SecurityContext, container &amp;#34;nginx&amp;#34; must set ReadOnlyRootFilesystem to true in its SecurityContext, container &amp;#34;nginx&amp;#34; must NOT set AllowPrivilegeEscalation to true in its SecurityContext, container &amp;#34;nginx&amp;#34; must NOT set Privileged to true in its SecurityContext]
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
Looks great! The policy and the webhook give equivalent results.
After a few other cases, when we are confident with our policy, maybe it is time to do some cleanup.

- For every expression, we repeat access to `object.spec.template.spec.containers` and to each `securityContext`;
- There is a pattern of checking presence of a field and then accessing it, which looks a bit verbose.
--&gt;
&lt;p&gt;看起来很不错！策略和 Webhook 给出了等效的结果。
在测试过其他几种情形、对策略建立起信心之后，也许就该做一些清理工作了。&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;对于每个表达式，我们重复访问 &lt;code&gt;object.spec.template.spec.containers&lt;/code&gt; 和每个 &lt;code&gt;securityContext&lt;/code&gt;；&lt;/li&gt;
&lt;li&gt;存在一种先检查某字段是否存在、再访问该字段的模式，这种写法看起来有些繁琐。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
Fortunately, since Kubernetes 1.28, we have new solutions for both issues.
Variable Composition allows us to extract repeated sub-expressions into their own variables.
Kubernetes enables [the optional library](https://github.com/google/cel-spec/wiki/proposal-246) for CEL, which are excellent to work with fields that are, you guessed it, optional. 

With both features in mind, let&#39;s refactor the policy a bit.
--&gt;
&lt;p&gt;幸运的是，自 Kubernetes 1.28 以来，我们对这两个问题都有了新的解决方案。
变量组合（Variable Composition）允许我们将重复的子表达式提取到单独的变量中。
Kubernetes 为 CEL 启用了&lt;a href=&#34;https://github.com/google/cel-spec/wiki/proposal-246&#34;&gt;可选库（optional library）&lt;/a&gt;，
该库非常适合用来处理那些（你猜对了）可选的字段。&lt;/p&gt;
&lt;p&gt;在了解了这两个特性后，让我们稍微重构一下此策略。&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;admissionregistration.k8s.io/v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;ValidatingAdmissionPolicy&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;pod-security.policy.example.com&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;failurePolicy&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Fail&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;matchConstraints&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;resourceRules&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiGroups&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;   &lt;/span&gt;[&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;apps&amp;#34;&lt;/span&gt;]&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersions&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;[&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;v1&amp;#34;&lt;/span&gt;]&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;operations&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;[&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;CREATE&amp;#34;&lt;/span&gt;,&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;UPDATE&amp;#34;&lt;/span&gt;]&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;resources&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;   &lt;/span&gt;[&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;deployments&amp;#34;&lt;/span&gt;]&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;variables&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;containers&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;object.spec.template.spec.containers&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;securityContexts&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;variables.containers.map(c, c.?securityContext)&amp;#39;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;validations&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;variables.securityContexts.all(c, c.?runAsNonRoot == optional.of(true))&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;message&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;all containers must set runAsNonRoot to true&amp;#39;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;variables.securityContexts.all(c, c.?readOnlyRootFilesystem == optional.of(true))&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;message&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;all containers must set readOnlyRootFilesystem to true&amp;#39;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;variables.securityContexts.all(c, c.?allowPrivilegeEscalation != optional.of(true))&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;message&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;all containers must NOT set allowPrivilegeEscalation to true&amp;#39;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;expression&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;variables.securityContexts.all(c, c.?privileged != optional.of(true))&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;message&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;all containers must NOT set privileged to true&amp;#39;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
The policy is now much cleaner and more readable. Update the policy, and you should see
it function the same as before.

Now let&#39;s change the policy binding from warning to actually denying requests that fail validation.
--&gt;
&lt;p&gt;策略现在更简洁、更易读。更新策略后，你应该会看到它的行为与之前完全相同。&lt;/p&gt;
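&lt;p&gt;为便于理解 &lt;code&gt;c.?runAsNonRoot == optional.of(true)&lt;/code&gt; 这类可选链表达式的语义，下面用一段 Python 粗略地模拟其行为。这只是一个帮助理解的示意，并非 CEL 的实际实现，其中的函数名均为演示而虚构：&lt;/p&gt;

```python
# 示意：模拟 CEL 可选链（c.?field）的求值语义。
# 字段缺失时得到 None（对应 CEL 的 optional.none()），
# 它与 optional.of(true) 的比较必然不相等，因此校验不通过。

def opt_get(obj, key):
    # 对应 CEL 的 obj.?key：obj 为空或字段缺失时返回 None
    return obj.get(key) if obj is not None else None

def all_run_as_non_root(containers):
    # 对应 variables.securityContexts.all(c, c.?runAsNonRoot == optional.of(true))
    return all(
        opt_get(opt_get(c, 'securityContext'), 'runAsNonRoot') is True
        for c in containers
    )

# 缺失 runAsNonRoot 的容器会导致整体校验失败
print(all_run_as_non_root([{'name': 'nginx', 'securityContext': {'privileged': True}}]))
```

&lt;p&gt;注意，字段缺失与显式设置为 &lt;code&gt;false&lt;/code&gt; 在这里同样会被拒绝，这正是该策略所期望的行为。&lt;/p&gt;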
&lt;p&gt;现在让我们将策略绑定从警告更改为实际拒绝验证失败的请求。&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;admissionregistration.k8s.io/v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;ValidatingAdmissionPolicyBinding&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;pod-security.policy-binding.example.com&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;policyName&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;pod-security.policy.example.com&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;validationActions&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;[&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;Deny&amp;#34;&lt;/span&gt;]&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;matchResources&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;namespaceSelector&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;matchLabels&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;kubernetes.io/metadata.name&amp;#34;: &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;policy-test&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
And finally, remove the webhook. Now the result should include only messages from 
the policy.
--&gt;
&lt;p&gt;最后，移除 Webhook。现在结果应该只包含来自策略的消息。&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;kubectl create -n policy-test -f- &lt;span style=&#34;color:#b44&#34;&gt;&amp;lt;&amp;lt;EOF
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;apiVersion: apps/v1
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;kind: Deployment
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;metadata:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;  labels:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;    app: nginx
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;  name: nginx
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;spec:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;  selector:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;    matchLabels:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;      app: nginx
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;  template:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;    metadata:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;      labels:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;        app: nginx
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;    spec:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;      containers:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;      - image: nginx
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;        name: nginx
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;        securityContext:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;          privileged: true
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;          allowPrivilegeEscalation: true
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;EOF&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-text&#34; data-lang=&#34;text&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;The deployments &amp;#34;nginx&amp;#34; is invalid: : ValidatingAdmissionPolicy &amp;#39;pod-security.policy.example.com&amp;#39; with binding &amp;#39;pod-security.policy-binding.example.com&amp;#39; denied request: all containers must set runAsNonRoot to true
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
Please notice that, by design, the policy will stop evaluation after the first expression that causes the request to be denied.
This is different from what happens when the expressions generate only warnings.
--&gt;
&lt;p&gt;请注意，按照设计，策略会在第一个导致请求被拒绝的表达式处停止评估，
不再继续处理后续表达式。这与表达式只产生警告时的行为不同。&lt;/p&gt;
&lt;!--
## Set up monitoring
Unlike a webhook, a policy is not a dedicated process that can expose its own metrics.
Instead, you can use metrics from the API server in their place.

Here are some examples in Prometheus Query Language of common monitoring tasks.

To find the 95th percentile execution duration of the policy shown above.
--&gt;
&lt;h2 id=&#34;设置监控&#34;&gt;设置监控&lt;/h2&gt;
&lt;p&gt;与 Webhook 不同，策略不是一个可以公开其自身指标的专用进程。
相反，你可以使用源自 API 服务器的指标来代替。&lt;/p&gt;
&lt;p&gt;以下是使用 Prometheus 查询语言执行一些常见监控任务的示例。&lt;/p&gt;
&lt;p&gt;查询上述策略执行时长的第 95 百分位数：&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-text&#34; data-lang=&#34;text&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;histogram_quantile(0.95, sum(rate(apiserver_validating_admission_policy_check_duration_seconds_bucket{policy=&amp;#34;pod-security.policy.example.com&amp;#34;}[5m])) by (le)) 
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
To find the rate of the policy evaluation.
--&gt;
&lt;p&gt;查询策略被评估的速率：&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-text&#34; data-lang=&#34;text&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;rate(apiserver_validating_admission_policy_check_total{policy=&amp;#34;pod-security.policy.example.com&amp;#34;}[5m])
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
You can read [the metrics reference](/docs/reference/instrumentation/metrics/) to learn more about the metrics above.
The metrics of ValidatingAdmissionPolicy are currently in alpha,
and more and better metrics will come while the stability graduates in the future release.
--&gt;
&lt;p&gt;你可以阅读&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/instrumentation/metrics/&#34;&gt;指标参考&lt;/a&gt;了解有关上述指标的更多信息。
ValidatingAdmissionPolicy 的指标目前处于 Alpha 阶段；随着该特性在未来版本中逐步走向稳定，将会提供更多、更好的指标。&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes 1.30：只读卷挂载终于可以真正实现只读了</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2024/04/23/recursive-read-only-mounts/</link>
      <pubDate>Tue, 23 Apr 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2024/04/23/recursive-read-only-mounts/</guid>
      <description>
        
        
        &lt;p&gt;&lt;strong&gt;作者:&lt;/strong&gt; Akihiro Suda (NTT)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者:&lt;/strong&gt; Xin Li (DaoCloud)&lt;/p&gt;
&lt;!--
layout: blog
title: &#39;Kubernetes 1.30: Read-only volume mounts can be finally literally read-only&#39;
date: 2024-04-23
slug: recursive-read-only-mounts
author: &gt;
  Akihiro Suda (NTT)
--&gt;
&lt;!--
Read-only volume mounts have been a feature of Kubernetes since the beginning.
Surprisingly, read-only mounts are not completely read-only under certain conditions on Linux.
As of the v1.30 release, they can be made completely read-only,
with alpha support for _recursive read-only mounts_.
--&gt;
&lt;p&gt;只读卷挂载从一开始就是 Kubernetes 的一个特性。
令人惊讶的是，在 Linux 上的某些条件下，只读挂载并不是完全只读的。
从 v1.30 版本开始，这类卷挂载可以被处理为完全只读；v1.30 为&lt;strong&gt;递归只读挂载&lt;/strong&gt;提供 Alpha 支持。&lt;/p&gt;
&lt;!--
## Read-only volume mounts are not really read-only by default

Volume mounts can be deceptively complicated.

You might expect that the following manifest makes everything under `/mnt` in the containers read-only:
--&gt;
&lt;h2 id=&#34;默认情况下-只读卷装载并不是真正的只读&#34;&gt;默认情况下，只读卷挂载并不是真正的只读&lt;/h2&gt;
&lt;p&gt;卷挂载可能比表面看起来要复杂得多。&lt;/p&gt;
&lt;p&gt;你可能期望以下清单使容器中 &lt;code&gt;/mnt&lt;/code&gt; 下的所有内容变为只读：&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#00f;font-weight:bold&#34;&gt;---&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Pod&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;volumes&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;mnt&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;hostPath&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;path&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;/mnt&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;containers&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;volumeMounts&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;mnt&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;          &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;mountPath&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;/mnt&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;          &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;readOnly&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;true&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
However, any sub-mounts beneath `/mnt` may still be writable!
For example, consider that `/mnt/my-nfs-server` is writeable on the host.
Inside the container, writes to `/mnt/*` will be rejected but `/mnt/my-nfs-server/*` will still be writeable.
--&gt;
&lt;p&gt;但是，&lt;code&gt;/mnt&lt;/code&gt; 下的任何子挂载可能仍然是可写的！
例如，假设 &lt;code&gt;/mnt/my-nfs-server&lt;/code&gt; 在主机上是可写的。
在容器内部，写入 &lt;code&gt;/mnt/*&lt;/code&gt; 将被拒绝，但 &lt;code&gt;/mnt/my-nfs-server/*&lt;/code&gt; 仍然可写。&lt;/p&gt;
&lt;!--
## New mount option: recursiveReadOnly

Kubernetes 1.30 added a new mount option `recursiveReadOnly` so as to make submounts recursively read-only.

The option can be enabled as follows:
--&gt;
&lt;h2 id=&#34;新的挂载选项-递归只读&#34;&gt;新的挂载选项：递归只读&lt;/h2&gt;
&lt;p&gt;Kubernetes 1.30 添加了一个新的挂载选项 &lt;code&gt;recursiveReadOnly&lt;/code&gt;，以使子挂载递归只读。&lt;/p&gt;
&lt;p&gt;可以按如下方式启用该选项：&lt;/p&gt;
&lt;!--
# Possible values are `Enabled`, `IfPossible`, and `Disabled`.
# Needs to be specified in conjunction with `readOnly: true`.
--&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;display:grid;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#00f;font-weight:bold&#34;&gt;---&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Pod&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;volumes&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;mnt&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;hostPath&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;path&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;/mnt&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;containers&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;volumeMounts&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;mnt&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;          &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;mountPath&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;/mnt&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;          &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;readOnly&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;true&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex; background-color:#dfdfdf&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;          &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# NEW&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex; background-color:#dfdfdf&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;          &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# 可能的值为 `Enabled`、`IfPossible` 和 `Disabled`。&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex; background-color:#dfdfdf&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;          &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# 需要与 `readOnly: true` 一起指定。&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex; background-color:#dfdfdf&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;          &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;recursiveReadOnly&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Enabled&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;!--
This is implemented by applying the `MOUNT_ATTR_RDONLY` attribute with the `AT_RECURSIVE` flag
using [`mount_setattr(2)`](https://man7.org/linux/man-pages/man2/mount_setattr.2.html) added in
Linux kernel v5.12.

For backwards compatibility, the `recursiveReadOnly` field is not a replacement for `readOnly`,
but is used _in conjunction_ with it.
To get a properly recursive read-only mount, you must set both fields.
--&gt;
&lt;p&gt;这是通过使用 Linux 内核 v5.12 中添加的
&lt;a href=&#34;https://man7.org/linux/man-pages/man2/mount_setattr.2.html&#34;&gt;&lt;code&gt;mount_setattr(2)&lt;/code&gt;&lt;/a&gt;
应用带有 &lt;code&gt;AT_RECURSIVE&lt;/code&gt; 标志的 &lt;code&gt;MOUNT_ATTR_RDONLY&lt;/code&gt; 属性来实现的。&lt;/p&gt;
&lt;p&gt;为了向后兼容，&lt;code&gt;recursiveReadOnly&lt;/code&gt; 字段不是 &lt;code&gt;readOnly&lt;/code&gt; 的替代品，而是与其结合使用。
要获得正确的递归只读挂载，你必须设置这两个字段。&lt;/p&gt;
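&lt;p&gt;作为补充，挂载生效后，可以在 Pod 状态中查看每个卷挂载实际生效的递归只读状态
（以下为示意性片段，字段细节请以正式文档为准）：&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;# 示意：Pod 的 status.containerStatuses 中报告的卷挂载状态
status:
  containerStatuses:
  - volumeMounts:
    - name: mnt
      mountPath: /mnt
      readOnly: true
      recursiveReadOnly: Enabled
&lt;/code&gt;&lt;/pre&gt;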
&lt;!--
## Feature availability {#availability}

To enable `recursiveReadOnly` mounts, the following components have to be used:
--&gt;
&lt;h2 id=&#34;availability&#34;&gt;特性可用性&lt;/h2&gt;
&lt;p&gt;要启用 &lt;code&gt;recursiveReadOnly&lt;/code&gt; 挂载，必须使用以下组件：&lt;/p&gt;
&lt;!--
* Kubernetes: v1.30 or later, with the `RecursiveReadOnlyMounts`
  [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) enabled.
  As of v1.30, the gate is marked as alpha.

* CRI runtime:
  * containerd: v2.0 or later

* OCI runtime:
  * runc: v1.1 or later
  * crun: v1.8.6 or later

* Linux kernel: v5.12 or later
--&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Kubernetes：v1.30 或更新版本，并启用 &lt;code&gt;RecursiveReadOnlyMounts&lt;/code&gt; &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/reference/command-line-tools-reference/feature-gates/&#34;&gt;特性门控&lt;/a&gt;。
从 v1.30 开始，此特性被标记为 Alpha。&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;CRI 运行时：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;containerd：v2.0 或更新版本&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;OCI 运行时：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;runc：v1.1 或更新版本&lt;/li&gt;
&lt;li&gt;crun：v1.8.6 或更新版本&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Linux 内核：v5.12 或更新版本&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
## What&#39;s next?

Kubernetes SIG Node hope - and expect - that the feature will be promoted to beta and eventually
general availability (GA) in future releases of Kubernetes, so that users no longer need to enable
the feature gate manually.

The default value of `recursiveReadOnly` will still remain `Disabled`, for backwards compatibility.
--&gt;
&lt;h2 id=&#34;接下来&#34;&gt;接下来&lt;/h2&gt;
&lt;p&gt;Kubernetes SIG Node 希望并期望该特性将在 Kubernetes
的未来版本中升级为 Beta 版本并最终稳定可用（GA），以便用户不再需要手动启用此特性门控。&lt;/p&gt;
&lt;p&gt;为了向后兼容，&lt;code&gt;recursiveReadOnly&lt;/code&gt; 的默认值仍将保持 &lt;code&gt;Disabled&lt;/code&gt;。&lt;/p&gt;
&lt;!--
## How can I learn more?
--&gt;
&lt;h2 id=&#34;怎样才能了解更多&#34;&gt;怎样才能了解更多？&lt;/h2&gt;
&lt;!-- https://github.com/kubernetes/website/pull/45159 --&gt;
&lt;!--
Please check out the [documentation](/docs/concepts/storage/volumes/#read-only-mounts)
for the further details of `recursiveReadOnly` mounts.
--&gt;
&lt;p&gt;请查看&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/concepts/storage/volumes/#read-only-mounts&#34;&gt;文档&lt;/a&gt;，
进一步了解 &lt;code&gt;recursiveReadOnly&lt;/code&gt; 挂载的更多细节。&lt;/p&gt;
&lt;!--
## How to get involved?

This feature is driven by the SIG Node community. Please join us to connect with
the community and share your ideas and feedback around the above feature and
beyond. We look forward to hearing from you!
--&gt;
&lt;h2 id=&#34;如何参与&#34;&gt;如何参与？&lt;/h2&gt;
&lt;p&gt;此特性由 SIG Node 社区推动。
请加入我们，与社区建立联系，并分享你对上述特性及其他特性的想法和反馈。
我们期待你的回音！&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes 1.30：对 Pod 使用用户命名空间的支持进阶至 Beta</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2024/04/22/userns-beta/</link>
      <pubDate>Mon, 22 Apr 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2024/04/22/userns-beta/</guid>
      <description>
        
        
        &lt;!--
layout: blog
title: &#34;Kubernetes 1.30: Beta Support For Pods With User Namespaces&#34;
date: 2024-04-22
slug: userns-beta
author: &gt;
  Rodrigo Campos Catelin (Microsoft),
  Giuseppe Scrivano (Red Hat),
  Sascha Grunert (Red Hat)
--&gt;
&lt;!--
Linux provides different namespaces to isolate processes from each other. For
example, a typical Kubernetes pod runs within a network namespace to isolate the
network identity and a PID namespace to isolate the processes.

One Linux namespace that was left behind is the [user
namespace](https://man7.org/linux/man-pages/man7/user_namespaces.7.html). This
namespace allows us to isolate the user and group identifiers (UIDs and GIDs) we
use inside the container from the ones on the host.
--&gt;
&lt;p&gt;Linux 提供了不同的命名空间来将进程彼此隔离。
例如，一个典型的 Kubernetes Pod 运行在网络命名空间中以隔离网络身份，并运行在 PID 命名空间中以隔离进程。&lt;/p&gt;
&lt;p&gt;Linux 有一个以前一直未被容器化应用所支持的命名空间是&lt;a href=&#34;https://man7.org/linux/man-pages/man7/user_namespaces.7.html&#34;&gt;用户命名空间&lt;/a&gt;。
这个命名空间允许我们将容器内使用的用户标识符和组标识符（UID 和 GID）与主机上的标识符隔离开来。&lt;/p&gt;
&lt;!--
This is a powerful abstraction that allows us to run containers as &#34;root&#34;: we
are root inside the container and can do everything root can inside the pod,
but our interactions with the host are limited to what a non-privileged user can
do. This is great for limiting the impact of a container breakout.
--&gt;
&lt;p&gt;这是一个强大的抽象，允许我们以 “root” 身份运行容器：
我们在容器内部有 root 权限，可以在 Pod 内执行所有 root 能做的操作，
但我们与主机的交互仅限于非特权用户可以执行的操作。这对于限制容器逃逸的影响非常有用。&lt;/p&gt;
&lt;!--
A container breakout is when a process inside a container can break out
onto the host using some unpatched vulnerability in the container runtime or the
kernel and can access/modify files on the host or other containers. If we
run our pods with user namespaces, the privileges the container has over the
rest of the host are reduced, and the files outside the container it can access
are limited too.
--&gt;
&lt;p&gt;容器逃逸是指容器内的进程利用容器运行时或内核中的某些未打补丁的漏洞逃逸到主机上，
并可以访问/修改主机或其他容器上的文件。如果我们在用户命名空间中运行 Pod，
容器对主机其余部分的特权将会减少，它所能访问的容器外文件也会受到限制。&lt;/p&gt;
&lt;!--
In Kubernetes v1.25, we introduced support for user namespaces only for stateless
pods. Kubernetes 1.28 lifted that restriction, and now, with Kubernetes 1.30, we
are moving to beta!
--&gt;
&lt;p&gt;在 Kubernetes v1.25 中，我们仅为无状态 Pod 引入了对用户命名空间的支持。
Kubernetes 1.28 取消了这一限制，目前在 Kubernetes 1.30 中，这个特性进阶到了 Beta！&lt;/p&gt;
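&lt;p&gt;在 Pod 层面，此特性通过 Pod 规约中的 &lt;code&gt;hostUsers&lt;/code&gt; 字段启用：将其设为
&lt;code&gt;false&lt;/code&gt; 即表示为该 Pod 使用独立的用户命名空间（以下清单仅为示意，名称和镜像为示例）：&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;apiVersion: v1
kind: Pod
metadata:
  name: userns-demo          # 示例名称
spec:
  hostUsers: false           # false 表示为此 Pod 启用用户命名空间
  containers:
  - name: app
    image: registry.k8s.io/pause   # 示例镜像
&lt;/code&gt;&lt;/pre&gt;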
&lt;!--
## What is a user namespace?

Note: Linux user namespaces are a different concept from [Kubernetes
namespaces](/docs/concepts/overview/working-with-objects/namespaces/).
The former is a Linux kernel feature; the latter is a Kubernetes feature.
--&gt;
&lt;h2 id=&#34;what-is-a-user-namespace&#34;&gt;什么是用户命名空间？&lt;/h2&gt;
&lt;p&gt;注意：Linux 用户命名空间与
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/concepts/overview/working-with-objects/namespaces/&#34;&gt;Kubernetes 命名空间&lt;/a&gt;是不同的概念。
前者是一个 Linux 内核特性；后者是一个 Kubernetes 特性。&lt;/p&gt;
&lt;!--
User namespaces are a Linux feature that isolates the UIDs and GIDs of the
containers from the ones on the host. The identifiers in the container can be
mapped to identifiers on the host in a way where the host UID/GIDs used for
different containers never overlap. Furthermore, the identifiers can be mapped
to unprivileged, non-overlapping UIDs and GIDs on the host. This brings two key
benefits:
--&gt;
&lt;p&gt;用户命名空间是一个 Linux 特性，它将容器的 UID 和 GID 与主机上的隔离开来。
容器中的标识符可以被映射为主机上的标识符，并且保证不同容器所使用的主机 UID/GID 不会重叠。
此外，这些标识符可以被映射到主机上没有特权的、非重叠的 UID 和 GID。这带来了两个关键好处：&lt;/p&gt;
&lt;!--
* _Prevention of lateral movement_: As the UIDs and GIDs for different
containers are mapped to different UIDs and GIDs on the host, containers have a
harder time attacking each other, even if they escape the container boundaries.
For example, suppose container A runs with different UIDs and GIDs on the host
than container B. In that case, the operations it can do on container B&#39;s files and processes
are limited: only read/write what a file allows to others, as it will never
have permission owner or group permission (the UIDs/GIDs on the host are
guaranteed to be different for different containers).
--&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;防止横向移动&lt;/strong&gt;：由于不同容器的 UID 和 GID 被映射到主机上的不同 UID 和 GID，
即使容器逃出了自身的边界，容器之间也很难互相攻击。
例如，假设容器 A 在主机上使用的 UID 和 GID 与容器 B 不同。
在这种情况下，它对容器 B 的文件和进程能做的操作是有限的：只能按文件为“其他用户”所开放的权限进行读写，
因为它永远不会拥有文件所有者或属组的权限（主机上的 UID/GID 保证对不同容器是不同的）。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
* _Increased host isolation_: As the UIDs and GIDs are mapped to unprivileged
users on the host, if a container escapes the container boundaries, even if it
runs as root inside the container, it has no privileges on the host. This
greatly protects what host files it can read/write, which process it can send
signals to, etc. Furthermore, capabilities granted are only valid inside the
user namespace and not on the host, limiting the impact a container
escape can have.
--&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;增加主机隔离&lt;/strong&gt;：由于 UID 和 GID 被映射到主机上的非特权用户，如果某容器逃出了它的边界，
即使它在容器内部以 root 身份运行，它在主机上也没有特权。
这大大保护了它可以读取/写入的主机文件，它可以向哪个进程发送信号等。
此外，所授予的权能仅在用户命名空间内有效，而在主机上无效，这就限制了容器逃逸的影响。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--


&lt;figure&gt;
    &lt;img src=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/images/blog/2024-04-22-userns-beta/userns-ids.png&#34;
         alt=&#34;Image showing IDs 0-65535 are reserved to the host, pods use higher IDs&#34;/&gt; &lt;figcaption&gt;
            &lt;h4&gt;User namespace IDs allocation&lt;/h4&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;
--&gt;


&lt;figure&gt;
    &lt;img src=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/images/blog/2024-04-22-userns-beta/userns-ids.png&#34;
         alt=&#34;此图显示了 ID 0-65535 为主机预留，Pod 使用更大的 ID&#34;/&gt; &lt;figcaption&gt;
            &lt;h4&gt;用户命名空间 ID 分配&lt;/h4&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;
&lt;!--
Without using a user namespace, a container running as root in the case of a
container breakout has root privileges on the node. If some capabilities
were granted to the container, the capabilities are valid on the host too. None
of this is true when using user namespaces (modulo bugs, of course 🙂).
--&gt;
&lt;p&gt;如果不使用用户命名空间，容器逃逸时以 root 运行的容器在节点上将具有 root 特权。
如果某些权能授权给了此容器，这些权能在主机上也会有效。
如果使用用户命名空间，就不会是这种情况（当然，除非有漏洞 🙂）。&lt;/p&gt;
&lt;!--
## Changes in 1.30

In Kubernetes 1.30, besides moving user namespaces to beta, the contributors
working on this feature:
--&gt;
&lt;h2 id=&#34;changes-in-1.30&#34;&gt;1.30 的变化&lt;/h2&gt;
&lt;p&gt;在 Kubernetes 1.30 中，除了将用户命名空间进阶至 Beta，参与此特性的贡献者们还：&lt;/p&gt;
&lt;!--
* Introduced a way for the kubelet to use custom ranges for the UIDs/GIDs mapping 
 * Have added a way for Kubernetes to enforce that the runtime supports all the features
   needed for user namespaces. If they are not supported, Kubernetes will show a
   clear error when trying to create a pod with user namespaces. Before 1.30, if
   the container runtime didn&#39;t support user namespaces, the pod could be created
   without a user namespace.
 * Added more tests, including [tests in the
   cri-tools](https://github.com/kubernetes-sigs/cri-tools/pull/1354)
   repository.
--&gt;
&lt;ul&gt;
&lt;li&gt;为 kubelet 引入了一种使用自定义范围进行 UID/GID 映射的方式&lt;/li&gt;
&lt;li&gt;添加了一种机制，让 Kubernetes 强制要求运行时支持用户命名空间所需的所有特性。
如果运行时不支持这些特性，Kubernetes 在尝试创建使用用户命名空间的 Pod 时会显示明确的错误。
在 1.30 之前，如果容器运行时不支持用户命名空间，Pod 可能会在没有用户命名空间的情况下被创建。&lt;/li&gt;
&lt;li&gt;新增了更多的测试，包括在 &lt;a href=&#34;https://github.com/kubernetes-sigs/cri-tools/pull/1354&#34;&gt;cri-tools&lt;/a&gt; 仓库中的测试。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
You can check the
[documentation](/docs/concepts/workloads/pods/user-namespaces/#set-up-a-node-to-support-user-namespaces)
on user namespaces for how to configure custom ranges for the mapping.
--&gt;
&lt;p&gt;你可以查阅有关用户命名空间的&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/concepts/workloads/pods/user-namespaces/#set-up-a-node-to-support-user-namespaces&#34;&gt;文档&lt;/a&gt;，
了解如何配置映射的自定义范围。&lt;/p&gt;
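&lt;p&gt;自定义映射范围通常通过节点上的 &lt;code&gt;/etc/subuid&lt;/code&gt; 和 &lt;code&gt;/etc/subgid&lt;/code&gt;
文件为 kubelet 所使用的用户来配置（以下条目仅为示意，具体的用户名与范围大小请以上述文档为准）：&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# /etc/subuid 与 /etc/subgid 中的示意条目，格式为 用户名:起始 ID:范围大小
kubelet:65536:7208960
&lt;/code&gt;&lt;/pre&gt;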
&lt;!--
## Demo

A few months ago, [CVE-2024-21626][runc-cve] was disclosed. This **vulnerability
score is 8.6 (HIGH)**. It allows an attacker to escape a container and
**read/write to any path on the node and other pods hosted on the same node**.

Rodrigo created a demo that exploits [CVE 2024-21626][runc-cve] and shows how
the exploit, which works without user namespaces, **is mitigated when user
namespaces are in use.**
--&gt;
&lt;h2 id=&#34;demo&#34;&gt;演示&lt;/h2&gt;
&lt;p&gt;几个月前，&lt;a href=&#34;https://github.com/opencontainers/runc/security/advisories/GHSA-xr7r-f8xq-vfvv&#34;&gt;CVE-2024-21626&lt;/a&gt; 被披露。
这个 &lt;strong&gt;漏洞评分为 8.6（高）&lt;/strong&gt;。它允许攻击者让容器逃逸，并&lt;strong&gt;读取/写入节点上的任何路径以及同一节点上托管的其他 Pod&lt;/strong&gt;。&lt;/p&gt;
&lt;p&gt;Rodrigo 创建了一个滥用 &lt;a href=&#34;https://github.com/opencontainers/runc/security/advisories/GHSA-xr7r-f8xq-vfvv&#34;&gt;CVE 2024-21626&lt;/a&gt; 的演示，
演示了此漏洞在没有用户命名空间时的工作方式，而在使用用户命名空间后 &lt;strong&gt;得到了缓解&lt;/strong&gt;。&lt;/p&gt;
&lt;!--

&lt;div class=&#34;youtube-quote-sm&#34;&gt;
  &lt;iframe src=&#34;https://www.youtube.com/embed/07y5bl5UDdA&#34; allowfullscreen title=&#34;Mitigation of CVE-2024-21626 on Kubernetes by enabling User Namespace support&#34;&gt;&lt;/iframe&gt;
&lt;/div&gt;

--&gt;

&lt;div class=&#34;youtube-quote-sm&#34;&gt;
  &lt;iframe src=&#34;https://www.youtube.com/embed/07y5bl5UDdA&#34; allowfullscreen title=&#34;通过启用用户命名空间支持来在 Kubernetes 上缓解 CVE-2024-21626&#34;&gt;&lt;/iframe&gt;
&lt;/div&gt;

&lt;!--
Please note that with user namespaces, an attacker can do on the host file system
what the permission bits for &#34;others&#34; allow. Therefore, the CVE is not
completely prevented, but the impact is greatly reduced.
--&gt;
&lt;p&gt;请注意，使用用户命名空间时，攻击者仍可以在主机文件系统上执行“其他用户（others）”权限位所允许的操作。
因此，此 CVE 并没有完全被修复，但影响大大降低。&lt;/p&gt;
&lt;!--
## Node system requirements

There are requirements on the Linux kernel version and the container
runtime to use this feature.

On Linux you need Linux 6.3 or greater. This is because the feature relies on a
kernel feature named idmap mounts, and support for using idmap mounts with tmpfs
was merged in Linux 6.3.
--&gt;
&lt;h2 id=&#34;node-system-requirements&#34;&gt;节点系统要求&lt;/h2&gt;
&lt;p&gt;使用此特性对 Linux 内核版本和容器运行时有一些要求。&lt;/p&gt;
&lt;p&gt;在 Linux 上，你需要 Linux 6.3 或更高版本。
这是因为此特性依赖于一个名为 idmap 挂载的内核特性，而支持 idmap 挂载与 tmpfs 一起使用的特性是在 Linux 6.3 中合并的。&lt;/p&gt;
&lt;!--
Suppose you are using [CRI-O][crio] with crun; as always, you can expect support for
Kubernetes 1.30 with CRI-O 1.30. Please note you also need [crun][crun] 1.9 or
greater. If you are using CRI-O with [runc][runc], this is still not supported.

Containerd support is currently targeted for [containerd][containerd] 2.0, and
the same crun version requirements apply. If you are using containerd with runc,
this is still not supported.
--&gt;
&lt;p&gt;假设你使用 &lt;a href=&#34;https://cri-o.io/&#34;&gt;CRI-O&lt;/a&gt; 和 crun；就像往常一样，你可以期待 CRI-O 1.30 支持 Kubernetes 1.30。
请注意，你还需要 &lt;a href=&#34;https://github.com/containers/crun&#34;&gt;crun&lt;/a&gt; 1.9 或更高版本。如果你使用的是 CRI-O 和 &lt;a href=&#34;https://github.com/opencontainers/runc/&#34;&gt;runc&lt;/a&gt;，则仍然不支持用户命名空间。&lt;/p&gt;
&lt;p&gt;containerd 对此特性的支持目前计划在 &lt;a href=&#34;https://containerd.io/&#34;&gt;containerd&lt;/a&gt; 2.0 中提供，并且同样的 crun 版本要求也适用于此。
如果你使用的是 containerd 和 runc，则仍然不支持用户命名空间。&lt;/p&gt;
&lt;!--
Please note that containerd 1.7 added _experimental_ support for user
namespaces, as implemented in Kubernetes 1.25 and 1.26. We did a redesign in
Kubernetes 1.27, which requires changes in the container runtime. Those changes
are not present in containerd 1.7, so it only works with user namespaces
support in Kubernetes 1.25 and 1.26.
--&gt;
&lt;p&gt;请注意，正如在 Kubernetes 1.25 和 1.26 中实现的那样，containerd 1.7 增加了对用户命名空间的&lt;strong&gt;实验性&lt;/strong&gt;支持。
我们在 Kubernetes 1.27 中对其进行了重新设计，这需要容器运行时做出相应的变更。
而 containerd 1.7 并未包含这些变更，所以它仅在 Kubernetes 1.25 和 1.26 中支持使用用户命名空间。&lt;/p&gt;
&lt;!--
Another limitation of containerd 1.7 is that it needs to change the
ownership of every file and directory inside the container image during Pod
startup. This has a storage overhead and can significantly impact the
container startup latency. Containerd 2.0 will probably include an implementation
that will eliminate the added startup latency and storage overhead. Consider
this if you plan to use containerd 1.7 with user namespaces in
production.

None of these containerd 1.7 limitations apply to CRI-O.
--&gt;
&lt;p&gt;containerd 1.7 的另一个限制是，它需要在 Pod 启动期间变更容器镜像内的每个文件和目录的所有权。
这会增加存储开销，并可能显著影响容器启动延迟。containerd 2.0 可能会包含一个实现，以消除增加的启动延迟和存储开销。
如果你计划在生产环境中使用 containerd 1.7 和用户命名空间，请考虑这一点。&lt;/p&gt;
&lt;p&gt;containerd 1.7 的这些限制均不适用于 CRI-O。&lt;/p&gt;
&lt;!--
## How do I get involved?

You can reach SIG Node by several means:
- Slack: [#sig-node](https://kubernetes.slack.com/messages/sig-node)
- [Mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-node)
- [Open Community Issues/PRs](https://github.com/kubernetes/community/labels/sig%2Fnode)
--&gt;
&lt;h2 id=&#34;how-do-i-get-involved&#34;&gt;如何参与？&lt;/h2&gt;
&lt;p&gt;你可以通过以下方式联系 SIG Node：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Slack：&lt;a href=&#34;https://kubernetes.slack.com/messages/sig-node&#34;&gt;#sig-node&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://groups.google.com/forum/#!forum/kubernetes-sig-node&#34;&gt;邮件列表&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/kubernetes/community/labels/sig%2Fnode&#34;&gt;提交社区 Issue/PR&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
You can also contact us directly:
- GitHub: @rata @giuseppe @saschagrunert
- Slack: @rata @giuseppe @sascha
--&gt;
&lt;p&gt;你也可以通过以下方式直接联系我们：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;GitHub：@rata @giuseppe @saschagrunert&lt;/li&gt;
&lt;li&gt;Slack：@rata @giuseppe @sascha&lt;/li&gt;
&lt;/ul&gt;

      </description>
    </item>
    
    <item>
      <title>SIG Architecture 特别报道：代码组织</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2024/04/11/sig-architecture-code-spotlight-2024/</link>
      <pubDate>Thu, 11 Apr 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2024/04/11/sig-architecture-code-spotlight-2024/</guid>
      <description>
        
        
        &lt;!--
layout: blog
title: &#34;Spotlight on SIG Architecture: Code Organization&#34;
slug: sig-architecture-code-spotlight-2024
canonicalUrl: https://www.kubernetes.dev/blog/2024/04/11/sig-architecture-code-spotlight-2024
date: 2024-04-11
author: &gt;
  Frederico Muñoz (SAS Institute)
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt;Frederico Muñoz (SAS Institute)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者：&lt;/strong&gt;Xin Li (DaoCloud)&lt;/p&gt;
&lt;!--
_This is the third interview of a SIG Architecture Spotlight series that will cover the different
subprojects. We will cover [SIG Architecture: Code Organization](https://github.com/kubernetes/community/blob/e44c2c9d0d3023e7111d8b01ac93d54c8624ee91/sig-architecture/README.md#code-organization)._

In this SIG Architecture spotlight I talked with [Madhav Jivrajani](https://github.com/MadhavJivrajani)
(VMware), a member of the Code Organization subproject.
--&gt;
&lt;p&gt;&lt;strong&gt;这是 SIG Architecture Spotlight 系列的第三次采访，该系列将涵盖不同的子项目。
我们将介绍 &lt;a href=&#34;https://github.com/kubernetes/community/blob/e44c2c9d0d3023e7111d8b01ac93d54c8624ee91/sig-architecture/README.md#code-organization&#34;&gt;SIG Architecture：代码组织&lt;/a&gt;。&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;在本次 SIG Architecture 聚焦中，我与代码组织子项目的成员
&lt;a href=&#34;https://github.com/MadhavJivrajani&#34;&gt;Madhav Jivrajani&lt;/a&gt;（VMware）进行了交谈。&lt;/p&gt;
&lt;!--
## Introducing the Code Organization subproject

**Frederico (FSM)**: Hello Madhav, thank you for your availability. Could you start by telling us a
bit about yourself, your role and how you got involved in Kubernetes?
--&gt;
&lt;h2 id=&#34;介绍代码组织子项目&#34;&gt;介绍代码组织子项目&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Frederico (FSM)&lt;/strong&gt;：你好，Madhav，感谢你百忙之中接受我们的采访。你能否首先向我们介绍一下你自己、你的角色以及你是如何参与 Kubernetes 的？&lt;/p&gt;
&lt;!--
**Madhav Jivrajani (MJ)**: Hello! My name is Madhav Jivrajani, I serve as a technical lead for SIG
Contributor Experience and a GitHub Admin for the Kubernetes project. Apart from that I also
contribute to SIG API Machinery and SIG Etcd, but more recently, I’ve been helping out with the work
that is needed to help Kubernetes [stay on supported versions of
Go](https://github.com/kubernetes/enhancements/tree/cf6ee34e37f00d838872d368ec66d7a0b40ee4e6/keps/sig-release/3744-stay-on-supported-go-versions),
and it is through this that I am involved with the Code Organization subproject of SIG Architecture.
--&gt;
&lt;p&gt;&lt;strong&gt;Madhav Jivrajani (MJ)&lt;/strong&gt;：你好！我叫 Madhav Jivrajani，担任 SIG 贡献者体验的技术主管和 Kubernetes 项目的 GitHub 管理员。
除此之外，我还为 SIG API Machinery 和 SIG Etcd 做出贡献，但最近，我一直在帮助完成 Kubernetes
&lt;a href=&#34;https://github.com/kubernetes/enhancements/tree/cf6ee34e37f00d838872d368ec66d7a0b40ee4e6/keps/sig-release/3744-stay-on-supported-go-versions&#34;&gt;保持使用受支持的 Go 版本&lt;/a&gt;所需的工作，
正是通过这项工作，我参与到了 SIG Architecture 的代码组织子项目中。&lt;/p&gt;
&lt;!--
**FSM**: A project the size of Kubernetes must have unique challenges in terms of code organization
-- is this a fair assumption?  If so, what would you pick as some of the main challenges that are
specific to Kubernetes?
--&gt;
&lt;p&gt;&lt;strong&gt;FSM&lt;/strong&gt;：像 Kubernetes 这样规模的项目在代码组织方面肯定会遇到独特的挑战 -- 这是一个合理的假设吗？
如果是这样，你认为 Kubernetes 特有的一些主要挑战是什么？&lt;/p&gt;
&lt;!--
**MJ**: That’s a fair assumption! The first interesting challenge comes from the sheer size of the
Kubernetes codebase. We have ≅2.2 million lines of Go code (which is steadily decreasing thanks to
[dims](https://github.com/dims) and other folks in this sub-project!), and a little over 240
dependencies that we rely on either directly or indirectly, which is why having a sub-project
dedicated to helping out with dependency management is crucial: we need to know what dependencies
we’re pulling in, what versions these dependencies are at, and tooling to help make sure we are
managing these dependencies across different parts of the codebase in a consistent manner.
--&gt;
&lt;p&gt;&lt;strong&gt;MJ&lt;/strong&gt;：这是一个合理的假设！第一个有趣的挑战来自 Kubernetes 代码库的庞大规模。
我们有大约 220 万行 Go 代码（由于 &lt;a href=&#34;https://github.com/dims&#34;&gt;dims&lt;/a&gt; 和这个子项目中的其他人的努力，该代码正在稳步减少！），
而且我们的依赖项（无论是直接还是间接）超过 240 个，这就是为什么拥有一个致力于帮助进行依赖项管理的子项目至关重要：
我们需要知道我们正在引入哪些依赖项、这些依赖项处于什么版本，
并借助工具确保我们以一致的方式管理代码库不同部分的这些依赖关系。&lt;/p&gt;
&lt;!--
Another interesting challenge with Kubernetes is that we publish a lot of Go modules as part of the
Kubernetes release cycles, one example of this is
[`client-go`](https://github.com/kubernetes/client-go).However, we as a project would also like the
benefits of having everything in one repository to get the advantages of using a monorepo, like
atomic commits... so, because of this, code organization works with other SIGs (like SIG Release) to
automate the process of publishing code from the monorepo to downstream individual repositories
which are much easier to consume, and this way you won’t have to import the entire Kubernetes
codebase!
--&gt;
&lt;p&gt;Kubernetes 的另一个有趣的挑战是，我们在 Kubernetes 发布周期中发布了许多 Go 模块，其中一个例子是
&lt;a href=&#34;https://github.com/kubernetes/client-go&#34;&gt;&lt;code&gt;client-go&lt;/code&gt;&lt;/a&gt;。
然而，作为一个项目，我们也希望将所有内容都放在一个仓库中，以便获得使用单一仓库（monorepo）的优势，例如原子性的提交……
因此，代码组织子项目与其他 SIG（例如 SIG Release）合作，将代码从单一仓库自动发布到下游各个独立仓库，
这些下游仓库更易于使用，这样你就不必导入整个 Kubernetes 代码库！&lt;/p&gt;
&lt;!--
## Code organization and Kubernetes

**FSM**: For someone just starting contributing to Kubernetes code-wise, what are the main things
they should consider in terms of code organization? How would you sum up the key concepts?

**MJ**: I think one of the key things to keep in mind at least as you’re starting off is the concept
of staging directories. In the [`kubernetes/kubernetes`](https://github.com/kubernetes/kubernetes)
repository, you will come across a directory called
[`staging/`](https://github.com/kubernetes/kubernetes/tree/master/staging). The sub-folders in this
directory serve as a bunch of pseudo-repositories. For example, the
[`kubernetes/client-go`](https://github.com/kubernetes/client-go) repository that publishes releases
for `client-go` is actually a [staging
repo](https://github.com/kubernetes/kubernetes/tree/master/staging/src/k8s.io/client-go).
--&gt;
&lt;h2 id=&#34;代码组织和-kubernetes&#34;&gt;代码组织和 Kubernetes&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;FSM&lt;/strong&gt;：对于刚刚开始为 Kubernetes 代码做出贡献的人来说，在代码组织方面他们应该考虑的主要事项是什么？
你认为有哪些关键概念？&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;MJ&lt;/strong&gt;：我认为至少在开始时要记住的关键事情之一是 staging 目录的概念。
在 &lt;a href=&#34;https://github.com/kubernetes/kubernetes&#34;&gt;&lt;code&gt;kubernetes/kubernetes&lt;/code&gt;&lt;/a&gt; 中，你会遇到一个名为
&lt;a href=&#34;https://github.com/kubernetes/kubernetes/tree/master/staging&#34;&gt;&lt;code&gt;staging/&lt;/code&gt;&lt;/a&gt; 的目录。
该目录中的各个子文件夹相当于一组伪仓库（pseudo-repository）。
例如，发布 &lt;code&gt;client-go&lt;/code&gt; 版本的 &lt;a href=&#34;https://github.com/kubernetes/client-go&#34;&gt;&lt;code&gt;kubernetes/client-go&lt;/code&gt;&lt;/a&gt;
仓库实际上是一个 &lt;a href=&#34;https://github.com/kubernetes/kubernetes/tree/master/staging/src/k8s.io/client-go&#34;&gt;staging 仓库&lt;/a&gt;。&lt;/p&gt;
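&lt;p&gt;举例来说，下游用户只需在自己的 &lt;code&gt;go.mod&lt;/code&gt; 中依赖所发布的 staging 仓库（如 &lt;code&gt;k8s.io/client-go&lt;/code&gt;），而不必依赖整个 &lt;code&gt;kubernetes/kubernetes&lt;/code&gt; 单一仓库。下面是一个最小示意（模块名和版本号均为假设的示例值）：&lt;/p&gt;

```go
// go.mod —— 一个使用 client-go 的下游项目的最小示意
// （module 名与版本号为假设的示例值；client-go 的 v0.X.Y
// 版本对应 Kubernetes 1.X.Y）
module example.com/my-controller

go 1.22

require (
    k8s.io/apimachinery v0.30.0
    k8s.io/client-go v0.30.0
)
```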
&lt;!--
**FSM**: So the concept of staging directories fundamentally impacts contributions?

**MJ**: Precisely, because if you’d like to contribute to any of the staging repos, you will need to
send in a PR to its corresponding staging directory in `kubernetes/kubernetes`. Once the code merges
there, we have a bot called the [`publishing-bot`](https://github.com/kubernetes/publishing-bot)
that will sync the merged commits to the required staging repositories (like
`kubernetes/client-go`). This way we get the benefits of a monorepo but we also can modularly
publish code for downstream consumption. PS: The `publishing-bot` needs more folks to help out!

For more information on staging repositories, please see the [contributor
documentation](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/staging.md).
--&gt;
&lt;p&gt;&lt;strong&gt;FSM&lt;/strong&gt;：那么 staging 目录的概念会从根本上影响贡献？&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;MJ&lt;/strong&gt;：准确地说，因为如果你想为任何 staging 仓库做出贡献，你需要将 PR 发送到 &lt;code&gt;kubernetes/kubernetes&lt;/code&gt; 中相应的 staging 目录。
一旦代码合并到那里，我们就会让一个名为 &lt;a href=&#34;https://github.com/kubernetes/publishing-bot&#34;&gt;&lt;code&gt;publishing-bot&lt;/code&gt;&lt;/a&gt;
的机器人将合并的提交同步到必要的 staging 仓库（例如 &lt;code&gt;kubernetes/client-go&lt;/code&gt;）中。
通过这种方式，我们可以获得单一仓库的好处，但我们也可以以模块化的形式发布代码以供下游使用。
PS：&lt;code&gt;publishing-bot&lt;/code&gt; 需要更多人的帮助！&lt;/p&gt;
&lt;!--
**FSM**: Speaking of contributions, the very high number of contributors, both individuals and
companies, must also be a challenge: how does the subproject operate in terms of making sure that
standards are being followed?
--&gt;
&lt;p&gt;&lt;strong&gt;FSM&lt;/strong&gt;：说到贡献，贡献者数量非常多，包括个人和公司，也一定是一个挑战：这个子项目是如何运作的以确保大家都遵循标准呢？&lt;/p&gt;
&lt;!--
**MJ**: When it comes to dependency management in the project, there is a [dedicated
team](https://github.com/kubernetes/org/blob/a106af09b8c345c301d072bfb7106b309c0ad8e9/config/kubernetes/org.yaml#L1329)
that helps review and approve dependency changes. These are folks who have helped lay the foundation
of much of the
[tooling](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/vendor.md)
that Kubernetes uses today for dependency management. This tooling helps ensure there is a
consistent way that contributors can make changes to dependencies. The project has also worked on
additional tooling to signal statistics of dependencies that is being added or removed:
[`depstat`](https://github.com/kubernetes-sigs/depstat)
--&gt;
&lt;p&gt;&lt;strong&gt;MJ&lt;/strong&gt;：当涉及到项目中的依赖关系管理时，
有一个&lt;a href=&#34;https://github.com/kubernetes/org/blob/a106af09b8c345c301d072bfb7106b309c0ad8e9/config/kubernetes/org.yaml#L1329&#34;&gt;专门团队&lt;/a&gt;帮助审查和批准依赖关系更改。
这些人为目前 Kubernetes 用于管理依赖的许多&lt;a href=&#34;https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/vendor.md&#34;&gt;工具&lt;/a&gt;做了开拓性的工作。
这些工具帮助我们确保贡献者可以以一致的方式更改依赖项。
该项目还开发了额外的工具，用于报告所添加或删除的依赖项的统计信息：
&lt;a href=&#34;https://github.com/kubernetes-sigs/depstat&#34;&gt;&lt;code&gt;depstat&lt;/code&gt;&lt;/a&gt;&lt;/p&gt;
&lt;!--
Apart from dependency management, another crucial task that the project does is management of the
staging repositories. The tooling for achieving this (`publishing-bot`) is completely transparent to
contributors and helps ensure that the staging repos get a consistent view of contributions that are
submitted to `kubernetes/kubernetes`.

Code Organization also works towards making sure that Kubernetes [stays on supported versions of
Go](https://github.com/kubernetes/enhancements/tree/cf6ee34e37f00d838872d368ec66d7a0b40ee4e6/keps/sig-release/3744-stay-on-supported-go-versions). The
linked KEP provides more context on why we need to do this. We collaborate with SIG Release to
ensure that we are testing Kubernetes as rigorously and as early as we can on Go releases and
working on changes that break our CI as a part of this. An example of how we track this process can
be found [here](https://github.com/kubernetes/release/issues/3076).
--&gt;
&lt;p&gt;除了依赖管理之外，这个项目执行的另一项重要任务是管理 staging 仓库。
用于实现此目的的工具（&lt;code&gt;publishing-bot&lt;/code&gt;）对贡献者完全透明，
有助于确保就提交给 &lt;code&gt;kubernetes/kubernetes&lt;/code&gt; 的贡献而言，各个 staging 仓库获得的视图是一致的。&lt;/p&gt;
&lt;p&gt;代码组织还致力于确保 Kubernetes
&lt;a href=&#34;https://github.com/kubernetes/enhancements/tree/cf6ee34e37f00d838872d368ec66d7a0b40ee4e6/keps/sig-release/3744-stay-on-supported-go-versions&#34;&gt;一直在使用受支持的 Go 版本&lt;/a&gt;。
链接所指向的 KEP 中包含更详细的背景信息，用来说明为什么我们需要这样做。
我们与 SIG Release 合作，确保我们在 Go 版本上尽可能严格、尽早地测试 Kubernetes；
作为这些工作的一部分，我们要处理会破坏我们的 CI 的那些变更。
我们如何跟踪此过程的示例可以在&lt;a href=&#34;https://github.com/kubernetes/release/issues/3076&#34;&gt;此处&lt;/a&gt;找到。&lt;/p&gt;
&lt;!--
## Release cycle and current priorities

**FSM**: Is there anything that changes during the release cycle?

**MJ**: During the release cycle, specifically before code freeze, there are often changes that go in
that add/update/delete dependencies, fix code that needs fixing as part of our effort to stay on
supported versions of Go.

Furthermore, some of these changes are also candidates for
[backporting](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-release/cherry-picks.md)
to our supported release branches.
--&gt;
&lt;h2 id=&#34;发布周期和当前优先级&#34;&gt;发布周期和当前优先级&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;FSM&lt;/strong&gt;：在发布周期中有什么变化吗？&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;MJ&lt;/strong&gt;：在发布周期内，特别是在代码冻结之前，通常会合并不少变更：添加、更新或删除依赖项，
以及作为继续使用受支持的 Go 版本工作的一部分而进行的代码修复。&lt;/p&gt;
&lt;p&gt;此外，其中一些更改也可以&lt;a href=&#34;https://github.com/kubernetes/community/blob/master/contributors/devel/sig-release/cherry-picks.md&#34;&gt;向后移植&lt;/a&gt;
到我们支持的发布分支。&lt;/p&gt;
&lt;!--
**FSM**: Is there any major project or theme the subproject is working on right now that you would
like to highlight?

**MJ**: I think one very interesting and immensely useful change that
has been recently added (and I take the opportunity to specifically
highlight the work of [Tim Hockin](https://github.com/thockin) on
this) is the introduction of [Go workspaces to the Kubernetes
repo](https://www.kubernetes.dev/blog/2024/03/19/go-workspaces-in-kubernetes/). A lot of our
current tooling for dependency management and code publishing, as well
as the experience of editing code in the Kubernetes repo, can be
significantly improved by this change.
--&gt;
&lt;p&gt;&lt;strong&gt;FSM&lt;/strong&gt;：就子项目中目前正在进行的主要项目或主题而言你有什么要特别强调的吗？&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;MJ&lt;/strong&gt;：我认为最近添加的一个非常有趣且非常有用的变更（我借此机会特别强调
&lt;a href=&#34;https://github.com/thockin&#34;&gt;Tim Hockin&lt;/a&gt; 在这方面的工作）是
&lt;a href=&#34;https://www.kubernetes.dev/blog/2024/03/19/go-workspaces-in-kubernetes/&#34;&gt;将 Go 工作空间引入 Kubernetes 仓库&lt;/a&gt;。
我们当前用于依赖管理和代码发布的许多工具，以及在 Kubernetes 仓库中编辑代码的体验，
都可以通过此变更得到显著改善。&lt;/p&gt;
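&lt;p&gt;作为示意，Go 工作空间大致是通过仓库根目录下的 &lt;code&gt;go.work&lt;/code&gt; 文件，把主模块和各个 staging 伪仓库声明为同一个工作空间（以下只是一个简化草图，并非 &lt;code&gt;kubernetes/kubernetes&lt;/code&gt; 中实际文件的完整内容）：&lt;/p&gt;

```go
// go.work（简化示意）：让 go 工具把主仓库与 staging 模块
// 当作同一个工作空间来解析，而无需逐个 replace 指令
go 1.22

use (
    .                                  // 主模块 k8s.io/kubernetes
    ./staging/src/k8s.io/api
    ./staging/src/k8s.io/apimachinery
    ./staging/src/k8s.io/client-go
)
```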
&lt;!--
## Wrapping up

**FSM**: How would someone interested in the topic start helping the subproject?

**MJ**: The first step, as is the first step with any project in Kubernetes, is to join our slack:
[slack.k8s.io](https://slack.k8s.io), and after that join the `#k8s-code-organization` channel. There is also a
[code-organization office
hours](https://github.com/kubernetes/community/tree/master/sig-architecture#meetings) that takes
place that you can choose to attend. Timezones are hard, so feel free to also look at the recordings
or meeting notes and follow up on slack!
--&gt;
&lt;h2 id=&#34;收尾&#34;&gt;收尾&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;FSM&lt;/strong&gt;：对这个主题感兴趣的人要怎样开始帮助这个子项目？&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;MJ&lt;/strong&gt;：与参与 Kubernetes 的任何项目一样，第一步是加入我们的
Slack：&lt;a href=&#34;https://slack.k8s.io&#34;&gt;slack.k8s.io&lt;/a&gt;，然后加入 &lt;code&gt;#k8s-code-organization&lt;/code&gt; 频道。
你还可以选择参加&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-architecture#meetings&#34;&gt;代码组织办公时间&lt;/a&gt;会议。
时区协调并不容易，所以也欢迎查看会议录像或会议记录，并在 Slack 上跟进！&lt;/p&gt;
&lt;!--
**FSM**: Excellent, thank you! Any final comments you would like to share?

**MJ**: The Code Organization subproject always needs help! Especially areas like the publishing
bot, so don’t hesitate to get involved in the `#k8s-code-organization` Slack channel.
--&gt;
&lt;p&gt;&lt;strong&gt;FSM&lt;/strong&gt;：非常好，谢谢！最后你还有什么想分享的吗？&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;MJ&lt;/strong&gt;：代码组织子项目总是需要帮助！特别是像发布机器人这样的领域，所以请不要犹豫，参与到 &lt;code&gt;#k8s-code-organization&lt;/code&gt; Slack 频道中。&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes v1.30 初探</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2024/03/12/kubernetes-1-30-upcoming-changes/</link>
      <pubDate>Tue, 12 Mar 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2024/03/12/kubernetes-1-30-upcoming-changes/</guid>
      <description>
        
        
        &lt;!--
layout: blog
title: &#39;A Peek at Kubernetes v1.30&#39;
date: 2024-03-12
slug: kubernetes-1-30-upcoming-changes
--&gt;
&lt;!-- 
**Authors:** Amit Dsouza, Frederick Kautz, Kristin Martin, Abigail McCarthy, Natali Vlatko
--&gt;
&lt;p&gt;&lt;strong&gt;作者:&lt;/strong&gt; Amit Dsouza, Frederick Kautz, Kristin Martin, Abigail McCarthy, Natali Vlatko&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者:&lt;/strong&gt; Paco Xu (DaoCloud)&lt;/p&gt;
&lt;!--
## A quick look: exciting changes in Kubernetes v1.30

It&#39;s a new year and a new Kubernetes release. We&#39;re halfway through the release cycle and
have quite a few interesting and exciting enhancements coming in v1.30. From brand new features
in alpha, to established features graduating to stable, to long-awaited improvements, this release
has something for everyone to pay attention to!

To tide you over until the official release, here&#39;s a sneak peek of the enhancements we&#39;re most
excited about in this cycle!
--&gt;
&lt;h2 id=&#34;快速预览-kubernetes-v1-30-中令人兴奋的变化&#34;&gt;快速预览：Kubernetes v1.30 中令人兴奋的变化&lt;/h2&gt;
&lt;p&gt;新年新版本，v1.30 发布周期已过半，我们将迎来一系列有趣且令人兴奋的增强功能。
从全新的 alpha 特性，到已有的特性升级为稳定版，再到期待已久的改进，这个版本对每个人都有值得关注的内容！&lt;/p&gt;
&lt;p&gt;为了让你在正式发布之前对其有所了解，下面给出我们在这个周期中最为期待的增强功能的预览！&lt;/p&gt;
&lt;!--
## Major changes for Kubernetes v1.30
--&gt;
&lt;h2 id=&#34;kubernetes-v1-30-的主要变化&#34;&gt;Kubernetes v1.30 的主要变化&lt;/h2&gt;
&lt;!--
### Structured parameters for dynamic resource allocation ([KEP-4381](https://kep.k8s.io/4381))
--&gt;
&lt;h3 id=&#34;动态资源分配-dra-的结构化参数-kep-4381-https-kep-k8s-io-4381&#34;&gt;动态资源分配（DRA）的结构化参数 (&lt;a href=&#34;https://kep.k8s.io/4381&#34;&gt;KEP-4381&lt;/a&gt;)&lt;/h3&gt;
&lt;!--
[Dynamic resource allocation](/docs/concepts/scheduling-eviction/dynamic-resource-allocation/) was
added to Kubernetes as an alpha feature in v1.26. It defines an alternative to the traditional
device-plugin API for requesting access to third-party resources. By design, dynamic resource
allocation uses parameters for resources that are completely opaque to core Kubernetes. This
approach poses a problem for the Cluster Autoscaler (CA) or any higher-level controller that
needs to make decisions for a group of pods (e.g. a job scheduler). It cannot simulate the effect of
allocating or deallocating claims over time. Only the third-party DRA drivers have the information
available to do this.
--&gt;
&lt;p&gt;&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/concepts/scheduling-eviction/dynamic-resource-allocation/&#34;&gt;动态资源分配（DRA）&lt;/a&gt; 在 Kubernetes v1.26 中作为 alpha 特性添加。
它定义了一种替代传统设备插件（device plugin）API 的方式，用于请求访问第三方资源。
在设计上，动态资源分配（DRA）使用的资源参数对于核心 Kubernetes 完全不透明。
这种方法对于集群自动缩放器（CA）或任何需要为一组 Pod 做决策的高级控制器（例如作业调度器）都会带来问题。
这一设计无法模拟在不同时间分配或释放请求的效果。
只有第三方 DRA 驱动程序才拥有信息来做到这一点。&lt;/p&gt;
&lt;!--
​​Structured Parameters for dynamic resource allocation is an extension to the original
implementation that addresses this problem by building a framework to support making these claim
parameters less opaque. Instead of handling the semantics of all claim parameters themselves,
drivers could manage resources and describe them using a specific &#34;structured model&#34; pre-defined by
Kubernetes. This would allow components aware of this &#34;structured model&#34; to make decisions about
these resources without outsourcing them to some third-party controller. For example, the scheduler
could allocate claims rapidly without back-and-forth communication with dynamic resource
allocation drivers. Work done for this release centers on defining the framework necessary to enable
different &#34;structured models&#34; and to implement the &#34;named resources&#34; model. This model allows
listing individual resource instances and, compared to the traditional device plugin API, adds the
ability to select those instances individually via attributes.
--&gt;
&lt;p&gt;动态资源分配（DRA）的结构化参数是对原始实现的扩展，它通过构建一个框架来支持增加请求参数的透明度来解决这个问题。
驱动程序不再需要自己处理所有请求参数的语义，而是可以使用 Kubernetes 预定义的特定“结构化模型”来管理和描述资源。
这一设计允许了解这个“结构化规范”的组件做出关于这些资源的决策，而不再将它们外包给某些第三方控制器。
例如，调度器可以在不与动态资源分配（DRA）驱动程序反复通信的前提下快速完成分配请求。
这个版本的工作重点是定义一个框架来支持不同的“结构化模型”，并实现“命名资源”模型。
此模型允许列出各个资源实例，同时，与传统的设备插件 API 相比，模型增加了通过属性逐一选择实例的能力。&lt;/p&gt;
&lt;!--
### Node memory swap support ([KEP-2400](https://kep.k8s.io/2400))
--&gt;
&lt;h3 id=&#34;节点交换内存-swap-支持-kep-2400-https-kep-k8s-io-2400&#34;&gt;节点交换内存 SWAP 支持 (&lt;a href=&#34;https://kep.k8s.io/2400&#34;&gt;KEP-2400&lt;/a&gt;)&lt;/h3&gt;
&lt;!--
In Kubernetes v1.30, memory swap support on Linux nodes gets a big change to how it works - with a
strong emphasis on improving system stability. In previous Kubernetes versions, the `NodeSwap`
feature gate was disabled by default, and when enabled, it used `UnlimitedSwap` behavior as the
default behavior. To achieve better stability, `UnlimitedSwap` behavior (which might compromise node
stability) will be removed in v1.30.
--&gt;
&lt;p&gt;在 Kubernetes v1.30 中，Linux 节点上的交换内存支持机制有了重大改进，其重点是提高系统的稳定性。
以前的 Kubernetes 版本默认情况下禁用了 &lt;code&gt;NodeSwap&lt;/code&gt; 特性门控。当门控被启用时，&lt;code&gt;UnlimitedSwap&lt;/code&gt; 行为被作为默认行为。
为了提高稳定性，&lt;code&gt;UnlimitedSwap&lt;/code&gt; 行为（可能会影响节点的稳定性）将在 v1.30 中被移除。&lt;/p&gt;
&lt;!--
The updated, still-beta support for swap on Linux nodes will be available by default. However, the
default behavior will be to run the node set to `NoSwap` (not `UnlimitedSwap`) mode. In `NoSwap`
mode, the kubelet supports running on a node where swap space is active, but Pods don&#39;t use any of
the page file. You&#39;ll still need to set `--fail-swap-on=false` for the kubelet to run on that node.
However, the big change is the other mode: `LimitedSwap`. In this mode, the kubelet actually uses
the page file on that node and allows Pods to have some of their virtual memory paged out.
Containers (and their parent pods)  do not have access to swap beyond their memory limit, but the
system can still use the swap space if available.
--&gt;
&lt;p&gt;更新后的 Linux 节点上的交换内存支持仍然是 beta 级别，并且默认情况下开启。
然而，节点默认行为是使用 &lt;code&gt;NoSwap&lt;/code&gt;（而不是 &lt;code&gt;UnlimitedSwap&lt;/code&gt;）模式。
在 &lt;code&gt;NoSwap&lt;/code&gt; 模式下，kubelet 支持在启用了磁盘交换空间的节点上运行，但 Pod 不会使用页面文件（pagefile）。
你仍然需要为 kubelet 设置 &lt;code&gt;--fail-swap-on=false&lt;/code&gt; 才能让 kubelet 在该节点上运行。
特性的另一个重大变化是针对另一种模式：&lt;code&gt;LimitedSwap&lt;/code&gt;。
在 &lt;code&gt;LimitedSwap&lt;/code&gt; 模式下，kubelet 会实际使用节点上的页面文件，并允许 Pod 的一些虚拟内存被换页出去。
容器（及其父 Pod）访问交换内存空间不可超出其内存限制，但系统的确可以使用可用的交换空间。&lt;/p&gt;
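&lt;p&gt;在 kubelet 配置层面，上述行为大致可以这样表达（仅为示意片段，具体字段与默认值请以正式文档为准）：&lt;/p&gt;

```yaml
# KubeletConfiguration 片段（示意）：允许节点启用交换空间，
# 并以 LimitedSwap 模式运行
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false            # 允许 kubelet 在启用了 swap 的节点上启动
featureGates:
  NodeSwap: true             # v1.30 中该门控默认开启，此处仅作显式示意
memorySwap:
  swapBehavior: LimitedSwap  # 取值 NoSwap（默认）或 LimitedSwap
```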
&lt;!--
Kubernetes&#39; Node special interest group (SIG Node) will also update the documentation to help you
understand how to use the revised implementation, based on feedback from end users, contributors,
and the wider Kubernetes community.
--&gt;
&lt;p&gt;Kubernetes 的 SIG Node 小组还将根据最终用户、贡献者和更广泛的 Kubernetes 社区的反馈更新文档，
以帮助你了解如何使用经过修订的实现。&lt;/p&gt;
&lt;!--
Read the previous [blog post](/blog/2023/08/24/swap-linux-beta/) or the [node swap
documentation](/docs/concepts/architecture/nodes/#swap-memory) for more details on 
Linux node swap support in Kubernetes.
--&gt;
&lt;p&gt;阅读之前的&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/08/24/swap-linux-beta/&#34;&gt;博客文章&lt;/a&gt;或&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/concepts/architecture/nodes/#swap-memory&#34;&gt;交换内存管理文档&lt;/a&gt;以获取有关
Kubernetes 中 Linux 节点交换支持的更多详细信息。&lt;/p&gt;
&lt;!--
### Support user namespaces in pods ([KEP-127](https://kep.k8s.io/127))
--&gt;
&lt;h3 id=&#34;支持-pod-运行在用户命名空间-kep-127-https-kep-k8s-io-127&#34;&gt;支持 Pod 运行在用户命名空间 (&lt;a href=&#34;https://kep.k8s.io/127&#34;&gt;KEP-127&lt;/a&gt;)&lt;/h3&gt;
&lt;!--
[User namespaces](/docs/concepts/workloads/pods/user-namespaces) is a Linux-only feature that better
isolates pods to prevent or mitigate several CVEs rated high/critical, including
[CVE-2024-21626](https://github.com/opencontainers/runc/security/advisories/GHSA-xr7r-f8xq-vfvv),
published in January 2024. In Kubernetes 1.30, support for user namespaces is migrating to beta and
now supports pods with and without volumes, custom UID/GID ranges, and more!
--&gt;
&lt;p&gt;&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/concepts/workloads/pods/user-namespaces&#34;&gt;用户命名空间&lt;/a&gt; 是一个仅在 Linux 上可用的特性，它更好地隔离 Pod，
以防止或减轻几个高/严重级别的 CVE，包括 2024 年 1 月发布的 &lt;a href=&#34;https://github.com/opencontainers/runc/security/advisories/GHSA-xr7r-f8xq-vfvv&#34;&gt;CVE-2024-21626&lt;/a&gt;。
在 Kubernetes 1.30 中，对用户命名空间的支持正在升级到 beta，现在支持带卷和不带卷的 Pod、自定义 UID/GID 范围等等！&lt;/p&gt;
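&lt;p&gt;在 Pod 规约层面，为 Pod 启用用户命名空间只需把 &lt;code&gt;hostUsers&lt;/code&gt; 设置为 &lt;code&gt;false&lt;/code&gt;（示意清单，Pod 名称与镜像均为示例）：&lt;/p&gt;

```yaml
# Pod 清单（示意）：hostUsers: false 表示该 Pod 运行在
# 独立的用户命名空间中，而不是共享宿主机的用户命名空间
apiVersion: v1
kind: Pod
metadata:
  name: userns-demo
spec:
  hostUsers: false
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9
```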
&lt;!--
### Structured authorization configuration ([KEP-3221](https://kep.k8s.io/3221))
--&gt;
&lt;h3 id=&#34;结构化鉴权配置-kep-3221-https-kep-k8s-io-3221&#34;&gt;结构化鉴权配置 (&lt;a href=&#34;https://kep.k8s.io/3221&#34;&gt;KEP-3221&lt;/a&gt;)&lt;/h3&gt;
&lt;!--
Support for [structured authorization
configuration](/docs/reference/access-authn-authz/authorization/#configuring-the-api-server-using-an-authorization-config-file)
is moving to beta and will be enabled by default. This feature enables the creation of
authorization chains with multiple webhooks with well-defined parameters that validate requests in a
particular order and allows fine-grained control – such as explicit Deny on failures. The
configuration file approach even allows you to specify [CEL](/docs/reference/using-api/cel/) rules
to pre-filter requests before they are dispatched to webhooks, helping you to prevent unnecessary
invocations. The API server also automatically reloads the authorizer chain when the configuration
file is modified.
--&gt;
&lt;p&gt;对&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/reference/access-authn-authz/authorization/#configuring-the-api-server-using-an-authorization-config-file&#34;&gt;结构化鉴权配置&lt;/a&gt;的支持正在晋级到 Beta 版本，并将默认启用。
这个特性支持创建具有明确参数定义的多个 Webhook 所构成的鉴权链；这些 Webhook 按特定顺序验证请求，
并允许进行细粒度的控制，例如在失败时明确拒绝。
配置文件方法甚至允许你指定 &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/reference/using-api/cel/&#34;&gt;CEL&lt;/a&gt; 规则，以在将请求分派到 Webhook 之前对其进行预过滤，帮助你防止不必要的调用。
当配置文件被修改时，API 服务器还会自动重新加载鉴权链。&lt;/p&gt;
&lt;!--
You must specify the path to that authorization configuration using the `--authorization-config`
command line argument. If you want to keep using command line flags instead of a
configuration file, those will continue to work as-is. To gain access to new authorization webhook
capabilities like multiple webhooks, failure policy, and pre-filter rules, switch to putting options
in an `--authorization-config` file. From Kubernetes 1.30, the configuration file format is
beta-level, and only requires specifying `--authorization-config` since the feature gate is enabled by
default. An example configuration with all possible values is provided in the [Authorization
docs](/docs/reference/access-authn-authz/authorization/#configuring-the-api-server-using-an-authorization-config-file).
For more details, read the [Authorization
docs](/docs/reference/access-authn-authz/authorization/#configuring-the-api-server-using-an-authorization-config-file).
--&gt;
&lt;p&gt;你必须使用 &lt;code&gt;--authorization-config&lt;/code&gt; 命令行参数指定鉴权配置的路径。
如果你想继续使用命令行标志而不是配置文件，命令行方式没有变化。
要使用新的鉴权 Webhook 功能（例如多 Webhook 支持、失败策略和预过滤规则），需要改为将这些选项写入 &lt;code&gt;--authorization-config&lt;/code&gt; 文件中。
从 Kubernetes 1.30 开始，配置文件格式约定是 beta 级别的，只需要指定 &lt;code&gt;--authorization-config&lt;/code&gt;，因为特性门控默认启用。
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/reference/access-authn-authz/authorization/#configuring-the-api-server-using-an-authorization-config-file&#34;&gt;鉴权文档&lt;/a&gt;
中提供了一个包含所有可能值的示例配置。
有关更多详细信息，请阅读&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/reference/access-authn-authz/authorization/#configuring-the-api-server-using-an-authorization-config-file&#34;&gt;鉴权文档&lt;/a&gt;。&lt;/p&gt;
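&lt;p&gt;一个简化的鉴权配置文件大致如下（仅作示意：Webhook 名称与 kubeconfig 路径均为假设值，完整字段请以鉴权文档为准）：&lt;/p&gt;

```yaml
# 通过 --authorization-config 传入 API 服务器的配置文件（示意）
apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthorizationConfiguration
authorizers:
  - type: Webhook
    name: example-policy          # 假设的 Webhook 名称
    webhook:
      timeout: 3s
      subjectAccessReviewVersion: v1
      matchConditionSubjectAccessReviewVersion: v1
      failurePolicy: Deny         # 失败时明确拒绝
      connectionInfo:
        type: KubeConfigFile
        kubeConfigFile: /etc/kubernetes/authz-webhook.kubeconfig
      matchConditions:
        # CEL 预过滤：只把非 system: 用户的请求分派给该 Webhook
        - expression: "!request.user.startsWith('system:')"
  - type: RBAC
    name: rbac
```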
&lt;!--
### Container resource based pod autoscaling ([KEP-1610](https://kep.k8s.io/1610))
--&gt;
&lt;h3 id=&#34;基于容器资源指标的-pod-自动扩缩容-kep-1610-https-kep-k8s-io-1610&#34;&gt;基于容器资源指标的 Pod 自动扩缩容 (&lt;a href=&#34;https://kep.k8s.io/1610&#34;&gt;KEP-1610&lt;/a&gt;)&lt;/h3&gt;
&lt;!--
Horizontal pod autoscaling based on `ContainerResource` metrics will graduate to stable in v1.30.
This new behavior for HorizontalPodAutoscaler allows you to configure automatic scaling based on the
resource usage for individual containers, rather than the aggregate resource use over a Pod. See our
[previous article](/blog/2023/05/02/hpa-container-resource-metric/) for further details, or read
[container resource metrics](/docs/tasks/run-application/horizontal-pod-autoscale/#container-resource-metrics).
--&gt;
&lt;p&gt;基于 &lt;code&gt;ContainerResource&lt;/code&gt; 指标的 Pod 水平自动扩缩容将在 v1.30 中升级为稳定版。
HorizontalPodAutoscaler 的这一新行为允许你根据各个容器的资源使用情况而不是 Pod 的聚合资源使用情况来配置自动伸缩。
有关更多详细信息，请参阅我们的&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/05/02/hpa-container-resource-metric/&#34;&gt;先前文章&lt;/a&gt;，
或阅读&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/tasks/run-application/horizontal-pod-autoscale/#container-resource-metrics&#34;&gt;容器资源指标&lt;/a&gt;。&lt;/p&gt;
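&lt;p&gt;例如，下面这个 HorizontalPodAutoscaler 只根据名为 &lt;code&gt;app&lt;/code&gt; 的容器的 CPU 用量进行扩缩（示意清单，工作负载与容器名均为假设值）：&lt;/p&gt;

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: ContainerResource   # 按单个容器而非整个 Pod 统计
      containerResource:
        name: cpu
        container: app          # 只统计这个容器的 CPU 利用率
        target:
          type: Utilization
          averageUtilization: 60
```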
&lt;!--
### CEL for admission control ([KEP-3488](https://kep.k8s.io/3488))
--&gt;
&lt;h3 id=&#34;在准入控制中使用-cel-kep-3488-https-kep-k8s-io-3488&#34;&gt;在准入控制中使用 CEL (&lt;a href=&#34;https://kep.k8s.io/3488&#34;&gt;KEP-3488&lt;/a&gt;)&lt;/h3&gt;
&lt;!--
Integrating Common Expression Language (CEL) for admission control in Kubernetes introduces a more
dynamic and expressive way of evaluating admission requests. This feature allows complex,
fine-grained policies to be defined and enforced directly through the Kubernetes API, enhancing
security and governance capabilities without compromising performance or flexibility.
--&gt;
&lt;p&gt;Kubernetes 为准入控制集成了 Common Expression Language（CEL）。
这一集成引入了一种更动态、表达能力更强的方式来判定准入请求。
这个特性允许通过 Kubernetes API 直接定义和执行复杂的、细粒度的策略，同时增强了安全性和治理能力，而不会影响性能或灵活性。&lt;/p&gt;
&lt;!--
CEL&#39;s addition to Kubernetes admission control empowers cluster administrators to craft intricate
rules that can evaluate the content of API requests against the desired state and policies of the
cluster without resorting to Webhook-based access controllers. This level of control is crucial for
maintaining the integrity, security, and efficiency of cluster operations, making Kubernetes
environments more robust and adaptable to various use cases and requirements. For more information
on using CEL for admission control, see the [API
documentation](/docs/reference/access-authn-authz/validating-admission-policy/) for
ValidatingAdmissionPolicy.
--&gt;
&lt;p&gt;将 CEL 引入到 Kubernetes 的准入控制后，集群管理员就具有了制定复杂规则的能力，
这些规则可以根据集群的期望状态和策略来评估 API 请求的内容，而无需使用基于 Webhook 的访问控制器。
这种控制水平对于维护集群操作的完整性、安全性和效率至关重要，使 Kubernetes 环境更加健壮，更适应各种用例和需求。
有关使用 CEL 进行准入控制的更多信息，请参阅 &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/reference/access-authn-authz/validating-admission-policy/&#34;&gt;API 文档&lt;/a&gt;中的 ValidatingAdmissionPolicy。&lt;/p&gt;
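&lt;p&gt;例如，下面这个 ValidatingAdmissionPolicy 用一条 CEL 表达式要求 Deployment 的副本数必须为正（示意清单；要让策略生效，还需要一个对应的 ValidatingAdmissionPolicyBinding）：&lt;/p&gt;

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: require-positive-replicas
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
  validations:
    # 直接用 CEL 校验请求对象，无需 Webhook
    - expression: "object.spec.replicas > 0"
      message: "spec.replicas 必须大于 0"
```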
&lt;!--
We hope you&#39;re as excited for this release as we are. Keep an eye out for the official release 
blog in a few weeks for more highlights!
--&gt;
&lt;p&gt;我们希望你和我们一样对这个版本的发布感到兴奋。请在未来几周内密切关注官方发布博客，以了解其他亮点！&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>走进 Kubernetes 读书会（Book Club）</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2024/02/22/k8s-book-club/</link>
      <pubDate>Thu, 22 Feb 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2024/02/22/k8s-book-club/</guid>
      <description>
        
        
        &lt;!--
layout: blog
title: &#34;A look into the Kubernetes Book Club&#34;
slug: k8s-book-club
date: 2024-02-22
canonicalUrl: https://www.k8s.dev/blog/2024/02/22/k8s-book-club/
author: &gt;
  Frederico Muñoz (SAS Institute)
--&gt;
&lt;!--
Learning Kubernetes and the entire ecosystem of technologies around it is not without its
challenges. In this interview, we will talk with [Carlos Santana
(AWS)](https://www.linkedin.com/in/csantanapr/) to learn a bit more about how he created the
[Kubernetes Book Club](https://community.cncf.io/kubernetes-virtual-book-club/), how it works, and
how anyone can join in to take advantage of a community-based learning experience.
--&gt;
&lt;p&gt;学习 Kubernetes 及其整个生态的技术并非易事。在本次采访中，我们的访谈对象是
&lt;a href=&#34;https://www.linkedin.com/in/csantanapr/&#34;&gt;Carlos Santana (AWS)&lt;/a&gt;，
了解他是如何创办 &lt;a href=&#34;https://community.cncf.io/kubernetes-virtual-book-club/&#34;&gt;Kubernetes 读书会（Book Club）&lt;/a&gt;的，
整个读书会是如何运作的，以及大家如何加入其中，进而更好地利用社区学习体验。&lt;/p&gt;
&lt;!--
![Carlos Santana speaking at KubeCon NA 2023](csantana_k8s_book_club.jpg)

**Frederico Muñoz (FSM)**: Hello Carlos, thank you so much for your availability. To start with,
could you tell us a bit about yourself?
--&gt;
&lt;p&gt;&lt;img src=&#34;csantana_k8s_book_club.jpg&#34; alt=&#34;Carlos Santana 在 KubeCon NA 2023 上演讲&#34;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Frederico Muñoz (FSM)&lt;/strong&gt;：你好 Carlos，非常感谢你能接受我们的采访。首先，你能介绍一下自己吗？&lt;/p&gt;
&lt;!--
**Carlos Santana (CS)**: Of course. My experience in deploying Kubernetes in production six
years ago opened the door for me to join [Knative](https://knative.dev/) and then contribute to
Kubernetes through the Release Team. Working on upstream Kubernetes has been one of the best
experiences I&#39;ve had in open-source. Over the past two years, in my role as a Senior Specialist
Solutions Architect at AWS, I have been assisting large enterprises build their internal developer
platforms (IDP) on top of Kubernetes. Going forward, my open source contributions are directed
towards [CNOE](https://cnoe.io/) and CNCF projects like [Argo](https://github.com/argoproj),
[Crossplane](https://www.crossplane.io/), and [Backstage](https://www.cncf.io/projects/backstage/).
--&gt;
&lt;p&gt;&lt;strong&gt;Carlos Santana (CS)&lt;/strong&gt;：当然可以。六年前，我在生产环境中部署 Kubernetes 的经验为我加入
&lt;a href=&#34;https://knative.dev/&#34;&gt;Knative&lt;/a&gt; 并通过 Release Team 为 Kubernetes 贡献代码打开了大门。
为上游 Kubernetes 工作是我在开源领域最好的经历之一。在过去的两年里，作为 AWS 的高级专业解决方案架构师，
我一直在帮助大型企业在 Kubernetes 之上构建他们的内部开发平台（IDP）。
未来我的开源贡献将主要集中在 &lt;a href=&#34;https://cnoe.io/&#34;&gt;CNOE&lt;/a&gt; 和 CNCF 项目，如
&lt;a href=&#34;https://github.com/argoproj&#34;&gt;Argo&lt;/a&gt;、&lt;a href=&#34;https://www.crossplane.io/&#34;&gt;Crossplane&lt;/a&gt; 和
&lt;a href=&#34;https://www.cncf.io/projects/backstage/&#34;&gt;Backstage&lt;/a&gt;。&lt;/p&gt;
&lt;!--
## Creating the Book Club

**FSM**: So your path led you to Kubernetes, and at that point what was the motivating factor for
starting the Book Club?
--&gt;
&lt;h2 id=&#34;创办读书会&#34;&gt;创办读书会&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;FSM&lt;/strong&gt;：所以你的职业道路把你引向了 Kubernetes，那么是什么动机促使你开始创办读书会呢？&lt;/p&gt;
&lt;!--
**CS**: The idea for the Kubernetes Book Club sprang from a casual suggestion during a
[TGIK](https://github.com/vmware-archive/tgik) livestream. For me, it was more than just about
reading a book; it was about creating a learning community. This platform has not only been a source
of knowledge but also a support system, especially during the challenging times of the
pandemic. It&#39;s gratifying to see how this initiative has helped members cope and grow. The first
book [Production
Kubernetes](https://www.oreilly.com/library/view/production-kubernetes/9781492092292/) took 36
weeks, when we started on March 5th 2021. Currently don&#39;t take that long to cover a book, one or two
chapters per week.
--&gt;
&lt;p&gt;&lt;strong&gt;CS&lt;/strong&gt;：Kubernetes 读书会的想法源于一次 &lt;a href=&#34;https://github.com/vmware-archive/tgik&#34;&gt;TGIK&lt;/a&gt; 直播中的一个临时建议。
对我来说，这不仅仅是读一本书，更是创办一个学习社区。这个社区平台不仅是知识的来源，也是一个支持系统，
特别是在疫情期间陪我度过了艰难时刻。读书会的这项倡议后来帮助许多成员学会了应对和成长，这让我感到很欣慰。
我们在 2021 年 3 月 5 日开始第一本书
&lt;a href=&#34;https://www.oreilly.com/library/view/production-kubernetes/9781492092292/&#34;&gt;Production Kubernetes&lt;/a&gt;，
花了 36 周时间。现在读完一本书不再需要那么长时间，我们每周会读完一到两章。&lt;/p&gt;
&lt;!--
**FSM**: Could you describe the way the Kubernetes Book Club works? How do you select the books and how
do you go through them?

**CS**: We collectively choose books based on the interests and needs of the group. This practical
approach helps members, especially beginners, grasp complex concepts more easily. We have two weekly
series, one for the EMEA timezone, and I organize the US one. Each organizer works with their co-host
and picks a book on Slack, then sets up a lineup of hosts for a couple of weeks to discuss each
chapter.
--&gt;
&lt;p&gt;&lt;strong&gt;FSM&lt;/strong&gt;：你能介绍一下 Kubernetes 读书会是如何运作的吗？你们如何选书以及如何阅读它们？&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;CS&lt;/strong&gt;：我们根据小组的兴趣和需求以集体的方式选书。这种实用的方法有助于成员们（特别是初学者）更容易地掌握复杂的概念。
我们每周有两个系列的读书会以应对不同的时区：一个面向 EMEA（欧洲、中东及非洲）时区，另一个面向美国时区，由我来组织。
每位组织者与他们的联合主持人在 Slack 上甄选一本书，然后安排几个主持人用几周时间讨论每一章。&lt;/p&gt;
&lt;!--
**FSM**: If I’m not mistaken, the Kubernetes Book Club is in its 17th book, which is significant: is
there any secret recipe for keeping things active?

**CS**: The secret to keeping the club active and engaging lies in a couple of key factors.
--&gt;
&lt;p&gt;&lt;strong&gt;FSM&lt;/strong&gt;：如果我没记错的话，Kubernetes 读书会如今已经进行到了第 17 本书。这很了不起：有什么秘诀可以让读书这件事保持活跃吗？&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;CS&lt;/strong&gt;：保持俱乐部活跃和吸引人参与的秘诀在于几个关键因素。&lt;/p&gt;
&lt;!--
Firstly, consistency has been crucial. We strive to maintain a regular schedule, only cancelling
meetups for major events like holidays or KubeCon. This regularity helps members stay engaged and
builds a reliable community.

Secondly, making the sessions interesting and interactive has been vital. For instance, I often
introduce pop-up quizzes during the meetups, which not only tests members&#39; understanding but also
adds an element of fun. This approach keeps the content relatable and helps members understand how
theoretical concepts are applied in real-world scenarios.
--&gt;
&lt;p&gt;首先，一贯性至关重要。我们努力保持定期聚会，只有在重大事件如节假日或 KubeCon 时才会取消聚会。
这种规律性有助于成员保持惯性参与，有助于建立一个可靠的社区。&lt;/p&gt;
&lt;p&gt;其次，让聚会有趣生动也非常重要。例如，我经常在聚会期间引入提问测验，不仅检测成员们的理解程度，还增加了一些乐趣。
这种方法使读书内容更加贴近实际，并帮助成员们理解理论概念在现实世界中的运用方式。&lt;/p&gt;
&lt;!--
## Topics covered in the Book Club

**FSM**: The main topics of the books have been Kubernetes, GitOps, Security, SRE, and
Observability: is this a reflection of the cloud native landscape, especially in terms of
popularity?
--&gt;
&lt;h2 id=&#34;读书会涵盖的话题&#34;&gt;读书会涵盖的话题&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;FSM&lt;/strong&gt;：书籍的主要话题包括 Kubernetes、GitOps、安全、SRE 和可观测性：
这是否也反映了云原生领域的现状，特别是在受欢迎程度方面？&lt;/p&gt;
&lt;!--
**CS**: Our journey began with &#39;Production Kubernetes&#39;, setting the tone for our focus on practical,
production-ready solutions. Since then, we&#39;ve delved into various aspects of the CNCF landscape,
aligning our books with a different theme.  Each theme, whether it be Security, Observability, or
Service Mesh, is chosen based on its relevance and demand within the community. For instance, in our
recent themes on Kubernetes Certifications, we brought the book authors into our fold as active
hosts, enriching our discussions with their expertise.
--&gt;
&lt;p&gt;&lt;strong&gt;CS&lt;/strong&gt;：我们的旅程始于《Production Kubernetes》，为我们专注于实用、生产就绪的解决方案定下了基调。
从那时起，我们深入探讨了 CNCF 领域的各个方面，根据不同的主题去选书。
每个主题，无论是安全性、可观测性还是服务网格，都是根据其相关性和社区需求来选择的。
例如，在我们最近关于 Kubernetes 考试认证的主题中，我们邀请了书籍的作者作为活跃现场的主持人，用他们的专业知识丰富了我们的讨论。&lt;/p&gt;
&lt;!--
**FSM**: I know that the project had recent changes, namely being integrated into the CNCF as a
[Cloud Native Community Group](https://community.cncf.io/). Could you talk a bit about this change?

**CS**: The CNCF graciously accepted the book club as a Cloud Native Community Group. This is a
significant development that has streamlined our operations and expanded our reach. This alignment
has been instrumental in enhancing our administrative capabilities, similar to those used by
Kubernetes Community Days (KCD) meetups. Now, we have a more robust structure for memberships, event
scheduling, mailing lists, hosting web conferences, and recording sessions.
--&gt;
&lt;p&gt;&lt;strong&gt;FSM&lt;/strong&gt;：我了解到此项目最近有一些变化，即被整合到了 CNCF
作为&lt;a href=&#34;https://community.cncf.io/&#34;&gt;云原生社区组（Cloud Native Community Group）&lt;/a&gt;的一部分。你能谈谈这个变化吗？&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;CS&lt;/strong&gt;：CNCF 慷慨地接受了读书会作为云原生社区组的一部分。
这是读书会发展过程中的重要一步，优化了读书会的运作并扩大了读书会的影响力。
这种整合对于增强读书会的管理能力至关重要，这一点与 Kubernetes Community Days (KCD) 聚会所采用的方式类似。
现在，读书会有了更稳健的会员结构、活动安排、邮件列表、托管的网络会议和录播系统。&lt;/p&gt;
&lt;!--
**FSM**: How has your involvement with the CNCF impacted the growth and engagement of the Kubernetes
Book Club over the past six months?

**CS**: Since becoming part of the CNCF community six months ago, we&#39;ve witnessed significant
quantitative changes within the Kubernetes Book Club. Our membership has surged to over 600 members,
and we&#39;ve successfully organized and conducted more than 40 events during this period. What&#39;s even
more promising is the consistent turnout, with an average of 30 attendees per event. This growth and
engagement are clear indicators of the positive influence of our CNCF affiliation on the Kubernetes
Book Club&#39;s reach and impact in the community.
--&gt;
&lt;p&gt;&lt;strong&gt;FSM&lt;/strong&gt;：在过去的六个月里，你参与 CNCF 这件事对 Kubernetes 读书会的成长和参与度产生了什么影响？&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;CS&lt;/strong&gt;：自从六个月前成为 CNCF 社区的一部分以来，我们在 Kubernetes 读书会中看到了一些显著的变化。
我们的会员人数激增至 600 多人，并在此期间成功组织并举办了超过 40 场活动。
更令人鼓舞的是，每场活动的出席人数都很稳定，平均约有 30 人参加。
这种增长和参与度清楚地表明了我们与 CNCF 的合作让 Kubernetes 读书会在社区中增强了影响力。&lt;/p&gt;
&lt;!--
## Joining the Book Club

**FSM**: For anyone wanting to join, what should they do?

**CS**: There are three steps to join:
--&gt;
&lt;h2 id=&#34;加入读书会&#34;&gt;加入读书会&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;FSM&lt;/strong&gt;：若有人想加入读书会，他们应该怎么做？&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;CS&lt;/strong&gt;：加入读书会只需三步：&lt;/p&gt;
&lt;!--
- First, join the [Kubernetes Book Club Community](https://community.cncf.io/kubernetes-virtual-book-club/)
- Then RSVP to the
  [events](https://community.cncf.io/kubernetes-virtual-book-club/)
  on the community page
- Lastly, join the CNCF Slack channel
  [#kubernetes-book-club](https://cloud-native.slack.com/archives/C05EYA14P37).
--&gt;
&lt;ul&gt;
&lt;li&gt;首先加入 &lt;a href=&#34;https://community.cncf.io/kubernetes-virtual-book-club/&#34;&gt;Kubernetes 读书会社区&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;然后注册参与在社区页面上列出的&lt;a href=&#34;https://community.cncf.io/kubernetes-virtual-book-club/&#34;&gt;活动&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;最后加入 CNCF Slack 频道 &lt;a href=&#34;https://cloud-native.slack.com/archives/C05EYA14P37&#34;&gt;#kubernetes-book-club&lt;/a&gt;。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
**FSM**: Excellent, thank you! Any final comments you would like to share?

**CS**: The Kubernetes Book Club is more than just a group of professionals discussing books; it&#39;s a
vibrant community and amazing volunteers that help organize and host
[Neependra Khare](https://www.linkedin.com/in/neependra/),
[Eric Smalling](https://www.linkedin.com/in/ericsmalling/),
[Sevi Karakulak](https://www.linkedin.com/in/sevikarakulak/),
[Chad M. Crowell](https://www.linkedin.com/in/chadmcrowell/),
and [Walid (CNJ) Shaari](https://www.linkedin.com/in/walidshaari/).
Look us up at KubeCon and get your Kubernetes Book Club sticker!
--&gt;
&lt;p&gt;&lt;strong&gt;FSM&lt;/strong&gt;：太好了，谢谢你！最后你还有什么想法要跟大家分享吗？&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;CS&lt;/strong&gt;：Kubernetes 读书会不仅仅是一个讨论书籍的专业小组，它是一个充满活力的社区，
有许多令人敬佩的志愿者帮助组织和主持聚会。我想借这次机会感谢几位志愿者：
&lt;a href=&#34;https://www.linkedin.com/in/neependra/&#34;&gt;Neependra Khare&lt;/a&gt;、
&lt;a href=&#34;https://www.linkedin.com/in/ericsmalling/&#34;&gt;Eric Smalling&lt;/a&gt;、
&lt;a href=&#34;https://www.linkedin.com/in/sevikarakulak/&#34;&gt;Sevi Karakulak&lt;/a&gt;、
&lt;a href=&#34;https://www.linkedin.com/in/chadmcrowell/&#34;&gt;Chad M. Crowell&lt;/a&gt;
和 &lt;a href=&#34;https://www.linkedin.com/in/walidshaari/&#34;&gt;Walid (CNJ) Shaari&lt;/a&gt;。
欢迎来 KubeCon 与我们相聚，还能领取你的 Kubernetes 读书会贴纸！&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>镜像文件系统：配置 Kubernetes 将容器存储在独立的文件系统上</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2024/01/23/kubernetes-separate-image-filesystem/</link>
      <pubDate>Tue, 23 Jan 2024 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2024/01/23/kubernetes-separate-image-filesystem/</guid>
      <description>
        
        
        &lt;!--
layout: blog
title: &#39;Image Filesystem: Configuring Kubernetes to store containers on a separate filesystem&#39;
date: 2024-01-23
slug: kubernetes-separate-image-filesystem
--&gt;
&lt;!--
**Author:** Kevin Hannon (Red Hat)
--&gt;
&lt;p&gt;&lt;strong&gt;作者:&lt;/strong&gt; Kevin Hannon (Red Hat)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者:&lt;/strong&gt; &lt;a href=&#34;https://github.com/windsonsea&#34;&gt;Michael Yao&lt;/a&gt;&lt;/p&gt;
&lt;!--
A common issue in running/operating Kubernetes clusters is running out of disk space.
When the node is provisioned, you should aim to have a good amount of storage space for your container images and running containers.
The [container runtime](/docs/setup/production-environment/container-runtimes/) usually writes to `/var`. 
This can be located as a separate partition or on the root filesystem.
CRI-O, by default, writes its containers and images to `/var/lib/containers`, while containerd writes its containers and images to `/var/lib/containerd`.
--&gt;
&lt;p&gt;磁盘空间不足是运行或操作 Kubernetes 集群时的一个常见问题。
在制备节点时，你应该为容器镜像和正在运行的容器留足够的存储空间。
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/setup/production-environment/container-runtimes/&#34;&gt;容器运行时&lt;/a&gt;通常会向 &lt;code&gt;/var&lt;/code&gt; 目录写入数据。
此目录可以位于单独的分区或根文件系统上。CRI-O 默认将其容器和镜像写入 &lt;code&gt;/var/lib/containers&lt;/code&gt;，
而 containerd 将其容器和镜像写入 &lt;code&gt;/var/lib/containerd&lt;/code&gt;。&lt;/p&gt;
&lt;!--
In this blog post, we want to bring attention to ways that you can configure your container runtime to store its content separately from the default partition.  
This allows for more flexibility in configuring Kubernetes and provides support for adding a larger disk for the container storage while keeping the default filesystem untouched.  

One area that needs more explaining is where/what Kubernetes is writing to disk.
--&gt;
&lt;p&gt;在这篇博文中，我们想介绍几种配置容器运行时的方式，将其内容存储到默认分区之外的其他位置。
这些配置让我们可以更灵活地配置 Kubernetes，支持在保持默认文件系统不受影响的情况下为容器存储添加更大的磁盘。&lt;/p&gt;
&lt;p&gt;需要额外说明的是 Kubernetes 向磁盘写入数据的具体位置及写入的内容。&lt;/p&gt;
&lt;!--
## Understanding Kubernetes disk usage

Kubernetes has persistent data and ephemeral data.  The base path for the kubelet and local
Kubernetes-specific storage is configurable, but it is usually assumed to be `/var/lib/kubelet`.
In the Kubernetes docs, this is sometimes referred to as the root or node filesystem. The bulk of this data can be categorized into:
--&gt;
&lt;h2 id=&#34;understanding-kubernetes-disk-usage&#34;&gt;了解 Kubernetes 磁盘使用情况&lt;/h2&gt;
&lt;p&gt;Kubernetes 有持久数据和临时数据。kubelet 和特定于 Kubernetes 的本地存储的基础路径是可配置的，
但通常假定为 &lt;code&gt;/var/lib/kubelet&lt;/code&gt;。在 Kubernetes 文档中，
这一位置有时被称为根文件系统或节点文件系统。写入的数据可以大致分类为：&lt;/p&gt;
&lt;!--
- ephemeral storage
- logs
- and container runtime

This is different from most POSIX systems as the root/node filesystem is not `/` but the disk that `/var/lib/kubelet` is on.
--&gt;
&lt;ul&gt;
&lt;li&gt;临时存储&lt;/li&gt;
&lt;li&gt;日志&lt;/li&gt;
&lt;li&gt;容器运行时&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;与大多数 POSIX 系统不同，这里的根/节点文件系统不是 &lt;code&gt;/&lt;/code&gt;，而是 &lt;code&gt;/var/lib/kubelet&lt;/code&gt; 所在的磁盘。&lt;/p&gt;
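作为示意（假设在一台 Linux 机器上运行；此处用 `/var/lib` 演示，真实的 Kubernetes 节点上应替换为 `/var/lib/kubelet`），下面的脚本通过比较设备号来判断某个路径是否位于与根目录 `/` 不同的文件系统上：

```shell
# 示意脚本：判断 path 是否与 / 位于同一文件系统
# （此处用 /var/lib 演示；在 Kubernetes 节点上可改为 /var/lib/kubelet）
path=/var/lib
root_dev=$(stat -c '%d' /)
path_dev=$(stat -c '%d' "$path")
if [ "$root_dev" -eq "$path_dev" ]; then
  echo "$path 与 / 位于同一文件系统（nodefs 即根文件系统所在磁盘）"
else
  echo "$path 是独立的文件系统，nodefs 指向它所在的磁盘"
fi
```

这也说明了正文中的要点：nodefs 的含义取决于 `/var/lib/kubelet` 实际挂载在哪块磁盘上，而不一定是 `/`。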
&lt;!--
### Ephemeral storage

Pods and containers can require temporary or transient local storage for their operation.
The lifetime of the ephemeral storage does not extend beyond the life of the individual pod, and the ephemeral storage cannot be shared across pods.
--&gt;
&lt;h3 id=&#34;ephemeral-storage&#34;&gt;临时存储&lt;/h3&gt;
&lt;p&gt;Pod 和容器的某些操作可能需要临时或瞬态的本地存储。
临时存储的生命周期不会超出单个 Pod 的生命周期，且临时存储不能在多个 Pod 之间共享。&lt;/p&gt;
&lt;!--
### Logs

By default, Kubernetes stores the logs of each running container, as files within `/var/log`.
These logs are ephemeral and are monitored by the kubelet to make sure that they do not grow too large while the pods are running.

You can customize the [log rotation](/docs/concepts/cluster-administration/logging/#log-rotation) settings
for each node to manage the size of these logs, and configure log shipping (using a 3rd party solution)
to avoid relying on the node-local storage.
--&gt;
&lt;h3 id=&#34;logs&#34;&gt;日志&lt;/h3&gt;
&lt;p&gt;默认情况下，Kubernetes 将每个运行容器的日志存储为 &lt;code&gt;/var/log&lt;/code&gt; 中的文件。
这些日志是临时性质的，并由 kubelet 负责监控以确保不会在 Pod 运行时变得过大。&lt;/p&gt;
&lt;p&gt;你可以为每个节点自定义&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/concepts/cluster-administration/logging/#log-rotation&#34;&gt;日志轮换&lt;/a&gt;设置，
以管控这些日志的大小，并（使用第三方解决方案）配置日志转储以避免对节点本地存储形成依赖。&lt;/p&gt;
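上面提到的日志轮换设置可以在 kubelet 配置文件中表达。下面是一个最小的示意片段（`containerLogMaxSize` 和 `containerLogMaxFiles` 是 KubeletConfiguration 中控制日志轮换的字段，示例取值仅供参考）：

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# 单个容器日志文件在轮换前允许达到的最大体积（示例值）
containerLogMaxSize: "10Mi"
# 每个容器最多保留的日志文件个数（示例值）
containerLogMaxFiles: 5
```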
&lt;!--
### Container runtime

The container runtime has two different areas of storage for containers and images.
- read-only layer: Images are usually denoted as the read-only layer, as they are not modified when containers are running.
The read-only layer can consist of multiple layers that are combined into a single read-only layer.
There is a thin layer on top of containers that provides ephemeral storage for containers if the container is writing to the filesystem.
--&gt;
&lt;h3 id=&#34;container-runtime&#34;&gt;容器运行时&lt;/h3&gt;
&lt;p&gt;容器运行时针对容器和镜像使用两个不同的存储区域。&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;只读层：镜像通常被表示为只读层，因为镜像在容器处于运行状态期间不会被修改。
只读层可以由多个层组成，这些层组合到一起形成最终的只读层。
如果容器要向文件系统中写入数据，则在容器层之上会存在一个薄层为容器提供临时存储。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
- writeable layer: Depending on your container runtime, local writes might be
implemented as a layered write mechanism (for example, `overlayfs` on Linux or CimFS on Windows).
This is referred to as the writable layer.
Local writes could also use a writeable filesystem that is initialized with a full clone of the container
image; this is used for some runtimes based on hypervisor virtualisation.

The container runtime filesystem contains both the read-only layer and the writeable layer.
This is considered the `imagefs` in Kubernetes documentation.
--&gt;
&lt;ul&gt;
&lt;li&gt;可写层：取决于容器运行时的不同实现，本地写入可能会用分层写入机制来实现
（例如 Linux 上的 &lt;code&gt;overlayfs&lt;/code&gt; 或 Windows 上的 CimFS）。这一机制被称为可写层。
本地写入也可以使用一个可写文件系统来实现，该文件系统使用容器镜像的完整克隆来初始化；
这种方式适用于某些基于 Hypervisor 虚拟化的运行时。&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;容器运行时文件系统包含只读层和可写层。在 Kubernetes 文档中，这一文件系统被称为 &lt;code&gt;imagefs&lt;/code&gt;。&lt;/p&gt;
&lt;!--
## Container runtime configurations

### CRI-O

CRI-O uses a storage configuration file in TOML format that lets you control how the container runtime stores persistent and temporary data.
CRI-O utilizes the [storage library](https://github.com/containers/storage).  
Some Linux distributions have a manual entry for storage (`man 5 containers-storage.conf`).
The main configuration for storage is located in `/etc/containers/storage.conf` and one can control the location for temporary data and the root directory.  
The root directory is where CRI-O stores the persistent data.
--&gt;
&lt;h2 id=&#34;container-runtime-configurations&#34;&gt;容器运行时配置&lt;/h2&gt;
&lt;h3 id=&#34;cri-o&#34;&gt;CRI-O&lt;/h3&gt;
&lt;p&gt;CRI-O 使用 TOML 格式的存储配置文件，让你控制容器运行时如何存储持久数据和临时数据。
CRI-O 使用了 &lt;a href=&#34;https://github.com/containers/storage&#34;&gt;containers-storage 库&lt;/a&gt;。
某些 Linux 发行版为 containers-storage 提供了帮助手册条目（&lt;code&gt;man 5 containers-storage.conf&lt;/code&gt;）。
存储的主要配置位于 &lt;code&gt;/etc/containers/storage.conf&lt;/code&gt; 中，你可以控制临时数据和根目录的位置。
根目录是 CRI-O 存储持久数据的位置。&lt;/p&gt;
&lt;!--
```toml
[storage]
# Default storage driver
driver = &#34;overlay&#34;
# Temporary storage location
runroot = &#34;/var/run/containers/storage&#34;
# Primary read/write location of container storage 
graphroot = &#34;/var/lib/containers/storage&#34;
```
--&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-toml&#34; data-lang=&#34;toml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;[storage]
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# 默认存储驱动&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;driver = &lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;overlay&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# 临时存储位置&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;runroot = &lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;/var/run/containers/storage&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# 容器存储的主要读/写位置&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;graphroot = &lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;/var/lib/containers/storage&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
- `graphroot`
  - Persistent data stored from the container runtime
  - If SELinux is enabled, this must match the `/var/lib/containers/storage`
- `runroot`
  - Temporary read/write access for container
  - Recommended to have this on a temporary filesystem
--&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;graphroot&lt;/code&gt;
&lt;ul&gt;
&lt;li&gt;存储来自容器运行时的持久数据&lt;/li&gt;
&lt;li&gt;如果 SELinux 被启用，则此目录的标签必须与 &lt;code&gt;/var/lib/containers/storage&lt;/code&gt; 匹配&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;code&gt;runroot&lt;/code&gt;
&lt;ul&gt;
&lt;li&gt;容器的临时读/写访问&lt;/li&gt;
&lt;li&gt;建议将其放在某个临时文件系统上&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
Here is a quick way to relabel your graphroot directory to match `/var/lib/containers/storage`:

```bash
semanage fcontext -a -e /var/lib/containers/storage &lt;YOUR-STORAGE-PATH&gt;
restorecon -R -v &lt;YOUR-STORAGE-PATH&gt;
```
--&gt;
&lt;p&gt;以下是为你的 graphroot 目录快速重新打标签以匹配 &lt;code&gt;/var/lib/containers/storage&lt;/code&gt; 的方法：&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-bash&#34; data-lang=&#34;bash&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;semanage fcontext -a -e /var/lib/containers/storage &amp;lt;你的存储路径&amp;gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;restorecon -R -v &amp;lt;你的存储路径&amp;gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### containerd

The containerd runtime uses a TOML configuration file to control where persistent and ephemeral data is stored.
The default path for the config file is located at `/etc/containerd/config.toml`.

The relevant fields for containerd storage are `root` and `state`.
--&gt;
&lt;h3 id=&#34;containerd&#34;&gt;containerd&lt;/h3&gt;
&lt;p&gt;containerd 运行时使用 TOML 配置文件来控制存储持久数据和临时数据的位置。
配置文件的默认路径位于 &lt;code&gt;/etc/containerd/config.toml&lt;/code&gt;。&lt;/p&gt;
&lt;p&gt;与 containerd 存储的相关字段是 &lt;code&gt;root&lt;/code&gt; 和 &lt;code&gt;state&lt;/code&gt;。&lt;/p&gt;
&lt;!--
- `root`
  - The root directory for containerd metadata
  - Default is `/var/lib/containerd`
  - Root also requires SELinux labels if your OS requires it
- `state`
  - Temporary data for containerd
  - Default is `/run/containerd`
--&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;root&lt;/code&gt;
&lt;ul&gt;
&lt;li&gt;containerd 元数据的根目录&lt;/li&gt;
&lt;li&gt;默认为 &lt;code&gt;/var/lib/containerd&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;如果你的操作系统要求，需要为根目录设置 SELinux 标签&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;code&gt;state&lt;/code&gt;
&lt;ul&gt;
&lt;li&gt;containerd 的临时数据&lt;/li&gt;
&lt;li&gt;默认为 &lt;code&gt;/run/containerd&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
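与上文 CRI-O 的示例类似，下面给出一个最小的 `/etc/containerd/config.toml` 片段作为示意，仅包含 `root` 与 `state` 两个字段（示例取值即默认值）：

```toml
version = 2
# containerd 持久数据（元数据、镜像等）的根目录
root = "/var/lib/containerd"
# containerd 的临时/运行时状态目录
state = "/run/containerd"
```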
&lt;!--
## Kubernetes node pressure eviction

Kubernetes will automatically detect if the container filesystem is split from the node filesystem. 
When one separates the filesystem, Kubernetes is responsible for monitoring both the node filesystem and the container runtime filesystem.
Kubernetes documentation refers to the node filesystem and the container runtime filesystem as nodefs and imagefs.
If either nodefs or the imagefs are running out of disk space, then the overall node is considered to have disk pressure.
Kubernetes will first reclaim space by deleting unusued containers and images, and then it will resort to evicting pods.
On a node that has a nodefs and an imagefs, the kubelet will
[garbage collect](/docs/concepts/architecture/garbage-collection/#containers-images) unused container images
on imagefs and will remove dead pods and their containers from the nodefs.
If there is only a nodefs, then Kubernetes garbage collection includes dead containers, dead pods and unused images.
--&gt;
&lt;h2 id=&#34;kubernetes-node-pressure-eviction&#34;&gt;Kubernetes 节点压力驱逐&lt;/h2&gt;
&lt;p&gt;Kubernetes 将自动检测容器文件系统是否与节点文件系统分离。
当你分离文件系统时，Kubernetes 负责同时监视节点文件系统和容器运行时文件系统。
Kubernetes 文档将节点文件系统称为 nodefs，将容器运行时文件系统称为 imagefs。
如果 nodefs 或 imagefs 中有一个磁盘空间不足，则整个节点被视为有磁盘压力。
这种情况下，Kubernetes 先通过删除未使用的容器和镜像来回收空间，之后会尝试驱逐 Pod。
在同时具有 nodefs 和 imagefs 的节点上，kubelet 将在 imagefs
上对未使用的容器镜像执行&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/concepts/architecture/garbage-collection/#containers-images&#34;&gt;垃圾回收&lt;/a&gt;，
并从 nodefs 中移除死掉的 Pod 及其容器。
如果只有 nodefs，则 Kubernetes 垃圾回收将包括死掉的容器、死掉的 Pod 和未使用的镜像。&lt;/p&gt;
&lt;!--
Kubernetes allows more configurations for determining if your disk is full.  
The eviction manager within the kubelet has some configuration settings that let you control
the relevant thresholds.
For filesystems, the relevant measurements are `nodefs.available`, `nodefs.inodesfree`, `imagefs.available`, and `imagefs.inodesfree`.
If there is not a dedicated disk for the container runtime then imagefs is ignored.

Users can use the existing defaults:
--&gt;
&lt;p&gt;Kubernetes 提供额外的配置方法来确定磁盘是否已满。kubelet 中的驱逐管理器有一些让你可以控制相关阈值的配置项。
对于文件系统，相关测量值有 &lt;code&gt;nodefs.available&lt;/code&gt;、&lt;code&gt;nodefs.inodesfree&lt;/code&gt;、&lt;code&gt;imagefs.available&lt;/code&gt; 和
&lt;code&gt;imagefs.inodesfree&lt;/code&gt;。如果容器运行时没有专用磁盘，则 imagefs 被忽略。&lt;/p&gt;
&lt;p&gt;用户可以使用现有的默认值：&lt;/p&gt;
&lt;!--
- `memory.available` &lt; 100MiB
- `nodefs.available` &lt; 10%
- `imagefs.available` &lt; 15%
- `nodefs.inodesFree` &lt; 5% (Linux nodes)

Kubernetes allows you to set user defined values in `EvictionHard` and `EvictionSoft` in the kubelet configuration file.
--&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;memory.available&lt;/code&gt; &amp;lt; 100MiB&lt;/li&gt;
&lt;li&gt;&lt;code&gt;nodefs.available&lt;/code&gt; &amp;lt; 10%&lt;/li&gt;
&lt;li&gt;&lt;code&gt;imagefs.available&lt;/code&gt; &amp;lt; 15%&lt;/li&gt;
&lt;li&gt;&lt;code&gt;nodefs.inodesFree&lt;/code&gt; &amp;lt; 5%（Linux 节点）&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Kubernetes 允许你在 kubelet 配置文件中将 &lt;code&gt;EvictionHard&lt;/code&gt; 和 &lt;code&gt;EvictionSoft&lt;/code&gt; 设置为用户定义的值。&lt;/p&gt;
&lt;!--
`EvictionHard`
: defines limits; once these limits are exceeded, pods will be evicted without any grace period.

`EvictionSoft`
: defines limits; once these limits are exceeded, pods will be evicted with a grace period that can be set per signal.
--&gt;
&lt;dl&gt;
&lt;dt&gt;&lt;code&gt;EvictionHard&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;定义限制；一旦超出这些限制，Pod 将被立即驱逐，没有任何宽限期。&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;EvictionSoft&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;定义限制；一旦超出这些限制，Pod 将在按各信号所设置的宽限期后被驱逐。&lt;/dd&gt;
&lt;/dl&gt;
&lt;!--
If you specify a value for `EvictionHard`, it will replace the defaults.  
This means it is important to set all signals in your configuration.

For example, the following kubelet configuration could be used to configure [eviction signals](/docs/concepts/scheduling-eviction/node-pressure-eviction/#eviction-signals-and-thresholds) and grace period options.
--&gt;
&lt;p&gt;如果你为 &lt;code&gt;EvictionHard&lt;/code&gt; 指定了值，所设置的值将取代默认值。
这意味着在你的配置中设置所有信号非常重要。&lt;/p&gt;
&lt;p&gt;例如，以下 kubelet
配置可用于配置&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/concepts/scheduling-eviction/node-pressure-eviction/#eviction-signals-and-thresholds&#34;&gt;驱逐信号&lt;/a&gt;和宽限期选项。&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;kubelet.config.k8s.io/v1beta1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;KubeletConfiguration&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;address&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;192.168.0.8&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;port&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;20250&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;serializeImagePulls&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;false&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;evictionHard&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;memory.available&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;100Mi&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;nodefs.available&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;10%&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;nodefs.inodesFree&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;5%&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;imagefs.available&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;15%&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;imagefs.inodesFree&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;5%&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;evictionSoft&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;memory.available&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;100Mi&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;nodefs.available&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;10%&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;nodefs.inodesFree&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;5%&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;imagefs.available&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;15%&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;imagefs.inodesFree&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;5%&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;evictionSoftGracePeriod&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;memory.available&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;1m30s&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;nodefs.available&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;2m&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;nodefs.inodesFree&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;2m&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;imagefs.available&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;2m&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;imagefs.inodesFree&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;2m&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;evictionMaxPodGracePeriod&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;60&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### Problems

The Kubernetes project recommends that you either use the default settings for eviction or you set all the fields for eviction.
You can use the default settings or specify your own `evictionHard` settings. If you miss a signal, then Kubernetes will not monitor that resource.
One common misconfiguration administrators or users can hit is mounting a new filesystem to `/var/lib/containers/storage` or `/var/lib/containerd`.
Kubernetes will detect a separate filesystem, so you want to make sure to check that `imagefs.inodesFree` and `imagefs.available` match your needs if you&#39;ve done this.
--&gt;
&lt;h3 id=&#34;problems&#34;&gt;问题  &lt;/h3&gt;
&lt;p&gt;Kubernetes 项目建议你针对 Pod 驱逐要么使用其默认设置，要么设置与之相关的所有字段。
你可以使用默认设置或指定你自己的 &lt;code&gt;evictionHard&lt;/code&gt; 设置。如果你漏掉一个信号，那么 Kubernetes 将不会监视该资源。
管理员或用户可能会遇到的一个常见误配是将新的文件系统挂载到 &lt;code&gt;/var/lib/containers/storage&lt;/code&gt; 或 &lt;code&gt;/var/lib/containerd&lt;/code&gt;。
如果你这样做了，Kubernetes 将检测到一个单独的文件系统，因此你要确保 &lt;code&gt;imagefs.inodesFree&lt;/code&gt; 和 &lt;code&gt;imagefs.available&lt;/code&gt; 的设置符合你的需要。&lt;/p&gt;
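&lt;p&gt;下面是一个假设性的检查思路（其中 &lt;code&gt;/var/lib/containerd&lt;/code&gt; 路径取决于你的容器运行时配置，仅作示意）：比较根文件系统与镜像存储目录的设备号，若二者不同，kubelet 会将镜像存储视为单独的镜像文件系统（imagefs）：&lt;/p&gt;

```shell
# 假设性示例：比较两个目录所在文件系统的设备号
# /var/lib/containerd 取决于容器运行时配置，仅作示意
rootdev=$(stat -c '%d' /)
imagedev=$(stat -c '%d' /var/lib/containerd 2>/dev/null || stat -c '%d' /)
if [ "$rootdev" != "$imagedev" ]; then
  echo "separate imagefs detected"
else
  echo "single filesystem"
fi
```

&lt;p&gt;若检测到单独的镜像文件系统，请相应检查 &lt;code&gt;imagefs&lt;/code&gt; 相关驱逐信号的设置。&lt;/p&gt;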
&lt;!--
Another area of confusion is that ephemeral storage reporting does not change if you define an image
filesystem for your node. The image filesystem (`imagefs`) is used to store container image layers; if a
container writes to its own root filesystem, that local write doesn&#39;t count towards the size of the container image. The place where the container runtime stores those local modifications is runtime-defined, but is often
the image filesystem.
If a container in a pod is writing to a filesystem-backed `emptyDir` volume, then this uses space from the
`nodefs` filesystem.
The kubelet always reports ephemeral storage capacity and allocations based on the filesystem represented
by `nodefs`; this can be confusing when ephemeral writes are actually going to the image filesystem.
--&gt;
&lt;p&gt;另一个令人困惑的地方是，如果你为节点定义了镜像文件系统，则临时存储报告不会发生变化。
镜像文件系统（&lt;code&gt;imagefs&lt;/code&gt;）用于存储容器镜像层；如果容器向自己的根文件系统写入，
那么这种本地写入不会计入容器镜像的大小。容器运行时存储这些本地修改的位置是由运行时定义的，但通常是镜像文件系统。
如果 Pod 中的容器正在向基于文件系统的 &lt;code&gt;emptyDir&lt;/code&gt; 卷写入，所写入的数据将使用 &lt;code&gt;nodefs&lt;/code&gt; 文件系统的空间。
kubelet 始终根据 &lt;code&gt;nodefs&lt;/code&gt; 所表示的文件系统来报告临时存储容量和分配情况；
当临时写入操作实际上是写到镜像文件系统时，这种差别可能会让人困惑。&lt;/p&gt;
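&lt;p&gt;例如，下面这个假设性的 Pod（镜像与名称仅为示意）中，容器向 &lt;code&gt;/cache&lt;/code&gt; 写入的数据落在基于文件系统的 &lt;code&gt;emptyDir&lt;/code&gt; 卷上，因此消耗并计入 &lt;code&gt;nodefs&lt;/code&gt; 的空间，即使节点配置了单独的镜像文件系统：&lt;/p&gt;

```yaml
# 假设性示例：对 emptyDir 卷的写入计入 nodefs，而非 imagefs
apiVersion: v1
kind: Pod
metadata:
  name: ephemeral-demo   # 假设的名称
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir:
      sizeLimit: 500Mi
```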
&lt;!--
### Future work

To fix the ephemeral storage reporting limitations and provide more configuration options to the container runtime, SIG Node are working on [KEP-4191](http://kep.k8s.io/4191).
In KEP-4191, Kubernetes will detect if the writeable layer is separated from the read-only layer (images).
This would allow us to have all ephemeral storage, including the writeable layer, on the same disk as well as allowing for a separate disk for images.
--&gt;
&lt;h3 id=&#34;future-work&#34;&gt;后续工作  &lt;/h3&gt;
&lt;p&gt;为了解决临时存储报告相关的限制并为容器运行时提供更多配置选项，SIG Node
正在处理 &lt;a href=&#34;http://kep.k8s.io/4191&#34;&gt;KEP-4191&lt;/a&gt;。在 KEP-4191 中，
Kubernetes 将检测可写层是否与只读层（镜像）分离。
这种检测使我们可以将包括可写层在内的所有临时存储放在同一磁盘上，同时也可以为镜像使用单独的磁盘。&lt;/p&gt;
&lt;!--
### Getting involved

If you would like to get involved, you can
join [Kubernetes Node Special-Interest-Group](https://github.com/kubernetes/community/tree/master/sig-node) (SIG).

If you would like to share feedback, you can do so on our
[#sig-node](https://kubernetes.slack.com/archives/C0BP8PW9G) Slack channel.
If you&#39;re not already part of that Slack workspace, you can visit https://slack.k8s.io/ for an invitation.
--&gt;
&lt;h3 id=&#34;getting-involved&#34;&gt;参与其中  &lt;/h3&gt;
&lt;p&gt;如果你想参与其中，可以加入
&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-node&#34;&gt;Kubernetes Node 特别兴趣小组&lt;/a&gt;（SIG）。&lt;/p&gt;
&lt;p&gt;如果你想分享反馈，可以分享到我们的
&lt;a href=&#34;https://kubernetes.slack.com/archives/C0BP8PW9G&#34;&gt;#sig-node&lt;/a&gt; Slack 频道。
如果你还没有加入该 Slack 工作区，可以访问 &lt;a href=&#34;https://slack.k8s.io/&#34;&gt;https://slack.k8s.io/&lt;/a&gt; 获取邀请。&lt;/p&gt;
&lt;!--
Special thanks to all the contributors who provided great reviews, shared valuable insights or suggested the topic idea.
--&gt;
&lt;p&gt;特别感谢所有提供出色评审、分享宝贵见解或建议主题想法的贡献者。&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Peter Hunt&lt;/li&gt;
&lt;li&gt;Mrunal Patel&lt;/li&gt;
&lt;li&gt;Ryan Phillips&lt;/li&gt;
&lt;li&gt;Gaurav Singh&lt;/li&gt;
&lt;/ul&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes 1.29 中的上下文日志生成：更好的故障排除和增强的日志记录</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/12/20/contextual-logging-in-kubernetes-1-29/</link>
      <pubDate>Wed, 20 Dec 2023 09:30:00 -0800</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/12/20/contextual-logging-in-kubernetes-1-29/</guid>
      <description>
        
        
        &lt;!--
layout: blog
title: &#34;Contextual logging in Kubernetes 1.29: Better troubleshooting and enhanced logging&#34;
slug: contextual-logging-in-kubernetes-1-29
date: 2023-12-20T09:30:00-08:00
canonicalUrl: https://www.kubernetes.dev/blog/2023/12/20/contextual-logging/
--&gt;
&lt;!--
**Authors**: [Mengjiao Liu](https://github.com/mengjiao-liu/) (DaoCloud), [Patrick Ohly](https://github.com/pohly) (Intel)
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：&lt;a href=&#34;https://github.com/mengjiao-liu/&#34;&gt;Mengjiao Liu&lt;/a&gt; (DaoCloud), &lt;a href=&#34;https://github.com/pohly&#34;&gt;Patrick Ohly&lt;/a&gt; (Intel)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者&lt;/strong&gt;：&lt;a href=&#34;https://github.com/mengjiao-liu/&#34;&gt;Mengjiao Liu&lt;/a&gt; (DaoCloud)&lt;/p&gt;
&lt;!--
On behalf of the [Structured Logging Working Group](https://github.com/kubernetes/community/blob/master/wg-structured-logging/README.md) 
and [SIG Instrumentation](https://github.com/kubernetes/community/tree/master/sig-instrumentation#readme), 
we are pleased to announce that the contextual logging feature
introduced in Kubernetes v1.24 has now been successfully migrated to
two components (kube-scheduler and kube-controller-manager)
as well as some directories. This feature aims to provide more useful logs 
for better troubleshooting of Kubernetes and to empower developers to enhance Kubernetes.
--&gt;
&lt;p&gt;我们代表&lt;a href=&#34;https://github.com/kubernetes/community/blob/master/wg-structured-logging/README.md&#34;&gt;结构化日志工作组&lt;/a&gt;和
&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-instrumentation#readme&#34;&gt;SIG Instrumentation&lt;/a&gt;，
很高兴地宣布在 Kubernetes v1.24 中引入的上下文日志记录功能现已成功迁移到两个组件（kube-scheduler 和 kube-controller-manager）
以及一些目录。该功能旨在为 Kubernetes 提供更有用的日志以便更好地进行故障排除，并帮助开发人员增强 Kubernetes。&lt;/p&gt;
&lt;!--
## What is contextual logging?

[Contextual logging](https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/3077-contextual-logging)
is based on the [go-logr](https://github.com/go-logr/logr#a-minimal-logging-api-for-go) API. 
The key idea is that libraries are passed a logger instance by their caller
and use that for logging instead of accessing a global logger.
The binary decides the logging implementation, not the libraries.
The go-logr API is designed around structured logging and supports attaching
additional information to a logger.
--&gt;
&lt;h2 id=&#34;what-is-contextual-logging&#34;&gt;上下文日志记录是什么？ &lt;/h2&gt;
&lt;p&gt;&lt;a href=&#34;https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/3077-contextual-logging&#34;&gt;上下文日志记录&lt;/a&gt;基于
&lt;a href=&#34;https://github.com/go-logr/logr#a-minimal-logging-api-for-go&#34;&gt;go-logr&lt;/a&gt; API。
关键思想是调用者将一个日志生成器实例传递给库，并使用它进行日志记录而不是访问全局日志生成器。
二进制文件而不是库负责选择日志记录的实现。go-logr API 围绕结构化日志记录而设计，并支持向日志生成器提供额外信息。&lt;/p&gt;
&lt;!--
This enables additional use cases:

- The caller can attach additional information to a logger:
  - [WithName](&lt;https://pkg.go.dev/github.com/go-logr/logr#Logger.WithName&gt;) adds a &#34;logger&#34; key with the names concatenated by a dot as value
  - [WithValues](&lt;https://pkg.go.dev/github.com/go-logr/logr#Logger.WithValues&gt;) adds key/value pairs

  When passing this extended logger into a function, and the function uses it
  instead of the global logger, the additional information is then included 
  in all log entries, without having to modify the code that generates the log entries. 
  This is useful in highly parallel applications where it can become hard to identify 
  all log entries for a certain operation, because the output from different operations gets interleaved.

- When running unit tests, log output can be associated with the current test.
  Then, when a test fails, only the log output of the failed test gets shown by go test.
  That output can also be more verbose by default because it will not get shown for successful tests.
  Tests can be run in parallel without interleaving their output.
--&gt;
&lt;p&gt;这一设计可以支持某些额外的使用场景：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;调用者可以为日志生成器提供额外的信息：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://pkg.go.dev/github.com/go-logr/logr#Logger.WithName&#34;&gt;WithName&lt;/a&gt; 添加一个 “logger” 键，
并用句点（.）将名称的各个部分串接起来作为取值&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://pkg.go.dev/github.com/go-logr/logr#Logger.WithValues&#34;&gt;WithValues&lt;/a&gt; 添加键/值对&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;当将此经过扩展的日志生成器传递到函数中，并且该函数使用它而不是全局日志生成器时，
所有日志条目中都会包含所给的额外信息，而无需修改生成日志条目的代码。
这一特点在高度并行的应用中非常有用。在这类应用中，很难辨识某操作的所有日志条目，因为不同操作的输出是交错的。&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;运行单元测试时，日志输出可以与当前测试相关联。且当测试失败时，go test 仅显示失败测试的日志输出。
默认情况下，该输出也可能更详细，因为它不会在成功的测试中显示。测试可以并行运行，而无需交错输出。&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
One of the design decisions for contextual logging was to allow attaching a logger as value to a `context.Context`.
Since the logger encapsulates all aspects of the intended logging for the call,
it is *part* of the context, and not just *using* it. A practical advantage is that many APIs
already have a `ctx` parameter or can add one. This provides additional advantages, like being able to
get rid of `context.TODO()` calls inside the functions.
--&gt;
&lt;p&gt;上下文日志记录的设计决策之一是允许将日志生成器作为值附加到 &lt;code&gt;context.Context&lt;/code&gt; 之上。
由于日志生成器封装了调用所预期的、与日志记录相关的所有元素，
因此它是 context 的&lt;strong&gt;一部分&lt;/strong&gt;，而不仅仅是&lt;strong&gt;使用&lt;/strong&gt;它。这一设计的一个比较实际的优点是，
许多 API 已经有一个 &lt;code&gt;ctx&lt;/code&gt; 参数，或者可以添加一个 &lt;code&gt;ctx&lt;/code&gt; 参数。
进而产生的额外好处还包括比如可以去掉函数内的 &lt;code&gt;context.TODO()&lt;/code&gt; 调用。&lt;/p&gt;
&lt;!--
## How to use it

The contextual logging feature is alpha starting from Kubernetes v1.24,
so it requires the `ContextualLogging` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) to be enabled.
If you want to test the feature while it is alpha, you need to enable this feature gate
on the `kube-controller-manager` and the `kube-scheduler`.
--&gt;
&lt;h2 id=&#34;how-to-use-it&#34;&gt;如何使用它 &lt;/h2&gt;
&lt;p&gt;从 Kubernetes v1.24 开始，上下文日志记录功能处于 Alpha 状态，因此它需要启用
&lt;code&gt;ContextualLogging&lt;/code&gt; &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/command-line-tools-reference/feature-gates/&#34;&gt;特性门控&lt;/a&gt;。
如果你想在该功能处于 Alpha 状态时对其进行测试，则需要在 &lt;code&gt;kube-controller-manager&lt;/code&gt; 和 &lt;code&gt;kube-scheduler&lt;/code&gt; 上启用此特性门控。&lt;/p&gt;
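&lt;p&gt;例如（假设性片段，实际清单路径和其余参数取决于集群的部署方式），可以在 kube-scheduler 的静态 Pod 清单中这样启用该特性门控：&lt;/p&gt;

```yaml
# 假设性片段：为 kube-scheduler 启用 ContextualLogging 特性门控
spec:
  containers:
  - name: kube-scheduler
    command:
    - kube-scheduler
    - --feature-gates=ContextualLogging=true
```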
&lt;!--
For the `kube-scheduler`, there is one thing to note, in addition to enabling 
the `ContextualLogging` feature gate, instrumentation also depends on log verbosity.
To avoid slowing down the scheduler with the logging instrumentation for contextual logging added for 1.29,
it is important to choose carefully when to add additional information:
- At `-v3` or lower, only `WithValues(&#34;pod&#34;)` is used once per scheduling cycle.
  This has the intended effect that all log messages for the cycle include the pod information. 
  Once contextual logging is GA, &#34;pod&#34; key/value pairs can be removed from all log calls.
- At `-v4` or higher, richer log entries get produced where `WithValues` is also used for the node (when applicable)
  and `WithName` is used for the current operation and plugin.
--&gt;
&lt;p&gt;对于 &lt;code&gt;kube-scheduler&lt;/code&gt;，有一点需要注意，除了启用 &lt;code&gt;ContextualLogging&lt;/code&gt; 特性门控之外，
插桩行为还取决于日志的详细程度设置。
为了避免因 1.29 添加的上下文日志记录工具而降低调度程序的速度，请务必仔细选择何时添加额外的信息：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;在 &lt;code&gt;-v3&lt;/code&gt; 或更低日志级别中，每个调度周期仅使用一次 &lt;code&gt;WithValues(&amp;quot;pod&amp;quot;)&lt;/code&gt;。
这样做可以达到预期效果，即该周期的所有日志消息都包含 Pod 信息。
一旦上下文日志记录特性到达 GA 阶段，就可以从所有日志调用中删除 “pod” 键值对。&lt;/li&gt;
&lt;li&gt;在 &lt;code&gt;-v4&lt;/code&gt; 或更高日志级别中，会生成更丰富的日志条目，其中 &lt;code&gt;WithValues&lt;/code&gt; 也用于节点（如果适用），&lt;code&gt;WithName&lt;/code&gt; 用于当前操作和插件。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
Here is an example that demonstrates the effect:
--&gt;
&lt;p&gt;下面的示例展示了这一效果：&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;I1113 08:43:37.029524   87144 default_binder.go:53] &amp;quot;Attempting to bind pod to node&amp;quot; &lt;strong&gt;logger=&amp;quot;Bind.DefaultBinder&amp;quot;&lt;/strong&gt; &lt;strong&gt;pod&lt;/strong&gt;=&amp;quot;kube-system/coredns-69cbfb9798-ms4pq&amp;quot; &lt;strong&gt;node&lt;/strong&gt;=&amp;quot;127.0.0.1&amp;quot;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;!--
The immediate benefit is that the operation and plugin name are visible in `logger`.
`pod` and `node` are already logged as parameters in individual log calls in `kube-scheduler` code.
Once contextual logging is supported by more packages outside of `kube-scheduler`, 
they will also be visible there (for example, client-go). Once it is GA,
log calls can be simplified to avoid repeating those values.
--&gt;
&lt;p&gt;这一设计的直接好处是在 &lt;code&gt;logger&lt;/code&gt; 中可以看到操作和插件名称。&lt;code&gt;pod&lt;/code&gt; 和 &lt;code&gt;node&lt;/code&gt; 已作为参数记录在
&lt;code&gt;kube-scheduler&lt;/code&gt; 代码中的各个日志调用中。一旦 &lt;code&gt;kube-scheduler&lt;/code&gt; 之外的更多包支持上下文日志记录，
这些值也将出现在那些包（例如 client-go）的日志中。
一旦上下文日志记录特性到达 GA 阶段，就可以简化日志调用以避免重复这些值。&lt;/p&gt;
&lt;!--
In `kube-controller-manager`, `WithName` is used to add the user-visible controller name to log output, 
for example:
--&gt;
&lt;p&gt;在 &lt;code&gt;kube-controller-manager&lt;/code&gt; 中，&lt;code&gt;WithName&lt;/code&gt; 被用来在日志中输出用户可见的控制器名称，例如：&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;I1113 08:43:29.284360   87141 graph_builder.go:285] &amp;quot;garbage controller monitor not synced: no monitors&amp;quot; &lt;strong&gt;logger=&amp;quot;garbage-collector-controller&amp;quot;&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;!--
The `logger=”garbage-collector-controller”` was added by the `kube-controller-manager` core
when instantiating that controller and appears in all of its log entries - at least as long as the code
that it calls supports contextual logging. Further work is needed to convert shared packages like client-go.
--&gt;
&lt;p&gt;&lt;code&gt;logger=&amp;quot;garbage-collector-controller&amp;quot;&lt;/code&gt; 是由 &lt;code&gt;kube-controller-manager&lt;/code&gt;
核心代码在实例化该控制器时添加的，会出现在其所有日志条目中——只要它所调用的代码支持上下文日志记录。
转换像 client-go 这样的共享包还需要额外的工作。&lt;/p&gt;
&lt;!--
## Performance impact

Supporting contextual logging in a package, i.e. accepting a logger from a caller, is cheap. 
No performance impact was observed for the `kube-scheduler`. As noted above, 
adding `WithName` and `WithValues` needs to be done more carefully.
--&gt;
&lt;h2 id=&#34;performance-impact&#34;&gt;性能影响 &lt;/h2&gt;
&lt;p&gt;在某个包中支持上下文日志记录，即接受来自调用者的日志生成器，成本很低。
未观察到对 &lt;code&gt;kube-scheduler&lt;/code&gt; 的性能影响。如上所述，添加 &lt;code&gt;WithName&lt;/code&gt; 和 &lt;code&gt;WithValues&lt;/code&gt; 时需要更加小心。&lt;/p&gt;
&lt;!--
In Kubernetes 1.29, enabling contextual logging at production verbosity (`-v3` or lower)
caused no measurable slowdown for the `kube-scheduler` and is not expected for the `kube-controller-manager` either.
At debug levels, a 28% slowdown for some test cases is still reasonable given that the resulting logs make debugging easier. 
For details, see the [discussion around promoting the feature to beta](https://github.com/kubernetes/enhancements/pull/4219#issuecomment-1807811995).
--&gt;
&lt;p&gt;在 Kubernetes 1.29 中，以生产环境日志详细程度（&lt;code&gt;-v3&lt;/code&gt; 或更低）启用上下文日志不会导致 &lt;code&gt;kube-scheduler&lt;/code&gt; 速度出现明显的减慢，
并且 &lt;code&gt;kube-controller-manager&lt;/code&gt; 速度也不会出现明显的减慢。在 debug 级别，考虑到生成的日志使调试更容易，某些测试用例减速 28% 仍然是合理的。
详细信息请参阅&lt;a href=&#34;https://github.com/kubernetes/enhancements/pull/4219#issuecomment-1807811995&#34;&gt;有关将该特性升级为 Beta 版的讨论&lt;/a&gt;。&lt;/p&gt;
&lt;!--
## Impact on downstream users
Log output is not part of the Kubernetes API and changes regularly in each release,
whether it is because developers work on the code or because of the ongoing conversion
to structured and contextual logging.

If downstream users have dependencies on specific logs, 
they need to be aware of how this change affects them.
--&gt;
&lt;h2 id=&#34;impact-on-downstream-users&#34;&gt;对下游用户的影响 &lt;/h2&gt;
&lt;p&gt;日志输出不是 Kubernetes API 的一部分，并且在每个版本中都会发生变化，
无论是因为开发人员修改了代码，还是因为向结构化和上下文日志记录的持续转换。&lt;/p&gt;
&lt;p&gt;如果下游用户对特定日志有依赖性，他们需要了解此更改如何影响他们。&lt;/p&gt;
&lt;!--
## Further reading

- Read the [Contextual Logging in Kubernetes 1.24](https://www.kubernetes.dev/blog/2022/05/25/contextual-logging/) article.
- Read the [KEP-3077: contextual logging](https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/3077-contextual-logging).
--&gt;
&lt;h2 id=&#34;further-reading&#34;&gt;进一步阅读 &lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;参阅 &lt;a href=&#34;https://www.kubernetes.dev/blog/2022/05/25/contextual-logging/&#34;&gt;Kubernetes 1.24 中的上下文日志记录&lt;/a&gt;一文。&lt;/li&gt;
&lt;li&gt;参阅 &lt;a href=&#34;https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/3077-contextual-logging&#34;&gt;KEP-3077：上下文日志记录&lt;/a&gt;。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
## Get involved

If you&#39;re interested in getting involved, we always welcome new contributors to join us.
Contextual logging provides a fantastic opportunity for you to contribute to Kubernetes development and make a meaningful impact.
By joining [Structured Logging WG](https://github.com/kubernetes/community/tree/master/wg-structured-logging),
you can actively participate in the development of Kubernetes and make your first contribution.
It&#39;s a great way to learn and engage with the community while gaining valuable experience.
--&gt;
&lt;h2 id=&#34;get-involved&#34;&gt;如何参与 &lt;/h2&gt;
&lt;p&gt;如果你有兴趣参与，我们始终欢迎新的贡献者加入我们。上下文日志记录为你参与
Kubernetes 开发做出贡献并产生有意义的影响提供了绝佳的机会。
通过加入 &lt;a href=&#34;https://github.com/kubernetes/community/tree/master/wg-structured-logging&#34;&gt;Structured Logging WG&lt;/a&gt;，
你可以积极参与 Kubernetes 的开发并做出你的第一个贡献。这是学习和参与社区并获得宝贵经验的好方法。&lt;/p&gt;
&lt;!--
We encourage you to explore the repository and familiarize yourself with the ongoing discussions and projects. 
It&#39;s a collaborative environment where you can exchange ideas, ask questions, and work together with other contributors.
--&gt;
&lt;p&gt;我们鼓励你探索存储库并熟悉正在进行的讨论和项目。这是一个协作环境，你可以在这里交流想法、提出问题并与其他贡献者一起工作。&lt;/p&gt;
&lt;!--
If you have any questions or need guidance, don&#39;t hesitate to reach out to us 
and you can do so on our [public Slack channel](https://kubernetes.slack.com/messages/wg-structured-logging). 
If you&#39;re not already part of that Slack workspace, you can visit [https://slack.k8s.io/](https://slack.k8s.io/)
for an invitation.
--&gt;
&lt;p&gt;如果你有任何疑问或需要指导，请随时与我们联系，你可以通过我们的&lt;a href=&#34;https://kubernetes.slack.com/messages/wg-structured-logging&#34;&gt;公共 Slack 频道&lt;/a&gt;联系我们。
如果你尚未加入 Slack 工作区，可以访问 &lt;a href=&#34;https://slack.k8s.io/&#34;&gt;https://slack.k8s.io/&lt;/a&gt; 获取邀请。&lt;/p&gt;
&lt;!--
We would like to express our gratitude to all the contributors who provided excellent reviews, 
shared valuable insights, and assisted in the implementation of this feature (in alphabetical order):
--&gt;
&lt;p&gt;我们要向所有提供精彩评论、分享宝贵见解并协助实施此功能的贡献者表示感谢（按字母顺序排列）：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Aldo Culquicondor (&lt;a href=&#34;https://github.com/alculquicondor&#34;&gt;alculquicondor&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Andy Goldstein (&lt;a href=&#34;https://github.com/ncdc&#34;&gt;ncdc&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Feruzjon Muyassarov (&lt;a href=&#34;https://github.com/fmuyassarov&#34;&gt;fmuyassarov&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Freddie (&lt;a href=&#34;https://github.com/freddie400&#34;&gt;freddie400&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;JUN YANG (&lt;a href=&#34;https://github.com/yangjunmyfm192085&#34;&gt;yangjunmyfm192085&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Kante Yin (&lt;a href=&#34;https://github.com/kerthcet&#34;&gt;kerthcet&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Kiki (&lt;a href=&#34;https://github.com/carlory&#34;&gt;carlory&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Lucas Severo Alves (&lt;a href=&#34;https://github.com/knelasevero&#34;&gt;knelasevero&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Maciej Szulik (&lt;a href=&#34;https://github.com/soltysh&#34;&gt;soltysh&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Mengjiao Liu (&lt;a href=&#34;https://github.com/mengjiao-liu&#34;&gt;mengjiao-liu&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Naman Lakhwani (&lt;a href=&#34;https://github.com/Namanl2001&#34;&gt;Namanl2001&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Oksana Baranova (&lt;a href=&#34;https://github.com/oxxenix&#34;&gt;oxxenix&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Patrick Ohly (&lt;a href=&#34;https://github.com/pohly&#34;&gt;pohly&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;songxiao-wang87 (&lt;a href=&#34;https://github.com/songxiao-wang87&#34;&gt;songxiao-wang87&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Tim Allclair (&lt;a href=&#34;https://github.com/tallclair&#34;&gt;tallclair&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;ZhangYu (&lt;a href=&#34;https://github.com/Octopusjust&#34;&gt;Octopusjust&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Ziqi Zhao (&lt;a href=&#34;https://github.com/fatsheep9146&#34;&gt;fatsheep9146&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Zac (&lt;a href=&#34;https://github.com/249043822&#34;&gt;249043822&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes 1.29: 解耦污点管理器与节点生命周期控制器</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/12/19/kubernetes-1-29-taint-eviction-controller/</link>
      <pubDate>Tue, 19 Dec 2023 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/12/19/kubernetes-1-29-taint-eviction-controller/</guid>
      <description>
        
        
        &lt;!--
layout: blog
title: &#34;Kubernetes 1.29: Decoupling taint-manager from node-lifecycle-controller&#34;
date: 2023-12-19
slug: kubernetes-1-29-taint-eviction-controller
--&gt;
&lt;!-- 
**Authors:** Yuan Chen (Apple), Andrea Tosatto (Apple) 
--&gt;
&lt;p&gt;&lt;strong&gt;作者:&lt;/strong&gt; Yuan Chen (Apple), Andrea Tosatto (Apple)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者:&lt;/strong&gt; Allen Zhang&lt;/p&gt;
&lt;!-- 
This blog discusses a new feature in Kubernetes 1.29 to improve the handling of taint-based pod eviction. 
--&gt;
&lt;p&gt;这篇博客讨论 Kubernetes 1.29 中用于改进基于污点的 Pod 驱逐处理的一项新特性。&lt;/p&gt;
&lt;!-- 
## Background 
--&gt;
&lt;h2 id=&#34;背景&#34;&gt;背景&lt;/h2&gt;
&lt;!-- 
In Kubernetes 1.29, an improvement has been introduced to enhance the taint-based pod eviction handling on nodes.
This blog discusses the changes made to node-lifecycle-controller
to separate its responsibilities and improve overall code maintainability. 
--&gt;
&lt;p&gt;在 Kubernetes 1.29 中引入了一项改进，以加强节点上基于污点的 Pod 驱逐处理。
本文将讨论对节点生命周期控制器（node-lifecycle-controller）所做的更改，以分离职责并提高代码的整体可维护性。&lt;/p&gt;
&lt;!-- 
## Summary of changes 
--&gt;
&lt;h2 id=&#34;变动摘要&#34;&gt;变动摘要&lt;/h2&gt;
&lt;!-- 
node-lifecycle-controller previously combined two independent functions: 
--&gt;
&lt;p&gt;节点生命周期控制器之前组合了两个独立的功能：&lt;/p&gt;
&lt;!-- 
- Adding a pre-defined set of `NoExecute` taints to Node based on Node&#39;s condition.
- Performing pod eviction on `NoExecute` taint. 
--&gt;
&lt;ul&gt;
&lt;li&gt;基于节点的状况为节点添加一组预定义的 &lt;code&gt;NoExecute&lt;/code&gt; 污点。&lt;/li&gt;
&lt;li&gt;基于 &lt;code&gt;NoExecute&lt;/code&gt; 污点执行 Pod 驱逐。&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- 
With the Kubernetes 1.29 release, the taint-based eviction implementation has been
moved out of node-lifecycle-controller into a separate and independent component called taint-eviction-controller.
This separation aims to disentangle code, enhance code maintainability,
and facilitate future extensions to either component. 
--&gt;
&lt;p&gt;在 Kubernetes 1.29 版本中，基于污点的驱逐实现已经从节点生命周期控制器中移出，
成为一个名为污点驱逐控制器（taint-eviction-controller）的独立组件。
这一分离旨在解耦代码、提高代码的可维护性，并方便未来对这两个组件中的任一个进行扩展。&lt;/p&gt;
&lt;!-- 
As part of the change, additional metrics were introduced to help you monitor taint-based pod evictions: 
--&gt;
&lt;p&gt;作为这一变更的一部分，我们引入了新的指标来帮助你监控基于污点的 Pod 驱逐：&lt;/p&gt;
&lt;!-- 
- `pod_deletion_duration_seconds` measures the latency between the time when a taint effect
has been activated for the Pod and its deletion via taint-eviction-controller.
- `pod_deletions_total` reports the total number of Pods deleted by taint-eviction-controller since its start. 
--&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;pod_deletion_duration_seconds&lt;/code&gt; 度量从污点效果针对某 Pod 被激活，到污点驱逐控制器删除该 Pod 之间的延迟。&lt;/li&gt;
&lt;li&gt;&lt;code&gt;pod_deletions_total&lt;/code&gt; 报告自污点驱逐控制器启动以来其删除的 Pod 总数。&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- 
## How to use the new feature? 
--&gt;
&lt;h2 id=&#34;如何使用这个新特性&#34;&gt;如何使用这个新特性？&lt;/h2&gt;
&lt;!-- 
A new feature gate, `SeparateTaintEvictionController`, has been added. The feature is enabled by default as Beta in Kubernetes 1.29.
Please refer to the [feature gate document](/docs/reference/command-line-tools-reference/feature-gates/). 
--&gt;
&lt;p&gt;Kubernetes 新增了名为 &lt;code&gt;SeparateTaintEvictionController&lt;/code&gt; 的特性门控。该特性在 Kubernetes 1.29 中作为 Beta 特性默认启用。
详情请参阅&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/reference/command-line-tools-reference/feature-gates/&#34;&gt;特性门控文档&lt;/a&gt;。&lt;/p&gt;
&lt;!-- 
When this feature is enabled, users can optionally disable taint-based eviction by setting `--controllers=-taint-eviction-controller`
in kube-controller-manager. 
--&gt;
&lt;p&gt;当此特性被启用时，用户可以选择通过在 &lt;code&gt;kube-controller-manager&lt;/code&gt; 上设置
&lt;code&gt;--controllers=-taint-eviction-controller&lt;/code&gt; 来禁用基于污点的驱逐功能。&lt;/p&gt;
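&lt;p&gt;例如（假设性片段，实际清单取决于集群的部署方式），在 kube-controller-manager 的静态 Pod 清单中：&lt;/p&gt;

```yaml
# 假设性片段：禁用内置的 taint-eviction-controller；
# "*" 保留其余默认启用的控制器
spec:
  containers:
  - name: kube-controller-manager
    command:
    - kube-controller-manager
    - --controllers=*,-taint-eviction-controller
```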
&lt;!-- 
To disable the new feature and use the legacy taint-manager within node-lifecycle-controller, users can set the feature gate `SeparateTaintEvictionController=false`. 
--&gt;
&lt;p&gt;如果想禁用此特性并继续使用节点生命周期控制器（node-lifecycle-controller）中旧的污点管理器，用户可以设置特性门控 &lt;code&gt;SeparateTaintEvictionController=false&lt;/code&gt;。&lt;/p&gt;
&lt;!-- 
## Use cases 
--&gt;
&lt;h2 id=&#34;使用案例&#34;&gt;使用案例&lt;/h2&gt;
&lt;!-- 
This new feature will allow cluster administrators to extend and enhance the default
taint-eviction-controller and even replace the default taint-eviction-controller with a
custom implementation to meet different needs. An example is to better support
stateful workloads that use PersistentVolume on local disks. 
--&gt;
&lt;p&gt;此特性将允许集群管理员扩展和增强默认的污点驱逐控制器，甚至可以用自定义实现替换默认的污点驱逐控制器，以满足不同的需求。
例如，更好地支持使用本地磁盘上 PersistentVolume 的有状态工作负载。&lt;/p&gt;
&lt;!-- 
## FAQ 
--&gt;
&lt;h2 id=&#34;faq&#34;&gt;FAQ&lt;/h2&gt;
&lt;!-- 
**Does this feature change the existing behavior of taint-based pod evictions?** 
--&gt;
&lt;p&gt;&lt;strong&gt;该特性是否会改变现有的基于污点的 Pod 驱逐行为？&lt;/strong&gt;&lt;/p&gt;
&lt;!-- 
No, the taint-based pod eviction behavior remains unchanged. If the feature gate
`SeparateTaintEvictionController` is turned off, the legacy node-lifecycle-controller with taint-manager will continue to be used. 
--&gt;
&lt;p&gt;不会，基于污点的 Pod 驱逐行为保持不变。如果特性门控 &lt;code&gt;SeparateTaintEvictionController&lt;/code&gt; 被关闭，
将继续使用带有污点管理器的旧版节点生命周期控制器。&lt;/p&gt;
&lt;!-- 
**Will enabling/using this feature result in an increase in the time taken by any operations covered by existing SLIs/SLOs?** 
--&gt;
&lt;p&gt;&lt;strong&gt;启用/使用此特性是否会导致现有 SLI/SLO 中任何操作的用时增加？&lt;/strong&gt;&lt;/p&gt;
&lt;!-- 
No. 
--&gt;
&lt;p&gt;不会。&lt;/p&gt;
&lt;!-- 
**Will enabling/using this feature result in an increase in resource usage (CPU, RAM, disk, IO, ...)?** 
--&gt;
&lt;p&gt;&lt;strong&gt;启用/使用此特性是否会导致资源利用量（如 CPU、内存、磁盘、IO 等）的增加？&lt;/strong&gt;&lt;/p&gt;
&lt;!-- 
The increase in resource usage by running a separate `taint-eviction-controller` will be negligible. 
--&gt;
&lt;p&gt;运行单独的 &lt;code&gt;taint-eviction-controller&lt;/code&gt; 所增加的资源利用量可以忽略不计。&lt;/p&gt;
&lt;!-- 
## Learn more 
--&gt;
&lt;h2 id=&#34;了解更多&#34;&gt;了解更多&lt;/h2&gt;
&lt;!-- 
For more details, refer to the [KEP](http://kep.k8s.io/3902). 
--&gt;
&lt;p&gt;更多细节请参考 &lt;a href=&#34;http://kep.k8s.io/3902&#34;&gt;KEP&lt;/a&gt;。&lt;/p&gt;
&lt;!-- 
## Acknowledgments 
--&gt;
&lt;h2 id=&#34;特别鸣谢&#34;&gt;特别鸣谢&lt;/h2&gt;
&lt;!-- 
As with any Kubernetes feature, multiple community members have contributed, from
writing the KEP to implementing the new controller and reviewing the KEP and code. Special thanks to: 
--&gt;
&lt;p&gt;与任何 Kubernetes 特性一样，从撰写 KEP 到实现新控制器再到审核 KEP 和代码，多名社区成员都做出了贡献，特别感谢：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Aldo Culquicondor (@alculquicondor)&lt;/li&gt;
&lt;li&gt;Maciej Szulik (@soltysh)&lt;/li&gt;
&lt;li&gt;Filip Křepinský (@atiratree)&lt;/li&gt;
&lt;li&gt;Han Kang (@logicalhan)&lt;/li&gt;
&lt;li&gt;Wei Huang (@Huang-Wei)&lt;/li&gt;
&lt;li&gt;Sergey Kanzhelev (@SergeyKanzhelev)&lt;/li&gt;
&lt;li&gt;Ravi Gudimetla (@ravisantoshgudimetla)&lt;/li&gt;
&lt;li&gt;Deep Debroy (@ddebroy)&lt;/li&gt;
&lt;/ul&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes 1.29：PodReadyToStartContainers 状况进阶至 Beta</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/12/19/pod-ready-to-start-containers-condition-now-in-beta/</link>
      <pubDate>Tue, 19 Dec 2023 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/12/19/pod-ready-to-start-containers-condition-now-in-beta/</guid>
      <description>
        
        
        &lt;!--
layout: blog
title: &#34;Kubernetes 1.29: PodReadyToStartContainers Condition Moves to Beta&#34;
date: 2023-12-19
slug: pod-ready-to-start-containers-condition-now-in-beta
--&gt;
&lt;!--
**Authors**: Zefeng Chen (independent), Kevin Hannon (Red Hat)
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Zefeng Chen (independent), Kevin Hannon (Red Hat)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者&lt;/strong&gt;：&lt;a href=&#34;https://github.com/windsonsea&#34;&gt;Michael Yao&lt;/a&gt;&lt;/p&gt;
&lt;!--
With the recent release of Kubernetes 1.29, the `PodReadyToStartContainers`
[condition](/docs/concepts/workloads/pods/pod-lifecycle/#pod-conditions) is 
available by default.
The kubelet manages the value for that condition throughout a Pod&#39;s lifecycle, 
in the status field of a Pod. The kubelet will use the `PodReadyToStartContainers`
condition to accurately surface the initialization state of a Pod,
from the perspective of Pod sandbox creation and network configuration by a container runtime.
--&gt;
&lt;p&gt;随着最近发布的 Kubernetes 1.29，&lt;code&gt;PodReadyToStartContainers&lt;/code&gt;
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/#pod-conditions&#34;&gt;状况&lt;/a&gt;默认可用。
kubelet 在 Pod 的整个生命周期中管理该状况的值，将其存储在 Pod 的状态字段中。
kubelet 将使用 &lt;code&gt;PodReadyToStartContainers&lt;/code&gt; 状况，从容器运行时完成
Pod 沙箱创建和网络配置的角度，准确地展示 Pod 的初始化状态。&lt;/p&gt;
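&lt;p&gt;作为示意（并非真实集群的完整输出），当容器运行时完成 Pod 沙箱创建和网络配置后，Pod 状态中可能出现类似如下的状况条目：&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;status:
  conditions:
  - type: PodReadyToStartContainers
    status: &amp;#34;True&amp;#34;
&lt;/code&gt;&lt;/pre&gt;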
&lt;!--
## What&#39;s the motivation for this feature?
--&gt;
&lt;h2 id=&#34;这个特性的动机是什么&#34;&gt;这个特性的动机是什么？&lt;/h2&gt;
&lt;!--
Cluster administrators did not have a clear and easily accessible way to view the completion of Pod&#39;s sandbox creation
and initialization. As of 1.28, the `Initialized` condition in Pods tracks the execution of init containers.
However, it has limitations in accurately reflecting the completion of sandbox creation and readiness to start containers for all Pods in a cluster. 
This distinction is particularly important in multi-tenant clusters where tenants own the Pod specifications, including the set of init containers, 
while cluster administrators manage storage plugins, networking plugins, and container runtime handlers. 
Therefore, there is a need for an improved mechanism to provide cluster administrators with a clear and 
comprehensive view of Pod sandbox creation completion and container readiness.
--&gt;
&lt;p&gt;集群管理员以前没有明确且易于获取的方式来查看 Pod 沙箱创建和初始化的完成情况。
从 1.28 版本开始，Pod 中的 &lt;code&gt;Initialized&lt;/code&gt; 状况跟踪 Init 容器的执行情况。
然而，对于集群中的所有 Pod 而言，它在准确反映沙箱创建完成以及容器可以开始启动方面存在局限。
在多租户集群中，这种区别尤为重要，租户拥有包括 Init 容器集合在内的 Pod 规约，
而集群管理员管理存储插件、网络插件和容器运行时处理程序。
因此，需要改进这个机制，以便为集群管理员提供清晰和全面的 Pod 沙箱创建完成和容器就绪状态的视图。&lt;/p&gt;
&lt;!--
## What&#39;s the benefit?

1. Improved Visibility: Cluster administrators gain a clearer and more comprehensive view of Pod sandbox
   creation completion and container readiness.
   This enhanced visibility allows them to make better-informed decisions and troubleshoot issues more effectively.
--&gt;
&lt;h2 id=&#34;这个特性有什么好处&#34;&gt;这个特性有什么好处？&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;改进可见性：集群管理员可以更清晰和全面地查看 Pod 沙箱的创建完成和容器的就绪状态。
这种增强的可见性使他们能够做出更明智的决策，并更有效地解决问题。&lt;/li&gt;
&lt;/ol&gt;
&lt;!--
2. Metric Collection and Monitoring: Monitoring services can leverage the fields associated with
   the `PodReadyToStartContainers` condition to report sandbox creation state and latency.
   Metrics can be collected at per-Pod cardinality or aggregated based on various
   properties of the Pod, such as `volumes`, `runtimeClassName`, custom annotations for CNI
   and IPAM plugins or arbitrary labels and annotations, and `storageClassName` of
   PersistentVolumeClaims.
   This enables comprehensive monitoring and analysis of Pod readiness across the cluster.
--&gt;
&lt;ol start=&#34;2&#34;&gt;
&lt;li&gt;指标收集和监控：监控服务可以利用与 &lt;code&gt;PodReadyToStartContainers&lt;/code&gt; 状况相关的字段来报告沙箱创建状态和延迟。
可以按照每个 Pod 的基数进行指标收集，或者根据 Pod 的各种属性进行聚合，例如
&lt;code&gt;volumes&lt;/code&gt;、&lt;code&gt;runtimeClassName&lt;/code&gt;、CNI 和 IPAM 插件的自定义注解，
以及任意标签和注解，以及 PersistentVolumeClaims 的 &lt;code&gt;storageClassName&lt;/code&gt;。
这样可以全面监控和分析集群中 Pod 的就绪状态。&lt;/li&gt;
&lt;/ol&gt;
&lt;!--
3. Enhanced Troubleshooting: With a more accurate representation of Pod sandbox creation and container readiness,
   cluster administrators can quickly identify and address any issues that may arise during the initialization process.
   This leads to improved troubleshooting capabilities and reduced downtime.
--&gt;
&lt;ol start=&#34;3&#34;&gt;
&lt;li&gt;增强故障排查能力：通过更准确地表示 Pod 沙箱的创建和容器的就绪状态，
集群管理员可以快速识别和解决初始化过程中可能出现的任何问题。
这将提高故障排查能力，并减少停机时间。&lt;/li&gt;
&lt;/ol&gt;
&lt;!--
### What’s next?

Due to feedback and adoption, the Kubernetes team promoted `PodReadyToStartContainersCondition` to Beta in 1.29. 
Your comments will help determine if this condition continues forward to get promoted to GA, 
so please submit additional feedback on this feature!
--&gt;
&lt;h3 id=&#34;后续事项&#34;&gt;后续事项&lt;/h3&gt;
&lt;p&gt;鉴于反馈和采用情况，Kubernetes 团队在 1.29 版本中将 &lt;code&gt;PodReadyToStartContainersCondition&lt;/code&gt;
进阶至 Beta 版。你的评论将有助于确定该状况能否继续推进并晋升至 GA，请针对此特性提交更多反馈！&lt;/p&gt;
&lt;!--
### How can I learn more?

Please check out the
[documentation](/docs/concepts/workloads/pods/pod-lifecycle/) for the
`PodReadyToStartContainersCondition` to learn more about it and how it fits in relation to
other Pod conditions.
--&gt;
&lt;h3 id=&#34;如何了解更多&#34;&gt;如何了解更多？&lt;/h3&gt;
&lt;p&gt;请查看关于 &lt;code&gt;PodReadyToStartContainersCondition&lt;/code&gt;
的&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/&#34;&gt;文档&lt;/a&gt;，
以了解其更多信息及其与其他 Pod 状况的关系。&lt;/p&gt;
&lt;!--
### How to get involved?

This feature is driven by the SIG Node community. Please join us to connect with
the community and share your ideas and feedback around the above feature and
beyond. We look forward to hearing from you!
--&gt;
&lt;h3 id=&#34;如何参与&#34;&gt;如何参与？&lt;/h3&gt;
&lt;p&gt;该特性由 SIG Node 社区推动。请加入我们，与社区建立联系，分享你对这一特性及更多内容的想法和反馈。
我们期待倾听你的建议！&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes 1.29 新的 Alpha 特性：Service 的负载均衡器 IP 模式</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/12/18/kubernetes-1-29-feature-loadbalancer-ip-mode-alpha/</link>
      <pubDate>Mon, 18 Dec 2023 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/12/18/kubernetes-1-29-feature-loadbalancer-ip-mode-alpha/</guid>
      <description>
        
        
        &lt;!-- 
layout: blog
title: &#34;Kubernetes 1.29: New (alpha) Feature, Load Balancer IP Mode for Services&#34;
date: 2023-12-18
slug: kubernetes-1-29-feature-loadbalancer-ip-mode-alpha
--&gt;
&lt;!-- **Author:** [Aohan Yang](https://github.com/RyanAoh) --&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; &lt;a href=&#34;https://github.com/RyanAoh&#34;&gt;Aohan Yang&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者：&lt;/strong&gt; Allen Zhang&lt;/p&gt;
&lt;!-- 
This blog introduces a new alpha feature in Kubernetes 1.29. 
It provides a configurable approach to define how Service implementations, 
exemplified in this blog by kube-proxy, 
handle traffic from pods to the Service, within the cluster. 
--&gt;
&lt;p&gt;本文介绍 Kubernetes 1.29 中一个新的 Alpha 特性。
此特性提供了一种可配置的方式，用于定义 Service 的实现（本文以
kube-proxy 为例）如何处理集群内从 Pod 到 Service 的流量。&lt;/p&gt;
&lt;!-- 
## Background 
--&gt;
&lt;h2 id=&#34;背景&#34;&gt;背景&lt;/h2&gt;
&lt;!-- 
In older Kubernetes releases, the kube-proxy would intercept traffic that was destined for the IP
address associated with a Service of `type: LoadBalancer`. This happened whatever mode you used
for `kube-proxy`.  
--&gt;
&lt;p&gt;在 Kubernetes 早期版本中，kube-proxy 会拦截发往 &lt;code&gt;type: LoadBalancer&lt;/code&gt; Service 所关联
IP 地址的流量。无论你为 &lt;code&gt;kube-proxy&lt;/code&gt; 使用哪种模式，这种拦截都会发生。&lt;/p&gt;
&lt;!-- 
The interception implemented the expected behavior (traffic eventually reaching the expected
endpoints behind the Service). The mechanism to make that work depended on the mode for kube-proxy;
on Linux, kube-proxy in iptables mode would redirect packets directly to the endpoint; in ipvs mode,
kube-proxy would configure the load balancer&#39;s IP address to one interface on the node. 
The motivation for implementing that interception was for two reasons: 
--&gt;
&lt;p&gt;这种拦截实现了预期行为（流量最终会抵达 Service 背后的预期端点）。实现这一行为的机制取决于 kube-proxy 的模式：在
Linux 上，iptables 模式下的 kube-proxy 会将数据包直接重定向到端点；在 ipvs 模式下，
kube-proxy 会将负载均衡器的 IP 地址配置到节点的一个网络接口上。实现这种拦截有两个原因：&lt;/p&gt;
&lt;!-- 
1. **Traffic path optimization:** Efficiently redirecting pod traffic - when a container in a pod sends an outbound
   packet that is destined for the load balancer&#39;s IP address - 
   directly to the backend service by bypassing the load balancer. 
--&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;流量路径优化：&lt;/strong&gt;当 Pod 中的容器发送目的地为负载均衡器 IP 地址的出站数据包时，
绕过负载均衡器，将 Pod 流量高效地直接重定向到后端服务。&lt;/li&gt;
&lt;/ol&gt;
&lt;!-- 
2. **Handling load balancer packets:** Some load balancers send packets with the destination IP set to 
the load balancer&#39;s IP address. As a result, these packets need to be routed directly to the correct backend (which 
might not be local to that node), in order to avoid loops. 
--&gt;
&lt;ol start=&#34;2&#34;&gt;
&lt;li&gt;&lt;strong&gt;处理负载均衡数据包：&lt;/strong&gt; 有些负载均衡器发送的数据包设置目标 IP 为负载均衡器的 IP 地址。
因此，这些数据包需要被直接路由到正确的后端（可能不在该节点本地），以避免回环。&lt;/li&gt;
&lt;/ol&gt;
&lt;!-- 
## Problems 
--&gt;
&lt;h2 id=&#34;问题&#34;&gt;问题&lt;/h2&gt;
&lt;!-- 
However, there are several problems with the aforementioned behavior: 
--&gt;
&lt;p&gt;然而，上述行为存在几个问题：&lt;/p&gt;
&lt;!-- 
1. **[Source IP](https://github.com/kubernetes/kubernetes/issues/79783):** 
    Some cloud providers use the load balancer&#39;s IP as the source IP when 
    transmitting packets to the node. In the ipvs mode of kube-proxy, 
    there is a problem that health checks from the load balancer never return. This occurs because the reply packets 
    would be forwarded to the local interface `kube-ipvs0` (where the load balancer&#39;s IP is bound to) 
    and be subsequently ignored. 
--&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href=&#34;https://github.com/kubernetes/kubernetes/issues/79783&#34;&gt;源 IP（Source IP）&lt;/a&gt;：&lt;/strong&gt;
一些云厂商在向节点传输数据包时使用负载均衡器的 IP 地址作为源 IP。在 kube-proxy 的 ipvs 模式下，
存在负载均衡器健康检查永远得不到响应的问题。原因是回复的数据包会被转发到本地网络接口 &lt;code&gt;kube-ipvs0&lt;/code&gt;（负载均衡器 IP 所绑定的接口），并随后被忽略。&lt;/li&gt;
&lt;/ol&gt;
&lt;!-- 
2. **[Feature loss at load balancer level](https://github.com/kubernetes/kubernetes/issues/66607):**
    Certain cloud providers offer features(such as TLS termination, proxy protocol, etc.) at the
    load balancer level.
    Bypassing the load balancer results in the loss of these features when the packet reaches the service
    (leading to protocol errors). 
--&gt;
&lt;ol start=&#34;2&#34;&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href=&#34;https://github.com/kubernetes/kubernetes/issues/66607&#34;&gt;负载均衡器层功能缺失&lt;/a&gt;：&lt;/strong&gt;
某些云厂商在负载均衡器层提供了部分特性（例如 TLS 终结、协议代理等）。
绕过负载均衡器会导致当数据包抵达后端服务时这些特性不会生效（导致协议错误等）。&lt;/li&gt;
&lt;/ol&gt;
&lt;!-- 
Even with the new alpha behaviour disabled (the default), there is a 
[workaround](https://github.com/kubernetes/kubernetes/issues/66607#issuecomment-474513060) 
that involves setting `.status.loadBalancer.ingress.hostname` for the Service, in order 
to bypass kube-proxy binding. 
But this is just a makeshift solution. 
--&gt;
&lt;p&gt;即使新的 Alpha 特性默认关闭，也有&lt;a href=&#34;https://github.com/kubernetes/kubernetes/issues/66607#issuecomment-474513060&#34;&gt;临时解决方案&lt;/a&gt;，
即为 Service 设置 &lt;code&gt;.status.loadBalancer.ingress.hostname&lt;/code&gt; 以绕过 kube-proxy 绑定。
但这终究只是临时解决方案。&lt;/p&gt;
&lt;!-- 
## Solution 
--&gt;
&lt;h2 id=&#34;解决方案&#34;&gt;解决方案&lt;/h2&gt;
&lt;!-- 
In summary, providing an option for cloud providers to disable the current behavior would be highly beneficial. 
--&gt;
&lt;p&gt;总之，为云厂商提供选项以禁用当前这种行为大有裨益。&lt;/p&gt;
&lt;!-- 
To address this, Kubernetes v1.29 introduces a new (alpha) `.status.loadBalancer.ingress.ipMode` 
field for a Service.
This field specifies how the load balancer IP behaves and can be specified only when 
the `.status.loadBalancer.ingress.ip` field is also specified. 
--&gt;
&lt;p&gt;Kubernetes 1.29 版本为 Service 引入新的 Alpha 字段 &lt;code&gt;.status.loadBalancer.ingress.ipMode&lt;/code&gt; 以解决上述问题。
该字段指定负载均衡器 IP 的运行方式，并且只有在指定 &lt;code&gt;.status.loadBalancer.ingress.ip&lt;/code&gt; 字段时才能指定。&lt;/p&gt;
&lt;!-- 
Two values are possible for `.status.loadBalancer.ingress.ipMode`: `&#34;VIP&#34;` and `&#34;Proxy&#34;`.
The default value is &#34;VIP&#34;, meaning that traffic delivered to the node 
with the destination set to the load balancer&#39;s IP and port will be redirected to the backend service by kube-proxy.
This preserves the existing behavior of kube-proxy. 
The &#34;Proxy&#34; value is intended to prevent kube-proxy from binding the load balancer&#39;s IP address 
to the node in both ipvs and iptables modes. 
Consequently, traffic is sent directly to the load balancer and then forwarded to the destination node. 
The destination setting for forwarded packets varies depending on how the cloud provider&#39;s load balancer delivers traffic: 
--&gt;
&lt;p&gt;&lt;code&gt;.status.loadBalancer.ingress.ipMode&lt;/code&gt; 有两个可选值：&lt;code&gt;&amp;quot;VIP&amp;quot;&lt;/code&gt; 和 &lt;code&gt;&amp;quot;Proxy&amp;quot;&lt;/code&gt;。
默认值为 &lt;code&gt;&amp;quot;VIP&amp;quot;&lt;/code&gt;，表示发送到节点、目标为负载均衡器 IP 和端口的流量会被 kube-proxy 重定向到后端服务。
这保留了 kube-proxy 的现有行为。&lt;code&gt;&amp;quot;Proxy&amp;quot;&lt;/code&gt; 值用于阻止 kube-proxy 在 ipvs 和 iptables 模式下将负载均衡器的 IP 地址绑定到节点。
此时，流量会直接发送到负载均衡器，然后被转发到目标节点。被转发数据包的目标地址设置取决于云厂商的负载均衡器如何传递流量：&lt;/p&gt;
&lt;!-- 
- If the traffic is delivered to the node then DNATed to the pod, the destination would be set to the node&#39;s IP and node port;
- If the traffic is delivered directly to the pod, the destination would be set to the pod&#39;s IP and port. 
--&gt;
&lt;ul&gt;
&lt;li&gt;如果流量先被发送到节点，再通过目标地址转换（&lt;code&gt;DNAT&lt;/code&gt;）的方式到达 Pod，目标地址应当设置为节点的 IP 和节点端口；&lt;/li&gt;
&lt;li&gt;如果流量被直接发送到 Pod，目标地址应当设置为 Pod 的 IP 和端口。&lt;/li&gt;
&lt;/ul&gt;
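&lt;p&gt;作为示意（其中 IP 地址仅为文档示例地址），一个由云厂商设置了 &lt;code&gt;ipMode&lt;/code&gt; 的 Service 状态可能如下所示：&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;status:
  loadBalancer:
    ingress:
    - ip: 192.0.2.127
      ipMode: Proxy
&lt;/code&gt;&lt;/pre&gt;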
&lt;!-- 
## Usage 
--&gt;
&lt;h2 id=&#34;用法&#34;&gt;用法&lt;/h2&gt;
&lt;!-- 
Here are the necessary steps to enable this feature: 
--&gt;
&lt;p&gt;开启该特性的必要步骤：&lt;/p&gt;
&lt;!-- 
- Download the [latest Kubernetes project](https://kubernetes.io/releases/download/) (version `v1.29.0` or later).
- Enable the feature gate with the command line flag `--feature-gates=LoadBalancerIPMode=true` 
on kube-proxy, kube-apiserver, and cloud-controller-manager.
- For Services with `type: LoadBalancer`, set `ipMode` to the appropriate value. 
This step is likely handled by your chosen cloud-controller-manager during the `EnsureLoadBalancer` process. 
--&gt;
&lt;ul&gt;
&lt;li&gt;下载 &lt;a href=&#34;https://kubernetes.io/releases/download/&#34;&gt;Kubernetes 最新版本&lt;/a&gt;（&lt;code&gt;v1.29.0&lt;/code&gt; 或更新）。&lt;/li&gt;
&lt;li&gt;通过命令行参数 &lt;code&gt;--feature-gates=LoadBalancerIPMode=true&lt;/code&gt; 在 kube-proxy、kube-apiserver 和
cloud-controller-manager 开启特性门控。&lt;/li&gt;
&lt;li&gt;对于 &lt;code&gt;type: LoadBalancer&lt;/code&gt; 类型的 Service，将 &lt;code&gt;ipMode&lt;/code&gt; 设置为合适的值。
这一步很可能由你所选择的 cloud-controller-manager 在 &lt;code&gt;EnsureLoadBalancer&lt;/code&gt; 过程中处理。&lt;/li&gt;
&lt;/ul&gt;
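&lt;p&gt;需要注意的是，&lt;code&gt;ipMode&lt;/code&gt; 位于 Service 的 &lt;code&gt;status&lt;/code&gt; 中，通常由 cloud-controller-manager 填充；用户只需照常创建 &lt;code&gt;type: LoadBalancer&lt;/code&gt; 的 Service。下面是一个最小示例清单（其中名称、标签和端口均为假设值）：&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: example-lb
spec:
  type: LoadBalancer
  selector:
    app: example
  ports:
  - port: 80
    targetPort: 8080
&lt;/code&gt;&lt;/pre&gt;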
&lt;!-- 
## More information 
--&gt;
&lt;h2 id=&#34;更多信息&#34;&gt;更多信息&lt;/h2&gt;
&lt;!-- 
- Read [Specifying IPMode of load balancer status](/docs/concepts/services-networking/service/#load-balancer-ip-mode).
- Read [KEP-1860](https://kep.k8s.io/1860) - [Make Kubernetes aware of the LoadBalancer behaviour](https://github.com/kubernetes/enhancements/tree/b103a6b0992439f996be4314caf3bf7b75652366/keps/sig-network/1860-kube-proxy-IP-node-binding#kep-1860-make-kubernetes-aware-of-the-loadbalancer-behaviour) _(sic)_.
 --&gt;
&lt;ul&gt;
&lt;li&gt;阅读&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/concepts/services-networking/service/#load-balancer-ip-mode&#34;&gt;指定负载均衡器状态的 IPMode&lt;/a&gt;。&lt;/li&gt;
&lt;li&gt;阅读 &lt;a href=&#34;https://kep.k8s.io/1860&#34;&gt;KEP-1860&lt;/a&gt; - &lt;a href=&#34;https://github.com/kubernetes/enhancements/tree/b103a6b0992439f996be4314caf3bf7b75652366/keps/sig-network/1860-kube-proxy-IP-node-binding#kep-1860-make-kubernetes-aware-of-the-loadbalancer-behaviour&#34;&gt;让 Kubernetes 感知负载均衡器的行为&lt;/a&gt; &lt;em&gt;(sic)&lt;/em&gt;。&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- 
## Getting involved 
--&gt;
&lt;h2 id=&#34;联系我们&#34;&gt;联系我们&lt;/h2&gt;
&lt;!-- 
Reach us on [Slack](https://slack.k8s.io/): [#sig-network](https://kubernetes.slack.com/messages/sig-network), 
or through the [mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-network).
 --&gt;
&lt;p&gt;通过 &lt;a href=&#34;https://slack.k8s.io/&#34;&gt;Slack&lt;/a&gt; 频道 &lt;a href=&#34;https://kubernetes.slack.com/messages/sig-network&#34;&gt;#sig-network&lt;/a&gt;，
或者通过&lt;a href=&#34;https://groups.google.com/forum/#!forum/kubernetes-sig-network&#34;&gt;邮件列表&lt;/a&gt;联系我们。&lt;/p&gt;
&lt;!-- 
## Acknowledgments 
--&gt;
&lt;h2 id=&#34;特别鸣谢&#34;&gt;特别鸣谢&lt;/h2&gt;
&lt;!-- 
Huge thanks to [@Sh4d1](https://github.com/Sh4d1) for the original KEP and initial implementation code. 
I took over midway and completed the work. Similarly, immense gratitude to other contributors 
who have assisted in the design, implementation, and review of this feature (alphabetical order): 
--&gt;
&lt;p&gt;非常感谢 &lt;a href=&#34;https://github.com/Sh4d1&#34;&gt;@Sh4d1&lt;/a&gt; 撰写最初的 KEP 并完成初始实现代码。
我中途接手并完成了这项工作。同样，衷心感谢其他在设计、实现和评审此特性过程中提供帮助的贡献者（按字母顺序排列）：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/aojea&#34;&gt;@aojea&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/danwinship&#34;&gt;@danwinship&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/sftim&#34;&gt;@sftim&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/tengqm&#34;&gt;@tengqm&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/thockin&#34;&gt;@thockin&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/wojtek-t&#34;&gt;@wojtek-t&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes 1.29：修改卷之 VolumeAttributesClass</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/12/15/kubernetes-1-29-volume-attributes-class/</link>
      <pubDate>Fri, 15 Dec 2023 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/12/15/kubernetes-1-29-volume-attributes-class/</guid>
      <description>
        
        
        &lt;!--
layout: blog
title: &#34;Kubernetes 1.29: VolumeAttributesClass for Volume Modification&#34;
date: 2023-12-15
slug: kubernetes-1-29-volume-attributes-class
--&gt;
&lt;!--
**Author**: Sunny Song (Google)
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Sunny Song (Google)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者&lt;/strong&gt;：&lt;a href=&#34;https://github.com/carlory&#34;&gt;Baofa Fan&lt;/a&gt; (DaoCloud)&lt;/p&gt;
&lt;!--
The v1.29 release of Kubernetes introduced an alpha feature to support modifying a volume
by changing the `volumeAttributesClassName` that was specified for a PersistentVolumeClaim (PVC).
With the feature enabled, Kubernetes can handle updates of volume attributes other than capacity.
Allowing volume attributes to be changed without managing it through different
provider&#39;s APIs directly simplifies the current flow.

You can read about VolumeAttributesClass usage details in the Kubernetes documentation 
or you can read on to learn about why the Kubernetes project is supporting this feature.
--&gt;
&lt;p&gt;Kubernetes v1.29 版本引入了一个 Alpha 功能，支持通过变更 PersistentVolumeClaim（PVC）的
&lt;code&gt;volumeAttributesClassName&lt;/code&gt; 字段来修改卷。启用该功能后，Kubernetes 可以处理除容量以外的卷属性的更新。
允许更改卷属性，而无需通过不同提供商的 API 对其进行管理，这直接简化了当前流程。&lt;/p&gt;
&lt;p&gt;你可以在 Kubernetes 文档中，阅读有关 VolumeAttributesClass 的详细使用信息，或者继续阅读了解
Kubernetes 项目为什么支持此功能。&lt;/p&gt;
&lt;h2 id=&#34;volumeattributesclass&#34;&gt;VolumeAttributesClass&lt;/h2&gt;
&lt;!--
The new `storage.k8s.io/v1alpha1` API group provides two new types:
--&gt;
&lt;p&gt;新的 &lt;code&gt;storage.k8s.io/v1alpha1&lt;/code&gt; API 组提供了两种新类型：&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;VolumeAttributesClass&lt;/strong&gt;&lt;/p&gt;
&lt;!-- 
Represents a specification of mutable volume attributes defined by the CSI driver.
The class can be specified during dynamic provisioning of PersistentVolumeClaims,
and changed in the PersistentVolumeClaim spec after provisioning. 
--&gt;
&lt;p&gt;表示由 CSI 驱动程序定义的可变卷属性的规约。你可以在 PersistentVolumeClaim 动态制备时指定它，
并且允许在制备完成后在 PersistentVolumeClaim 规约中进行更改。&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;ModifyVolumeStatus&lt;/strong&gt;&lt;/p&gt;
&lt;!--
Represents the status object of `ControllerModifyVolume` operation.
--&gt;
&lt;p&gt;表示 &lt;code&gt;ControllerModifyVolume&lt;/code&gt; 操作的状态对象。&lt;/p&gt;
&lt;!--
With this alpha feature enabled, the spec of PersistentVolumeClaim defines VolumeAttributesClassName
that is used in the PVC. At volume provisioning, the `CreateVolume` operation will apply the parameters in the
VolumeAttributesClass along with the parameters in the StorageClass.
--&gt;
&lt;p&gt;启用此 Alpha 功能后，PersistentVolumeClaim 的 &lt;code&gt;spec.volumeAttributesClassName&lt;/code&gt; 字段指明了在 PVC 中使用的 VolumeAttributesClass。
在制备卷时，&lt;code&gt;CreateVolume&lt;/code&gt; 操作将应用 VolumeAttributesClass 中的参数以及 StorageClass 中的参数。&lt;/p&gt;
&lt;!--
When there is a change of volumeAttributesClassName in the PVC spec,
the external-resizer sidecar will get an informer event. Based on the current state of the configuration,
the resizer will trigger a CSI ControllerModifyVolume.
More details can be found in [KEP-3751](https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/3751-volume-attributes-class/README.md).
--&gt;
&lt;p&gt;当 PVC 的 &lt;code&gt;spec.volumeAttributesClassName&lt;/code&gt; 发生变化时，external-resizer sidecar 将会收到一个 informer 事件。
基于当前的配置状态，resizer 将触发 CSI ControllerModifyVolume。更多细节可以在
&lt;a href=&#34;https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/3751-volume-attributes-class/README.md&#34;&gt;KEP-3751&lt;/a&gt; 中找到。&lt;/p&gt;
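&lt;p&gt;举例来说（假设已按下文用户流程创建了名为 test-pv-claim 的 PVC 和名为 gold 的 VolumeAttributesClass），只需更新 PVC 规约中的对应字段即可触发卷的修改：&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;spec:
  volumeAttributesClassName: gold
&lt;/code&gt;&lt;/pre&gt;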
&lt;!--
## How to use it

If you want to test the feature whilst it&#39;s alpha, you need to enable the relevant feature gate
in the `kube-controller-manager` and the `kube-apiserver`. Use the `--feature-gates` command line argument:
--&gt;
&lt;h2 id=&#34;如何使用它&#34;&gt;如何使用它&lt;/h2&gt;
&lt;p&gt;如果你想在 Alpha 版本中测试该功能，需要在 &lt;code&gt;kube-controller-manager&lt;/code&gt; 和 &lt;code&gt;kube-apiserver&lt;/code&gt; 中启用相关的特性门控。
使用 &lt;code&gt;--feature-gates&lt;/code&gt; 命令行参数：&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;--feature-gates=&amp;#34;...,VolumeAttributesClass=true&amp;#34;
&lt;/code&gt;&lt;/pre&gt;&lt;!--
It also requires that the CSI driver has implemented the ModifyVolume API.
--&gt;
&lt;p&gt;它还需要 CSI 驱动程序实现 ModifyVolume API。&lt;/p&gt;
&lt;!-- 
### User flow

If you would like to see the feature in action and verify it works fine in your cluster, here&#39;s what you can try:
--&gt;
&lt;h3 id=&#34;用户流程&#34;&gt;用户流程&lt;/h3&gt;
&lt;p&gt;如果你想看到该功能的运行情况，并验证它在你的集群中是否正常工作，可以尝试以下操作：&lt;/p&gt;
&lt;!-- 
1. Define a StorageClass and VolumeAttributesClass
--&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;定义 StorageClass 和 VolumeAttributesClass&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;storage.k8s.io/v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;StorageClass&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;csi-sc-example&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;provisioner&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;pd.csi.storage.gke.io&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;parameters&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;disk-type&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;hyperdisk-balanced&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;volumeBindingMode&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;WaitForFirstConsumer&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;storage.k8s.io/v1alpha1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;VolumeAttributesClass&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;silver&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;driverName&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;pd.csi.storage.gke.io&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;parameters&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;provisioned-iops&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;3000&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;provisioned-throughput&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;50&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;!-- 
2. Define and create the PersistentVolumeClaim
--&gt;
&lt;ol start=&#34;2&#34;&gt;
&lt;li&gt;
&lt;p&gt;定义并创建 PersistentVolumeClaim&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;PersistentVolumeClaim&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;test-pv-claim&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;storageClassName&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;csi-sc-example&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;volumeAttributesClassName&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;silver&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;accessModes&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- ReadWriteOnce&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;resources&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;requests&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;storage&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;64Gi&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;!--
3. Verify that the PersistentVolumeClaim is now provisioned correctly with:
--&gt; 
&lt;ol start=&#34;3&#34;&gt;
&lt;li&gt;
&lt;p&gt;验证 PersistentVolumeClaim 是否已正确制备：&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;kubectl get pvc
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;!--
4. Create a new VolumeAttributesClass gold:
--&gt;
&lt;ol start=&#34;4&#34;&gt;
&lt;li&gt;
&lt;p&gt;创建一个新的名为 gold 的 VolumeAttributesClass：&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;storage.k8s.io/v1alpha1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;VolumeAttributesClass&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;gold&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;driverName&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;pd.csi.storage.gke.io&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;parameters&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;iops&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;4000&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;throughput&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;60&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;!--
5. Update the PVC with the new VolumeAttributesClass and apply:
--&gt;
&lt;ol start=&#34;5&#34;&gt;
&lt;li&gt;
&lt;p&gt;使用新的 VolumeAttributesClass 更新 PVC 并应用：&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;PersistentVolumeClaim&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;test-pv-claim&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;storageClassName&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;csi-sc-example&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;volumeAttributesClassName&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;gold&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;accessModes&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- ReadWriteOnce&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;resources&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;requests&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;storage&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;64Gi&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;!--
6. Verify that PersistentVolumeClaims has the updated VolumeAttributesClass parameters with:
--&gt;
&lt;ol start=&#34;6&#34;&gt;
&lt;li&gt;
&lt;p&gt;验证 PersistentVolumeClaims 是否具有更新的 VolumeAttributesClass 参数：&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;kubectl describe pvc &amp;lt;PVC_NAME&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;/ol&gt;
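补充说明：下面是一个假设性的命令示例（假定使用本文示例中的 PVC 名称 test-pv-claim，且集群已启用 VolumeAttributesClass 特性门控），可用来直接查看 PVC 规约中引用的 VolumeAttributesClass 名称：

```shell
# 假设性示例：输出 PVC 规约中引用的 VolumeAttributesClass 名称
# test-pv-claim 是本文示例中的 PVC 名称；若第 5 步的更新已应用，此命令应输出 gold
kubectl get pvc test-pv-claim -o jsonpath='{.spec.volumeAttributesClassName}'
```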
&lt;!--
## Next steps

* See the [VolumeAttributesClass KEP](https://kep.k8s.io/3751) for more information on the design
* You can view or comment on the [project board](https://github.com/orgs/kubernetes-csi/projects/72) for VolumeAttributesClass
* In order to move this feature towards beta, we need feedback from the community,
  so here&#39;s a call to action: add support to the CSI drivers, try out this feature,
  consider how it can help with problems that your users are having…
--&gt;
&lt;h2 id=&#34;后续步骤&#34;&gt;后续步骤&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;有关设计的更多信息，请参阅 &lt;a href=&#34;https://kep.k8s.io/3751&#34;&gt;VolumeAttributesClass KEP&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;你可以在&lt;a href=&#34;https://github.com/orgs/kubernetes-csi/projects/72&#34;&gt;项目看板&lt;/a&gt;上查看或评论 VolumeAttributesClass&lt;/li&gt;
&lt;li&gt;为了将此功能推向 Beta 版本，我们需要社区的反馈，因此这里有一个行动倡议：为 CSI 驱动程序添加支持，
尝试此功能，考虑它如何帮助解决你的用户遇到的问题...&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
## Getting involved

We always welcome new contributors. So, if you would like to get involved, you can join our [Kubernetes Storage Special Interest Group](https://github.com/kubernetes/community/tree/master/sig-storage) (SIG).
--&gt;
&lt;h2 id=&#34;参与其中&#34;&gt;参与其中&lt;/h2&gt;
&lt;p&gt;我们始终欢迎新的贡献者。因此，如果你想参与其中，可以加入我们的
&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-storage&#34;&gt;Kubernetes 存储特别兴趣小组&lt;/a&gt; (SIG)。&lt;/p&gt;
&lt;!--
If you would like to share feedback, you can do so on our [public Slack channel](https://app.slack.com/client/T09NY5SBT/C09QZFCE5).
--&gt;
&lt;p&gt;如果你想分享反馈意见，可以在我们的&lt;a href=&#34;https://app.slack.com/client/T09NY5SBT/C09QZFCE5&#34;&gt;公共 Slack 频道&lt;/a&gt;上留言。&lt;/p&gt;
&lt;!--
Special thanks to all the contributors that provided great reviews, shared valuable insight and helped implement this feature (alphabetical order):
--&gt;
&lt;p&gt;特别感谢所有为此功能提供了很好的评论、分享了宝贵见解并帮助实现此功能的贡献者（按字母顺序）：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Baofa Fan (calory)&lt;/li&gt;
&lt;li&gt;Ben Swartzlander (bswartz)&lt;/li&gt;
&lt;li&gt;Connor Catlett (ConnorJC3)&lt;/li&gt;
&lt;li&gt;Hemant Kumar (gnufied)&lt;/li&gt;
&lt;li&gt;Jan Šafránek (jsafrane)&lt;/li&gt;
&lt;li&gt;Joe Betz (jpbetz)&lt;/li&gt;
&lt;li&gt;Jordan Liggitt (liggitt)&lt;/li&gt;
&lt;li&gt;Matthew Cary (mattcary)&lt;/li&gt;
&lt;li&gt;Michelle Au (msau42)&lt;/li&gt;
&lt;li&gt;Xing Yang (xing-yang)&lt;/li&gt;
&lt;/ul&gt;

      </description>
    </item>
    
    <item>
      <title>聚焦 SIG Testing</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/11/24/sig-testing-spotlight-2023/</link>
      <pubDate>Fri, 24 Nov 2023 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/11/24/sig-testing-spotlight-2023/</guid>
      <description>
        
        
        &lt;!--
layout: blog
title: &#34;Spotlight on SIG Testing&#34;
slug: sig-testing-spotlight-2023
date: 2023-11-24
canonicalUrl: https://www.kubernetes.dev/blog/2023/11/24/sig-testing-spotlight-2023/
--&gt;
&lt;p&gt;&lt;strong&gt;作者:&lt;/strong&gt; Sandipan Panda&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者:&lt;/strong&gt; &lt;a href=&#34;https://github.com/windsonsea&#34;&gt;Michael Yao&lt;/a&gt;&lt;/p&gt;
&lt;!--
Welcome to another edition of the _SIG spotlight_ blog series, where we
highlight the incredible work being done by various Special Interest
Groups (SIGs) within the Kubernetes project. In this edition, we turn
our attention to [SIG Testing](https://github.com/kubernetes/community/tree/master/sig-testing#readme),
a group interested in effective testing of Kubernetes and automating
away project toil. SIG Testing focus on creating and running tools and
infrastructure that make it easier for the community to write and run
tests, and to contribute, analyze and act upon test results.
--&gt;
&lt;p&gt;欢迎阅读又一期的 “SIG 聚光灯” 系列博客，这些博客重点介绍 Kubernetes
项目中各个特别兴趣小组（SIG）所从事的令人赞叹的工作。这篇博客将聚焦
&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-testing#readme&#34;&gt;SIG Testing&lt;/a&gt;，
这是一个致力于有效测试 Kubernetes，让此项目的繁琐工作实现自动化的兴趣小组。
SIG Testing 专注于创建和运行工具和基础设施，使社区更容易编写和运行测试，并提交、分析测试结果和据此采取行动。&lt;/p&gt;
&lt;!--
To gain some insights into SIG Testing, [Sandipan
Panda](https://github.com/sandipanpanda) spoke with [Michelle Shepardson](https://github.com/michelle192837),
a senior software engineer at Google and a chair of SIG Testing, and
[Patrick Ohly](https://github.com/pohly), a software engineer and architect at
Intel and a SIG Testing Tech Lead.
--&gt;
&lt;p&gt;为了深入了解 SIG Testing 的情况，
&lt;a href=&#34;https://github.com/sandipanpanda&#34;&gt;Sandipan Panda&lt;/a&gt;
采访了 Google 高级软件工程师兼 SIG Testing 主席
&lt;a href=&#34;https://github.com/michelle192837&#34;&gt;Michelle Shepardson&lt;/a&gt;
以及英特尔软件工程师、架构师兼 SIG Testing 技术负责人
&lt;a href=&#34;https://github.com/pohly&#34;&gt;Patrick Ohly&lt;/a&gt;。&lt;/p&gt;
&lt;!--
## Meet the contributors

**Sandipan:** Could you tell us a bit about yourself, your role, and
how you got involved in the Kubernetes project and SIG Testing?
--&gt;
&lt;h2 id=&#34;meet-the-contributors&#34;&gt;会见贡献者&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Sandipan:&lt;/strong&gt; 你能简单介绍一下自己吗，谈谈你的职责角色以及你是如何参与
Kubernetes 项目和 SIG Testing 的？&lt;/p&gt;
&lt;!--
**Michelle:** Hi! I&#39;m Michelle, a senior software engineer at
Google. I first got involved in Kubernetes through working on tooling
for SIG Testing, like the external instance of TestGrid. I&#39;m part of
oncall for TestGrid and Prow, and am now a chair for the SIG.
--&gt;
&lt;p&gt;&lt;strong&gt;Michelle:&lt;/strong&gt; 嗨！我是 Michelle，是 Google 高级软件工程师。
我最初是为 SIG Testing 开发工具（如 TestGrid 的外部实例）而参与到 Kubernetes 项目的。
我是 TestGrid 和 Prow 的轮值人员，现在也是这个 SIG 的主席。&lt;/p&gt;
&lt;!--
**Patrick:** Hello! I work as a software engineer and architect in a
team at Intel which focuses on open source Cloud Native projects. When
I ramped up on Kubernetes to develop a storage driver, my very first
question was &#34;how do I test it in a cluster and how do I log
information?&#34; That interest led to various enhancement proposals until
I had (re)written enough code that also took over official roles as
SIG Testing Tech Lead (for the [E2E framework](https://github.com/kubernetes-sigs/e2e-framework)) and
structured logging WG lead.
--&gt;
&lt;p&gt;&lt;strong&gt;Patrick:&lt;/strong&gt; 你好！我在英特尔的一个团队中担任软件工程师和架构师，专注于开源云原生项目。
当我为了开发一个存储驱动而深入学习 Kubernetes 时，我的第一个问题就是“如何在集群中进行测试以及如何记录信息？”
这个兴趣点引发了各种增强提案，直到我（重新）编写了足够多的代码，也正式担任了 SIG Testing 技术负责人
（负责 &lt;a href=&#34;https://github.com/kubernetes-sigs/e2e-framework&#34;&gt;E2E 框架&lt;/a&gt;）兼结构化日志工作组负责人。&lt;/p&gt;
&lt;!--
## Testing practices and tools

**Sandipan:** Testing is a field in which multiple approaches and
tools exist; how did you arrive at the existing practices?
--&gt;
&lt;h2 id=&#34;testing-practices-and-tools&#34;&gt;测试实践和工具&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Sandipan:&lt;/strong&gt; 测试是一个存在多种方法和工具的领域，你们是如何形成现有实践方式的？&lt;/p&gt;
&lt;!--
**Patrick:** I can’t speak about the early days because I wasn’t
around yet 😆, but looking back at some of the commit history it’s
pretty obvious that developers just took what was available and
started using it. For E2E testing, that was
[Ginkgo+Gomega](https://github.com/onsi/ginkgo). Some hacks were
necessary, for example around cleanup after a test run and for
categorising tests. Eventually this led to Ginkgo v2 and [revised best
practices for E2E testing](https://www.kubernetes.dev/blog/2023/04/12/e2e-testing-best-practices-reloaded/).
Regarding unit testing opinions are pretty diverse: some maintainers
prefer to use just the Go standard library with hand-written
checks. Others use helper packages like stretchr/testify. That
diversity is okay because unit tests are self-contained - contributors
just have to be flexible when working on many different areas.
Integration testing falls somewhere in the middle. It’s based on Go
unit tests, but needs complex helper packages to bring up an apiserver
and other components, then runs tests that are more like E2E tests.
--&gt;
&lt;p&gt;&lt;strong&gt;Patrick:&lt;/strong&gt; 我没法谈论早期情况，因为那时我还未参与其中 😆，但回顾一些提交历史可以明显看出，
当时开发人员只是看看有什么可用的工具并开始使用这些工具。对于 E2E 测试来说，使用的是
&lt;a href=&#34;https://github.com/onsi/ginkgo&#34;&gt;Ginkgo + Gomega&lt;/a&gt;。当时还需要一些取巧的做法（hack），
例如在测试运行后进行清理以及对测试进行分类。这些最终促成了 Ginkgo v2
和&lt;a href=&#34;https://www.kubernetes.dev/blog/2023/04/12/e2e-testing-best-practices-reloaded/&#34;&gt;重新修订的 E2E 测试最佳实践&lt;/a&gt;。
关于单元测试，意见非常多样化：一些维护者倾向于只使用 Go 标准库和手动检查。
而其他人使用 stretchr/testify 这类辅助工具包。这种多样性是可以接受的，因为单元测试是自包含的：
贡献者只需在处理许多不同领域时保持灵活。集成测试介于二者之间，它基于 Go 单元测试，
但需要复杂的辅助工具包来启动 API 服务器和其他组件，然后运行更像是 E2E 测试的测试。&lt;/p&gt;
&lt;!--
## Subprojects owned by SIG Testing

**Sandipan:** SIG Testing is pretty diverse. Can you give a brief
overview of the various subprojects owned by SIG Testing?
--&gt;
&lt;h2 id=&#34;subprojects-owned-by-sig-testing&#34;&gt;SIG Testing 拥有的子项目&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Sandipan:&lt;/strong&gt; SIG Testing 非常多样化。你能简要介绍一下 SIG Testing 拥有的各个子项目吗？&lt;/p&gt;
&lt;!--
**Michelle:** Broadly, we have subprojects related to testing
frameworks, and infrastructure, though they definitely overlap.  So
for the former, there&#39;s
[e2e-framework](https://pkg.go.dev/sigs.k8s.io/e2e-framework) (used
externally),
[test/e2e/framework](https://pkg.go.dev/k8s.io/kubernetes/test/e2e/framework)
(used for Kubernetes itself) and kubetest2 for end-to-end testing,
as well as boskos (resource rental for e2e tests),
[KIND](https://kind.sigs.k8s.io/) (Kubernetes-in-Docker, for local
testing and development), and the cloud provider for KIND.  For the
latter, there&#39;s [Prow](https://docs.prow.k8s.io/) (K8s-based CI/CD and
chatops), and a litany of other tools and utilities for triage,
analysis, coverage, Prow/TestGrid config generation, and more in the
test-infra repo.
--&gt;
&lt;p&gt;&lt;strong&gt;Michelle:&lt;/strong&gt; 广义上来说，我们的子项目分为测试框架和基础设施两大类，尽管二者肯定存在重叠。
我们的子项目包括：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://pkg.go.dev/sigs.k8s.io/e2e-framework&#34;&gt;e2e-framework&lt;/a&gt;（外部使用）&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://pkg.go.dev/k8s.io/kubernetes/test/e2e/framework&#34;&gt;test/e2e/framework&lt;/a&gt;
（用于 Kubernetes 本身）&lt;/li&gt;
&lt;li&gt;kubetest2（用于端到端测试）&lt;/li&gt;
&lt;li&gt;boskos（用于 e2e 测试的资源租赁）&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://kind.sigs.k8s.io/&#34;&gt;KIND&lt;/a&gt;（在 Docker 中运行 Kubernetes，用于本地测试和开发）&lt;/li&gt;
&lt;li&gt;以及 KIND 的云驱动。&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;我们的基础设施包括：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://docs.prow.k8s.io/&#34;&gt;Prow&lt;/a&gt;（基于 K8s 的 CI/CD 和 chatops）&lt;/li&gt;
&lt;li&gt;test-infra 仓库中用于分类、分析、覆盖率、Prow/TestGrid 配置生成等的其他工具和实用程序。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
*If you are willing to learn more and get involved with any of the SIG
Testing subprojects, check out the [SIG Testing README](https://github.com/kubernetes/community/tree/master/sig-testing#subprojects).*
--&gt;
&lt;p&gt;&lt;strong&gt;如果你有兴趣了解更多并参与到 SIG Testing 的任何子项目中，查阅
&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-testing#subprojects&#34;&gt;SIG Testing 的 README&lt;/a&gt;。&lt;/strong&gt;&lt;/p&gt;
&lt;!--
## Key challenges and accomplishments

**Sandipan:** What are some of the key challenges you face?
--&gt;
&lt;h2 id=&#34;key-challenges-and-accomplishments&#34;&gt;主要挑战和成就&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Sandipan:&lt;/strong&gt; 你们面临的一些主要挑战是什么？&lt;/p&gt;
&lt;!--
**Michelle:** Kubernetes is a gigantic project in every aspect, from
contributors to code to users and more. Testing and infrastructure
have to meet that scale, keeping up with every change from every repo
under Kubernetes while facilitating developing, improving, and
releasing the project as much as possible, though of course, we&#39;re not
the only SIG involved in that.  I think another other challenge is
staffing subprojects. SIG Testing has a number of subprojects that
have existed for years, but many of the original maintainers for them
have moved on to other areas or no longer have the time to maintain
them. We need to grow long-term expertise and owners in those
subprojects.
--&gt;
&lt;p&gt;&lt;strong&gt;Michelle:&lt;/strong&gt; Kubernetes 从贡献者到代码再到用户等各方面看都是一个庞大的项目。
测试和基础设施必须满足这种规模，跟上 Kubernetes 每个仓库的所有变化，
同时尽可能地促进开发、改进和发布项目，尽管当然我们并不是唯一参与其中的 SIG。
我认为另一个挑战是子项目的人员配置。SIG Testing 有一些已经存在多年的子项目，
但其中许多最初的维护者已经转到其他领域或者没有时间继续维护它们。
我们需要在这些子项目中培养长期的专业知识和 Owner。&lt;/p&gt;
&lt;!--
**Patrick:** As Michelle said, the sheer size can be a challenge. It’s
not just the infrastructure, also our processes must scale with the
number of contributors. It’s good to document best practices, but not
good enough: we have many new contributors, which is good, but having
reviewers explain best practices doesn’t scale - assuming that the
reviewers even know about them! It also doesn’t help that existing
code cannot get updated immediately because there is so much of it, in
particular for E2E testing. The initiative to [apply stricter linting to new or modified code](https://groups.google.com/a/kubernetes.io/g/dev/c/myGiml72IbM/m/QdO5bgQiAQAJ)
while accepting that existing code doesn’t pass those same linter
checks helps a bit.
--&gt;
&lt;p&gt;&lt;strong&gt;Patrick:&lt;/strong&gt; 正如 Michelle 所说，规模本身可能就是一个挑战。
不仅基础设施要与之匹配，我们的流程也必须与贡献者数量相匹配。
把最佳实践记录下来固然好，但还不够：我们有许多新的贡献者，这是好事，
但靠 Reviewer 逐一解释最佳实践无法随规模扩展，况且这还得假设 Reviewer 自己了解这些实践！
另一个不利因素是，现有代码无法立即全部更新，因为代码量实在太大，对 E2E 测试来说尤其如此。
在接受现有代码无法通过同样的 linter 检查的同时，
&lt;a href=&#34;https://groups.google.com/a/kubernetes.io/g/dev/c/myGiml72IbM/m/QdO5bgQiAQAJ&#34;&gt;为新代码或代码修改应用更严格的 lint 检查&lt;/a&gt;对于改善情况会有所帮助。&lt;/p&gt;
&lt;!--
**Sandipan:** Any SIG accomplishments that you are proud of and would
like to highlight?
--&gt;
&lt;p&gt;&lt;strong&gt;Sandipan:&lt;/strong&gt; 有没有一些 SIG 成就使你感到自豪，想要重点说一下？&lt;/p&gt;
&lt;!--
**Patrick:** I am biased because I have been driving this, but I think
that the [E2E framework](https://github.com/kubernetes-sigs/e2e-framework) and linting are now in a much better shape than
they used to be. We may soon be able to run integration tests with
race detection enabled, which is important because we currently only
have that for unit tests and those tend to be less complex.
--&gt;
&lt;p&gt;&lt;strong&gt;Patrick:&lt;/strong&gt; 由于一直是我在推动这项工作，我的看法难免有偏向，但我认为现在
&lt;a href=&#34;https://github.com/kubernetes-sigs/e2e-framework&#34;&gt;E2E 框架&lt;/a&gt;和 lint 机制比以前好得多。
我们可能很快就能在启用竞争检测的情况下运行集成测试，这很重要，
因为目前我们只能对单元测试进行竞争检测，而那些往往不太复杂。&lt;/p&gt;
&lt;!--
**Sandipan:** Testing is always important, but is there anything
specific to your work in terms of the Kubernetes release process?
--&gt;
&lt;p&gt;&lt;strong&gt;Sandipan:&lt;/strong&gt; 测试始终很重要，但在 Kubernetes 发布过程中，你的工作是否有任何特殊之处？&lt;/p&gt;
&lt;!--
**Patrick:** [test flakes](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-testing/flaky-tests.md)…
if we have too many of those, development velocity goes down because
PRs cannot be merged without clean test runs and those become less
likely. Developers also lose trust in testing and just &#34;retest&#34; until
they have a clean run, without checking whether failures might indeed
be related to a regression in their current change.
--&gt;
&lt;p&gt;&lt;strong&gt;Patrick:&lt;/strong&gt; &lt;a href=&#34;https://github.com/kubernetes/community/blob/master/contributors/devel/sig-testing/flaky-tests.md&#34;&gt;测试不稳定&lt;/a&gt;……
如果这类不稳定的测试太多，开发速度就会下降，因为 PR 必须在测试全部通过后才能合并，
而全部通过的测试运行会变得越来越少见。开发者也会因此失去对测试的信任，只是不断“重新测试”，
直到碰巧得到一次全部通过的运行为止，而不去检查失败是否确实与当前变更中的回归有关。&lt;/p&gt;
&lt;!--
## The people and the scope

**Sandipan:** What are some of your favourite things about this SIG?
--&gt;
&lt;h2 id=&#34;the-people-and-the-scope&#34;&gt;人员和范围&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Sandipan:&lt;/strong&gt; 这个 SIG 中有哪些让你热爱的？&lt;/p&gt;
&lt;!--
**Michelle:** The people, of course 🙂. Aside from that, I like the
broad scope SIG Testing has. I feel like even small changes can make a
big difference for fellow contributors, and even if my interests
change over time, I&#39;ll never run out of projects to work on.
--&gt;
&lt;p&gt;&lt;strong&gt;Michelle:&lt;/strong&gt; 当然是人 🙂。除此之外，我喜欢 SIG Testing 的宽广范围。
我觉得即使是小的改动也可以对其他贡献者产生重大影响，即使随着时间的推移我的兴趣发生变化，
我也永远不会缺少项目可供我参与。&lt;/p&gt;
&lt;!--
**Patrick:** I can work on things that make my life and the life of my
fellow developers better, like the tooling that we have to use every
day while working on some new feature elsewhere.

**Sandipan:** Are there any funny / cool / TIL anecdotes that you
could tell us?
--&gt;
&lt;p&gt;&lt;strong&gt;Patrick:&lt;/strong&gt; 我可以去做那些让我和其他开发人员的日常工作变得更好的事情，
比如我们在别处开发新特性时每天都必须使用的那些工具。&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Sandipan:&lt;/strong&gt; 你们有没有一些好玩/酷炫/长见识（TIL）的趣闻可以告诉我们？&lt;/p&gt;
&lt;!--
**Patrick:** I started working on E2E framework enhancements five
years ago, then was less active there for a while. When I came back
and wanted to test some new enhancement, I asked about how to write
unit tests for the new code and was pointed to some existing tests
which looked vaguely familiar, as if I had *seen* them before. I
looked at the commit history and found that I had *written* them! I’ll
let you decide whether that says something about my failing long-term
memory or simply is normal… Anyway, folks, remember to write good
commit messages and comments; someone will need them at some point -
it might even be yourself!
--&gt;
&lt;p&gt;&lt;strong&gt;Patrick:&lt;/strong&gt; 五年前，我开始致力于 E2E 框架的增强，然后在一段时间内参与活动较少。
当我回来并想要测试一些新的增强功能时，我询问如何为新代码编写单元测试，
并被指向了一些看起来有些熟悉的、好像以前&lt;strong&gt;见过&lt;/strong&gt;的现有测试。
我查看了提交历史，发现这些测试是我自己&lt;strong&gt;编写的&lt;/strong&gt;！
你可以决定这是否说明了我的长期记忆力衰退还是这很正常...
无论如何，伙计们，记得写好提交（Commit）消息和代码注释；
总有一天会有人需要它们 - 那个人甚至可能就是你自己！&lt;/p&gt;
&lt;!--
## Looking ahead

**Sandipan:** What areas and/or subprojects does your SIG need help with?
--&gt;
&lt;h2 id=&#34;looking-ahead&#34;&gt;展望未来&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Sandipan:&lt;/strong&gt; 在哪些领域和/或子项目上，你们的 SIG 需要帮助？&lt;/p&gt;
&lt;!--
**Michelle:** Some subprojects aren&#39;t staffed at the moment and could
use folks willing to learn more about
them. [boskos](https://github.com/kubernetes-sigs/boskos#boskos) and
[kubetest2](https://github.com/kubernetes-sigs/kubetest2#kubetest2)
especially stand out to me, since both are important for testing but
lack dedicated owners.
--&gt;
&lt;p&gt;&lt;strong&gt;Michelle:&lt;/strong&gt; 目前有一些子项目没有人员配置，需要有意愿了解更多的人参与进来。
&lt;a href=&#34;https://github.com/kubernetes-sigs/boskos#boskos&#34;&gt;boskos&lt;/a&gt; 和
&lt;a href=&#34;https://github.com/kubernetes-sigs/kubetest2#kubetest2&#34;&gt;kubetest2&lt;/a&gt; 对我来说尤其突出，
因为它们对于测试非常重要，但却缺乏专门的负责人。&lt;/p&gt;
&lt;!--
**Sandipan:** Are there any useful skills that new contributors to SIG
Testing can bring to the table? What are some things that people can
do to help this SIG if they come from a background that isn’t directly
linked to programming?
--&gt;
&lt;p&gt;&lt;strong&gt;Sandipan:&lt;/strong&gt; 新的 SIG Testing 贡献者可以带来哪些有用的技能？
如果他们的背景与编程没有直接关联，有哪些方面可以帮助到这个 SIG？&lt;/p&gt;
&lt;!--
**Michelle:** I think user empathy, writing clear feedback, and
recognizing patterns are really useful. Someone who uses the test
framework or tooling and can outline pain points with clear examples,
or who can recognize a wider issue in the project and pull data to
inform solutions for it.
--&gt;
&lt;p&gt;&lt;strong&gt;Michelle:&lt;/strong&gt; 我认为具备用户共情、清晰反馈和识别模式的能力非常有用。
例如，使用过测试框架或工具并能用清晰示例指出痛点的人，或者能够识别项目中更普遍的问题并收集数据为解决方案提供依据的人。&lt;/p&gt;
&lt;!--
**Sandipan:** What’s next for SIG Testing?

**Patrick:** Stricter linting will soon become mandatory for new
code. There are several E2E framework sub-packages that could be
modernised, if someone wants to take on that work. I also see an
opportunity to unify some of our helper code for E2E and integration
testing, but that needs more thought and discussion.
--&gt;
&lt;p&gt;&lt;strong&gt;Sandipan:&lt;/strong&gt; SIG Testing 的下一步是什么？&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Patrick:&lt;/strong&gt; 对于新代码，更严格的 lint 检查很快将成为强制要求。
如果有人愿意承担这项工作，有几个 E2E 框架子包可以进行现代化改造。
我还看到一个机会，可以统一一些 E2E 和集成测试的辅助代码，但这需要更多的思考和讨论。&lt;/p&gt;
&lt;!--
**Michelle:** I&#39;m looking forward to making some usability
improvements for some of our tools and infra, and to supporting more
long-term contributions and growth of contributors into long-term
roles within the SIG. If you&#39;re interested, hit us up!
--&gt;
&lt;p&gt;&lt;strong&gt;Michelle:&lt;/strong&gt; 我期待为我们的工具和基础设施进行一些可用性改进，
并支持更多长期贡献者的贡献和成长，使他们在 SIG 中担任长期角色。如果你有兴趣，请联系我们！&lt;/p&gt;
&lt;!--
Looking ahead, SIG Testing has exciting plans in store. You can get in
touch with the folks at SIG Testing in their [Slack channel](https://kubernetes.slack.com/messages/sig-testing) or attend
one of their regular [bi-weekly meetings on Tuesdays](https://github.com/kubernetes/community/tree/master/sig-testing#meetings). If
you are interested in making it easier for the community to run tests
and contribute test results, to ensure Kubernetes is stable across a
variety of cluster configurations and cloud providers, join the SIG
Testing community today!
--&gt;
&lt;p&gt;展望未来，SIG Testing 有令人兴奋的计划。你可以通过他们的
&lt;a href=&#34;https://kubernetes.slack.com/messages/sig-testing&#34;&gt;Slack 频道&lt;/a&gt;与 SIG Testing 的人员取得联系，
或参加他们定期举行的&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-testing#meetings&#34;&gt;每两周的周二会议&lt;/a&gt;。
如果你有兴趣为社区更轻松地运行测试并贡献测试结果，确保 Kubernetes
在各种集群配置和云驱动中保持稳定，请立即加入 SIG Testing 社区！&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes 1.29 中的移除、弃用和主要变更</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/11/16/kubernetes-1-29-upcoming-changes/</link>
      <pubDate>Thu, 16 Nov 2023 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/11/16/kubernetes-1-29-upcoming-changes/</guid>
      <description>
        
        
        &lt;!--
layout: blog
title: &#39;Kubernetes Removals, Deprecations, and Major Changes in Kubernetes 1.29&#39;
date: 2023-11-16
slug: kubernetes-1-29-upcoming-changes
--&gt;
&lt;!--
**Authors:** Carol Valencia, Kristin Martin, Abigail McCarthy, James Quigley, Hosam Kamel
--&gt;
&lt;p&gt;&lt;strong&gt;作者:&lt;/strong&gt; Carol Valencia, Kristin Martin, Abigail McCarthy, James Quigley, Hosam Kamel&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者:&lt;/strong&gt; &lt;a href=&#34;https://github.com/windsonsea&#34;&gt;Michael Yao&lt;/a&gt; (DaoCloud)&lt;/p&gt;
&lt;!--
As with every release, Kubernetes v1.29 will introduce feature deprecations and removals. Our continued ability to produce high-quality releases is a testament to our robust development cycle and healthy community. The following are some of the deprecations and removals coming in the Kubernetes 1.29 release.
--&gt;
&lt;p&gt;和其他每次发布一样，Kubernetes v1.29 将弃用和移除一些特性。
我们能够持续产出高质量的发布版本，这证明了我们拥有稳健的开发周期和健康的社区。
下文列举即将发布的 Kubernetes 1.29 中的一些弃用和移除事项。&lt;/p&gt;
&lt;!--
## The Kubernetes API removal and deprecation process

The Kubernetes project has a well-documented deprecation policy for features. This policy states that stable APIs may only be deprecated when a newer, stable version of that same API is available and that APIs have a minimum lifetime for each stability level. A deprecated API is one that has been marked for removal in a future Kubernetes release; it will continue to function until removal (at least one year from the deprecation), but usage will result in a warning being displayed. Removed APIs are no longer available in the current version, at which point you must migrate to using the replacement.
--&gt;
&lt;h2 id=&#34;kubernetes-api-移除和弃用流程&#34;&gt;Kubernetes API 移除和弃用流程&lt;/h2&gt;
&lt;p&gt;Kubernetes 项目对特性有一个文档完备的弃用策略。此策略规定，只有当同一 API 有了较新的、稳定的版本可用时，
原有的稳定 API 才可以被弃用，各个不同稳定级别的 API 都有一个最短的生命周期。
弃用的 API 指的是已标记为将在后续某个 Kubernetes 发行版本中被移除的 API；
移除之前该 API 将继续发挥作用（从被弃用起至少一年时间），但使用时会显示一条警告。
被移除的 API 将在当前版本中不再可用，此时你必须转为使用替代的 API。&lt;/p&gt;
&lt;!--
* Generally available (GA) or stable API versions may be marked as deprecated, but must not be removed within a major version of Kubernetes.
* Beta or pre-release API versions must be supported for 3 releases after deprecation.
* Alpha or experimental API versions may be removed in any release without prior deprecation notice.
--&gt;
&lt;ul&gt;
&lt;li&gt;正式发布（GA）或稳定的 API 版本可能被标记为已弃用，但只有在 Kubernetes 主版本变化时才会被移除。&lt;/li&gt;
&lt;li&gt;测试版（Beta）或预发布 API 版本在弃用后必须在后续 3 个版本中继续支持。&lt;/li&gt;
&lt;li&gt;Alpha 或实验性 API 版本可以在任何版本中被移除，不另行通知。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
Whether an API is removed as a result of a feature graduating from beta to stable or because that API simply did not succeed, all removals comply with this deprecation policy. Whenever an API is removed, migration options are communicated in the documentation.
--&gt;
&lt;p&gt;无论一个 API 是因为某特性从 Beta 进阶至稳定阶段而被移除，还是因为该 API 根本没有成功，
所有移除均遵从上述弃用策略。无论何时移除一个 API，文档中都会列出迁移选项。&lt;/p&gt;
&lt;!--
## A note about the k8s.gcr.io redirect to registry.k8s.io

To host its container images, the Kubernetes project uses a community-owned image registry called registry.k8s.io. Starting last March traffic to the old k8s.gcr.io registry began being redirected to registry.k8s.io. The deprecated k8s.gcr.io registry will eventually be phased out. For more details on this change or to see if you are impacted, please read [k8s.gcr.io Redirect to registry.k8s.io - What You Need to Know](/blog/2023/03/10/image-registry-redirect/).
--&gt;
&lt;h2 id=&#34;k8s-gcr-io-重定向到-registry-k8s-io-相关说明&#34;&gt;k8s.gcr.io 重定向到 registry.k8s.io 相关说明&lt;/h2&gt;
&lt;p&gt;Kubernetes 项目为了托管其容器镜像，使用社区自治的一个名为 registry.k8s.io 的镜像仓库。
自 2023 年 3 月起，所有发往旧仓库 k8s.gcr.io 的请求开始被重定向到 registry.k8s.io。
已弃用的 k8s.gcr.io 仓库最终将被淘汰。有关这一变更的细节，或想了解你是否受到影响，请参阅
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/03/10/image-registry-redirect/&#34;&gt;k8s.gcr.io 重定向到 registry.k8s.io - 用户须知&lt;/a&gt;。&lt;/p&gt;
&lt;!--
## A note about the Kubernetes community-owned package repositories

Earlier in 2023, the Kubernetes project [introduced](/blog/2023/08/15/pkgs-k8s-io-introduction/) `pkgs.k8s.io`, community-owned software repositories for Debian and RPM packages. The community-owned repositories replaced the legacy Google-owned repositories (`apt.kubernetes.io` and `yum.kubernetes.io`).
On September 13, 2023, those legacy repositories were formally deprecated and their contents frozen.
--&gt;
&lt;h2 id=&#34;kubernetes-社区自治软件包仓库相关说明&#34;&gt;Kubernetes 社区自治软件包仓库相关说明&lt;/h2&gt;
&lt;p&gt;在 2023 年年初，Kubernetes 项目&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/08/15/pkgs-k8s-io-introduction/&#34;&gt;引入了&lt;/a&gt; &lt;code&gt;pkgs.k8s.io&lt;/code&gt;，
这是 Debian 和 RPM 软件包所用的社区自治软件包仓库。这些社区自治的软件包仓库取代了先前由 Google 管理的仓库
（&lt;code&gt;apt.kubernetes.io&lt;/code&gt; 和 &lt;code&gt;yum.kubernetes.io&lt;/code&gt;）。在 2023 年 9 月 13 日，这些老旧的仓库被正式弃用，其内容被冻结。&lt;/p&gt;
&lt;!--
For more information on this change or to see if you are impacted, please read the [deprecation announcement](/blog/2023/08/31/legacy-package-repository-deprecation/).
--&gt;
&lt;p&gt;有关这一变更的细节，或想了解你是否受到影响，
请参阅&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/08/31/legacy-package-repository-deprecation/&#34;&gt;弃用公告&lt;/a&gt;。&lt;/p&gt;
&lt;!--
## Deprecations and removals for Kubernetes v1.29

See the official list of [API removals](/docs/reference/using-api/deprecation-guide/#v1-29) for a full list of planned deprecations for Kubernetes v1.29.
--&gt;
&lt;h2 id=&#34;kubernetes-v1-29-的弃用和移除说明&#34;&gt;Kubernetes v1.29 的弃用和移除说明&lt;/h2&gt;
&lt;p&gt;有关 Kubernetes v1.29 计划弃用的完整列表，
参见官方 &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/reference/using-api/deprecation-guide/#v1-29&#34;&gt;API 移除&lt;/a&gt;列表。&lt;/p&gt;
&lt;!--
### Removal of in-tree integrations with cloud providers ([KEP-2395](https://kep.k8s.io/2395))

The [feature gates](/docs/reference/command-line-tools-reference/feature-gates/) `DisableCloudProviders` and `DisableKubeletCloudCredentialProviders` will both be set to `true` by default for Kubernetes v1.29. This change will require that users who are currently using in-tree cloud provider integrations (Azure, GCE, or vSphere) enable external cloud controller managers, or opt in to the legacy integration by setting the associated feature gates to `false`.
--&gt;
&lt;h3 id=&#34;移除与云驱动的内部集成-kep-2395-https-kep-k8s-io-2395&#34;&gt;移除与云驱动的内部集成（&lt;a href=&#34;https://kep.k8s.io/2395&#34;&gt;KEP-2395&lt;/a&gt;）&lt;/h3&gt;
&lt;p&gt;对于 Kubernetes v1.29，默认特性门控 &lt;code&gt;DisableCloudProviders&lt;/code&gt; 和 &lt;code&gt;DisableKubeletCloudCredentialProviders&lt;/code&gt;
都将被设置为 &lt;code&gt;true&lt;/code&gt;。这个变更将要求当前正在使用内部云驱动集成（Azure、GCE 或 vSphere）的用户启用外部云控制器管理器，
或者将关联的特性门控设置为 &lt;code&gt;false&lt;/code&gt; 以选择传统的集成方式。&lt;/p&gt;
&lt;!--
Enabling external cloud controller managers means you must run a suitable cloud controller manager within your cluster&#39;s control plane; it also requires setting the command line argument `--cloud-provider=external` for the kubelet (on every relevant node), and across the control plane (kube-apiserver and kube-controller-manager).
--&gt;
&lt;p&gt;启用外部云控制器管理器意味着你必须在集群的控制平面中运行一个合适的云控制器管理器；
同时还需要为 kubelet（在每个相关节点上）及整个控制平面（kube-apiserver 和 kube-controller-manager）
设置命令行参数 &lt;code&gt;--cloud-provider=external&lt;/code&gt;。&lt;/p&gt;
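&lt;p&gt;下面是一个极简的示意（假设：采用常见的 kubeadm systemd 布局，文件路径与变量名仅为演示而设，并非本文给出的内容），展示上述迁移在 kubelet 一侧的最终效果：&lt;/p&gt;

```shell
# 示意草例：迁移到外部云控制器管理器的核心，是在每个相关节点的 kubelet
# （以及 kube-apiserver、kube-controller-manager）上设置 --cloud-provider=external。
# 下面的路径仅作演示；真实环境中通常位于 /etc/systemd/system/kubelet.service.d/。
dropin=/tmp/10-kubeadm.conf
printf '%s\n' \
  '[Service]' \
  'Environment="KUBELET_EXTRA_ARGS=--cloud-provider=external"' \
  | tee "$dropin"

# 简单自检：确认该节点的 kubelet 已选择外部云驱动
if grep -q -- '--cloud-provider=external' "$dropin"; then
  echo "kubelet configured for external cloud provider"
fi
```

&lt;p&gt;真实环境中还需对控制平面组件做同样的设置，具体步骤请以上文链接的官方任务文档为准。&lt;/p&gt;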
&lt;!--
For more information about how to enable and run external cloud controller managers, read [Cloud Controller Manager Administration](/docs/tasks/administer-cluster/running-cloud-controller/) and [Migrate Replicated Control Plane To Use Cloud Controller Manager](/docs/tasks/administer-cluster/controller-manager-leader-migration/).

For general information about cloud controller managers, please see
[Cloud Controller Manager](/docs/concepts/architecture/cloud-controller/) in the Kubernetes documentation.
--&gt;
&lt;p&gt;有关如何启用和运行外部云控制器管理器的细节，
参阅&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/tasks/administer-cluster/running-cloud-controller/&#34;&gt;管理云控制器管理器&lt;/a&gt;和
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/tasks/administer-cluster/controller-manager-leader-migration/&#34;&gt;迁移多副本的控制面以使用云控制器管理器&lt;/a&gt;。&lt;/p&gt;
&lt;p&gt;有关云控制器管理器的常规信息，请参阅 Kubernetes
文档中的&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/concepts/architecture/cloud-controller/&#34;&gt;云控制器管理器&lt;/a&gt;。&lt;/p&gt;
&lt;!--
### Removal of the `v1beta2` flow control API group

The _flowcontrol.apiserver.k8s.io/v1beta2_ API version of FlowSchema and PriorityLevelConfiguration will [no longer be served](/docs/reference/using-api/deprecation-guide/#v1-29) in Kubernetes v1.29. 

To prepare for this, you can edit your existing manifests and rewrite client software to use the `flowcontrol.apiserver.k8s.io/v1beta3` API version, available since v1.26. All existing persisted objects are accessible via the new API. Notable changes in `flowcontrol.apiserver.k8s.io/v1beta3` include
that the PriorityLevelConfiguration `spec.limited.assuredConcurrencyShares` field was renamed to `spec.limited.nominalConcurrencyShares`.
--&gt;
&lt;h3 id=&#34;移除-v1beta2-流量控制-api-组&#34;&gt;移除 &lt;code&gt;v1beta2&lt;/code&gt; 流量控制 API 组&lt;/h3&gt;
&lt;p&gt;在 Kubernetes v1.29 中，将&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/reference/using-api/deprecation-guide/#v1-29&#34;&gt;不再提供&lt;/a&gt;
FlowSchema 和 PriorityLevelConfiguration 的 &lt;strong&gt;flowcontrol.apiserver.k8s.io/v1beta2&lt;/strong&gt; API 版本。&lt;/p&gt;
&lt;p&gt;为了做好准备，你可以编辑现有的清单（Manifest）并重写客户端软件，以使用自 v1.26 起可用的
&lt;code&gt;flowcontrol.apiserver.k8s.io/v1beta3&lt;/code&gt; API 版本。所有现有的持久化对象都可以通过新的 API 访问。
&lt;code&gt;flowcontrol.apiserver.k8s.io/v1beta3&lt;/code&gt; 中的显著变化包括将 PriorityLevelConfiguration 的
&lt;code&gt;spec.limited.assuredConcurrencyShares&lt;/code&gt; 字段更名为 &lt;code&gt;spec.limited.nominalConcurrencyShares&lt;/code&gt;。&lt;/p&gt;
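&lt;p&gt;上述清单改写可以用如下极简脚本来示意（假设：清单内容为虚构示例，只演示正文提到的两处变更，即 apiVersion 和字段更名）：&lt;/p&gt;

```shell
# 迁移草例：将 PriorityLevelConfiguration 清单从已移除的 v1beta2 改写为 v1beta3。
# 需要修改 apiVersion，并把 assuredConcurrencyShares 更名为 nominalConcurrencyShares。
printf '%s\n' \
  'apiVersion: flowcontrol.apiserver.k8s.io/v1beta2' \
  'kind: PriorityLevelConfiguration' \
  'metadata:' \
  '  name: example-priority-level' \
  'spec:' \
  '  type: Limited' \
  '  limited:' \
  '    assuredConcurrencyShares: 10' \
  | tee /tmp/plc-v1beta2.yaml

# 两处替换，生成 v1beta3 版本的清单
sed -e 's|flowcontrol.apiserver.k8s.io/v1beta2|flowcontrol.apiserver.k8s.io/v1beta3|' \
    -e 's|assuredConcurrencyShares|nominalConcurrencyShares|' \
    /tmp/plc-v1beta2.yaml | tee /tmp/plc-v1beta3.yaml
```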
&lt;!--
### Deprecation of the `status.nodeInfo.kubeProxyVersion` field for Node

The `.status.kubeProxyVersion` field for Node objects will be [marked as deprecated](https://github.com/kubernetes/enhancements/issues/4004) in v1.29 in preparation for its removal in a future release. This field is not accurate and is set by kubelet, which does not actually know the kube-proxy version, or even if kube-proxy is running.
--&gt;
&lt;h3 id=&#34;弃用针对-node-的-status-nodeinfo-kubeproxyversion-字段&#34;&gt;弃用针对 Node 的 &lt;code&gt;status.nodeInfo.kubeProxyVersion&lt;/code&gt; 字段&lt;/h3&gt;
&lt;p&gt;在 v1.29 中，针对 Node 对象的 &lt;code&gt;.status.kubeProxyVersion&lt;/code&gt; 字段将被
&lt;a href=&#34;https://github.com/kubernetes/enhancements/issues/4004&#34;&gt;标记为弃用&lt;/a&gt;，
准备在未来某个发行版本中移除。这是因为此字段并不准确，它由 kubelet 设置，
而 kubelet 实际上并不知道 kube-proxy 版本，甚至不知道 kube-proxy 是否在运行。&lt;/p&gt;
&lt;!--
## Want to know more?

Deprecations are announced in the Kubernetes release notes. You can see the announcements of pending deprecations in the release notes for:
--&gt;
&lt;h2 id=&#34;了解更多&#34;&gt;了解更多&lt;/h2&gt;
&lt;p&gt;弃用信息是在 Kubernetes 发布说明（Release Notes）中公布的。你可以在以下版本的发布说明中看到待弃用的公告：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md#deprecation&#34;&gt;Kubernetes v1.25&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md#deprecation&#34;&gt;Kubernetes v1.26&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.27.md#deprecation&#34;&gt;Kubernetes v1.27&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.28.md#deprecation&#34;&gt;Kubernetes v1.28&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
We will formally announce the deprecations that come with [Kubernetes v1.29](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.29.md#deprecation) as part of the CHANGELOG for that release.

For information on the deprecation and removal process, refer to the official Kubernetes [deprecation policy](/docs/reference/using-api/deprecation-policy/#deprecating-parts-of-the-api) document.
--&gt;
&lt;p&gt;我们将在
&lt;a href=&#34;https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.29.md#deprecation&#34;&gt;Kubernetes v1.29&lt;/a&gt;
的 CHANGELOG 中正式宣布与该版本相关的弃用信息。&lt;/p&gt;
&lt;p&gt;有关弃用和移除流程的细节，参阅 Kubernetes
官方&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/reference/using-api/deprecation-policy/#deprecating-parts-of-the-api&#34;&gt;弃用策略&lt;/a&gt;文档。&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>介绍 SIG etcd</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/11/07/introducing-sig-etcd/</link>
      <pubDate>Tue, 07 Nov 2023 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/11/07/introducing-sig-etcd/</guid>
      <description>
        
        
        &lt;!--
layout: blog
title: &#34;Introducing SIG etcd&#34;
slug: introducing-sig-etcd
date: 2023-11-07
canonicalUrl: https://etcd.io/blog/2023/introducing-sig-etcd/
--&gt;
&lt;!--
**Authors**:  Han Kang (Google), Marek Siarkowicz (Google), Frederico Muñoz (SAS Institute)
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Han Kang (Google), Marek Siarkowicz (Google), Frederico Muñoz (SAS Institute)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者&lt;/strong&gt;：Xin Li (Daocloud)&lt;/p&gt;
&lt;!--
Special Interest Groups (SIGs) are a fundamental part of the Kubernetes project,
with a substantial share of the community activity happening within them.
When the need arises, [new SIGs can be created](https://github.com/kubernetes/community/blob/master/sig-wg-lifecycle.md),
and that was precisely what happened recently.
--&gt;
&lt;p&gt;特殊兴趣小组（SIG）是 Kubernetes 项目的基本组成部分，很大一部分的 Kubernetes 社区活动都在其中进行。
当有需要时，可以创建&lt;a href=&#34;https://github.com/kubernetes/community/blob/master/sig-wg-lifecycle.md&#34;&gt;新的 SIG&lt;/a&gt;，
而这正是最近发生的事情。&lt;/p&gt;
&lt;!--
[SIG etcd](https://github.com/kubernetes/community/blob/master/sig-etcd/README.md)
is the most recent addition to the list of Kubernetes SIGs.
In this article we will get to know it a bit better, understand its origins, scope, and plans.
--&gt;
&lt;p&gt;&lt;a href=&#34;https://github.com/kubernetes/community/blob/master/sig-etcd/README.md&#34;&gt;SIG etcd&lt;/a&gt;
是 Kubernetes SIG 列表中的最新成员。在这篇文章中，我们将更好地认识它，了解它的起源、职责和计划。&lt;/p&gt;
&lt;!--
## The critical role of etcd

If we look inside the control plane of a Kubernetes cluster, we will find
[etcd](https://kubernetes.io/docs/concepts/overview/components/#etcd),
a consistent and highly-available key value store used as Kubernetes&#39; backing
store for all cluster data -- this description alone highlights the critical role that etcd plays,
and the importance of it within the Kubernetes ecosystem.
--&gt;
&lt;h2 id=&#34;etcd-的关键作用&#34;&gt;etcd 的关键作用&lt;/h2&gt;
&lt;p&gt;如果我们查看 Kubernetes 集群的控制平面内部，我们会发现
&lt;a href=&#34;https://kubernetes.io/zh-cn/docs/concepts/overview/components/#etcd&#34;&gt;etcd&lt;/a&gt;，
一个一致且高可用的键值存储，用作 Kubernetes 所有集群数据的后台数据库 -- 仅此描述就突出了
etcd 所扮演的关键角色，以及它在 Kubernetes 生态系统中的重要性。&lt;/p&gt;
&lt;!--
This critical role makes the health of the etcd project and community an important consideration,
and [concerns about the state of the project](https://groups.google.com/a/kubernetes.io/g/steering/c/e-O-tVSCJOk/m/N9IkiWLEAgAJ)
in early 2022 did not go unnoticed. The changes in the maintainer team, amongst other factors,
contributed to a situation that needed to be addressed.
--&gt;
&lt;p&gt;由于 etcd 在生态中的关键作用，其项目和社区的健康成为了一个重要的考虑因素，
并且人们 2022 年初&lt;a href=&#34;https://groups.google.com/a/kubernetes.io/g/steering/c/e-O-tVSCJOk/m/N9IkiWLEAgAJ&#34;&gt;对项目状态的担忧&lt;/a&gt;
并没有被忽视。维护者团队的变动以及其他因素，共同造成了需要解决的局面。&lt;/p&gt;
&lt;!--
## Why a special interest group

With the critical role of etcd in mind, it was proposed that the way forward would
be to create a new special interest group. If etcd was already at the heart of Kubernetes,
creating a dedicated SIG not only recognises that role, it would make etcd a first-class citizen of the Kubernetes community.
--&gt;
&lt;h2 id=&#34;为什么要设立特殊兴趣小组&#34;&gt;为什么要设立特殊兴趣小组&lt;/h2&gt;
&lt;p&gt;考虑到 etcd 的关键作用，有人提出未来的方向是创建一个新的特殊兴趣小组。
如果 etcd 已经成为 Kubernetes 的核心，创建专门的 SIG 不仅是对这一角色的认可，
还会使 etcd 成为 Kubernetes 社区的一等公民。&lt;/p&gt;
&lt;!--
Establishing SIG etcd creates a dedicated space to make explicit the contract
between etcd and Kubernetes api machinery and to prevent, on the etcd level,
changes which violate this contract. Additionally, etcd will be able to adopt
the processes that Kubernetes offers its SIGs ([KEPs](https://www.kubernetes.dev/resources/keps/),
[PRR](https://github.com/kubernetes/community/blob/master/sig-architecture/production-readiness.md),
[phased feature gates](https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/),
amongst others) in order to improve the consistency and reliability of the codebase. Being able to use these processes will be a substantial benefit to the etcd community.
--&gt;
&lt;p&gt;SIG etcd 的成立为明确 etcd 和 Kubernetes API 机制之间的契约关系创造了一个专门的空间，
并防止在 etcd 级别上发生违反此契约的更改。此外，etcd 将能够采用 Kubernetes 提供的 SIG
流程（&lt;a href=&#34;https://www.kubernetes.dev/resources/keps/&#34;&gt;KEP&lt;/a&gt;、
&lt;a href=&#34;https://github.com/kubernetes/community/blob/master/sig-architecture/production-readiness.md&#34;&gt;PRR&lt;/a&gt;、
&lt;a href=&#34;https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/&#34;&gt;分阶段特性门控&lt;/a&gt;以及其他流程）
以提高代码库的一致性和可靠性，这将为 etcd 社区带来巨大的好处。&lt;/p&gt;
&lt;!--
As a SIG, etcd will also be able to draw contributor support from Kubernetes proper:
active contributions to etcd from Kubernetes maintainers would decrease the likelihood
of breaking Kubernetes changes, through the increased number of potential reviewers
and the integration with existing testing framework. This will not only benefit Kubernetes,
which will be able to better participate and shape the direction of etcd in terms of the critical role it plays,
but also etcd as a whole.
--&gt;
&lt;p&gt;作为一个 SIG，etcd 还能够从 Kubernetes 本身获得贡献者支持：Kubernetes 维护者对 etcd
的积极贡献将通过增加潜在评审者数量以及与现有测试框架的集成，降低出现破坏 Kubernetes 的变更的可能性。
这不仅有利于 Kubernetes（它将能够围绕 etcd 所扮演的关键角色，更好地参与并塑造 etcd 的发展方向），也有利于整个 etcd。&lt;/p&gt;
&lt;!--
## About SIG etcd

The recently created SIG is already working towards its goals, defined in its
[Charter](https://github.com/kubernetes/community/blob/master/sig-etcd/charter.md)
and [Vision](https://github.com/kubernetes/community/blob/master/sig-etcd/vision.md).
The purpose is clear: to ensure etcd is a reliable, simple, and scalable production-ready
store for building cloud-native distributed systems and managing cloud-native infrastructure
via orchestrators like Kubernetes.
--&gt;
&lt;h2 id=&#34;关于-sig-etcd&#34;&gt;关于 SIG etcd&lt;/h2&gt;
&lt;p&gt;最近创建的 SIG 已经在努力实现其&lt;a href=&#34;https://github.com/kubernetes/community/blob/master/sig-etcd/charter.md&#34;&gt;章程&lt;/a&gt;
和&lt;a href=&#34;https://github.com/kubernetes/community/blob/master/sig-etcd/vision.md&#34;&gt;愿景&lt;/a&gt;中定义的目标。
其目的很明确：确保 etcd 是一个可靠、简单且可扩展的生产就绪存储，用于构建云原生分布式系统并通过 Kubernetes 等编排器管理云原生基础设施。&lt;/p&gt;
&lt;!--
The scope of SIG etcd is not exclusively about etcd as a Kubernetes component,
it also covers etcd as a standard solution. Our goal is to make etcd the most
reliable key-value storage to be used anywhere, unconstrained by any Kubernetes-specific
limits and scaling to meet the requirements of many diverse use-cases.
--&gt;
&lt;p&gt;SIG etcd 的范围不仅仅涉及将 etcd 作为 Kubernetes 组件，还涵盖将 etcd 作为标准解决方案。
我们的目标是使 etcd 成为可在任何地方使用的最可靠的键值存储，不受任何 Kubernetes 特定限制的约束，并且能够扩展以满足多种不同用例的需求。&lt;/p&gt;
&lt;!--
We are confident that the creation of SIG etcd constitutes an important milestone
in the lifecycle of the project, simultaneously improving etcd itself,
and also the integration of etcd with Kubernetes. We invite everyone interested in etcd to
[visit our page](https://github.com/kubernetes/community/blob/master/sig-etcd/README.md),
[join us at our Slack channel](https://kubernetes.slack.com/messages/etcd),
and get involved in this new stage of etcd&#39;s life.
--&gt;
&lt;p&gt;我们相信，SIG etcd 的创建将成为项目生命周期中的一个重要里程碑，同时改进 etcd 本身以及
etcd 与 Kubernetes 的集成。我们欢迎所有对 etcd
感兴趣的人&lt;a href=&#34;https://github.com/kubernetes/community/blob/master/sig-etcd/README.md&#34;&gt;访问我们的页面&lt;/a&gt;、
&lt;a href=&#34;https://kubernetes.slack.com/messages/etcd&#34;&gt;加入我们的 Slack 频道&lt;/a&gt;，并参与 etcd 生命的新阶段。&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Gateway API v1.0：正式发布（GA）</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/10/31/gateway-api-ga/</link>
      <pubDate>Tue, 31 Oct 2023 10:00:00 -0800</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/10/31/gateway-api-ga/</guid>
      <description>
        
        
        &lt;!--
layout: blog
title: &#34;Gateway API v1.0: GA Release&#34;
date: 2023-10-31T10:00:00-08:00
slug: gateway-api-ga
--&gt;
&lt;!--
**Authors:** Shane Utt (Kong), Nick Young (Isovalent), Rob Scott (Google)
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Shane Utt (Kong), Nick Young (Isovalent), Rob Scott (Google)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者：&lt;/strong&gt; Xin Li (Daocloud)&lt;/p&gt;
&lt;!--
On behalf of Kubernetes SIG Network, we are pleased to announce the v1.0 release of [Gateway
API](https://gateway-api.sigs.k8s.io/)! This release marks a huge milestone for
this project. Several key APIs are graduating to GA (generally available), while
other significant features have been added to the Experimental channel.
--&gt;
&lt;p&gt;我们代表 Kubernetes SIG Network 很高兴地宣布 &lt;a href=&#34;https://gateway-api.sigs.k8s.io/&#34;&gt;Gateway API&lt;/a&gt;
v1.0 版本发布！此版本是该项目的一个重要里程碑。几个关键的 API 正在逐步进入 GA（正式发布）阶段，
同时其他重要特性已添加到实验（Experimental）通道中。&lt;/p&gt;
&lt;!--
## What&#39;s new

### Graduation to v1
This release includes the graduation of
[Gateway](https://gateway-api.sigs.k8s.io/api-types/gateway/),
[GatewayClass](https://gateway-api.sigs.k8s.io/api-types/gatewayclass/), and
[HTTPRoute](https://gateway-api.sigs.k8s.io/api-types/httproute/) to v1, which
means they are now generally available (GA). This API version denotes a high
level of confidence in the API surface and provides guarantees of backwards
compatibility. Note that although, the version of these APIs included in the
Standard channel are now considered stable, that does not mean that they are
complete. These APIs will continue to receive new features via the Experimental
channel as they meet graduation criteria. For more information on how all of
this works, refer to the [Gateway API Versioning
Policy](https://gateway-api.sigs.k8s.io/concepts/versioning/).
--&gt;
&lt;h2 id=&#34;新增内容&#34;&gt;新增内容&lt;/h2&gt;
&lt;h3 id=&#34;升级到-v1&#34;&gt;升级到 v1&lt;/h3&gt;
&lt;p&gt;此版本将 &lt;a href=&#34;https://gateway-api.sigs.k8s.io/api-types/gateway/&#34;&gt;Gateway&lt;/a&gt;、
&lt;a href=&#34;https://gateway-api.sigs.k8s.io/api-types/gatewayclass/&#34;&gt;GatewayClass&lt;/a&gt; 和
&lt;a href=&#34;https://gateway-api.sigs.k8s.io/api-types/httproute/&#34;&gt;HTTPRoute&lt;/a&gt; 升级到 v1 版本，
这意味着它们现在已正式发布（GA）。这个 API 版本表明我们对 API 接口具有高度的信心，并提供向后兼容的保证。
需要注意的是，虽然标准（Standard）通道中包含的这些 API 版本现在被认为是稳定的，但这并不意味着它们是完整的。
这些 API 仍将继续通过实验（Experimental）通道接收新特性，这些新特性在满足毕业标准后会被纳入其中。要进一步了解这一切的运作方式，请参阅
&lt;a href=&#34;https://gateway-api.sigs.k8s.io/concepts/versioning/&#34;&gt;Gateway API 版本控制策略&lt;/a&gt;。&lt;/p&gt;
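&lt;p&gt;作为参考，下面是一个极简示意（假设：example-* 名称均为虚构），展示毕业后的资源所使用的 v1 apiVersion：&lt;/p&gt;

```shell
# 示意草例：v1.0 毕业后，Gateway、GatewayClass 和 HTTPRoute 使用
# apiVersion: gateway.networking.k8s.io/v1。下面生成一个最小的 HTTPRoute 清单。
printf '%s\n' \
  'apiVersion: gateway.networking.k8s.io/v1' \
  'kind: HTTPRoute' \
  'metadata:' \
  '  name: example-route' \
  'spec:' \
  '  parentRefs:' \
  '  - name: example-gateway' \
  '  rules:' \
  '  - backendRefs:' \
  '    - name: example-svc' \
  '      port: 80' \
  | tee /tmp/httproute-v1.yaml
```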
&lt;!--
### Logo
Gateway API now has a logo! This logo was designed through a collaborative
process, and is intended to represent the idea that this is a set of Kubernetes
APIs for routing traffic both north-south and east-west:
--&gt;
&lt;h3 id=&#34;logo&#34;&gt;Logo&lt;/h3&gt;
&lt;p&gt;Gateway API 现在有了自己的 Logo！这个 Logo 是通过协作方式设计的，
旨在表达这是一组用于路由南北向和东西向流量的 Kubernetes API：&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;gateway-api-logo.png&#34; alt=&#34;Gateway API Logo&#34; title=&#34;Gateway API Logo&#34;&gt;&lt;/p&gt;
&lt;!--
### CEL Validation
Historically, Gateway API has bundled a validating webhook as part of installing
the API. Starting in v1.0, webhook installation is optional and only recommended
for Kubernetes 1.24. Gateway API now includes
[CEL](/docs/reference/using-api/cel/) validation rules as
part of the
[CRDs](/docs/concepts/extend-kubernetes/api-extension/custom-resources/).
This new form of validation is supported in Kubernetes 1.25+, and thus the
validating webhook is no longer required in most installations.
--&gt;
&lt;h3 id=&#34;cel-验证&#34;&gt;CEL 验证&lt;/h3&gt;
&lt;p&gt;过去，Gateway API 在安装 API 时绑定了一个验证性质（Validation）的 Webhook。
从 v1.0 开始，Webhook 的安装是可选的，仅建议在 Kubernetes 1.24 版本上使用。
Gateway API 现在将 &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/reference/using-api/cel/&#34;&gt;CEL&lt;/a&gt; 验证规则包含在
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/concepts/extend-kubernetes/api-extension/custom-resources/&#34;&gt;CRD&lt;/a&gt;
中。Kubernetes 1.25 及以上版本支持这种新形式的验证，因此大多数安装中不再需要验证性质的 Webhook。&lt;/p&gt;
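&lt;p&gt;下面用一个极简片段示意 CEL 规则在 CRD 模式中的形态（假设：字段和规则均为虚构的演示用例，并非 Gateway API 实际使用的规则）：&lt;/p&gt;

```shell
# 示意草例：CEL 验证规则以 x-kubernetes-validations 的形式内嵌在 CRD 的
# OpenAPI 模式中，由 API 服务器（Kubernetes 1.25 及以上）直接执行，无需 Webhook。
printf '%s\n' \
  'openAPIV3Schema:' \
  '  type: object' \
  '  properties:' \
  '    spec:' \
  '      type: object' \
  '      x-kubernetes-validations:' \
  '      - rule: "self.port in [80, 443]"' \
  '        message: "port must be 80 or 443"' \
  | tee /tmp/cel-example.yaml
```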
&lt;!--
### Standard channel
This release was primarily focused on ensuring that the existing beta APIs were
well defined and sufficiently stable to graduate to GA. That led to a variety of
spec clarifications, as well as some improvements to status to improve the
overall UX when interacting with Gateway API.
--&gt;
&lt;h3 id=&#34;标准-standard-通道&#34;&gt;标准（Standard）通道&lt;/h3&gt;
&lt;p&gt;此发行版本主要侧重于确保现有 Beta 级别 API 定义良好且足够稳定，可以升级为 GA。
这带来了各种规范层面的澄清，以及对 status 的一些改进，从而提升与 Gateway API 交互时的整体用户体验。&lt;/p&gt;
&lt;!--
### Experimental channel
Most of the changes included in this release were limited to the experimental
channel. These include HTTPRoute timeouts, TLS config from Gateways to backends,
WebSocket support, Gateway infrastructure labels, and more. Stay tuned for a
follow up blog post that will cover each of these new features in detail.
--&gt;
&lt;h3 id=&#34;实验-experimental-通道&#34;&gt;实验（Experimental）通道&lt;/h3&gt;
&lt;p&gt;此发行版本中包含的大部分更改都限于实验通道。这些更改包括 HTTPRoute
超时、用于 Gateway 访问后端的 TLS 配置、WebSocket 支持、Gateway 基础设施的标签等等。
请继续关注后续博客，我们将详细介绍这些新特性。&lt;/p&gt;
&lt;!--
### Everything else
For a full list of the changes included in this release, please refer to the
[v1.0.0 release
notes](https://github.com/kubernetes-sigs/gateway-api/releases/tag/v1.0.0).
--&gt;
&lt;h3 id=&#34;其他内容&#34;&gt;其他内容&lt;/h3&gt;
&lt;p&gt;有关此版本中包含的所有更改的完整列表，请参阅
&lt;a href=&#34;https://github.com/kubernetes-sigs/gateway-api/releases/tag/v1.0.0&#34;&gt;v1.0.0 版本说明&lt;/a&gt;。&lt;/p&gt;
&lt;!--
## How we got here

The idea of Gateway API was initially [proposed](https://youtu.be/Ne9UJL6irXY?si=wgtC9w8PMB5ZHil2)
4 years ago at KubeCon San Diego as the next generation
of Ingress API. Since then, an incredible community has formed to develop what
has likely become the most collaborative API in Kubernetes history. Over 170
people have contributed to this API so far, and that number continues to grow.
--&gt;
&lt;h2 id=&#34;发展历程&#34;&gt;发展历程&lt;/h2&gt;
&lt;p&gt;Gateway API 的想法最初是在 4 年前的 KubeCon 圣地亚哥上作为下一代 Ingress API
被&lt;a href=&#34;https://youtu.be/Ne9UJL6irXY?si=wgtC9w8PMB5ZHil2&#34;&gt;提出&lt;/a&gt;的。
自那以后，一个令人惊叹的社区逐渐形成，开发出了可能是 Kubernetes 历史上协作性最强的 API。
迄今为止，已有超过 170 人为此 API 做出了贡献，而且这个数字还在不断增长。&lt;/p&gt;
&lt;!--
A special thank you to the 20+ [community members who agreed to take on an
official role in the
project](https://github.com/kubernetes-sigs/gateway-api/blob/main/OWNERS_ALIASES),
providing some time for reviews and sharing the load of maintaining the project!

We especially want to highlight the emeritus maintainers that played a pivotal
role in the early development of this project:
--&gt;
&lt;p&gt;特别感谢 20 多位&lt;a href=&#34;https://github.com/kubernetes-sigs/gateway-api/blob/main/OWNERS_ALIASES&#34;&gt;愿意在项目中担任正式角色&lt;/a&gt;的社区成员，
他们付出了时间进行评审并分担项目维护的负担！&lt;/p&gt;
&lt;p&gt;我们特别要强调那些在项目早期发展中起到关键作用的荣誉维护者：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/bowei&#34;&gt;Bowei Du&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/danehans&#34;&gt;Daneyon Hansen&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/hbagdi&#34;&gt;Harry Bagdi&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
## Try it out

Unlike other Kubernetes APIs, you don&#39;t need to upgrade to the latest version of
Kubernetes to get the latest version of Gateway API. As long as you&#39;re running
one of the 5 most recent minor versions of Kubernetes (1.24+), you&#39;ll be able to
get up and running with the latest version of Gateway API.

To try out the API, follow our [Getting Started
guide](https://gateway-api.sigs.k8s.io/guides/).
--&gt;
&lt;h2 id=&#34;尝试一下&#34;&gt;尝试一下&lt;/h2&gt;
&lt;p&gt;与其他 Kubernetes API 不同，你无需升级到最新版本的 Kubernetes 即可获取最新版本的
Gateway API。只要运行的是 Kubernetes 最新的 5 个次要版本之一（1.24+），
就可以使用最新版本的 Gateway API。&lt;/p&gt;
&lt;p&gt;要尝试此 API，请参照我们的&lt;a href=&#34;https://gateway-api.sigs.k8s.io/guides/&#34;&gt;入门指南&lt;/a&gt;。&lt;/p&gt;
&lt;!--
## What&#39;s next

This release is just the beginning of a much larger journey for Gateway API, and
there are still plenty of new features and new ideas in flight for future
releases of the API.
--&gt;
&lt;h2 id=&#34;下一步&#34;&gt;下一步&lt;/h2&gt;
&lt;p&gt;此版本只是 Gateway API 更长远旅程的开始，面向该 API 的未来版本，还有许多新特性和新想法正在酝酿之中。&lt;/p&gt;
&lt;!--
One of our key goals going forward is to work to stabilize and graduate other
experimental features of the API. These include [support for service
mesh](https://gateway-api.sigs.k8s.io/concepts/gamma/), additional route types
([GRPCRoute](https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1alpha2.GRPCRoute),
[TCPRoute](https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1alpha2.TCPRoute),
[TLSRoute](https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1alpha2.TLSRoute),
[UDPRoute](https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1alpha2.UDPRoute)),
and a variety of experimental features.
--&gt;
&lt;p&gt;我们未来的一个关键目标是努力稳定和升级 API 的其他实验级特性。
这些特性包括支持&lt;a href=&#34;https://gateway-api.sigs.k8s.io/concepts/gamma/&#34;&gt;服务网格&lt;/a&gt;、
额外的路由类型（&lt;a href=&#34;https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1alpha2.GRPCRoute&#34;&gt;GRPCRoute&lt;/a&gt;、
&lt;a href=&#34;https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1alpha2.TCPRoute&#34;&gt;TCPRoute&lt;/a&gt;、
&lt;a href=&#34;https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1alpha2.TLSRoute&#34;&gt;TLSRoute&lt;/a&gt;、
&lt;a href=&#34;https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1alpha2.UDPRoute&#34;&gt;UDPRoute&lt;/a&gt;）以及各种实验级特性。&lt;/p&gt;
&lt;!--
We&#39;ve also been working towards moving
[ReferenceGrant](https://gateway-api.sigs.k8s.io/api-types/referencegrant/) into
a built-in Kubernetes API that can be used for more than just Gateway API.
Within Gateway API, we&#39;ve used this resource to safely enable cross-namespace
references, and that concept is now being adopted by other SIGs. The new version
of this API will be owned by SIG Auth and will likely include at least some
modifications as it migrates to a built-in Kubernetes API.
--&gt;
&lt;p&gt;我们还致力于将 &lt;a href=&#34;https://gateway-api.sigs.k8s.io/api-types/referencegrant/&#34;&gt;ReferenceGrant&lt;/a&gt;
移入内置的 Kubernetes API 中，使其不仅仅可用于 Gateway API。在 Gateway API 中，我们使用这个资源来安全地实现跨命名空间引用，
而这个概念现在被其他 SIG 采纳。这个 API 的新版本将归 SIG Auth 所有，在移到内置的
Kubernetes API 时可能至少包含一些修改。&lt;/p&gt;
&lt;!--
### Gateway API at KubeCon + CloudNativeCon

At [KubeCon North America
(Chicago)](https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/)
and the adjacent [Contributor
Summit](https://www.kubernetes.dev/events/2023/kcsna/) there are several talks
related to Gateway API that will go into more detail on these topics. If you&#39;re
attending either of these events this year, considering adding these to your
schedule.
--&gt;
&lt;h3 id=&#34;gateway-api-现身于-kubecon-cloudnativecon&#34;&gt;Gateway API 现身于 KubeCon + CloudNativeCon&lt;/h3&gt;
&lt;p&gt;在 &lt;a href=&#34;https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/&#34;&gt;KubeCon 北美（芝加哥）&lt;/a&gt;
和同场的&lt;a href=&#34;https://www.kubernetes.dev/events/2023/kcsna/&#34;&gt;贡献者峰会&lt;/a&gt;上，
有几个与 Gateway API 相关的演讲将详细介绍这些主题。如果你今年要参加其中的一场活动，
请考虑将它们添加到你的日程安排中。&lt;/p&gt;
&lt;!--
**Contributor Summit:**

- [Lessons Learned Building a GA API with CRDs](https://sched.co/1Sp9u)
- [Conformance Profiles: Building a generic conformance test reporting framework](https://sched.co/1Sp9l)
- [Gateway API: Beyond GA](https://sched.co/1SpA9)
--&gt;
&lt;p&gt;&lt;strong&gt;贡献者峰会：&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://sched.co/1Sp9u&#34;&gt;使用 CRD 构建 GA API 的经验教训&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://sched.co/1Sp9l&#34;&gt;合规性配置文件：构建通用合规性测试报告框架&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://sched.co/1SpA9&#34;&gt;Gateway API：GA 以后&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
**KubeCon Main Event:**

- [Gateway API: The Most Collaborative API in Kubernetes History Is GA](https://sched.co/1R2qM)
--&gt;
&lt;p&gt;&lt;strong&gt;KubeCon 主要活动：&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://sched.co/1R2qM&#34;&gt;Gateway API：Kubernetes 历史上协作性最强的 API 已经正式发布&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
**KubeCon Office Hours:**

Gateway API maintainers will be holding office hours sessions at KubeCon if
you&#39;d like to discuss or brainstorm any related topics. To get the latest
updates on these sessions, join the `#sig-network-gateway-api` channel on
[Kubernetes Slack](https://slack.kubernetes.io/).
--&gt;
&lt;p&gt;&lt;strong&gt;KubeCon 办公时间：&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;如果你想就相关主题进行讨论或头脑风暴，欢迎参加 Gateway API 维护者在 KubeCon 期间举行的办公时间（Office Hours）会议。
要获取有关这些会议的最新更新，请加入 &lt;a href=&#34;https://slack.kubernetes.io/&#34;&gt;Kubernetes Slack&lt;/a&gt;
上的 &lt;code&gt;#sig-network-gateway-api&lt;/code&gt; 频道。&lt;/p&gt;
&lt;!--
## Get involved

We&#39;ve only barely scratched the surface of what&#39;s in flight with Gateway API.
There are lots of opportunities to get involved and help define the future of
Kubernetes routing APIs for both Ingress and Mesh.
--&gt;
&lt;h2 id=&#34;参与其中&#34;&gt;参与其中&lt;/h2&gt;
&lt;p&gt;我们只是初步介绍了 Gateway API 正在进行的工作。
有很多机会参与并帮助定义 Ingress 和 Mesh 的 Kubernetes 路由 API 的未来。&lt;/p&gt;
&lt;!--
If this is interesting to you, please [join us in the
community](https://gateway-api.sigs.k8s.io/contributing/) and help us build the
future of Gateway API together!
--&gt;
&lt;p&gt;如果你对此感兴趣，请&lt;a href=&#34;https://gateway-api.sigs.k8s.io/contributing/&#34;&gt;加入我们的社区&lt;/a&gt;并帮助我们共同构建
Gateway API 的未来！&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes 中 PersistentVolume 的最后阶段转换时间</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/10/23/persistent-volume-last-phase-transition-time/</link>
      <pubDate>Mon, 23 Oct 2023 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/10/23/persistent-volume-last-phase-transition-time/</guid>
      <description>
        
        
        &lt;!--
layout: blog
title: PersistentVolume Last Phase Transition Time in Kubernetes
date: 2023-10-23
slug: persistent-volume-last-phase-transition-time
--&gt;
&lt;!--
**Author:** Roman Bednář (Red Hat)
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Roman Bednář (Red Hat)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者：&lt;/strong&gt; Xin Li (DaoCloud)&lt;/p&gt;
&lt;!--
In the recent Kubernetes v1.28 release, we (SIG Storage) introduced a new alpha feature that aims to improve PersistentVolume (PV)
storage management and help cluster administrators gain better insights into the lifecycle of PVs.
With the addition of the `lastPhaseTransitionTime` field into the status of a PV,
cluster administrators are now able to track the last time a PV transitioned to a different
[phase](/docs/concepts/storage/persistent-volumes/#phase), allowing for more efficient
and informed resource management.
--&gt;
&lt;p&gt;在最近的 Kubernetes v1.28 版本中，我们（SIG Storage）引入了一项新的 Alpha 级别特性，
旨在改进 PersistentVolume（PV）存储管理并帮助集群管理员更好地了解 PV 的生命周期。
通过将 &lt;code&gt;lastPhaseTransitionTime&lt;/code&gt; 字段添加到 PV 的状态中，集群管理员现在可以跟踪
PV 上次转换到不同&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/concepts/storage/persistent-volumes/#phase&#34;&gt;阶段&lt;/a&gt;的时间，
从而实现更高效、更明智的资源管理。&lt;/p&gt;
&lt;!--
## Why do we need new PV field? {#why-new-field}

PersistentVolumes in Kubernetes play a crucial role in providing storage resources to workloads running in the cluster.
However, managing these PVs effectively can be challenging, especially when it comes
to determining the last time a PV transitioned between different phases, such as
`Pending`, `Bound` or `Released`.
Administrators often need to know when a PV was last used or transitioned to certain
phases; for instance, to implement retention policies, perform cleanup, or monitor storage health.
--&gt;
&lt;h2 id=&#34;why-new-field&#34;&gt;我们为什么需要新的 PV 字段？&lt;/h2&gt;
&lt;p&gt;Kubernetes 中的 PersistentVolume 在为集群中运行的工作负载提供存储资源方面发挥着至关重要的作用。
然而，有效管理这些 PV 可能具有挑战性，特别是在确定 PV 在不同阶段（&lt;code&gt;Pending&lt;/code&gt;、&lt;code&gt;Bound&lt;/code&gt; 或 &lt;code&gt;Released&lt;/code&gt;）之间转换的最后时间时。
管理员通常需要知道 PV 上次使用或转换到某些阶段的时间；例如，实施保留策略、执行清理或监控存储运行状况时。&lt;/p&gt;
&lt;!--
In the past, Kubernetes users have faced data loss issues when using the `Delete` retain policy and had to resort to the safer `Retain` policy.
When we planned the work to introduce the new `lastPhaseTransitionTime` field, we
wanted to provide a more generic solution that can be used for various use cases,
including manual cleanup based on the time a volume was last used or producing alerts based on phase transition times.
--&gt;
&lt;p&gt;过去，Kubernetes 用户在使用 &lt;code&gt;Delete&lt;/code&gt; 保留策略时面临数据丢失问题，不得不使用更安全的 &lt;code&gt;Retain&lt;/code&gt; 策略。
当我们计划引入新的 &lt;code&gt;lastPhaseTransitionTime&lt;/code&gt; 字段时，我们希望提供一个更通用的解决方案，
可用于各种用例，包括根据卷上次使用时间进行手动清理或根据状态转变时间生成警报。&lt;/p&gt;
&lt;!--
## How lastPhaseTransitionTime helps

Provided you&#39;ve enabled the feature gate (see [How to use it](#how-to-use-it)), the new `.status.lastPhaseTransitionTime` field of a PersistentVolume (PV)
is updated every time that PV transitions from one phase to another.
--&gt;
&lt;h2 id=&#34;lastphasetransitiontime-如何提供帮助&#34;&gt;lastPhaseTransitionTime 如何提供帮助&lt;/h2&gt;
&lt;p&gt;如果你已启用特性门控（请参阅&lt;a href=&#34;#how-to-use-it&#34;&gt;如何使用它&lt;/a&gt;），则每次 PV 从一个阶段转换到另一阶段时，
PersistentVolume（PV）的新字段 &lt;code&gt;.status.lastPhaseTransitionTime&lt;/code&gt; 都会被更新。&lt;/p&gt;
&lt;!--
Whether it&#39;s transitioning from `Pending` to `Bound`, `Bound` to `Released`, or any other phase transition, the `lastPhaseTransitionTime` will be recorded.
For newly created PVs the phase will be set to `Pending` and the `lastPhaseTransitionTime` will be recorded as well.
--&gt;
&lt;p&gt;无论是从 &lt;code&gt;Pending&lt;/code&gt; 转换到 &lt;code&gt;Bound&lt;/code&gt;、&lt;code&gt;Bound&lt;/code&gt; 到 &lt;code&gt;Released&lt;/code&gt;，还是任何其他阶段转换，都会记录 &lt;code&gt;lastPhaseTransitionTime&lt;/code&gt;。
对于新创建的 PV，其阶段将被设置为 &lt;code&gt;Pending&lt;/code&gt;，并且 &lt;code&gt;lastPhaseTransitionTime&lt;/code&gt; 也将被记录。&lt;/p&gt;
&lt;!--
This feature allows cluster administrators to:
--&gt;
&lt;p&gt;此功能允许集群管理员：&lt;/p&gt;
&lt;!--
1. Implement Retention Policies

   With the `lastPhaseTransitionTime`, administrators can now track when a PV was last used or transitioned to the `Released` phase.
   This information can be crucial for implementing retention policies to clean up resources that have been in the `Released` phase for a specific duration.
   For example, it is now trivial to write a script or a policy that deletes all PVs that have been in the `Released` phase for a week.
--&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;实施保留策略&lt;/p&gt;
&lt;p&gt;通过 &lt;code&gt;lastPhaseTransitionTime&lt;/code&gt;，管理员可以跟踪 PV 上次被使用或转换到 &lt;code&gt;Released&lt;/code&gt; 阶段的时间。
此信息对于实施保留策略、清理已在 &lt;code&gt;Released&lt;/code&gt; 阶段停留特定时长的资源至关重要。
例如，现在可以很容易地编写一个脚本或一条策略，删除已处于 &lt;code&gt;Released&lt;/code&gt; 阶段达一周的所有 PV。&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
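&lt;p&gt;上述保留策略可以用一个小脚本勾勒出来。下面是一个演示性的草稿（假设使用 bash 与 GNU &lt;code&gt;date&lt;/code&gt;；其中的 PV 列表是为演示而硬编码的，实际使用时可替换为 &lt;code&gt;kubectl get pv&lt;/code&gt; 的输出）：&lt;/p&gt;

```shell
# 演示：找出已处于 Released 阶段超过 7 天的 PV。
# 实际中可用如下命令生成 "名称 阶段 时间戳" 形式的输入：
#   kubectl get pv -o jsonpath='{range .items[*]}{.metadata.name} {.status.phase} {.status.lastPhaseTransitionTime}{"\n"}{end}'
cutoff=$(date -u -d '7 days ago' +%Y-%m-%dT%H:%M:%SZ)

stale=""
while read -r name phase ts; do
  # RFC 3339 格式的 UTC 时间戳可以按字典序直接比较
  if [ "$phase" = "Released" ] && [ "$ts" \< "$cutoff" ]; then
    stale="$stale $name"
  fi
done <<'EOF'
pv-old Released 2020-01-01T00:00:00Z
pv-recent Released 2099-01-01T00:00:00Z
pv-bound Bound 2020-01-01T00:00:00Z
EOF
stale="${stale# }"

echo "$stale"
```

&lt;p&gt;得到候选列表后，即可结合 &lt;code&gt;kubectl delete pv&lt;/code&gt; 等命令完成清理。&lt;/p&gt;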
&lt;!--
2. Monitor Storage Health

   By analyzing the phase transition times of PVs, administrators can monitor storage health more effectively.
   For example, they can identify PVs that have been in the `Pending` phase for an unusually long time, which may indicate underlying issues with the storage provisioner.
--&gt;
&lt;ol start=&#34;2&#34;&gt;
&lt;li&gt;
&lt;p&gt;监控存储运行状况&lt;/p&gt;
&lt;p&gt;通过分析 PV 的阶段转换时间，管理员可以更有效地监控存储运行状况。
例如，他们可以识别处于 &lt;code&gt;Pending&lt;/code&gt; 阶段时间异常长的 PV，这可能表明存储制备器（provisioner）存在潜在问题。&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
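&lt;p&gt;类似地，可以根据 &lt;code&gt;lastPhaseTransitionTime&lt;/code&gt; 计算 PV 在当前阶段已停留的时长，超过阈值时触发告警。下面是一个演示性的草稿（假设使用 GNU &lt;code&gt;date&lt;/code&gt;；时间戳与“当前”时间均为演示而硬编码）：&lt;/p&gt;

```shell
# 演示：计算 PV 在当前阶段已停留的小时数。
# 实际中时间戳可通过如下命令获取（其中 <pv-name> 为占位符）：
#   kubectl get pv <pv-name> -o jsonpath='{.status.lastPhaseTransitionTime}'
transition="2023-10-01T00:00:00Z"              # 示例：上次阶段转换时间
now=$(date -u -d "2023-10-01T06:30:00Z" +%s)   # 演示用固定时间；实际使用 date -u +%s
ts=$(date -u -d "$transition" +%s)
hours=$(( (now - ts) / 3600 ))
echo "PV 已在当前阶段停留 ${hours} 小时"
```

&lt;p&gt;例如，若某 PV 处于 &lt;code&gt;Pending&lt;/code&gt; 阶段的小时数超出预期，即可认为存储制备环节可能存在问题。&lt;/p&gt;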
&lt;!--
## How to use it

The `lastPhaseTransitionTime` field is alpha starting from Kubernetes v1.28, so it requires
the `PersistentVolumeLastPhaseTransitionTime` feature gate to be enabled.
--&gt;
&lt;h2 id=&#34;如何使用它&#34;&gt;如何使用它&lt;/h2&gt;
&lt;p&gt;从 Kubernetes v1.28 开始，&lt;code&gt;lastPhaseTransitionTime&lt;/code&gt; 为 Alpha 特性字段，因此需要启用
&lt;code&gt;PersistentVolumeLastPhaseTransitionTime&lt;/code&gt; 特性门控。&lt;/p&gt;
&lt;!--
If you want to test the feature whilst it&#39;s alpha, you need to enable this feature gate on the `kube-controller-manager` and the `kube-apiserver`.

Use the `--feature-gates` command line argument:
--&gt;
&lt;p&gt;如果你想在该特性处于 Alpha 阶段时对其进行测试，则需要在 &lt;code&gt;kube-controller-manager&lt;/code&gt;
和 &lt;code&gt;kube-apiserver&lt;/code&gt; 上启用此特性门控。&lt;/p&gt;
&lt;p&gt;使用 &lt;code&gt;--feature-gates&lt;/code&gt; 命令行参数：&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;--feature-gates&lt;span style=&#34;color:#666&#34;&gt;=&lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;...,PersistentVolumeLastPhaseTransitionTime=true&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
Keep in mind that the feature enablement does not have immediate effect; the new field will be populated whenever a PV is updated and transitions between phases.
Administrators can then access the new field through the PV status, which can be retrieved using standard Kubernetes API calls or through Kubernetes client libraries.
--&gt;
&lt;p&gt;请记住，该特性启用后不会立即生效；新字段会在 PV 被更新并发生阶段转换时才被填充。
然后，管理员可以通过查看 PV 状态访问新字段，此状态可以使用标准 Kubernetes API
调用或通过 Kubernetes 客户端库进行检索。&lt;/p&gt;
&lt;!--
Here is an example of how to retrieve the `lastPhaseTransitionTime` for a specific PV using the `kubectl` command-line tool:
--&gt;
&lt;p&gt;以下示例展示了如何使用 &lt;code&gt;kubectl&lt;/code&gt; 命令行工具检索特定 PV 的 &lt;code&gt;lastPhaseTransitionTime&lt;/code&gt;：&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;kubectl get pv &amp;lt;pv-name&amp;gt; -o &lt;span style=&#34;color:#b8860b&#34;&gt;jsonpath&lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;=&lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;{.status.lastPhaseTransitionTime}&amp;#39;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
## Going forward

This feature was initially introduced as an alpha feature, behind a feature gate that is disabled by default.
During the alpha phase, we (Kubernetes SIG Storage) will collect feedback from the end user community and address any issues or improvements identified.

Once sufficient feedback has been received, or no complaints are received, the feature can move to beta.
The beta phase will allow us to further validate the implementation and ensure its stability.
--&gt;
&lt;h2 id=&#34;未来发展&#34;&gt;未来发展&lt;/h2&gt;
&lt;p&gt;此特性最初是作为 Alpha 特性引入的，位于默认情况下禁用的特性门控之下。
在 Alpha 阶段，我们（Kubernetes SIG Storage）将收集最终用户的反馈并解决发现的任何问题或改进。&lt;/p&gt;
&lt;p&gt;一旦收到足够的反馈，或者没有收到投诉，该特性就可以进入 Beta 阶段。
Beta 阶段将使我们能够进一步验证实现并确保其稳定性。&lt;/p&gt;
&lt;!--
At least two Kubernetes releases will happen between the release where this field graduates
to beta and the release that graduates the field to general availability (GA). That means that
the earliest release where this field could be generally available is Kubernetes 1.32,
likely to be scheduled for early 2025.
--&gt;
&lt;p&gt;在该字段升级到 Beta 的版本与其升级为正式发布（GA）的版本之间，至少会间隔两个 Kubernetes 版本。
这意味着该字段 GA 的最早版本是 Kubernetes 1.32，可能计划于 2025 年初发布。&lt;/p&gt;
&lt;!--
## Getting involved

We always welcome new contributors so if you would like to get involved you can
join our [Kubernetes Storage Special-Interest-Group](https://github.com/kubernetes/community/tree/master/sig-storage) (SIG).
--&gt;
&lt;h2 id=&#34;欢迎参与&#34;&gt;欢迎参与&lt;/h2&gt;
&lt;p&gt;我们始终欢迎新的贡献者，因此如果你想参与其中，可以加入我们的
&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-storage&#34;&gt;Kubernetes 存储特殊兴趣小组&lt;/a&gt;（SIG）。&lt;/p&gt;
&lt;!--
If you would like to share feedback, you can do so on our
[public Slack channel](https://app.slack.com/client/T09NY5SBT/C09QZFCE5).
If you&#39;re not already part of that Slack workspace, you can visit https://slack.k8s.io/ for an invitation.
--&gt;
&lt;p&gt;如果你想分享反馈，可以在我们的 &lt;a href=&#34;https://app.slack.com/client/T09NY5SBT/C09QZFCE5&#34;&gt;公共 Slack 频道&lt;/a&gt;上分享。
如果你尚未加入 Slack 工作区，可以访问 &lt;a href=&#34;https://slack.k8s.io/&#34;&gt;https://slack.k8s.io/&lt;/a&gt; 获取邀请。&lt;/p&gt;
&lt;!--
Special thanks to all the contributors that provided great reviews, shared valuable insight and helped implement this feature (alphabetical order):
--&gt;
&lt;p&gt;特别感谢所有提供精彩评论、分享宝贵意见并帮助实现此特性的贡献者（按字母顺序排列）：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Han Kang (&lt;a href=&#34;https://github.com/logicalhan&#34;&gt;logicalhan&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Jan Šafránek (&lt;a href=&#34;https://github.com/jsafrane&#34;&gt;jsafrane&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Jordan Liggitt (&lt;a href=&#34;https://github.com/liggitt&#34;&gt;liggitt&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Kiki (&lt;a href=&#34;https://github.com/carlory&#34;&gt;carlory&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Michelle Au (&lt;a href=&#34;https://github.com/msau42&#34;&gt;msau42&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Tim Bannister (&lt;a href=&#34;https://github.com/sftim&#34;&gt;sftim&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Wojciech Tyczynski (&lt;a href=&#34;https://github.com/wojtek-t&#34;&gt;wojtek-t&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Xing Yang (&lt;a href=&#34;https://github.com/xing-yang&#34;&gt;xing-yang&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

      </description>
    </item>
    
    <item>
      <title>2023 中国 Kubernetes 贡献者峰会简要回顾</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/10/20/kcs-shanghai/</link>
      <pubDate>Fri, 20 Oct 2023 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/10/20/kcs-shanghai/</guid>
      <description>
        
        
        &lt;!--
layout: blog
title: &#34;A Quick Recap of 2023 China Kubernetes Contributor Summit&#34;
slug: kcs-shanghai
date: 2023-10-20
canonicalUrl: https://www.kubernetes.dev/blog/2023/10/20/kcs-shanghai/
--&gt;
&lt;!--
**Author:** Paco Xu and Michael Yao (DaoCloud)

On September 26, 2023, the first day of
[KubeCon + CloudNativeCon + Open Source Summit China 2023](https://www.lfasiallc.com/kubecon-cloudnativecon-open-source-summit-china/),
nearly 50 contributors gathered in Shanghai for the Kubernetes Contributor Summit.
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Paco Xu 和 Michael Yao (DaoCloud)&lt;/p&gt;
&lt;p&gt;2023 年 9 月 26 日，即
&lt;a href=&#34;https://www.lfasiallc.com/kubecon-cloudnativecon-open-source-summit-china/&#34;&gt;KubeCon + CloudNativeCon + Open Source Summit China 2023&lt;/a&gt;
第一天，近 50 位社区贡献者济济一堂，在上海聚首 Kubernetes 贡献者峰会。&lt;/p&gt;
&lt;!--


&lt;figure&gt;
    &lt;img src=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2023/10/20/kcs-shanghai/kcs04.jpeg&#34;
         alt=&#34;All participants in the 2023 Kubernetes Contributor Summit&#34;/&gt; &lt;figcaption&gt;
            &lt;p&gt;All participants in the 2023 Kubernetes Contributor Summit&lt;/p&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;
--&gt;


&lt;figure&gt;
    &lt;img src=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2023/10/20/kcs-shanghai/kcs04.jpeg&#34;
         alt=&#34;2023 Kubernetes 贡献者峰会与会者集体合影&#34;/&gt; &lt;figcaption&gt;
            &lt;p&gt;2023 Kubernetes 贡献者峰会与会者集体合影&lt;/p&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;
&lt;!--
This marked the first in-person offline gathering held in China after three years of the pandemic.

## A joyful meetup

The event began with welcome speeches from [Kevin Wang](https://github.com/kevin-wangzefeng) from Huawei Cloud,
one of the co-chairs of KubeCon, and [Puja](https://github.com/puja108) from Giant Swarm.
--&gt;
&lt;p&gt;这是疫情三年之后，首次在中国本土召开的面对面线下聚会。&lt;/p&gt;
&lt;h2 id=&#34;开心遇见&#34;&gt;开心遇见&lt;/h2&gt;
&lt;p&gt;首先是本次 KubeCon 活动的联席主席、来自华为云的 &lt;a href=&#34;https://github.com/kevin-wangzefeng&#34;&gt;Kevin Wang&lt;/a&gt;
和来自 Giant Swarm 的 &lt;a href=&#34;https://github.com/puja108&#34;&gt;Puja&lt;/a&gt; 做了欢迎致辞。&lt;/p&gt;
&lt;!--
Following the opening remarks, the contributors introduced themselves briefly. Most attendees were from China,
while some contributors had made the journey from Europe and the United States specifically for the conference.
Technical experts from companies such as Microsoft, Intel, Huawei, as well as emerging forces like DaoCloud,
were present. Laughter and cheerful voices filled the room, regardless of whether English was spoken with
European or American accents or if conversations were carried out in authentic Chinese language. This created
an atmosphere of comfort, joy, respect, and anticipation. Past contributions brought everyone closer, and
mutual recognition and accomplishments made this offline gathering possible.
--&gt;
&lt;p&gt;随后在座的几十位贡献者分别做了简单的自我介绍，80% 以上的与会者来自中国，还有一些贡献者专程从欧美飞到上海参会。
其中不乏来自微软、Intel、华为的技术大咖，也有来自 DaoCloud 这样的新锐中坚力量。
欢声笑语齐聚一堂，无论是操着欧美口音的英语，还是地道的中国话，都在诠释着舒心与欢畅，表达着尊敬和憧憬。
是曾经做出的贡献拉近了彼此，是互相的肯定和成就赋予了这次线下聚会的可能。&lt;/p&gt;
&lt;!--


&lt;figure&gt;
    &lt;img src=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2023/10/20/kcs-shanghai/kcs06.jpeg&#34;
         alt=&#34;Face to face meeting in Shanghai&#34;/&gt; &lt;figcaption&gt;
            &lt;p&gt;Face to face meeting in Shanghai&lt;/p&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;
--&gt;


&lt;figure&gt;
    &lt;img src=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2023/10/20/kcs-shanghai/kcs06.jpeg&#34;
         alt=&#34;Face to face meeting in Shanghai&#34;/&gt; &lt;figcaption&gt;
            &lt;p&gt;Face to face meeting in Shanghai&lt;/p&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;
&lt;!--
The attending contributors were no longer just GitHub IDs; they transformed into vivid faces.
From sitting together and capturing group photos to attempting to identify &#34;Who is who,&#34;
a loosely connected collective emerged. This team structure, although loosely knit and free-spirited,
was established to pursue shared dreams.

As the saying goes, &#34;You reap what you sow.&#34; Each effort has been diligently documented within
the Kubernetes community contributions. Regardless of the passage of time, the community will
not erase those shining traces. Brilliance can be found in your PRs, issues, or comments.
It can also be seen in the smiling faces captured in meetup photos or heard through stories
passed down among contributors.
--&gt;
&lt;p&gt;与会的贡献者不再是简单的 GitHub ID，而是进阶为一个个鲜活的面孔，
从静坐一堂，到合照留影，到寻觅彼此辨别 Who is Who 的那一刻起，我们事实上已形成了一个松散的集体。
这个 team 结构松散、自由开放，却是为了追逐梦想而成立。&lt;/p&gt;
&lt;p&gt;一分耕耘一分收获，每一份努力都已清晰地记录在 Kubernetes 社区贡献中。
无论时光如何流逝，社区中不会抹去那些发光的痕迹，璀璨可能是你的 PR、Issue 或 comments，
也可能是某次 Meetup 的合影笑脸，还可能是贡献者口口相传的故事。&lt;/p&gt;
&lt;!--
## Technical sharing and discussions

Next, there were three technical sharing sessions:

- [sig-multi-cluster](https://github.com/kubernetes/community/blob/master/sig-multicluster/README.md):
  [Hongcai Ren](https://github.com/RainbowMango), a maintainer of Karmada, provided an introduction to
  the responsibilities and roles of this SIG. Their focus is on designing, discussing, implementing,
  and maintaining APIs, tools, and documentation related to multi-cluster management.
  Cluster Federation, one of Karmada&#39;s core concepts, is also part of their work.
--&gt;
&lt;h2 id=&#34;技术分享和讨论&#34;&gt;技术分享和讨论&lt;/h2&gt;
&lt;p&gt;接下来是 3 个技术分享：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/kubernetes/community/blob/master/sig-multicluster/README.md&#34;&gt;sig-multi-cluster&lt;/a&gt;：
Karmada 的维护者 &lt;a href=&#34;https://github.com/RainbowMango&#34;&gt;Hongcai Ren&lt;/a&gt; 介绍了这个 SIG 的职责和作用。
这个 SIG 负责设计、讨论、实现和维护多集群管理相关的 API、工具和文档。
其中涉及的 Cluster Federation 也是 Karmada 的核心概念之一。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
- [helmfile](https://github.com/helmfile/helmfile): [yxxhero](https://github.com/yxxhero)
  from [GitLab](https://gitlab.cn/) presented how to deploy Kubernetes manifests declaratively,
  customize configurations, and leverage the latest features of Helm, including Helmfile.

- [sig-scheduling](https://github.com/kubernetes/community/blob/master/sig-scheduling/README.md):
  [william-wang](https://github.com/william-wang) from Huawei Cloud shared the recent updates and
  future plans of SIG Scheduling. This SIG is responsible for designing, developing, and testing
  components related to Pod scheduling.
--&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/helmfile/helmfile&#34;&gt;helmfile&lt;/a&gt;：来自&lt;a href=&#34;https://gitlab.cn/&#34;&gt;极狐 GitLab&lt;/a&gt; 的
&lt;a href=&#34;https://github.com/yxxhero&#34;&gt;yxxhero&lt;/a&gt; 介绍了如何声明式部署 Kubernetes 清单，如何自定义配置，
如何使用 Helm 的最新特性 Helmfile 等内容。&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/kubernetes/community/blob/master/sig-scheduling/README.md&#34;&gt;sig-scheduling&lt;/a&gt;：
来自华为云的 &lt;a href=&#34;https://github.com/william-wang&#34;&gt;william-wang&lt;/a&gt; 介绍了
&lt;a href=&#34;https://github.com/kubernetes/community/blob/master/sig-scheduling/README.md&#34;&gt;SIG Scheduling&lt;/a&gt;
最近更新的特性以及未来的规划。SIG Scheduling 负责设计、开发和测试 Pod 调度相关的组件。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--


&lt;figure&gt;
    &lt;img src=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2023/10/20/kcs-shanghai/kcs03.jpeg&#34;
         alt=&#34;A technical session about sig-multi-cluster&#34;/&gt; &lt;figcaption&gt;
            &lt;p&gt;A technical session about sig-multi-cluster&lt;/p&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;
--&gt;


&lt;figure&gt;
    &lt;img src=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2023/10/20/kcs-shanghai/kcs03.jpeg&#34;
         alt=&#34;有关 sig-multi-cluster 的技术主题演讲&#34;/&gt; &lt;figcaption&gt;
            &lt;p&gt;有关 sig-multi-cluster 的技术主题演讲&lt;/p&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;
&lt;!--
Following the sessions, a video featuring a call for contributors by [Sergey Kanzhelev](https://github.com/SergeyKanzhelev),
the SIG-Node Chair, was played. The purpose was to encourage more contributors to join the Kubernetes community,
with a special emphasis on the popular SIG-Node.

Lastly, Kevin hosted an Unconference collective discussion session covering topics such as
multi-cluster management, scheduling, elasticity, AI, and more. For detailed minutes of
the Unconference meeting, please refer to &lt;https://docs.qq.com/doc/DY3pLWklzQkhjWHNT&gt;.
--&gt;
&lt;p&gt;随后播放了来自 SIG-Node Chair &lt;a href=&#34;https://github.com/SergeyKanzhelev&#34;&gt;Sergey Kanzhelev&lt;/a&gt;
的贡献者招募视频，希望更多贡献者参与到 Kubernetes 社区，特别是社区热门的 SIG-Node 方向。&lt;/p&gt;
&lt;p&gt;最后，Kevin 主持了 Unconference 的集体讨论活动，主要涉及到多集群、调度、弹性、AI 等方向。
有关 Unconference 的会议纪要，请参阅 &lt;a href=&#34;https://docs.qq.com/doc/DY3pLWklzQkhjWHNT&#34;&gt;https://docs.qq.com/doc/DY3pLWklzQkhjWHNT&lt;/a&gt;。&lt;/p&gt;
&lt;!--
## China&#39;s contributor statistics

The contributor summit took place in Shanghai, with 90% of the attendees being Chinese.
Within the Cloud Native Computing Foundation (CNCF) ecosystem, contributions from China have been steadily increasing. Currently:

- Chinese contributors account for 9% of the total.
- Contributions from China make up 11.7% of the overall volume.
- China ranks second globally in terms of contributions.
--&gt;
&lt;h2 id=&#34;中国贡献者数据&#34;&gt;中国贡献者数据&lt;/h2&gt;
&lt;p&gt;本次贡献者峰会在上海举办，有 90% 的与会者为华人。而在 CNCF 生态体系中，来自中国的贡献数据也在持续增长，目前：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;中国贡献者占比 9%&lt;/li&gt;
&lt;li&gt;中国贡献量占比 11.7%&lt;/li&gt;
&lt;li&gt;全球贡献排名第 2&lt;/li&gt;
&lt;/ul&gt;

&lt;div class=&#34;alert alert-info&#34; role=&#34;alert&#34;&gt;&lt;h4 class=&#34;alert-heading&#34;&gt;说明：&lt;/h4&gt;&lt;!--
The data is from KubeCon keynotes by Chris Aniszczyk, CTO of Cloud Native Computing Foundation,
on September 26, 2023. This probably understates Chinese contributions. A lot of Chinese contributors
use VPNs and may not show up as being from China in the stats accurately.
--&gt;
&lt;p&gt;以上数据来自 CNCF 首席技术官 Chris Aniszczyk 在 2023 年 9 月 26 日 KubeCon 的主题演讲。
另外，由于大量中国贡献者通过 VPN 访问社区，在统计中可能未被准确识别为来自中国，因此以上数据可能低估了中国的实际贡献。&lt;/p&gt;&lt;/div&gt;

&lt;!--
The Kubernetes Contributor Summit is an inclusive meetup that welcomes all community contributors, including:

- New Contributors
- Current Contributors
  - docs
  - code
  - community management
- Subproject members
- Members of Special Interest Group (SIG) / Working Group (WG)
- Active Contributors
- Casual Contributors
--&gt;
&lt;p&gt;Kubernetes 贡献者峰会是一个自由开放的 Meetup，欢迎社区所有贡献者参与：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;新人&lt;/li&gt;
&lt;li&gt;老兵
&lt;ul&gt;
&lt;li&gt;文档&lt;/li&gt;
&lt;li&gt;代码&lt;/li&gt;
&lt;li&gt;社区管理&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;子项目 Owner 和参与者&lt;/li&gt;
&lt;li&gt;特别兴趣小组（SIG）或工作小组（WG）人员&lt;/li&gt;
&lt;li&gt;活跃的贡献者&lt;/li&gt;
&lt;li&gt;临时贡献者&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
## Acknowledgments

We would like to express our gratitude to the organizers of this event:

- [Kevin Wang](https://github.com/kevin-wangzefeng), the co-chair of KubeCon and the lead of the kubernetes contributor summit.
- [Paco Xu](https://github.com/pacoxu), who actively coordinated the venue, meals, invited contributors from both China and
  international sources, and established WeChat groups to collect agenda topics. They also shared details of the event
  before and after its occurrence through [pre and post announcements](https://github.com/kubernetes/community/issues/7510).
- [Mengjiao Liu](https://github.com/mengjiao-liu), who was responsible for organizing, coordinating,
  and facilitating various matters related to the summit.
--&gt;
&lt;h2 id=&#34;致谢&#34;&gt;致谢&lt;/h2&gt;
&lt;p&gt;感谢本次活动的组织者：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/kevin-wangzefeng&#34;&gt;Kevin Wang&lt;/a&gt; 是本次 KubeCon 活动的联席主席，也是贡献者峰会的负责人&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/pacoxu&#34;&gt;Paco Xu&lt;/a&gt; 积极联络场地餐食，联系和邀请国内外贡献者，建立微信群征集议题，
&lt;a href=&#34;https://github.com/kubernetes/community/issues/7510&#34;&gt;会前会后公示活动细节&lt;/a&gt;等&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/mengjiao-liu&#34;&gt;Mengjiao Liu&lt;/a&gt; 负责组织协调和联络事宜&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
We extend our appreciation to all the contributors who attended the China Kubernetes Contributor Summit in Shanghai.
Your dedication and commitment to the Kubernetes community are invaluable.
Together, we continue to push the boundaries of cloud native technology and shape the future of this ecosystem.
--&gt;
&lt;p&gt;我们衷心感谢所有参加在上海举办的中国 Kubernetes 贡献者峰会的贡献者们。
你们对 Kubernetes 社区的奉献和承诺是无价之宝。
让我们携手共进，继续推动云原生技术的边界，塑造这个生态系统的未来。&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>CRI-O 正迁移至 pkgs.k8s.io</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/10/10/cri-o-community-package-infrastructure/</link>
      <pubDate>Tue, 10 Oct 2023 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/10/10/cri-o-community-package-infrastructure/</guid>
      <description>
        
        
        &lt;!--
layout: blog
title: &#34;CRI-O is moving towards pkgs.k8s.io&#34;
date: 2023-10-10
slug: cri-o-community-package-infrastructure
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Sascha Grunert&lt;/p&gt;
&lt;!--
**Author:** Sascha Grunert
--&gt;
&lt;p&gt;&lt;strong&gt;译者&lt;/strong&gt;：Wilson Wu (DaoCloud)&lt;/p&gt;
&lt;!--
The Kubernetes community [recently announced](/blog/2023/08/31/legacy-package-repository-deprecation/) that their legacy package repositories are frozen, and now they moved to [introduced community-owned package repositories](/blog/2023/08/15/pkgs-k8s-io-introduction) powered by the [OpenBuildService (OBS)](https://build.opensuse.org/project/subprojects/isv:kubernetes). CRI-O has a long history of utilizing [OBS for their package builds](https://github.com/cri-o/cri-o/blob/e292f17/install.md#install-packaged-versions-of-cri-o), but all of the packaging efforts have been done manually so far.
--&gt;
&lt;p&gt;Kubernetes 社区&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/08/31/legacy-package-repository-deprecation/&#34;&gt;最近宣布&lt;/a&gt;旧的软件包仓库已被冻结，
现在这些软件包将被迁移到由 &lt;a href=&#34;https://build.opensuse.org/project/subprojects/isv:kubernetes&#34;&gt;OpenBuildService（OBS）&lt;/a&gt;
提供支持的&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2023/08/15/pkgs-k8s-io-introduction&#34;&gt;社区自治软件包仓库&lt;/a&gt;中。
很久以来，CRI-O 一直在利用 &lt;a href=&#34;https://github.com/cri-o/cri-o/blob/e292f17/install.md#install-packaged-versions-of-cri-o&#34;&gt;OBS 进行软件包构建&lt;/a&gt;，
但到目前为止，所有打包工作都是手动完成的。&lt;/p&gt;
&lt;!--
The CRI-O community absolutely loves Kubernetes, which means that they&#39;re delighted to announce that:
--&gt;
&lt;p&gt;CRI-O 社区非常喜欢 Kubernetes，这意味着他们很高兴地宣布：&lt;/p&gt;
&lt;!--
**All future CRI-O packages will be shipped as part of the officially supported Kubernetes infrastructure hosted on pkgs.k8s.io!**
--&gt;
&lt;p&gt;&lt;strong&gt;所有未来的 CRI-O 包都将作为在 pkgs.k8s.io 上托管的官方支持的 Kubernetes 基础设施的一部分提供！&lt;/strong&gt;&lt;/p&gt;
&lt;!--
There will be a deprecation phase for the existing packages, which is currently being [discussed in the CRI-O community](https://github.com/cri-o/cri-o/discussions/7315). The new infrastructure will only support releases of CRI-O `&gt;= v1.28.2` as well as release branches newer than `release-1.28`.
--&gt;
&lt;p&gt;现有软件包将进入一个弃用阶段，目前正在
&lt;a href=&#34;https://github.com/cri-o/cri-o/discussions/7315&#34;&gt;CRI-O 社区中讨论&lt;/a&gt;。
新的基础设施将仅支持 CRI-O &lt;code&gt;&amp;gt;= v1.28.2&lt;/code&gt; 的版本以及比 &lt;code&gt;release-1.28&lt;/code&gt; 新的版本分支。&lt;/p&gt;
&lt;!--
## How to use the new packages
--&gt;
&lt;h2 id=&#34;how-to-use-the-new-packages&#34;&gt;如何使用新软件包&lt;/h2&gt;
&lt;!--
In the same way as the Kubernetes community, CRI-O provides `deb` and `rpm` packages as part of a dedicated subproject in OBS, called [`isv:kubernetes:addons:cri-o`](https://build.opensuse.org/project/show/isv:kubernetes:addons:cri-o). This project acts as an umbrella and provides `stable` (for CRI-O tags) as well as `prerelease` (for CRI-O `release-1.y` and `main` branches) package builds.
--&gt;
&lt;p&gt;与 Kubernetes 社区一样，CRI-O 提供 &lt;code&gt;deb&lt;/code&gt; 和 &lt;code&gt;rpm&lt;/code&gt; 软件包作为 OBS 中专用子项目的一部分，
被称为 &lt;a href=&#34;https://build.opensuse.org/project/show/isv:kubernetes:addons:cri-o&#34;&gt;&lt;code&gt;isv:kubernetes:addons:cri-o&lt;/code&gt;&lt;/a&gt;。
这个项目起到伞形（umbrella）项目的作用，提供 &lt;code&gt;stable&lt;/code&gt;（针对 CRI-O 的 Git 标签）以及 &lt;code&gt;prerelease&lt;/code&gt;（针对 CRI-O &lt;code&gt;release-1.y&lt;/code&gt; 和 &lt;code&gt;main&lt;/code&gt; 分支）的软件包构建。&lt;/p&gt;
&lt;!--
**Stable Releases:**
--&gt;
&lt;p&gt;&lt;strong&gt;稳定版本：&lt;/strong&gt;&lt;/p&gt;
&lt;!--
- [`isv:kubernetes:addons:cri-o:stable`](https://build.opensuse.org/project/show/isv:kubernetes:addons:cri-o:stable): Stable Packages
  - [`isv:kubernetes:addons:cri-o:stable:v1.29`](https://build.opensuse.org/project/show/isv:kubernetes:addons:cri-o:stable:v1.29): `v1.29.z` tags
  - [`isv:kubernetes:addons:cri-o:stable:v1.28`](https://build.opensuse.org/project/show/isv:kubernetes:addons:cri-o:stable:v1.28): `v1.28.z` tags
--&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://build.opensuse.org/project/show/isv:kubernetes:addons:cri-o:stable&#34;&gt;&lt;code&gt;isv:kubernetes:addons:cri-o:stable&lt;/code&gt;&lt;/a&gt;：稳定软件包
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://build.opensuse.org/project/show/isv:kubernetes:addons:cri-o:stable:v1.29&#34;&gt;&lt;code&gt;isv:kubernetes:addons:cri-o:stable:v1.29&lt;/code&gt;&lt;/a&gt;：&lt;code&gt;v1.29.z&lt;/code&gt; 标记&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://build.opensuse.org/project/show/isv:kubernetes:addons:cri-o:stable:v1.28&#34;&gt;&lt;code&gt;isv:kubernetes:addons:cri-o:stable:v1.28&lt;/code&gt;&lt;/a&gt;：&lt;code&gt;v1.28.z&lt;/code&gt; 标记&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
**Prereleases:**
--&gt;
&lt;p&gt;&lt;strong&gt;预发布版本：&lt;/strong&gt;&lt;/p&gt;
&lt;!--
- [`isv:kubernetes:addons:cri-o:prerelease`](https://build.opensuse.org/project/show/isv:kubernetes:addons:cri-o:prerelease): Prerelease Packages
  - [`isv:kubernetes:addons:cri-o:prerelease:main`](https://build.opensuse.org/project/show/isv:kubernetes:addons:cri-o:prerelease:main): [`main`](https://github.com/cri-o/cri-o/commits/main) branch
  - [`isv:kubernetes:addons:cri-o:prerelease:v1.29`](https://build.opensuse.org/project/show/isv:kubernetes:addons:cri-o:prerelease:v1.29): [`release-1.29`](https://github.com/cri-o/cri-o/commits/release-1.29) branch
  - [`isv:kubernetes:addons:cri-o:prerelease:v1.28`](https://build.opensuse.org/project/show/isv:kubernetes:addons:cri-o:prerelease:v1.28): [`release-1.28`](https://github.com/cri-o/cri-o/commits/release-1.28) branch
--&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://build.opensuse.org/project/show/isv:kubernetes:addons:cri-o:prerelease&#34;&gt;&lt;code&gt;isv:kubernetes:addons:cri-o:prerelease&lt;/code&gt;&lt;/a&gt;：预发布软件包
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://build.opensuse.org/project/show/isv:kubernetes:addons:cri-o:prerelease:main&#34;&gt;&lt;code&gt;isv:kubernetes:addons:cri-o:prerelease:main&lt;/code&gt;&lt;/a&gt;：
&lt;a href=&#34;https://github.com/cri-o/cri-o/commits/main&#34;&gt;&lt;code&gt;main&lt;/code&gt;&lt;/a&gt; 分支&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://build.opensuse.org/project/show/isv:kubernetes:addons:cri-o:prerelease:v1.29&#34;&gt;&lt;code&gt;isv:kubernetes:addons:cri-o:prerelease:v1.29&lt;/code&gt;&lt;/a&gt;：
&lt;a href=&#34;https://github.com/cri-o/cri-o/commits/release-1.29&#34;&gt;&lt;code&gt;release-1.29&lt;/code&gt;&lt;/a&gt; 分支&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://build.opensuse.org/project/show/isv:kubernetes:addons:cri-o:prerelease:v1.28&#34;&gt;&lt;code&gt;isv:kubernetes:addons:cri-o:prerelease:v1.28&lt;/code&gt;&lt;/a&gt;：
&lt;a href=&#34;https://github.com/cri-o/cri-o/commits/release-1.28&#34;&gt;&lt;code&gt;release-1.28&lt;/code&gt;&lt;/a&gt; 分支&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
There are no stable releases available in the v1.29 repository yet, because v1.29.0 will be released in December. The CRI-O community will also **not** support release branches older than `release-1.28`, because there have been CI requirements merged into `main` which could be only backported to `release-1.28` with appropriate efforts.
--&gt;
&lt;p&gt;v1.29 仓库中尚无可用的稳定版本，因为 v1.29.0 将于 12 月发布。
CRI-O 社区也&lt;strong&gt;不&lt;/strong&gt;支持早于 &lt;code&gt;release-1.28&lt;/code&gt; 的版本分支，
因为已经有 CI 需求合并到 &lt;code&gt;main&lt;/code&gt; 中，只有通过适当的努力才能向后移植到 &lt;code&gt;release-1.28&lt;/code&gt;。&lt;/p&gt;
&lt;!--
For example, If an end-user would like to install the latest available version of the CRI-O `main` branch, then they can add the repository in the same way as they do for Kubernetes.
--&gt;
&lt;p&gt;例如，如果最终用户想要安装 CRI-O &lt;code&gt;main&lt;/code&gt; 分支的最新可用版本，
那么他们可以按照与 Kubernetes 相同的方式添加仓库。&lt;/p&gt;
&lt;!--
### `rpm` Based Distributions
--&gt;
&lt;h3 id=&#34;rpm-based-distributions&#34;&gt;基于 &lt;code&gt;rpm&lt;/code&gt; 的发行版 &lt;/h3&gt;
&lt;!--
For `rpm` based distributions, you can run the following commands as a `root` user to install CRI-O together with Kubernetes:
--&gt;
&lt;p&gt;对于基于 &lt;code&gt;rpm&lt;/code&gt; 的发行版，您可以以 &lt;code&gt;root&lt;/code&gt;
用户身份运行以下命令来将 CRI-O 与 Kubernetes 一起安装：&lt;/p&gt;
&lt;!--
#### Add the Kubernetes repo
--&gt;
&lt;h4 id=&#34;add-the-kubernetes-repo&#34;&gt;添加 Kubernetes 仓库 &lt;/h4&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-bash&#34; data-lang=&#34;bash&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;cat &lt;span style=&#34;color:#b44&#34;&gt;&amp;lt;&amp;lt;EOF | tee /etc/yum.repos.d/kubernetes.repo
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;[kubernetes]
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;name=Kubernetes
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;baseurl=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;enabled=1
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;gpgcheck=1
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;gpgkey=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/repodata/repomd.xml.key
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;EOF&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
#### Add the CRI-O repo
--&gt;
&lt;h4 id=&#34;add-the-cri-o-repo&#34;&gt;添加 CRI-O 仓库 &lt;/h4&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-bash&#34; data-lang=&#34;bash&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;cat &lt;span style=&#34;color:#b44&#34;&gt;&amp;lt;&amp;lt;EOF | tee /etc/yum.repos.d/cri-o.repo
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;[cri-o]
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;name=CRI-O
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;baseurl=https://pkgs.k8s.io/addons:/cri-o:/prerelease:/main/rpm/
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;enabled=1
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;gpgcheck=1
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;gpgkey=https://pkgs.k8s.io/addons:/cri-o:/prerelease:/main/rpm/repodata/repomd.xml.key
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;EOF&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
#### Install official package dependencies
--&gt;
&lt;h4 id=&#34;install-official-package-dependencies&#34;&gt;安装官方包依赖 &lt;/h4&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-bash&#34; data-lang=&#34;bash&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;dnf install -y &lt;span style=&#34;color:#b62;font-weight:bold&#34;&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b62;font-weight:bold&#34;&gt;&lt;/span&gt;    conntrack &lt;span style=&#34;color:#b62;font-weight:bold&#34;&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b62;font-weight:bold&#34;&gt;&lt;/span&gt;    container-selinux &lt;span style=&#34;color:#b62;font-weight:bold&#34;&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b62;font-weight:bold&#34;&gt;&lt;/span&gt;    ebtables &lt;span style=&#34;color:#b62;font-weight:bold&#34;&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b62;font-weight:bold&#34;&gt;&lt;/span&gt;    ethtool &lt;span style=&#34;color:#b62;font-weight:bold&#34;&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b62;font-weight:bold&#34;&gt;&lt;/span&gt;    iptables &lt;span style=&#34;color:#b62;font-weight:bold&#34;&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b62;font-weight:bold&#34;&gt;&lt;/span&gt;    socat
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
#### Install the packages from the added repos
--&gt;
&lt;h4 id=&#34;install-the-packages-from-the-added-repos&#34;&gt;从添加的仓库中安装软件包 &lt;/h4&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-bash&#34; data-lang=&#34;bash&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;dnf install -y --repo cri-o --repo kubernetes &lt;span style=&#34;color:#b62;font-weight:bold&#34;&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b62;font-weight:bold&#34;&gt;&lt;/span&gt;    cri-o &lt;span style=&#34;color:#b62;font-weight:bold&#34;&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b62;font-weight:bold&#34;&gt;&lt;/span&gt;    kubeadm &lt;span style=&#34;color:#b62;font-weight:bold&#34;&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b62;font-weight:bold&#34;&gt;&lt;/span&gt;    kubectl &lt;span style=&#34;color:#b62;font-weight:bold&#34;&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b62;font-weight:bold&#34;&gt;&lt;/span&gt;    kubelet
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
### `deb` Based Distributions
--&gt;
&lt;h3 id=&#34;deb-based-distributions&#34;&gt;基于 &lt;code&gt;deb&lt;/code&gt; 的发行版 &lt;/h3&gt;
&lt;!--
For `deb` based distributions, you can run the following commands as a `root` user:
--&gt;
&lt;p&gt;对于基于 &lt;code&gt;deb&lt;/code&gt; 的发行版，您可以以 &lt;code&gt;root&lt;/code&gt; 用户身份运行以下命令：&lt;/p&gt;
&lt;!--
#### Install dependencies for adding the repositories
--&gt;
&lt;h4 id=&#34;install-dependencies-for-adding-the-repositories&#34;&gt;安装用于添加仓库的依赖项 &lt;/h4&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-bash&#34; data-lang=&#34;bash&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;apt-get update
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;apt-get install -y software-properties-common curl
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
#### Add the Kubernetes repository
--&gt;
&lt;h4 id=&#34;add-the-kubernetes-repository&#34;&gt;添加 Kubernetes 仓库 &lt;/h4&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-bash&#34; data-lang=&#34;bash&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key |
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#a2f&#34;&gt;echo&lt;/span&gt; &lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /&amp;#34;&lt;/span&gt; |
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    tee /etc/apt/sources.list.d/kubernetes.list
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
#### Add the CRI-O repository
--&gt;
&lt;h4 id=&#34;add-the-cri-o-repository&#34;&gt;添加 CRI-O 仓库 &lt;/h4&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-bash&#34; data-lang=&#34;bash&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;curl -fsSL https://pkgs.k8s.io/addons:/cri-o:/prerelease:/main/deb/Release.key |
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    gpg --dearmor -o /etc/apt/keyrings/cri-o-apt-keyring.gpg
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#a2f&#34;&gt;echo&lt;/span&gt; &lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;deb [signed-by=/etc/apt/keyrings/cri-o-apt-keyring.gpg] https://pkgs.k8s.io/addons:/cri-o:/prerelease:/main/deb/ /&amp;#34;&lt;/span&gt; |
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    tee /etc/apt/sources.list.d/cri-o.list
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
#### Install the packages
--&gt;
&lt;h4 id=&#34;install-the-packages&#34;&gt;安装软件包 &lt;/h4&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-bash&#34; data-lang=&#34;bash&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;apt-get update
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;apt-get install -y cri-o kubelet kubeadm kubectl
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
#### Start CRI-O
--&gt;
&lt;h4 id=&#34;start-cri-o&#34;&gt;启动 CRI-O &lt;/h4&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-bash&#34; data-lang=&#34;bash&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;systemctl start crio.service
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
The Project&#39;s `prerelease:/main` prefix at the CRI-O&#39;s package path, can be replaced with `stable:/v1.28`, `stable:/v1.29`, `prerelease:/v1.28` or `prerelease:/v1.29` if another stream package is used.
--&gt;
&lt;p&gt;如果使用的是其他流（stream）的软件包，CRI-O 软件包路径中项目的 &lt;code&gt;prerelease:/main&lt;/code&gt;
前缀可以替换为 &lt;code&gt;stable:/v1.28&lt;/code&gt;、&lt;code&gt;stable:/v1.29&lt;/code&gt;、&lt;code&gt;prerelease:/v1.28&lt;/code&gt; 或 &lt;code&gt;prerelease:/v1.29&lt;/code&gt;。&lt;/p&gt;
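&lt;p&gt;作为示意，下面是一个假设性的小脚本（变量名均为示例用途），演示如何在 &lt;code&gt;deb&lt;/code&gt; 仓库 URL 中将 &lt;code&gt;prerelease:/main&lt;/code&gt; 前缀替换为 &lt;code&gt;stable:/v1.28&lt;/code&gt;：&lt;/p&gt;

```shell
# 假设性示例：把 CRI-O 软件包路径中的 prerelease:/main 前缀
# 换成 stable:/v1.28（也可换成 stable:/v1.29、prerelease:/v1.28 等）
repo_url="https://pkgs.k8s.io/addons:/cri-o:/prerelease:/main/deb/"
# 使用 sed 并以 | 作为分隔符，避免对 URL 中的斜杠转义
stable_url="$(echo "$repo_url" | sed 's|prerelease:/main|stable:/v1.28|')"
echo "${stable_url}"
```

替换后的 URL 即可按上文的方式写入 &lt;code&gt;/etc/apt/sources.list.d/cri-o.list&lt;/code&gt;（rpm 发行版同理，修改 &lt;code&gt;baseurl&lt;/code&gt; 与 &lt;code&gt;gpgkey&lt;/code&gt; 即可）。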
&lt;!--
Bootstrapping [a cluster using `kubeadm`](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/) can be done by running `kubeadm init` command, which automatically detects that CRI-O is running in the background. There are also `Vagrantfile` examples available for [Fedora 38](https://github.com/cri-o/packaging/blob/91df5f7/test/rpm/Vagrantfile) as well as [Ubuntu 22.04](https://github.com/cri-o/packaging/blob/91df5f7/test/deb/Vagrantfile) for testing the packages together with `kubeadm`.
--&gt;
&lt;p&gt;你可以使用 &lt;code&gt;kubeadm init&lt;/code&gt; 命令来&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/setup/production-environment/tools/kubeadm/install-kubeadm/&#34;&gt;引导集群&lt;/a&gt;，
该命令会自动检测后台正在运行 CRI-O。还有适用于
&lt;a href=&#34;https://github.com/cri-o/packaging/blob/91df5f7/test/rpm/Vagrantfile&#34;&gt;Fedora 38&lt;/a&gt;
以及 &lt;a href=&#34;https://github.com/cri-o/packaging/blob/91df5f7/test/deb/Vagrantfile&#34;&gt;Ubuntu 22.04&lt;/a&gt;
的 &lt;code&gt;Vagrantfile&lt;/code&gt; 示例，可用于与 &lt;code&gt;kubeadm&lt;/code&gt; 一起测试这些软件包。&lt;/p&gt;
&lt;!--
## How it works under the hood
--&gt;
&lt;h2 id=&#34;how-it-works-under-the-hood&#34;&gt;它是如何工作的 &lt;/h2&gt;
&lt;!--
Everything related to these packages lives in the new [CRI-O packaging repository](https://github.com/cri-o/packaging). It contains a [daily reconciliation](https://github.com/cri-o/packaging/blob/91df5f7/.github/workflows/schedule.yml) GitHub action workflow, for all supported release branches as well as tags of CRI-O. A [test pipeline](https://github.com/cri-o/packaging/actions/workflows/obs.yml) in the OBS workflow ensures that the packages can be correctly installed and used before being published. All of the staging and publishing of the packages is done with the help of the [Kubernetes Release Toolbox (krel)](https://github.com/kubernetes/release/blob/1f85912/docs/krel/README.md), which is also used for the official Kubernetes `deb` and `rpm` packages.
--&gt;
&lt;p&gt;与这些包相关的所有内容都位于新的 &lt;a href=&#34;https://github.com/cri-o/packaging&#34;&gt;CRI-O 打包仓库&lt;/a&gt;中。
它包含一个&lt;a href=&#34;https://github.com/cri-o/packaging/blob/91df5f7/.github/workflows/schedule.yml&#34;&gt;每日调和（Daily Reconciliation）&lt;/a&gt; GitHub Action 工作流，
适用于 CRI-O 所有受支持的发布分支以及标签。
OBS 工作流程中的&lt;a href=&#34;https://github.com/cri-o/packaging/actions/workflows/obs.yml&#34;&gt;测试管道&lt;/a&gt;确保包在发布之前可以被正确安装和使用。
所有包的暂存和发布都是在 &lt;a href=&#34;https://github.com/kubernetes/release/blob/1f85912/docs/krel/README.md&#34;&gt;Kubernetes 发布工具箱（krel）&lt;/a&gt;的帮助下完成的，
这一工具箱也被用于官方 Kubernetes &lt;code&gt;deb&lt;/code&gt; 和 &lt;code&gt;rpm&lt;/code&gt; 软件包。&lt;/p&gt;
&lt;!--
The package build inputs will undergo daily reconciliation and will be supplied by CRI-O&#39;s static binary bundles. These bundles are built and signed for each commit in the CRI-O CI, and contain everything CRI-O requires to run on a certain architecture. The static builds are reproducible, powered by [nixpkgs](https://github.com/NixOS/nixpkgs) and available only for `x86_64`, `aarch64` and `ppc64le` architecture.
--&gt;
&lt;p&gt;包构建的输入每天都会进行调和（reconciliation），并由 CRI-O 的静态二进制捆绑包提供。
这些捆绑包是针对 CRI-O CI 中的每次提交构建和签名的，
并且包含 CRI-O 在特定架构上运行所需的所有内容。静态构建是可重复的，
由 &lt;a href=&#34;https://github.com/NixOS/nixpkgs&#34;&gt;nixpkgs&lt;/a&gt; 提供支持，
并且仅适用于 &lt;code&gt;x86_64&lt;/code&gt;、&lt;code&gt;aarch64&lt;/code&gt; 以及 &lt;code&gt;ppc64le&lt;/code&gt; 架构。&lt;/p&gt;
&lt;!--
The CRI-O maintainers will be happy to listen to any feedback or suggestions on the new packaging efforts! Thank you for reading this blog post, feel free to reach out to the maintainers via the Kubernetes [Slack channel #crio](https://kubernetes.slack.com/messages/CAZH62UR1) or create an issue in the [packaging repository](https://github.com/cri-o/packaging/issues).
--&gt;
&lt;p&gt;CRI-O 维护者非常乐意听取针对这项新的打包工作的任何反馈或建议！
感谢您阅读本文，请随时通过 Kubernetes &lt;a href=&#34;https://kubernetes.slack.com/messages/CAZH62UR1&#34;&gt;Slack 频道 #crio&lt;/a&gt;
联系维护人员或在&lt;a href=&#34;https://github.com/cri-o/packaging/issues&#34;&gt;打包仓库&lt;/a&gt;中创建 Issue。&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>聚焦 SIG Architecture: Conformance</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/10/05/sig-architecture-conformance-spotlight-2023/</link>
      <pubDate>Thu, 05 Oct 2023 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/10/05/sig-architecture-conformance-spotlight-2023/</guid>
      <description>
        
        
        &lt;!--
layout: blog
title: &#34;Spotlight on SIG Architecture: Conformance&#34;
slug: sig-architecture-conformance-spotlight-2023
date: 2023-10-05
canonicalUrl: https://www.k8s.dev/blog/2023/10/05/sig-architecture-conformance-spotlight-2023/
--&gt;
&lt;!--
**Author**: Frederico Muñoz (SAS Institute)
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Frederico Muñoz (SAS Institute)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者&lt;/strong&gt;：&lt;a href=&#34;https://github.com/windsonsea&#34;&gt;Michael Yao&lt;/a&gt; (DaoCloud)&lt;/p&gt;
&lt;!--
_This is the first interview of a SIG Architecture Spotlight series
that will cover the different subprojects. We start with the SIG
Architecture: Conformance subproject_

In this [SIG
Architecture](https://github.com/kubernetes/community/blob/master/sig-architecture/README.md)
spotlight, we talked with [Riaan
Kleinhans](https://github.com/Riaankl) (ii.nz), Lead for the
[Conformance
sub-project](https://github.com/kubernetes/community/blob/master/sig-architecture/README.md#conformance-definition-1).
--&gt;
&lt;p&gt;&lt;strong&gt;这是 SIG Architecture 焦点访谈系列的首次采访，这一系列访谈将涵盖多个子项目。
我们从 SIG Architecture：Conformance 子项目开始。&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;在本次 &lt;a href=&#34;https://github.com/kubernetes/community/blob/master/sig-architecture/README.md&#34;&gt;SIG Architecture&lt;/a&gt;
访谈中，我们与 &lt;a href=&#34;https://github.com/Riaankl&#34;&gt;Riaan Kleinhans&lt;/a&gt; (ii.nz) 进行了对话，他是
&lt;a href=&#34;https://github.com/kubernetes/community/blob/master/sig-architecture/README.md#conformance-definition-1&#34;&gt;Conformance 子项目&lt;/a&gt;的负责人。&lt;/p&gt;
&lt;!--
## About SIG Architecture and the Conformance subproject

**Frederico (FSM)**: Hello Riaan, and welcome! For starters, tell us a
bit about yourself, your role and how you got involved in Kubernetes.

**Riaan Kleinhans (RK)**: Hi! My name is Riaan Kleinhans and I live in
South Africa. I am the Project manager for the [ii.nz](https://ii.nz) in New
Zealand. When I joined ii the plan was to move to New Zealand in April
2020 and then Covid happened. Fortunately, being a flexible and
dynamic team we were able to make it work remotely and in very
different time zones.
--&gt;
&lt;h2 id=&#34;关于-sig-architecture-和-conformance-子项目&#34;&gt;关于 SIG Architecture 和 Conformance 子项目&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Frederico (FSM)&lt;/strong&gt;：你好 Riaan，欢迎！首先，请介绍一下你自己，你的角色以及你是如何参与 Kubernetes 的。&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Riaan Kleinhans (RK)&lt;/strong&gt;：嗨！我叫 Riaan Kleinhans，我住在南非。
我是新西兰 &lt;a href=&#34;https://ii.nz&#34;&gt;ii.nz&lt;/a&gt; 的项目经理。在我加入 ii 时，本来计划在 2020 年 4 月搬到新西兰，
然后新冠疫情爆发了。幸运的是，作为一个灵活和富有活力的团队，我们能够在各个不同的时区以远程方式协作。&lt;/p&gt;
&lt;!--
The ii team have been tasked with managing the Kubernetes Conformance
testing technical debt and writing tests to clear the technical
debt. I stepped into the role of project manager to be the link
between monitoring, test writing and the community. Through that work
I had the privilege of meeting [Dan Kohn](https://github.com/dankohn)
in those first months, his enthusiasm about the work we were doing was
a great inspiration.
--&gt;
&lt;p&gt;ii 团队负责管理 Kubernetes Conformance 测试的技术债务，并编写测试内容来消除这些技术债务。
我担任项目经理的角色，成为监控、测试内容编写和社区之间的桥梁。通过这项工作，我有幸在最初的几个月里结识了
&lt;a href=&#34;https://github.com/dankohn&#34;&gt;Dan Kohn&lt;/a&gt;，他对我们的工作充满热情，给了我很大的启发。&lt;/p&gt;
&lt;!--
**FSM**: Thank you - so, your involvement in SIG Architecture started
because of the conformance work?

**RK**: SIG Architecture is the home for the Kubernetes Conformance
subproject. Initially, most of my interactions were directly with SIG
Architecture through the Conformance sub-project. However, as we
began organizing the work by SIG, we started engaging directly with
each individual SIG. These engagements with the SIGs that own the
untested APIs have helped us accelerate our work.
--&gt;
&lt;p&gt;&lt;strong&gt;FSM&lt;/strong&gt;：谢谢！所以，你参与 SIG Architecture 是因为合规性的工作？&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;RK&lt;/strong&gt;：SIG Architecture 是 Kubernetes Conformance 子项目的所在地。
最初，我主要是通过 Conformance 子项目直接与 SIG Architecture 交流。
然而，随着我们开始按 SIG 来组织工作任务，我们开始直接与各个 SIG 进行协作。
与拥有未被测试的 API 的这些 SIG 的协作帮助我们加快了工作进度。&lt;/p&gt;
&lt;!--
**FSM**: How would you describe the main goals and
areas of intervention of the Conformance sub-project?

**RM**: The Kubernetes Conformance sub-project focuses on guaranteeing
compatibility and adherence to the Kubernetes specification by
developing and maintaining a comprehensive conformance test suite. Its
main goals include assuring compatibility across different Kubernetes
implementations, verifying adherence to the API specification,
supporting the ecosystem by encouraging conformance certification, and
fostering collaboration within the Kubernetes community. By providing
standardised tests and promoting consistent behaviour and
functionality, the Conformance subproject ensures a reliable and
compatible Kubernetes ecosystem for developers and users alike.
--&gt;
&lt;p&gt;&lt;strong&gt;FSM&lt;/strong&gt;：你如何描述 Conformance 子项目的主要目标和介入的领域？&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;RM&lt;/strong&gt;: Kubernetes Conformance 子项目专注于通过开发和维护全面的合规性测试套件来确保兼容性并遵守
Kubernetes 规范。其主要目标包括确保不同 Kubernetes 实现之间的兼容性，验证 API 规范的遵守情况，
通过鼓励合规性认证来支持生态体系，并促进 Kubernetes 社区内的合作。
通过提供标准化的测试并促进一致的行为和功能，
Conformance 子项目为开发人员和用户提供了一个可靠且兼容的 Kubernetes 生态体系。&lt;/p&gt;
&lt;!--
## More on the Conformance Test Suite

**FSM**: A part of providing those standardised tests is, I believe,
the [Conformance Test
Suite](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md). Could
you explain what it is and its importance?

**RK**: The Kubernetes Conformance Test Suite checks if Kubernetes
distributions meet the project&#39;s specifications, ensuring
compatibility across different implementations. It covers various
features like APIs, networking, storage, scheduling, and
security. Passing the tests confirms proper implementation and
promotes a consistent and portable container orchestration platform.
--&gt;
&lt;h2 id=&#34;关于-conformance-test-suite-的更多内容&#34;&gt;关于 Conformance Test Suite 的更多内容&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;FSM&lt;/strong&gt;：我认为，提供这些标准化测试的一部分工作在于
&lt;a href=&#34;https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md&#34;&gt;Conformance Test Suite&lt;/a&gt;。
你能解释一下它是什么以及其重要性吗？&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;RK&lt;/strong&gt;：Kubernetes Conformance Test Suite 检查 Kubernetes 发行版是否符合项目的规范，
确保在不同的实现之间的兼容性。它涵盖了诸如 API、联网、存储、调度和安全等各个特性。
通过测试即可确认实现正确，并有助于推动形成一致且可移植的容器编排平台。&lt;/p&gt;
&lt;!--
**FSM**: Right, the tests are important in the way they define the
minimum features that any Kubernetes cluster must support. Could you
describe the process around determining which features are considered
for inclusion? Is there any tension between a more minimal approach,
and proposals from the other SIGs?

**RK**: The requirements for each endpoint that undergoes conformance
testing are clearly defined by SIG Architecture. Only API endpoints
that are generally available and non-optional features are eligible
for conformance. Over the years, there have been several discussions
regarding conformance profiles, exploring the possibility of including
optional endpoints like RBAC, which are widely used by most end users,
in specific profiles. However, this aspect is still a work in
progress.
--&gt;
&lt;p&gt;&lt;strong&gt;FSM&lt;/strong&gt;：是的，这些测试很重要，因为它们定义了所有 Kubernetes 集群必须支持的最小特性集合。
你能描述一下决定将哪些特性包含在内的过程吗？在最小特性集的思路与其他 SIG 提案之间是否有所冲突？&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;RK&lt;/strong&gt;：SIG Architecture 对每个接受合规性测试的端点的要求都有明确的定义。
只有正式发布（GA）且非可选特性的 API 端点才有资格纳入合规性测试。
多年来，关于合规性配置文件已经进行了若干讨论，
探讨将被大多数终端用户广泛使用的可选端点（例如 RBAC）纳入特定配置文件中的可能性。
然而，这一方面仍在不断改进中。&lt;/p&gt;
&lt;!--
Endpoints that do not meet the conformance criteria are listed in
[ineligible_endpoints.yaml](https://github.com/kubernetes/kubernetes/blob/master/test/conformance/testdata/ineligible_endpoints.yaml),
which is publicly accessible in the Kubernetes repo. This file can be
updated to add or remove endpoints as their status or requirements
change. These ineligible endpoints are also visible on
[APISnoop](https://apisnoop.cncf.io/).

Ensuring transparency and incorporating community input regarding the
eligibility or ineligibility of endpoints is of utmost importance to
SIG Architecture.
--&gt;
&lt;p&gt;不满足合规性标准的端点被列在
&lt;a href=&#34;https://github.com/kubernetes/kubernetes/blob/master/test/conformance/testdata/ineligible_endpoints.yaml&#34;&gt;ineligible_endpoints.yaml&lt;/a&gt; 中，
该文件位于 Kubernetes 代码仓库中，可公开访问。
随着端点的状态或要求发生变化，此文件会被更新以添加或删除端点。
不合格的端点也可以在 &lt;a href=&#34;https://apisnoop.cncf.io/&#34;&gt;APISnoop&lt;/a&gt; 上看到。&lt;/p&gt;
&lt;p&gt;对于 SIG Architecture 来说，确保透明度并纳入社区意见以确定端点的合格或不合格状态是至关重要的。&lt;/p&gt;
&lt;!--
**FSM**: Writing tests for new features is something generally
requires some kind of enforcement. How do you see the evolution of
this in Kubernetes? Was there a specific effort to improve the process
in a way that required tests would be a first-class citizen, or was
that never an issue?

**RK**: When discussions surrounding the Kubernetes conformance
programme began in 2018, only approximately 11% of endpoints were
covered by tests. At that time, the CNCF&#39;s governing board requested
that if funding were to be provided for the work to cover missing
conformance tests, the Kubernetes Community should adopt a policy of
not allowing new features to be added unless they include conformance
tests for their stable APIs.
--&gt;
&lt;p&gt;&lt;strong&gt;FSM&lt;/strong&gt;：为新特性编写测试内容通常需要某种强制执行方式。
你如何看待这方面在 Kubernetes 中的演变？是否曾有专门的努力来改进流程，
使必需的测试成为头等公民，还是说这从来都不是问题？&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;RK&lt;/strong&gt;：在 2018 年开始围绕 Kubernetes 合规性计划进行讨论时，只有大约 11% 的端点被测试所覆盖。
那时，CNCF 的管理委员会提出一个要求，如果要提供资金覆盖缺失的合规性测试，Kubernetes 社区应采取一个策略，
即如果新特性没有包含稳定 API 的合规性测试，则不允许添加此特性。&lt;/p&gt;
&lt;!--
SIG Architecture is responsible for stewarding this requirement, and
[APISnoop](https://apisnoop.cncf.io/) has proven to be an invaluable
tool in this regard. Through automation, APISnoop generates a pull
request every weekend to highlight any discrepancies in Conformance
coverage. If any endpoints are promoted to General Availability
without a conformance test, it will be promptly identified. This
approach helps prevent the accumulation of new technical debt.

Additionally, there are plans in the near future to create a release
informing job, which will add an additional layer to prevent any new
technical debt.
--&gt;
&lt;p&gt;SIG Architecture 负责监督这一要求，&lt;a href=&#34;https://apisnoop.cncf.io/&#34;&gt;APISnoop&lt;/a&gt;
在此方面被证明是一个非常有价值的工具。通过自动化流程，APISnoop 在每个周末生成一个 PR，
以突出显示 Conformance 覆盖范围中的任何差异。如果有端点在没有合规性测试的情况下进阶至正式发布（GA），
将会被迅速识别。这种方法有助于防止新技术债务的积累。&lt;/p&gt;
&lt;p&gt;此外，我们计划在不久的将来创建一个发布通知任务，作用是添加额外一层防护，以防止产生新的技术债务。&lt;/p&gt;
&lt;!--
**FSM**: I see, tooling and automation play an important role
there. What are, in your opinion, the areas that, conformance-wise,
still require some work to be done? In other words, what are the
current priority areas marked for improvement?

**RK**: We have reached the “100% Conformance Tested” milestone in
release 1.27!
--&gt;
&lt;p&gt;&lt;strong&gt;FSM&lt;/strong&gt;：我明白了，工具化和自动化在其中起着重要的作用。
在你看来，就合规性而言，还有哪些领域需要做一些工作？
换句话说，目前标记为优先改进的领域有哪些？&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;RK&lt;/strong&gt;：在 1.27 版本中，我们已完成了 “100% 合规性测试” 的里程碑！&lt;/p&gt;
&lt;!--
At that point, the community took another look at all the endpoints
that were listed as ineligible for conformance. The list was populated
through community input over several years.  Several endpoints
that were previously deemed ineligible for conformance have been
identified and relocated to a new dedicated list, which is currently
receiving focused attention for conformance test development. Again,
that list can also be checked on apisnoop.cncf.io.
--&gt;
&lt;p&gt;当时，社区重新审视了所有被列为不具备合规性测试资格的端点。这个列表是经过多年的社区意见收集而形成的。
之前被认为不合格的若干端点已被识别出来并迁移到一个新的专用列表中，
该列表目前正是合规性测试开发的重点关注对象。同样，此列表也可以在 apisnoop.cncf.io 上查阅。&lt;/p&gt;
&lt;!--
To ensure the avoidance of new technical debt in the conformance
project, there are upcoming plans to establish a release informing job
as an additional preventive measure.

While APISnoop is currently hosted on CNCF infrastructure, the project
has been generously donated to the Kubernetes community. Consequently,
it will be transferred to community-owned infrastructure before the
end of 2023.
--&gt;
&lt;p&gt;为了确保在合规性项目中避免产生新的技术债务，我们计划建立一个发布通知任务作为额外的预防措施。&lt;/p&gt;
&lt;p&gt;虽然 APISnoop 目前被托管在 CNCF 基础设施上，但此项目已慷慨地捐赠给了 Kubernetes 社区。
因此，它将在 2023 年底之前转移到社区自治的基础设施上。&lt;/p&gt;
&lt;!--
**FSM**: That&#39;s great news! For anyone wanting to help, what are the
venues for collaboration that you would highlight? Do all of them
require solid knowledge of Kubernetes as a whole, or are there ways
someone newer to the project can contribute?

**RK**: Contributing to conformance testing is akin to the task of
&#34;washing the dishes&#34; – it may not be highly visible, but it remains
incredibly important. It necessitates a strong understanding of
Kubernetes, particularly in the areas where the endpoints need to be
tested. This is why working with each SIG that owns the API endpoint
being tested is so important.
--&gt;
&lt;p&gt;&lt;strong&gt;FSM&lt;/strong&gt;：这是个好消息！对于想要提供帮助的人们，你会重点推荐哪些协作途径？
参与贡献是否都需要对 Kubernetes 整体有扎实的了解，还是说项目新人也有办法做出贡献？&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;RK&lt;/strong&gt;：参与合规性测试就像 &amp;quot;洗碗&amp;quot; 一样，它可能不太显眼，但仍然非常重要。
这需要对 Kubernetes 有深入的理解，特别是在需要对端点进行测试的领域。
这就是为什么与负责测试 API 端点的每个 SIG 进行协作会如此重要。&lt;/p&gt;
&lt;!--
As part of our commitment to making test writing accessible to
everyone, the ii team is currently engaged in the development of a
&#34;click and deploy&#34; solution. This solution aims to enable anyone to
swiftly create a working environment on real hardware within
minutes. We will share updates regarding this development as soon as
we are ready.
--&gt;
&lt;p&gt;我们的承诺是让所有人都能参与测试内容编写，作为这一承诺的一部分，
ii 团队目前正在开发一个 “点击即部署（click and deploy）” 的解决方案。
此解决方案旨在使所有人都能在几分钟内快速创建一个在真实硬件上工作的环境。
我们将在准备好后分享有关此项开发的更新。&lt;/p&gt;
&lt;!--
**FSM**: That&#39;s very helpful, thank you. Any final comments you would
like to share with our readers?

**RK**: Conformance testing is a collaborative community endeavour that
involves extensive cooperation among SIGs. SIG Architecture has
spearheaded the initiative and provided guidance. However, the
progress of the work relies heavily on the support of all SIGs in
reviewing, enhancing, and endorsing the tests.
--&gt;
&lt;p&gt;&lt;strong&gt;FSM&lt;/strong&gt;：那会非常有帮助，谢谢。最后你还想与我们的读者分享些什么见解吗？&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;RK&lt;/strong&gt;：合规性测试是一个协作性的社区工作，涉及各个 SIG 之间的广泛合作。
SIG Architecture 在推动倡议并提供指导方面起到了领头作用。然而，
工作的进展在很大程度上依赖于所有 SIG 在审查、增强和认可测试方面的支持。&lt;/p&gt;
&lt;!--
I would like to extend my sincere appreciation to the ii team for
their unwavering commitment to resolving technical debt over the
years. In particular, [Hippie Hacker](https://github.com/hh)&#39;s
guidance and stewardship of the vision has been
invaluable. Additionally, I want to give special recognition to
Stephen Heywood for shouldering the majority of the test writing
workload in recent releases, as well as to Zach Mandeville for his
contributions to APISnoop.
--&gt;
&lt;p&gt;我要衷心感谢 ii 团队多年来对解决技术债务的坚定承诺。
特别要感谢 &lt;a href=&#34;https://github.com/hh&#34;&gt;Hippie Hacker&lt;/a&gt; 的指导和对愿景的引领作用，这是非常宝贵的。
此外，我要特别感谢 Stephen Heywood 在最近几个版本中承担了大部分测试内容的编写工作，
也感谢 Zach Mandeville 对 APISnoop 做出的贡献。&lt;/p&gt;
&lt;!--
**FSM**: Many thanks for your availability and insightful comments,
I&#39;ve personally learned quite a bit with it and I&#39;m sure our readers
will as well.
--&gt;
&lt;p&gt;&lt;strong&gt;FSM&lt;/strong&gt;：非常感谢你参加本次访谈并分享你的深刻见解，我本人从中获益良多，我相信读者们也会同样受益。&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>公布 2023 年指导委员会选举结果</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/10/02/steering-committee-results-2023/</link>
      <pubDate>Mon, 02 Oct 2023 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/10/02/steering-committee-results-2023/</guid>
      <description>
        
        
        &lt;!--
layout: blog
title: &#34;Announcing the 2023 Steering Committee Election Results&#34;
date: 2023-10-02
slug: steering-committee-results-2023
--&gt;
&lt;!--
**Author**: Kaslin Fields
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Kaslin Fields&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者&lt;/strong&gt;：Xin Li(DaoCloud)&lt;/p&gt;
&lt;!--
The [2023 Steering Committee Election](https://github.com/kubernetes/community/tree/master/events/elections/2023) is now complete.
The Kubernetes Steering Committee consists of 7 seats, 4 of which were up for election in 2023.
Incoming committee members serve a term of 2 years, and all members are elected by the Kubernetes Community.
--&gt;
&lt;p&gt;&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/events/elections/2023&#34;&gt;2023 年指导委员会选举&lt;/a&gt;现已完成。
Kubernetes 指导委员会由 7 个席位组成，其中 4 个席位于 2023 年进行选举。
新任委员会成员的任期为 2 年，所有成员均由 Kubernetes 社区选举产生。&lt;/p&gt;
&lt;!--
This community body is significant since it oversees the governance of the entire Kubernetes project.
With that great power comes great responsibility. You can learn more about the steering committee’s role in their [charter](https://github.com/kubernetes/steering/blob/master/charter.md).
--&gt;
&lt;p&gt;这个社区机构非常重要，因为它负责监督整个 Kubernetes 项目的治理。
权力越大，责任越大。你可以在其
&lt;a href=&#34;https://github.com/kubernetes/steering/blob/master/charter.md&#34;&gt;章程&lt;/a&gt;中了解有关指导委员会角色的更多信息。&lt;/p&gt;
&lt;!--
Thank you to everyone who voted in the election; your participation helps support the community’s continued health and success.
--&gt;
&lt;p&gt;感谢所有在选举中投票的人；你们的参与有助于支持社区的持续健康和成功。&lt;/p&gt;
&lt;!--
## Results

Congratulations to the elected committee members whose two year terms begin immediately (listed in alphabetical order by GitHub handle):
--&gt;
&lt;h2 id=&#34;结果&#34;&gt;结果&lt;/h2&gt;
&lt;p&gt;祝贺当选的委员会成员，其两年任期立即开始（按 GitHub 名称字母顺序列出）：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Stephen Augustus (&lt;a href=&#34;https://github.com/justaugustus&#34;&gt;@justaugustus&lt;/a&gt;), Cisco&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Paco Xu 徐俊杰 (&lt;a href=&#34;https://github.com/pacoxu&#34;&gt;@pacoxu&lt;/a&gt;), DaoCloud&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Patrick Ohly (&lt;a href=&#34;https://github.com/pohly&#34;&gt;@pohly&lt;/a&gt;), Intel&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Maciej Szulik (&lt;a href=&#34;https://github.com/soltysh&#34;&gt;@soltysh&lt;/a&gt;), Red Hat&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
They join continuing members:
--&gt;
&lt;p&gt;他们将与以下连任成员一起工作：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Benjamin Elder (&lt;a href=&#34;https://github.com/bentheelder&#34;&gt;@bentheelder&lt;/a&gt;), Google&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Bob Killen (&lt;a href=&#34;https://github.com/mrbobbytables&#34;&gt;@mrbobbytables&lt;/a&gt;), Google&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Nabarun Pal (&lt;a href=&#34;https://github.com/palnabarun&#34;&gt;@palnabarun&lt;/a&gt;), VMware&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
Stephen Augustus is a returning Steering Committee Member.
--&gt;
&lt;p&gt;Stephen Augustus 是回归的指导委员会成员。&lt;/p&gt;
&lt;!--
## Big Thanks!

Thank you and congratulations on a successful election to this round’s election officers:
--&gt;
&lt;h2 id=&#34;十分感谢&#34;&gt;十分感谢！&lt;/h2&gt;
&lt;p&gt;感谢并祝贺本轮选举官员成功完成选举工作：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Bridget Kromhout (&lt;a href=&#34;https://github.com/bridgetkromhout&#34;&gt;@bridgetkromhout&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Davanum Srinavas (&lt;a href=&#34;https://github.com/dims&#34;&gt;@dims&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Kaslin Fields (&lt;a href=&#34;https://github.com/kaslin&#34;&gt;@kaslin&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
Thanks to the Emeritus Steering Committee Members. Your service is appreciated by the community:
--&gt;
&lt;p&gt;感谢名誉指导委员会成员，你们的服务受到社区的赞赏：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Christoph Blecker (&lt;a href=&#34;https://github.com/cblecker&#34;&gt;@cblecker&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Carlos Tadeu Panato Jr. (&lt;a href=&#34;https://github.com/cpanato&#34;&gt;@cpanato&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Tim Pepper (&lt;a href=&#34;https://github.com/tpepper&#34;&gt;@tpepper&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
And thank you to all the candidates who came forward to run for election.
--&gt;
&lt;p&gt;感谢所有前来竞选的候选人。&lt;/p&gt;
&lt;!--
## Get Involved with the Steering Committee

This governing body, like all of Kubernetes, is open to all.
You can follow along with Steering Committee [backlog items](https://github.com/orgs/kubernetes/projects/40) and weigh in by filing an issue or creating a PR against their [repo](https://github.com/kubernetes/steering).
They have an open meeting on [the first Monday at 9:30am PT of every month](https://github.com/kubernetes/steering).
They can also be contacted at their public mailing list steering@kubernetes.io.
--&gt;
&lt;h2 id=&#34;参与指导委员会&#34;&gt;参与指导委员会&lt;/h2&gt;
&lt;p&gt;与整个 Kubernetes 一样，这个治理机构向所有人开放。你可以关注指导委员会&lt;a href=&#34;https://github.com/orgs/kubernetes/projects/40&#34;&gt;积压的项目&lt;/a&gt;，
并通过提交 Issue 或针对其 &lt;a href=&#34;https://github.com/kubernetes/steering&#34;&gt;repo&lt;/a&gt; 创建 PR 来参与。
他们在&lt;a href=&#34;https://github.com/kubernetes/steering&#34;&gt;太平洋时间每月第一个周一上午 9:30&lt;/a&gt; 举行开放的会议。
你还可以通过其公共邮件列表 &lt;a href=&#34;mailto:steering@kubernetes.io&#34;&gt;steering@kubernetes.io&lt;/a&gt; 与他们联系。&lt;/p&gt;
&lt;!--
You can see what the Steering Committee meetings are all about by watching past meetings on the [YouTube Playlist](https://www.youtube.com/playlist?list=PL69nYSiGNLP1yP1B_nd9-drjoxp0Q14qM).

If you want to meet some of the newly elected Steering Committee members, join us for the Steering AMA at the [Kubernetes Contributor Summit in Chicago](https://k8s.dev/summit).
--&gt;
&lt;p&gt;你可以通过在 &lt;a href=&#34;https://www.youtube.com/playlist?list=PL69nYSiGNLP1yP1B_nd9-drjoxp0Q14qM&#34;&gt;YouTube 播放列表&lt;/a&gt;上观看过去的会议来了解指导委员会会议的全部内容。&lt;/p&gt;
&lt;p&gt;如果你想认识一些新当选的指导委员会成员，请参加我们在&lt;a href=&#34;https://k8s.dev/summit&#34;&gt;芝加哥 Kubernetes 贡献者峰会&lt;/a&gt;举行的 Steering AMA。&lt;/p&gt;
&lt;hr&gt;
&lt;!--
_This post was written by the [Contributor Comms Subproject](https://github.com/kubernetes/community/tree/master/communication/contributor-comms).
If you want to write stories about the Kubernetes community, learn more about us._
--&gt;
&lt;p&gt;&lt;strong&gt;这篇文章是由&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/communication/contributor-comms&#34;&gt;贡献者通信子项目&lt;/a&gt;撰写的。
如果你想撰写有关 Kubernetes 社区的故事，请了解有关我们的更多信息。&lt;/strong&gt;&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>kubeadm 七周年生日快乐！</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/09/26/happy-7th-birthday-kubeadm/</link>
      <pubDate>Tue, 26 Sep 2023 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/09/26/happy-7th-birthday-kubeadm/</guid>
      <description>
        
        
        &lt;!--
layout: blog
title: &#39;Happy 7th Birthday kubeadm!&#39;
date: 2023-09-26
slug: happy-7th-birthday-kubeadm
--&gt;
&lt;!--
**Author:** Fabrizio Pandini (VMware)
--&gt;
&lt;p&gt;&lt;strong&gt;作者:&lt;/strong&gt; Fabrizio Pandini (VMware)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者:&lt;/strong&gt; &lt;a href=&#34;https://github.com/windsonsea&#34;&gt;Michael Yao&lt;/a&gt; (DaoCloud)&lt;/p&gt;
&lt;!--
What a journey so far!

Starting from the initial blog post [“How we made Kubernetes insanely easy to install”](/blog/2016/09/how-we-made-kubernetes-easy-to-install/) in September 2016, followed by an exciting growth that lead to general availability / [“Production-Ready Kubernetes Cluster Creation with kubeadm”](/blog/2018/12/04/production-ready-kubernetes-cluster-creation-with-kubeadm/) two years later.

And later on a continuous, steady and reliable flow of small improvements that is still going on as of today.
--&gt;
&lt;p&gt;回首向来萧瑟处，七年光阴风雨路！&lt;/p&gt;
&lt;p&gt;从 2016 年 9 月发表第一篇博文
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2016/09/how-we-made-kubernetes-easy-to-install/&#34;&gt;How we made Kubernetes insanely easy to install&lt;/a&gt;
开始，kubeadm 经历了令人激动的成长旅程，两年后随着
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2018/12/04/production-ready-kubernetes-cluster-creation-with-kubeadm/&#34;&gt;Production-Ready Kubernetes Cluster Creation with kubeadm&lt;/a&gt;
这篇博文的发表进阶为正式发布。&lt;/p&gt;
&lt;p&gt;此后，持续、稳定且可靠的系列小幅改进一直延续至今。&lt;/p&gt;
&lt;!--
## What is kubeadm? (quick refresher)

kubeadm is focused on bootstrapping Kubernetes clusters on existing infrastructure and performing an essential set of maintenance tasks. The core of the kubeadm interface is quite simple: new control plane nodes
are created by running [`kubeadm init`](/docs/reference/setup-tools/kubeadm/kubeadm-init/) and
worker nodes are joined to the control plane by running
[`kubeadm join`](/docs/reference/setup-tools/kubeadm/kubeadm-join/).
Also included are utilities for managing already bootstrapped clusters, such as control plane upgrades
and token and certificate renewal.
--&gt;
&lt;h2 id=&#34;什么是-kubeadm-简要回顾&#34;&gt;什么是 kubeadm？（简要回顾）&lt;/h2&gt;
&lt;p&gt;kubeadm 专注于在现有基础设施上启动引导 Kubernetes 集群并执行一组重要的维护任务。
kubeadm 接口的核心非常简单：通过运行
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-init/&#34;&gt;&lt;code&gt;kubeadm init&lt;/code&gt;&lt;/a&gt;
创建新的控制平面节点，通过运行
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-join/&#34;&gt;&lt;code&gt;kubeadm join&lt;/code&gt;&lt;/a&gt;
将工作节点加入控制平面。此外还有用于管理已启动引导的集群的实用程序，例如控制平面升级、令牌和证书续订等。&lt;/p&gt;
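&lt;p&gt;上述核心流程可以用如下命令大致勾勒（仅为示意，其中负载均衡地址、令牌和哈希值均为假设的占位符，实际值以你的环境和 kubeadm init 的输出为准）：&lt;/p&gt;

```shell
# 在第一个控制平面节点上初始化集群
kubeadm init --control-plane-endpoint "LOAD_BALANCER_DNS:6443" --upload-certs

# kubeadm init 的输出中会给出完整的 join 命令；
# 在工作节点上运行类似命令即可加入集群（令牌与哈希值为占位示例）
kubeadm join LOAD_BALANCER_DNS:6443 --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:PLACEHOLDER_HASH
```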
&lt;!--
To keep kubeadm lean, focused, and vendor/infrastructure agnostic, the following tasks are out of its scope:
- Infrastructure provisioning
- Third-party networking
- Non-critical add-ons, e.g. for monitoring, logging, and visualization
- Specific cloud provider integrations
--&gt;
&lt;p&gt;为了使 kubeadm 精简、聚焦且与供应商/基础设施无关，以下任务不包括在其范围内：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;基础设施制备&lt;/li&gt;
&lt;li&gt;第三方联网&lt;/li&gt;
&lt;li&gt;例如监视、日志记录和可视化等非关键的插件&lt;/li&gt;
&lt;li&gt;特定云驱动集成&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
Infrastructure provisioning, for example, is left to other SIG Cluster Lifecycle projects, such as the
[Cluster API](https://cluster-api.sigs.k8s.io/). Instead, kubeadm covers only the common denominator
in every Kubernetes cluster: the
[control plane](/docs/concepts/overview/components/#control-plane-components).
The user may install their preferred networking solution and other add-ons on top of Kubernetes
*after* cluster creation.
--&gt;
&lt;p&gt;例如，基础设施制备被留给 SIG Cluster Lifecycle 的其他项目来处理，
比如 &lt;a href=&#34;https://cluster-api.sigs.k8s.io/&#34;&gt;Cluster API&lt;/a&gt;。
kubeadm 仅涵盖每个 Kubernetes 集群中的共同要素：
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/concepts/overview/components/#control-plane-components&#34;&gt;控制平面&lt;/a&gt;。
用户可以在集群创建后安装其偏好的联网方案和其他插件。&lt;/p&gt;
&lt;!--
Behind the scenes, kubeadm does a lot. The tool makes sure you have all the key components:
etcd, the API server, the scheduler, the controller manager. You can join more control plane nodes
for improving resiliency or join worker nodes for running your workloads. You get cluster DNS
and kube-proxy set up for you. TLS between components is enabled and used for encryption in transit.
--&gt;
&lt;p&gt;kubeadm 在幕后做了大量工作。它确保你拥有所有关键组件：etcd、API 服务器、调度器、控制器管理器。
你可以加入更多的控制平面节点以提高容错性，或者加入工作节点以运行你的工作负载。
kubeadm 还为你设置好了集群 DNS 和 kube-proxy；在各组件之间启用 TLS 用于传输加密。&lt;/p&gt;
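&lt;p&gt;作为参考，可以在控制平面节点上查看 kubeadm 生成的静态 Pod 清单来确认这些关键组件（以下为 kubeadm 的默认清单目录，输出仅为示意）：&lt;/p&gt;

```shell
# kubeadm 默认将控制平面组件以静态 Pod 清单的形式写入此目录
ls /etc/kubernetes/manifests
# 通常可以看到：
#   etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
```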
&lt;!--
## Let&#39;s celebrate! Past, present and future of kubeadm

In all and for all kubeadm&#39;s story is tightly coupled with Kubernetes&#39; story, and with this amazing community.

Therefore celebrating kubeadm is first of all celebrating this community, a set of people, who joined forces in finding a common ground, a minimum viable tool, for bootstrapping Kubernetes clusters.
--&gt;
&lt;h2 id=&#34;庆祝-kubeadm-的过去-现在和未来&#34;&gt;庆祝 kubeadm 的过去、现在和未来！&lt;/h2&gt;
&lt;p&gt;总之，kubeadm 的故事与 Kubernetes 深度耦合，也离不开这个令人惊叹的社区。&lt;/p&gt;
&lt;p&gt;因此庆祝 kubeadm 首先是庆祝这个社区，一群人共同努力寻找一个共同点，一个最小可行工具，用于启动引导 Kubernetes 集群。&lt;/p&gt;
&lt;!--
This tool, was instrumental to the Kubernetes success back in time as well as it is today, and the silver line of kubeadm&#39;s value proposition can be summarized in two points

- An obsession in making things deadly simple for the majority of the users: kubeadm init &amp; kubeadm join, that&#39;s all you need! 

- A sharp focus on a well-defined problem scope: bootstrapping Kubernetes clusters on existing infrastructure. As our slogan says: *keep it simple, keep it extensible!*
--&gt;
&lt;p&gt;kubeadm 这个工具对 Kubernetes 的成功起到了关键作用，其价值主张可以概括为两点：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;极致的简单：只需两个命令 kubeadm init 和 kubeadm join 即可完成初始化和接入集群的操作！让大多数用户轻松上手。&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;明确定义的问题范围：专注于在现有基础设施上启动引导 Kubernetes 集群。正如我们的口号所说：&lt;strong&gt;保持简单，保持可扩展！&lt;/strong&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
This silver line, this clear contract, is the foundation the entire kubeadm user base relies on, and this post is a celebration for kubeadm&#39;s users as well.

We are deeply thankful for any feedback from our users, for the enthusiasm that they are continuously showing for this tool via Slack, GitHub, social media, blogs, in person at every KubeCon or at the various meet ups around the world. Keep going!
--&gt;
&lt;p&gt;这个明确的约定是整个 kubeadm 用户群体所依赖的基石，同时本文也是为了与 kubeadm 的使用者们共同欢庆。&lt;/p&gt;
&lt;p&gt;我们由衷感谢用户给予的反馈，感谢他们通过 Slack、GitHub、社交媒体、博客、每次 KubeCon
大会现场以及世界各地的各种聚会中持续展现的热情。请继续保持！&lt;/p&gt;
&lt;!--
What continues to amaze me after all those years is the great things people are building on top of kubeadm, and as of today there is a strong and very active list of projects doing so:
- [minikube](https://minikube.sigs.k8s.io/)
- [kind](https://kind.sigs.k8s.io/)
- [Cluster API](https://cluster-api.sigs.k8s.io/)
- [Kubespray](https://kubespray.io/)
- and many more; if you are using Kubernetes today, there is a good chance that you are using kubeadm even without knowing it 😜
--&gt;
&lt;p&gt;这么多年来，对人们基于 kubeadm 构建的诸多项目我感到惊叹。迄今已经有很多强大而活跃的项目，例如：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://minikube.sigs.k8s.io/&#34;&gt;minikube&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://kind.sigs.k8s.io/&#34;&gt;kind&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://cluster-api.sigs.k8s.io/&#34;&gt;Cluster API&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://kubespray.io/&#34;&gt;Kubespray&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;还有更多；如果你正在使用 Kubernetes，很可能你在不知不觉中已经在使用 kubeadm 😜&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
This community, the kubeadm’s users, the projects building on top of kubeadm are the highlights of kubeadm’s 7th birthday celebration and the foundation for what will come next!
--&gt;
&lt;p&gt;这个社区、kubeadm 的用户以及基于 kubeadm 构建的项目，是 kubeadm 七周年庆典的亮点，也是未来发展的基础！&lt;/p&gt;
&lt;!--
Stay tuned, and feel free to reach out to us!
- Try [kubeadm](/docs/setup/) to install Kubernetes today
- Get involved with the Kubernetes project on [GitHub](https://github.com/kubernetes/kubernetes)
- Connect with the community on [Slack](http://slack.k8s.io/)
- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates
--&gt;
&lt;p&gt;请继续关注我们，并随时与我们联系！&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;现在尝试使用 &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/setup/&#34;&gt;kubeadm&lt;/a&gt; 安装 Kubernetes&lt;/li&gt;
&lt;li&gt;在 &lt;a href=&#34;https://github.com/kubernetes/kubernetes&#34;&gt;GitHub&lt;/a&gt; 参与 Kubernetes 项目&lt;/li&gt;
&lt;li&gt;在 &lt;a href=&#34;http://slack.k8s.io/&#34;&gt;Slack&lt;/a&gt; 与社区交流&lt;/li&gt;
&lt;li&gt;关注我们的 Twitter 账号 &lt;a href=&#34;https://twitter.com/kubernetesio&#34;&gt;@Kubernetesio&lt;/a&gt;，获取最近更新信息&lt;/li&gt;
&lt;/ul&gt;

      </description>
    </item>
    
    <item>
      <title>kubeadm：使用 etcd Learner 安全地接入控制平面节点</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/09/25/kubeadm-use-etcd-learner-mode/</link>
      <pubDate>Mon, 25 Sep 2023 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/09/25/kubeadm-use-etcd-learner-mode/</guid>
      <description>
        
        
        &lt;!--
layout: blog
title: &#39;kubeadm: Use etcd Learner to Join a Control Plane Node Safely&#39;
date: 2023-09-25
slug: kubeadm-use-etcd-learner-mode
--&gt;
&lt;!--
**Author:** Paco Xu (DaoCloud)
--&gt;
&lt;p&gt;&lt;strong&gt;作者:&lt;/strong&gt; Paco Xu (DaoCloud)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者:&lt;/strong&gt; &lt;a href=&#34;https://github.com/windsonsea&#34;&gt;Michael Yao&lt;/a&gt; (DaoCloud)&lt;/p&gt;
&lt;!--
The [`kubeadm`](/docs/reference/setup-tools/kubeadm/) tool now supports etcd learner mode, which
allows you to enhance the resilience and stability
of your Kubernetes clusters by leveraging the [learner mode](https://etcd.io/docs/v3.4/learning/design-learner/#appendix-learner-implementation-in-v34)
feature introduced in etcd version 3.4.
This guide will walk you through using etcd learner mode with kubeadm. By default, kubeadm runs
a local etcd instance on each control plane node.
--&gt;
&lt;p&gt;&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/reference/setup-tools/kubeadm/&#34;&gt;&lt;code&gt;kubeadm&lt;/code&gt;&lt;/a&gt; 工具现在支持 etcd learner 模式，
借助 etcd 3.4 版本引入的
&lt;a href=&#34;https://etcd.io/docs/v3.4/learning/design-learner/#appendix-learner-implementation-in-v34&#34;&gt;learner 模式&lt;/a&gt;特性，
可以提高 Kubernetes 集群的弹性和稳定性。本文将介绍如何在 kubeadm 中使用 etcd learner 模式。
默认情况下，kubeadm 在每个控制平面节点上运行一个本地 etcd 实例。&lt;/p&gt;
&lt;!--
In v1.27, kubeadm introduced a new feature gate `EtcdLearnerMode`. With this feature gate enabled,
when joining a new control plane node, a new etcd member will be created as a learner and
promoted to a voting member only after the etcd data are fully aligned.
--&gt;
&lt;p&gt;在 v1.27 中，kubeadm 引入了一个新的特性门控 &lt;code&gt;EtcdLearnerMode&lt;/code&gt;。
启用此特性门控后，在加入新的控制平面节点时，一个新的 etcd 成员将被创建为 learner，
只有在 etcd 数据被完全对齐后此成员才会晋升为投票成员。&lt;/p&gt;
&lt;!--
## What are the advantages of using learner mode?

etcd learner mode offers several compelling reasons to consider its adoption
in Kubernetes clusters:
--&gt;
&lt;h2 id=&#34;what-are-advantages-of-using-learner-mode&#34;&gt;使用 etcd learner 模式的优势是什么？  &lt;/h2&gt;
&lt;p&gt;在 Kubernetes 集群中采用 etcd learner 模式具有以下几个优点：&lt;/p&gt;
&lt;!--
1. **Enhanced Resilience**: etcd learner nodes are non-voting members that catch up with
   the leader&#39;s logs before becoming fully operational. This prevents new cluster members
   from disrupting the quorum or causing leader elections, making the cluster more resilient
   during membership changes.
1. **Reduced Cluster Unavailability**: Traditional approaches to adding new members often
   result in cluster unavailability periods, especially in slow infrastructure or misconfigurations.
   etcd learner mode minimizes such disruptions.
1. **Simplified Maintenance**: Learner nodes provide a safer and reversible way to add or replace
   cluster members. This reduces the risk of accidental cluster outages due to misconfigurations or
   missteps during member additions.
1. **Improved Network Tolerance**: In scenarios involving network partitions, learner mode allows
   for more graceful handling. Depending on the partition a new member lands, it can seamlessly
   integrate with the existing cluster without causing disruptions.
--&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;增强了弹性&lt;/strong&gt;：etcd learner 节点是非投票成员，在完全投入运行之前会先追平领导者的日志。
这样可以防止新的集群成员破坏法定人数（quorum）或引发领导者选举，从而使集群在成员变更期间更具弹性。&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;减少了集群不可用时间&lt;/strong&gt;：传统的添加新成员的方法通常会造成一段时间集群不可用，特别是在基础设施迟缓或误配的情况下更为明显。
而 etcd learner 模式可以最大程度地减少此类干扰。&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;简化了维护&lt;/strong&gt;：learner 节点提供了一种更安全、可逆的方式来添加或替换集群成员。
这降低了由于误配或在成员添加过程中出错而导致集群意外失效的风险。&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;改进了网络容错性&lt;/strong&gt;：在涉及网络分区的场景中，learner 模式允许更优雅的处理。
根据新成员所落入的分区，它可以无缝地与现有集群集成，而不会造成中断。&lt;/li&gt;
&lt;/ol&gt;
&lt;!--
In summary, the etcd learner mode improves the reliability and manageability of Kubernetes clusters
during member additions and changes, making it a valuable feature for cluster operators.
--&gt;
&lt;p&gt;总之，etcd learner 模式可以在成员添加和变更期间提高 Kubernetes 集群的可靠性和可管理性，
这个特性对集群运营人员很有价值。&lt;/p&gt;
&lt;!--
## How nodes join a cluster that&#39;s using the new mode

### Create a Kubernetes cluster backed by etcd in learner mode {#create-K8s-cluster-etcd-learner-mode}
--&gt;
&lt;h2 id=&#34;how-nodes-join-cluster-that-using-new-node&#34;&gt;节点如何接入使用这种新模式的集群  &lt;/h2&gt;
&lt;h3 id=&#34;create-K8s-cluster-etcd-learner-mode&#34;&gt;创建以 etcd learner 模式支撑的 Kubernetes 集群 &lt;/h3&gt;
&lt;!--
For a general explanation about creating highly available clusters with kubeadm, you can refer to
[Creating Highly Available Clusters with kubeadm](/docs/setup/production-environment/tools/kubeadm/high-availability/).

To create a Kubernetes cluster, backed by etcd in learner mode, using kubeadm, follow these steps:
--&gt;
&lt;p&gt;关于使用 kubeadm 创建高可用集群的通用说明，
请参阅&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/setup/production-environment/tools/kubeadm/high-availability/&#34;&gt;使用 kubeadm 创建高可用集群&lt;/a&gt;。&lt;/p&gt;
&lt;p&gt;要使用 kubeadm 创建一个由 learner 模式 etcd 支撑的 Kubernetes 集群，请按照以下步骤操作：&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# kubeadm init --feature-gates=EtcdLearnerMode=true ...&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;kubeadm init --config&lt;span style=&#34;color:#666&#34;&gt;=&lt;/span&gt;kubeadm-config.yaml
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
The kubeadm configuration file is like below:
--&gt;
&lt;p&gt;kubeadm 配置文件如下：&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;kubeadm.k8s.io/v1beta3&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;ClusterConfiguration&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;featureGates&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;EtcdLearnerMode&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;true&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
The kubeadm tool deploys a single-node Kubernetes cluster with etcd set to use learner mode.
--&gt;
&lt;p&gt;这里，kubeadm 工具部署单节点 Kubernetes 集群，其中的 etcd 被设置为 learner 模式。&lt;/p&gt;
&lt;!--
### Join nodes to the Kubernetes cluster

Before joining a control-plane node to the new Kubernetes cluster, ensure that the existing control plane nodes
and all etcd members are healthy.

Check the cluster health with `etcdctl`. If `etcdctl` isn&#39;t available, you can run this tool inside a container image.
You would do that directly with your container runtime using a tool such as `crictl run` and not through Kubernetes

Here is an example on a client command that uses secure communication to check the cluster health of the etcd cluster:
--&gt;
&lt;h3 id=&#34;join-nodes-to-the-kubernetes-cluster&#34;&gt;将节点接入 Kubernetes 集群  &lt;/h3&gt;
&lt;p&gt;在将控制平面节点接入新的 Kubernetes 集群之前，确保现有的控制平面节点和所有 etcd 成员都健康。&lt;/p&gt;
&lt;p&gt;使用 &lt;code&gt;etcdctl&lt;/code&gt; 检查集群的健康状况。如果 &lt;code&gt;etcdctl&lt;/code&gt; 不可用，你可以在容器镜像内运行此工具。
此时应直接使用 &lt;code&gt;crictl run&lt;/code&gt; 这类容器运行时工具来执行，而不是通过 Kubernetes。&lt;/p&gt;
&lt;p&gt;以下是一个使用安全通信来检查 etcd 集群健康状况的客户端命令示例：&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b8860b&#34;&gt;ETCDCTL_API&lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;=&lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;3&lt;/span&gt; etcdctl --endpoints 127.0.0.1:2379 &lt;span style=&#34;color:#b62;font-weight:bold&#34;&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b62;font-weight:bold&#34;&gt;&lt;/span&gt;  --cert&lt;span style=&#34;color:#666&#34;&gt;=&lt;/span&gt;/etc/kubernetes/pki/etcd/server.crt &lt;span style=&#34;color:#b62;font-weight:bold&#34;&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b62;font-weight:bold&#34;&gt;&lt;/span&gt;  --key&lt;span style=&#34;color:#666&#34;&gt;=&lt;/span&gt;/etc/kubernetes/pki/etcd/server.key &lt;span style=&#34;color:#b62;font-weight:bold&#34;&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b62;font-weight:bold&#34;&gt;&lt;/span&gt;  --cacert&lt;span style=&#34;color:#666&#34;&gt;=&lt;/span&gt;/etc/kubernetes/pki/etcd/ca.crt &lt;span style=&#34;color:#b62;font-weight:bold&#34;&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b62;font-weight:bold&#34;&gt;&lt;/span&gt;  member list
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;...
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;dc543c4d307fadb9, started, node1, https://10.6.177.40:2380, https://10.6.177.40:2379, &lt;span style=&#34;color:#a2f&#34;&gt;false&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
To check if the Kubernetes control plane is healthy, run `kubectl get node -l node-role.kubernetes.io/control-plane=`
and check if the nodes are ready.
--&gt;
&lt;p&gt;要检查 Kubernetes 控制平面是否健康，运行 &lt;code&gt;kubectl get node -l node-role.kubernetes.io/control-plane=&lt;/code&gt;
并检查节点是否就绪。&lt;/p&gt;
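&lt;p&gt;例如（输出仅为示意，节点名称、数量与版本因集群而异）：&lt;/p&gt;

```shell
# 列出所有带 control-plane 角色标签的节点，确认其 STATUS 为 Ready
kubectl get node -l node-role.kubernetes.io/control-plane=
# NAME    STATUS   ROLES           AGE   VERSION
# node1   Ready    control-plane   10m   v1.27.1
```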

&lt;div class=&#34;alert alert-info&#34; role=&#34;alert&#34;&gt;&lt;h4 class=&#34;alert-heading&#34;&gt;说明：&lt;/h4&gt;&lt;!--
It is recommended to have an odd number of members in an etcd cluster.
--&gt;
&lt;p&gt;建议 etcd 集群中的成员个数为奇数。&lt;/p&gt;&lt;/div&gt;

&lt;!--
Before joining a worker node to the new Kubernetes cluster, ensure that the control plane nodes are healthy.
--&gt;
&lt;p&gt;在将工作节点接入新的 Kubernetes 集群之前，确保控制平面节点健康。&lt;/p&gt;
&lt;!--
## What&#39;s next

- The feature gate `EtcdLearnerMode` is alpha in v1.27 and we expect it to graduate to beta in the next
  minor release of Kubernetes (v1.29).
- etcd has an open issue that may make the process more automatic:
  [Support auto-promoting a learner member to a voting member](https://github.com/etcd-io/etcd/issues/15107).
- Learn more about the kubeadm [configuration format](/docs/reference/config-api/kubeadm-config.v1beta3/).
--&gt;
&lt;h2 id=&#34;whats-next&#34;&gt;接下来的步骤  &lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;特性门控 &lt;code&gt;EtcdLearnerMode&lt;/code&gt; 在 v1.27 中为 Alpha，预计会在 Kubernetes 的下一个小版本发布（v1.29）中进阶至 Beta。&lt;/li&gt;
&lt;li&gt;etcd 社区有一个开放问题，目的是使这个过程更加自动化：
&lt;a href=&#34;https://github.com/etcd-io/etcd/issues/15107&#34;&gt;支持自动将 learner 成员晋升为投票成员&lt;/a&gt;。&lt;/li&gt;
&lt;li&gt;更多细节参阅 kubeadm &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/reference/config-api/kubeadm-config.v1beta3/&#34;&gt;配置格式&lt;/a&gt;。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
## Feedback

Was this guide helpful? If you have any feedback or encounter any issues, please let us know.
Your feedback is always welcome! Join the bi-weekly [SIG Cluster Lifecycle meeting](https://docs.google.com/document/d/1Gmc7LyCIL_148a9Tft7pdhdee0NBHdOfHS1SAF0duI4/edit)
or weekly [kubeadm office hours](https://docs.google.com/document/d/130_kiXjG7graFNSnIAgtMS1G8zPDwpkshgfRYS0nggo/edit).
Or reach us via [Slack](https://slack.k8s.io/) (channel **#kubeadm**), or the
[SIG&#39;s mailing list](https://groups.google.com/g/kubernetes-sig-cluster-lifecycle).
--&gt;
&lt;h2 id=&#34;feedback&#34;&gt;反馈  &lt;/h2&gt;
&lt;p&gt;本文对你有帮助吗？如果你有任何反馈或遇到任何问题，请告诉我们。
非常欢迎你提出反馈！你可以参加 &lt;a href=&#34;https://docs.google.com/document/d/1Gmc7LyCIL_148a9Tft7pdhdee0NBHdOfHS1SAF0duI4/edit&#34;&gt;SIG Cluster Lifecycle 双周例会&lt;/a&gt;
或 &lt;a href=&#34;https://docs.google.com/document/d/130_kiXjG7graFNSnIAgtMS1G8zPDwpkshgfRYS0nggo/edit&#34;&gt;kubeadm 每周讨论会&lt;/a&gt;。
你还可以通过 &lt;a href=&#34;https://slack.k8s.io/&#34;&gt;Slack&lt;/a&gt;（频道 &lt;strong&gt;#kubeadm&lt;/strong&gt;）或
&lt;a href=&#34;https://groups.google.com/g/kubernetes-sig-cluster-lifecycle&#34;&gt;SIG 邮件列表&lt;/a&gt;联系我们。&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>用户命名空间：对运行有状态 Pod 的支持进入 Alpha 阶段!</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/09/13/userns-alpha/</link>
      <pubDate>Wed, 13 Sep 2023 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/09/13/userns-alpha/</guid>
      <description>
        
        
        &lt;!--
layout: blog
title: &#34;User Namespaces: Now Supports Running Stateful Pods in Alpha!&#34;
date: 2023-09-13
slug: userns-alpha
--&gt;
&lt;!--
**Authors:** Rodrigo Campos Catelin (Microsoft), Giuseppe Scrivano (Red Hat), Sascha Grunert (Red Hat)
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Rodrigo Campos Catelin (Microsoft), Giuseppe Scrivano (Red Hat), Sascha Grunert (Red Hat)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者：&lt;/strong&gt; Xin Li (DaoCloud)&lt;/p&gt;
&lt;!--
Kubernetes v1.25 introduced support for user namespaces for only stateless
pods. Kubernetes 1.28 lifted that restriction, after some design changes were
done in 1.27.
--&gt;
&lt;p&gt;Kubernetes v1.25 引入用户命名空间（User Namespace）特性，仅支持无状态（Stateless）Pod。
在 1.27 中完成一些设计变更后，Kubernetes 1.28 取消了这一限制。&lt;/p&gt;
&lt;!--
The beauty of this feature is that:
 * it is trivial to adopt (you just need to set a bool in the pod spec)
 * doesn&#39;t need any changes for **most** applications
 * improves security by _drastically_ enhancing the isolation of containers and
   mitigating CVEs rated HIGH and CRITICAL.
--&gt;
&lt;p&gt;此特性的精妙之处在于：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;使用起来很简单（只需在 Pod 规约（spec）中设置一个 bool）&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;大多数&lt;/strong&gt;应用程序不需要任何更改&lt;/li&gt;
&lt;li&gt;通过&lt;strong&gt;大幅度&lt;/strong&gt;加强容器的隔离性以及应对评级为高（HIGH）和关键（CRITICAL）的 CVE 来提高安全性。&lt;/li&gt;
&lt;/ul&gt;
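&lt;p&gt;这里所说的布尔字段即 Pod 规约中的 &lt;code&gt;hostUsers&lt;/code&gt;。一个启用用户命名空间的最小
Pod 规约示意如下（Pod 与镜像名称仅为示例，具体请以官方文档为准）：&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: userns-demo        # 示例名称
spec:
  hostUsers: false         # 设置为 false 即为该 Pod 启用用户命名空间
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
&lt;/code&gt;&lt;/pre&gt;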
&lt;!--
This post explains the basics of user namespaces and also shows:
 * the changes that arrived in the recent Kubernetes v1.28 release
 * a **demo of a vulnerability rated as HIGH** that is not exploitable with user namespaces
 * the runtime requirements to use this feature
 * what you can expect in future releases regarding user namespaces.
--&gt;
&lt;p&gt;这篇文章介绍了用户命名空间的基础知识，并展示了：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;最近的 Kubernetes v1.28 版本中出现的变化&lt;/li&gt;
&lt;li&gt;一个评级为&lt;strong&gt;高（HIGH）的漏洞的演示（Demo）&lt;/strong&gt;，该漏洞无法在用户命名空间中被利用&lt;/li&gt;
&lt;li&gt;使用此特性的运行时要求&lt;/li&gt;
&lt;li&gt;关于用户命名空间的未来版本中可以期待的内容&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
## What is a user namespace?

A user namespace is a Linux feature that isolates the user and group identifiers
(UIDs and GIDs) of the containers from the ones on the host. The indentifiers
in the container can be mapped to indentifiers on the host in a way where the
host UID/GIDs used for different containers never overlap. Even more, the
identifiers can be mapped to *unprivileged* non-overlapping UIDs and GIDs on the
host. This basically means two things:
--&gt;
&lt;h2 id=&#34;用户命名空间是什么&#34;&gt;用户命名空间是什么？&lt;/h2&gt;
&lt;p&gt;用户命名空间是 Linux 的一项特性，它将容器的用户和组标识符（UID 和 GID）与宿主机上的标识符隔离开来。
容器中的标识符可以映射到宿主机上的标识符，其中用于不同容器的主机 UID/GID 从不重叠。
更重要的是，标识符可以映射到宿主机上的&lt;strong&gt;非特权&lt;/strong&gt;、非重叠的 UID 和 GID。这基本上意味着两件事：&lt;/p&gt;
&lt;!--
 * As the UIDs and GIDs for different containers are mapped to different UIDs
   and GIDs on the host, containers have a harder time to attack each other even
   if they escape the container boundaries. For example, if container A is running
   with different UIDs and GIDs on the host than container B, the operations it
   can do on container B&#39;s files and process are limited: only read/write what a
   file allows to others, as it will never have permission for the owner or
   group (the UIDs/GIDs on the host are guaranteed to be different for
   different containers).
--&gt;
&lt;ul&gt;
&lt;li&gt;由于不同容器的 UID 和 GID 映射到宿主机上不同的 UID 和 GID，因此即使它们逃逸出了容器的边界，也很难相互攻击。
例如，如果容器 A 在宿主机上使用与容器 B 不同的 UID 和 GID 运行，则它能对容器 B
的文件和进程执行的操作是受限的：只能按文件向其他用户（others）开放的权限进行读/写，
因为它永远不会获得属主或属组的权限（宿主机上的 UID/GID 保证对于不同的容器是不同的）。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
 * As the UIDs and GIDs are mapped to unprivileged users on the host, if a
   container escapes the container boundaries, even if it is running as root
   inside the container, it has no privileges on the host. This greatly
   protects what host files it can read/write, which process it can send signals
   to, etc.

Furthermore, capabilities granted are only valid inside the user namespace and
not on the host.
--&gt;
&lt;ul&gt;
&lt;li&gt;由于 UID 和 GID 映射到宿主机上的非特权用户，如果容器逃逸出了容器边界，
即使它在容器内以 root 身份运行，它在宿主机上也没有特权。
这极大地保护了它可以读/写哪些宿主机文件、可以向哪个进程发送信号等。&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;此外，所授予的权能（Capability）仅在用户命名空间内有效，而在宿主机上无效。&lt;/p&gt;
&lt;!--
Without using a user namespace a container running as root, in the case of a
container breakout, has root privileges on the node. And if some capabilities
were granted to the container, the capabilities are valid on the host too. None
of this is true when using user namespaces (modulo bugs, of course 🙂).
--&gt;
&lt;p&gt;在不使用用户命名空间的情况下，以 root 身份运行的容器在发生逃逸的情况下会获得节点上的
root 权限。如果某些权能被授予容器，那么这些权能在主机上也有效。
当使用用户命名空间时，这些情况都会被避免（当然，除非存在漏洞 🙂）。&lt;/p&gt;
&lt;!--
## Changes in 1.28

As already mentioned, starting from 1.28, Kubernetes supports user namespaces
with stateful pods. This means that pods with user namespaces can use any type
of volume, they are no longer limited to only some volume types as before.
--&gt;
&lt;h2 id=&#34;1-28-版本的变化&#34;&gt;1.28 版本的变化&lt;/h2&gt;
&lt;p&gt;正如之前提到的，从 1.28 版本开始，Kubernetes 支持在有状态 Pod 中使用用户命名空间。
这意味着启用用户命名空间的 Pod 可以使用任何类型的卷，不再像以前那样仅限于部分卷类型。&lt;/p&gt;
&lt;!--
The feature gate to activate this feature was renamed, it is no longer
`UserNamespacesStatelessPodsSupport` but from 1.28 onwards you should use
`UserNamespacesSupport`. There were many changes done and the requirements on
the node hosts changed. So with Kubernetes 1.28 the feature flag was renamed to
reflect this.
--&gt;
&lt;p&gt;从 1.28 版本开始，用于激活此特性的特性门控已被重命名，不再是 &lt;code&gt;UserNamespacesStatelessPodsSupport&lt;/code&gt;，
而应该使用 &lt;code&gt;UserNamespacesSupport&lt;/code&gt;。此特性经历了许多更改，
对节点主机的要求也发生了变化。因此，Kubernetes 1.28 版本将该特性标志重命名以反映这一变化。&lt;/p&gt;
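&lt;p&gt;作为示意，在 1.28 上试用此 Alpha 特性时，需要在相关组件（例如 kubelet）上启用重命名后的特性门控，
大致如下（标志与配置写法仅作示例）：&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# 通过命令行标志启用（示例）
kubelet --feature-gates=UserNamespacesSupport=true ...

# 或者在 kubelet 配置文件（KubeletConfiguration）中启用（示例）
featureGates:
  UserNamespacesSupport: true
&lt;/code&gt;&lt;/pre&gt;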
&lt;!--
## Demo

Rodrigo created a demo which exploits [CVE 2022-0492][cve-link] and shows how
the exploit can occur without user namespaces. He also shows how it is not
possible to use this exploit from a Pod where the containers are using this
feature.
--&gt;
&lt;h2 id=&#34;演示&#34;&gt;演示&lt;/h2&gt;
&lt;p&gt;Rodrigo 创建了一个利用 &lt;a href=&#34;https://unit42.paloaltonetworks.com/cve-2022-0492-cgroups/&#34;&gt;CVE 2022-0492&lt;/a&gt; 的演示，
用以展现如何在没有用户命名空间的情况下利用该漏洞。
他还展示了：在容器启用此特性的 Pod 中，此漏洞无法被利用。&lt;/p&gt;
&lt;!--
This vulnerability is rated **HIGH** and allows **a container with no special
privileges to read/write to any path on the host** and launch processes as root
on the host too.


&lt;div style=&#34;position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;&#34;&gt;
  &lt;iframe src=&#34;https://www.youtube.com/embed/M4a2b4KkXN8&#34; style=&#34;position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;&#34; allowfullscreen title=&#34;Mitigation of CVE-2022-0492 on Kubernetes by enabling User Namespace support&#34;&gt;&lt;/iframe&gt;
&lt;/div&gt;

--&gt;
&lt;p&gt;此漏洞被评为高危，允许一个没有特殊特权的容器读/写宿主机上的任何路径，并在宿主机上以 root 身份启动进程。&lt;/p&gt;

&lt;div style=&#34;position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;&#34;&gt;
  &lt;iframe src=&#34;https://www.youtube.com/embed/M4a2b4KkXN8&#34; style=&#34;position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;&#34; allowfullscreen title=&#34;Mitigation of CVE-2022-0492 on Kubernetes by enabling User Namespace support&#34;&gt;&lt;/iframe&gt;
&lt;/div&gt;

&lt;!--
Most applications in containers run as root today, or as a semi-predictable
non-root user (user ID 65534 is a somewhat popular choice). When you run a Pod
with containers using a userns, Kubernetes runs those containers as unprivileged
users, with no changes needed in your app.
--&gt;
&lt;p&gt;如今，容器中的大多数应用程序都以 root 身份运行，或者以半可预测的非 root
用户身份运行（用户 ID 65534 是一个比较流行的选择）。
当你运行某个 Pod，而其中带有使用用户命名空间（userns）的容器时，Kubernetes
以非特权用户身份运行这些容器，无需在你的应用程序中进行任何更改。&lt;/p&gt;
&lt;!--
This means two containers running as user 65534 will effectively be mapped to
different users on the host, limiting what they can do to each other in case of
an escape, and if they are running as root, the privileges on the host are
reduced to the one of an unprivileged user.

[cve-link]: https://unit42.paloaltonetworks.com/cve-2022-0492-cgroups/
--&gt;
&lt;p&gt;这意味着两个以用户 65534 身份运行的容器实际上会被映射到宿主机上的不同用户，
从而限制了它们在发生逃逸的情况下能够对彼此执行的操作，如果它们以 root 身份运行，
宿主机上的特权也会降低到非特权用户的权限。&lt;/p&gt;
&lt;!--
## Node system requirements

There are requirements on the Linux kernel version as well as the container
runtime to use this feature.
--&gt;
&lt;h2 id=&#34;节点系统要求&#34;&gt;节点系统要求&lt;/h2&gt;
&lt;p&gt;要使用此功能，对 Linux 内核版本以及容器运行时有一定要求。&lt;/p&gt;
&lt;!--
On Linux you need Linux 6.3 or greater. This is because the feature relies on a
kernel feature named idmap mounts, and support to use idmap mounts with tmpfs
was merged in Linux 6.3.

If you are using CRI-O with crun, this is [supported in CRI-O
1.28.1][CRIO-release] and crun 1.9 or greater. If you are using CRI-O with runc,
this is still not supported.
--&gt;
&lt;p&gt;在 Linux 上，你需要 Linux 6.3 或更高版本。这是因为该特性依赖于名为
idmap mounts 的内核特性，而针对 tmpfs 使用 idmap mounts 的支持是在 Linux 6.3 中合并的。&lt;/p&gt;
&lt;p&gt;如果你使用 CRI-O 与 crun，这一特性在 &lt;a href=&#34;https://github.com/cri-o/cri-o/releases/tag/v1.28.1&#34;&gt;CRI-O 1.28.1&lt;/a&gt; 和 crun 1.9 或更高版本中受支持。
如果你使用 CRI-O 与 runc，目前仍不受支持。&lt;/p&gt;
&lt;!--
containerd support is currently targeted for containerd 2.0; it is likely that
it won&#39;t matter if you use it with crun or runc.

Please note that containerd 1.7 added _experimental_ support for user
namespaces as implemented in Kubernetes 1.25 and 1.26. The redesign done in 1.27
is not supported by containerd 1.7, therefore it only works, in terms of user
namespaces support, with Kubernetes 1.25 and 1.26.
--&gt;
&lt;p&gt;containerd 对此特性的支持目前计划在 containerd 2.0 中提供；届时无论搭配 crun 还是 runc 使用，很可能都没有区别。&lt;/p&gt;
&lt;p&gt;请注意，containerd 1.7 添加了对用户命名空间的&lt;strong&gt;实验性&lt;/strong&gt;支持，对应于
Kubernetes 1.25 和 1.26 中的实现。containerd 1.7 不支持 1.27 版本中完成的重新设计，
因此就用户命名空间支持而言，它仅适用于 Kubernetes 1.25 和 1.26。&lt;/p&gt;
&lt;!--
One limitation present in containerd 1.7 is that it needs to change the
ownership of every file and directory inside the container image, during Pod
startup. This means it has a storage overhead and can significantly impact the
container startup latency. Containerd 2.0 will probably include a implementation
that will eliminate the startup latency added and the storage overhead. Take
this into account if you plan to use containerd 1.7 with user namespaces in
production.

None of these containerd limitations apply to [CRI-O 1.28][CRIO-release].

[CRIO-release]: https://github.com/cri-o/cri-o/releases/tag/v1.28.1
--&gt;
&lt;p&gt;containerd 1.7 存在的一个限制是，在 Pod 启动期间需要更改容器镜像中每个文件和目录的所有权。
这意味着它存在存储开销，并且可能会显著增加容器的启动延迟。containerd 2.0
可能会包含新的实现，从而消除额外的启动延迟和存储开销。如果你计划在生产环境中将
containerd 1.7 与用户命名空间一起使用，请将这一点纳入考虑。&lt;/p&gt;
&lt;p&gt;这些 containerd 限制均不适用于 &lt;a href=&#34;https://github.com/cri-o/cri-o/releases/tag/v1.28.1&#34;&gt;CRI-O 1.28&lt;/a&gt;。&lt;/p&gt;
&lt;!--
## What’s next?

Looking ahead to Kubernetes 1.29, the plan is to work with SIG Auth to integrate user
namespaces to Pod Security Standards (PSS) and the Pod Security Admission. For
the time being, the plan is to relax checks in PSS policies when user namespaces are
in use. This means that the fields `spec[.*].securityContext` `runAsUser`,
`runAsNonRoot`, `allowPrivilegeEscalation` and `capabilities` will not trigger a
violation if user namespaces are in use. The behavior will probably be controlled by
utilizing a API Server feature gate, like `UserNamespacesPodSecurityStandards`
or similar.
--&gt;
&lt;h2 id=&#34;接下来&#34;&gt;接下来？&lt;/h2&gt;
&lt;p&gt;展望 Kubernetes 1.29，计划是与 SIG Auth 合作，将用户命名空间集成到 Pod 安全标准（PSS）和 Pod 安全准入中。
目前的计划是在使用用户命名空间时放宽 PSS 策略中的检查。这意味着如果使用用户命名空间，
&lt;code&gt;spec[.*].securityContext&lt;/code&gt; 中的 &lt;code&gt;runAsUser&lt;/code&gt;、&lt;code&gt;runAsNonRoot&lt;/code&gt;、&lt;code&gt;allowPrivilegeEscalation&lt;/code&gt; 和 &lt;code&gt;capabilities&lt;/code&gt;
字段将不会触发违规。此行为可能会通过 API Server 特性门控（例如 &lt;code&gt;UserNamespacesPodSecurityStandards&lt;/code&gt; 或类似名称）来控制。&lt;/p&gt;
&lt;!--
## How do I get involved?

You can reach SIG Node by several means:
- Slack: [#sig-node](https://kubernetes.slack.com/messages/sig-node)
- [Mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-node)
- [Open Community Issues/PRs](https://github.com/kubernetes/community/labels/sig%2Fnode)

You can also contact us directly:
- GitHub: @rata @giuseppe @saschagrunert
- Slack: @rata @giuseppe @sascha
--&gt;
&lt;h2 id=&#34;我该如何参与&#34;&gt;我该如何参与？&lt;/h2&gt;
&lt;p&gt;你可以通过以下方式与 SIG Node 联系：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Slack：&lt;a href=&#34;https://kubernetes.slack.com/messages/sig-node&#34;&gt;#sig-node&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://groups.google.com/forum/#!forum/kubernetes-sig-node&#34;&gt;Mailing list&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/kubernetes/community/labels/sig%2Fnode&#34;&gt;Open Community Issues/PRs&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;你还可以直接联系我们：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;GitHub：@rata @giuseppe @saschagrunert&lt;/li&gt;
&lt;li&gt;Slack：@rata @giuseppe @sascha&lt;/li&gt;
&lt;/ul&gt;

      </description>
    </item>
    
    <item>
      <title>比较本地 Kubernetes 开发工具：Telepresence、Gefyra 和 mirrord</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/09/12/local-k8s-development-tools/</link>
      <pubDate>Tue, 12 Sep 2023 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/09/12/local-k8s-development-tools/</guid>
      <description>
        
        
        &lt;!--
layout: blog
title: &#39;Comparing Local Kubernetes Development Tools: Telepresence, Gefyra, and mirrord&#39;
date: 2023-09-12
slug: local-k8s-development-tools
--&gt;
&lt;!--
**Author:** Eyal Bukchin (MetalBear)
--&gt;
&lt;p&gt;&lt;strong&gt;作者:&lt;/strong&gt; Eyal Bukchin (MetalBear)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者:&lt;/strong&gt; &lt;a href=&#34;https://github.com/windsonsea&#34;&gt;Michael Yao&lt;/a&gt; (DaoCloud)&lt;/p&gt;
&lt;!--
The Kubernetes development cycle is an evolving landscape with a myriad of tools seeking to streamline the process. Each tool has its unique approach, and the choice often comes down to individual project requirements, the team&#39;s expertise, and the preferred workflow.
--&gt;
&lt;p&gt;Kubernetes 的开发周期是一个不断演化的领域，有许多工具在寻求简化这个过程。
每个工具都有其独特的方法，具体选择通常取决于各个项目的要求、团队的专业知识以及所偏好的工作流。&lt;/p&gt;
&lt;!--
Among the various solutions, a category we dubbed “Local K8S Development tools” has emerged, which seeks to enhance the Kubernetes development experience by connecting locally running components to the Kubernetes cluster. This facilitates rapid testing of new code in cloud conditions, circumventing the traditional cycle of Dockerization, CI, and deployment.

In this post, we compare three solutions in this category: Telepresence, Gefyra, and our own contender, mirrord.
--&gt;
&lt;p&gt;在各种解决方案中，我们称之为“本地 K8S 开发工具”的一个类别已渐露端倪，
这一类方案通过将本地运行的组件连接到 Kubernetes 集群来提升 Kubernetes 开发体验。
这样可以在云环境中快速测试新代码，避开了 Docker 化、CI 和部署这样的传统周期。&lt;/p&gt;
&lt;p&gt;在本文中，我们将比较这个类别中的三个解决方案：Telepresence、Gefyra 和我们自己的挑战者 mirrord。&lt;/p&gt;
&lt;h2 id=&#34;telepresence&#34;&gt;Telepresence&lt;/h2&gt;
&lt;!--
The oldest and most well-established solution in the category, [Telepresence](https://www.telepresence.io/) uses a VPN (or more specifically, a `tun` device) to connect the user&#39;s machine (or a locally running container) and the cluster&#39;s network. It then supports the interception of incoming traffic to a specific service in the cluster, and its redirection to a local port. The traffic being redirected can also be filtered to avoid completely disrupting the remote service. It also offers complementary features to support file access (by locally mounting a volume mounted to a pod) and importing environment variables.
Telepresence requires the installation of a local daemon on the user&#39;s machine (which requires root privileges) and a Traffic Manager component on the cluster. Additionally, it runs an Agent as a sidecar on the pod to intercept the desired traffic.
--&gt;
&lt;p&gt;&lt;a href=&#34;https://www.telepresence.io/&#34;&gt;Telepresence&lt;/a&gt; 是这类工具中最早也最成熟的解决方案，
它使用 VPN（或更具体地说，一个 &lt;code&gt;tun&lt;/code&gt; 设备）将用户的机器（或本地运行的容器）与集群的网络相连。
它支持拦截发送到集群中特定服务的传入流量，并将其重定向到本地端口。
被重定向的流量还可以被过滤，以避免完全破坏远程服务。
它还提供了一些补充特性，如支持文件访问（通过本地挂载卷将其挂载到 Pod 上）和导入环境变量。
Telepresence 需要在用户的机器上安装一个本地守护进程（需要 root 权限），并在集群上运行一个
Traffic Manager 组件。此外，它在 Pod 上以边车的形式运行一个 Agent 来拦截所需的流量。&lt;/p&gt;
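&lt;p&gt;作为示意，Telepresence 的典型用法大致如下（服务名与端口仅为示例，具体命令以官方文档为准）：&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# 将本地机器接入集群网络（会安装本地守护进程，需要 root 权限）
telepresence connect

# 拦截发往集群中 my-service 的流量，重定向到本地 8080 端口
telepresence intercept my-service --port 8080:80
&lt;/code&gt;&lt;/pre&gt;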
&lt;h2 id=&#34;gefyra&#34;&gt;Gefyra&lt;/h2&gt;
&lt;!--
[Gefyra](https://gefyra.dev/), similar to Telepresence, employs a VPN to connect to the cluster. However, it only supports connecting locally running Docker containers to the cluster. This approach enhances portability across different OSes and local setups. However, the downside is that it does not support natively run uncontainerized code.
--&gt;
&lt;p&gt;&lt;a href=&#34;https://gefyra.dev/&#34;&gt;Gefyra&lt;/a&gt; 与 Telepresence 类似，也采用 VPN 连接到集群。
但 Gefyra 只支持将本地运行的 Docker 容器连接到集群。
这种方法增强了在不同操作系统和本地设置环境之间的可移植性。
然而，它的缺点是不支持原生运行非容器化的代码。&lt;/p&gt;
&lt;!--
Gefyra primarily focuses on network traffic, leaving file access and environment variables unsupported. Unlike Telepresence, it doesn&#39;t alter the workloads in the cluster, ensuring a straightforward clean-up process if things go awry.
--&gt;
&lt;p&gt;Gefyra 主要关注网络流量，不支持文件访问和环境变量。
与 Telepresence 不同，Gefyra 不会改变集群中的工作负载，
因此如果发生意外情况，清理过程更加简单明了。&lt;/p&gt;
&lt;h2 id=&#34;mirrord&#34;&gt;mirrord&lt;/h2&gt;
&lt;!--
The newest of the three tools, [mirrord](https://mirrord.dev/) adopts a different approach by injecting itself
into the local binary (utilizing `LD_PRELOAD` on Linux or `DYLD_INSERT_LIBRARIES` on macOS),
and overriding libc function calls, which it then proxies a temporary agent it runs in the cluster.
For example, when the local process tries to read a file mirrord intercepts that call and sends it
to the agent, which then reads the file from the remote pod. This method allows mirrord to cover
all inputs and outputs to the process – covering network access, file access, and
environment variables uniformly.
--&gt;
&lt;p&gt;作为这三个工具中最新的一个，&lt;a href=&#34;https://mirrord.dev/&#34;&gt;mirrord&lt;/a&gt; 采用了一种不同的方法：
它将自身注入到本地二进制文件中（在 Linux 上利用 &lt;code&gt;LD_PRELOAD&lt;/code&gt;，在 macOS 上利用 &lt;code&gt;DYLD_INSERT_LIBRARIES&lt;/code&gt;），
并重载 libc 函数调用，再将这些调用代理给它在集群中运行的一个临时 Agent。
例如，当本地进程尝试读取一个文件时，mirrord 会拦截该调用并将其发送给该 Agent，
后者再从远程 Pod 读取文件。这种方法使 mirrord 能够覆盖进程的所有输入和输出，统一处理网络访问、文件访问和环境变量。&lt;/p&gt;
&lt;!--
By working at the process level, mirrord supports running multiple local processes simultaneously, each in the context of their respective pod in the cluster, without requiring them to be containerized and without needing root permissions on the user’s machine.
--&gt;
&lt;p&gt;通过在进程级别工作，mirrord 支持同时运行多个本地进程，每个进程都在集群中的相应 Pod 上下文中运行，
无需将这些进程容器化，也无需在用户机器上获取 root 权限。&lt;/p&gt;
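&lt;p&gt;作为示意，mirrord 的典型用法大致如下（目标 Pod 名与本地命令仅为示例，具体以官方文档为准）：&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# 在集群中 my-pod 的上下文中运行本地进程，
# 网络访问、文件访问和环境变量都会被代理到远程 Pod
mirrord exec --target pod/my-pod -- python app.py
&lt;/code&gt;&lt;/pre&gt;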
&lt;!--
## Summary
--&gt;
&lt;h2 id=&#34;summary&#34;&gt;摘要  &lt;/h2&gt;
&lt;table&gt;
&lt;!--
&lt;caption&gt;Comparison of Telepresence, Gefyra, and mirrord&lt;/caption&gt;
--&gt;
&lt;caption&gt;比较 Telepresence、Gefyra 和 mirrord&lt;/caption&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;td class=&#34;empty&#34;&gt;&lt;/td&gt;
&lt;th&gt;Telepresence&lt;/th&gt;
&lt;th&gt;Gefyra&lt;/th&gt;
&lt;th&gt;mirrord&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;!--
&lt;th scope=&#34;row&#34;&gt;Cluster connection scope&lt;/th&gt;
&lt;td&gt;Entire machine or container&lt;/td&gt;
&lt;td&gt;Container&lt;/td&gt;
&lt;td&gt;Process&lt;/td&gt;
--&gt;
&lt;th scope=&#34;row&#34;&gt;集群连接作用域&lt;/th&gt;
&lt;td&gt;整台机器或容器&lt;/td&gt;
&lt;td&gt;容器&lt;/td&gt;
&lt;td&gt;进程&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;!--
&lt;th scope=&#34;row&#34;&gt;Developer OS support&lt;/th&gt;
&lt;td&gt;Linux, macOS, Windows&lt;/td&gt;
&lt;td&gt;Linux, macOS, Windows&lt;/td&gt;
&lt;td&gt;Linux, macOS, Windows (WSL)&lt;/td&gt;
--&gt;
&lt;th scope=&#34;row&#34;&gt;开发者操作系统支持&lt;/th&gt;
&lt;td&gt;Linux、macOS、Windows&lt;/td&gt;
&lt;td&gt;Linux、macOS、Windows&lt;/td&gt;
&lt;td&gt;Linux、macOS、Windows (WSL)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;!--
&lt;th scope=&#34;row&#34;&gt;Incoming traffic features&lt;/th&gt;
&lt;td&gt;Interception&lt;/td&gt;
&lt;td&gt;Interception&lt;/td&gt;
&lt;td&gt;Interception or mirroring&lt;/td&gt;
--&gt;
&lt;th scope=&#34;row&#34;&gt;传入的流量特性&lt;/th&gt;
&lt;td&gt;拦截&lt;/td&gt;
&lt;td&gt;拦截&lt;/td&gt;
&lt;td&gt;拦截或镜像&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;!--
&lt;th scope=&#34;row&#34;&gt;File access&lt;/th&gt;
&lt;td&gt;Supported&lt;/td&gt;
&lt;td&gt;Unsupported&lt;/td&gt;
&lt;td&gt;Supported&lt;/td&gt;
--&gt;
&lt;th scope=&#34;row&#34;&gt;文件访问&lt;/th&gt;
&lt;td&gt;已支持&lt;/td&gt;
&lt;td&gt;不支持&lt;/td&gt;
&lt;td&gt;已支持&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;!--
&lt;th scope=&#34;row&#34;&gt;Environment variables&lt;/th&gt;
&lt;td&gt;Supported&lt;/td&gt;
&lt;td&gt;Unsupported&lt;/td&gt;
&lt;td&gt;Supported&lt;/td&gt;
--&gt;
&lt;th scope=&#34;row&#34;&gt;环境变量&lt;/th&gt;
&lt;td&gt;已支持&lt;/td&gt;
&lt;td&gt;不支持&lt;/td&gt;
&lt;td&gt;已支持&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;!--
&lt;th scope=&#34;row&#34;&gt;Requires local root&lt;/th&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
--&gt;
&lt;th scope=&#34;row&#34;&gt;需要本地 root&lt;/th&gt;
&lt;td&gt;是&lt;/td&gt;
&lt;td&gt;否&lt;/td&gt;
&lt;td&gt;否&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;!--
&lt;th scope=&#34;row&#34;&gt;How to use&lt;/th&gt;
&lt;td&gt;&lt;ul&gt;&lt;li&gt;CLI&lt;/li&gt;&lt;li&gt;Docker Desktop extension&lt;/li&gt;&lt;/ul&gt;&lt;/td&gt;
&lt;td&gt;&lt;ul&gt;&lt;li&gt;CLI&lt;/li&gt;&lt;li&gt;Docker Desktop extension&lt;/li&gt;&lt;/ul&gt;&lt;/td&gt;
&lt;td&gt;&lt;ul&gt;&lt;li&gt;CLI&lt;/li&gt;&lt;li&gt;Visual Studio Code extension&lt;/li&gt;&lt;li&gt;IntelliJ plugin&lt;/li&gt;&lt;/ul&gt;&lt;/td&gt;
--&gt;
&lt;th scope=&#34;row&#34;&gt;如何使用&lt;/th&gt;
&lt;td&gt;&lt;ul&gt;&lt;li&gt;CLI&lt;/li&gt;&lt;li&gt;Docker Desktop 扩展&lt;/li&gt;&lt;/ul&gt;&lt;/td&gt;
&lt;td&gt;&lt;ul&gt;&lt;li&gt;CLI&lt;/li&gt;&lt;li&gt;Docker Desktop 扩展&lt;/li&gt;&lt;/ul&gt;&lt;/td&gt;
&lt;td&gt;&lt;ul&gt;&lt;li&gt;CLI&lt;/li&gt;&lt;li&gt;Visual Studio Code 扩展&lt;/li&gt;&lt;li&gt;IntelliJ 插件&lt;/li&gt;&lt;/ul&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;!--
## Conclusion

Telepresence, Gefyra, and mirrord each offer unique approaches to streamline the Kubernetes development cycle, each having its strengths and weaknesses. Telepresence is feature-rich but comes with complexities, mirrord offers a seamless experience and supports various functionalities, while Gefyra aims for simplicity and robustness.
--&gt;
&lt;h2 id=&#34;conclusion&#34;&gt;结论  &lt;/h2&gt;
&lt;p&gt;Telepresence、Gefyra 和 mirrord 各自提供了独特的方法来简化 Kubernetes 开发周期，
每个工具都有其优缺点。Telepresence 功能丰富但复杂，mirrord 提供无缝体验并支持各种功能，
而 Gefyra 则追求简单和稳健。&lt;/p&gt;
&lt;!--
Your choice between them should depend on the specific requirements of your project, your team&#39;s familiarity with the tools, and the desired development workflow. Whichever tool you choose, we believe the local Kubernetes development approach can provide an easy, effective, and cheap solution to the bottlenecks of the Kubernetes development cycle, and will become even more prevalent as these tools continue to innovate and evolve.
--&gt;
&lt;p&gt;你的选择应取决于项目的具体要求、团队对工具的熟悉程度以及所需的开发工作流。
无论你选择哪个工具，我们相信本地 Kubernetes 开发方法都可以提供一种简单、有效和低成本的解决方案，
来应对 Kubernetes 开发周期中的瓶颈，并且随着这些工具的不断创新和发展，这种本地方法将变得更加普遍。&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes 旧版软件包仓库将于 2023 年 9 月 13 日被冻结</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/08/31/legacy-package-repository-deprecation/</link>
      <pubDate>Thu, 31 Aug 2023 15:30:00 -0700</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/08/31/legacy-package-repository-deprecation/</guid>
      <description>
        
        
        &lt;!--
layout: blog
title: &#34;Kubernetes Legacy Package Repositories Will Be Frozen On September 13, 2023&#34;
date: 2023-08-31T15:30:00-07:00
slug: legacy-package-repository-deprecation
evergreen: true
--&gt;
&lt;!--
**Authors**: Bob Killen (Google), Chris Short (AWS), Jeremy Rickard (Microsoft), Marko Mudrinić (Kubermatic), Tim Bannister (The Scale Factory)
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Bob Killen (Google), Chris Short (AWS), Jeremy Rickard (Microsoft), Marko Mudrinić (Kubermatic), Tim Bannister (The Scale Factory)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者&lt;/strong&gt;：&lt;a href=&#34;https://github.com/mengjiao-liu&#34;&gt;Mengjiao Liu&lt;/a&gt; (DaoCloud)&lt;/p&gt;
&lt;!--
On August 15, 2023, the Kubernetes project announced the general availability of
the community-owned package repositories for Debian and RPM packages available
at `pkgs.k8s.io`. The new package repositories are replacement for the legacy
Google-hosted package repositories: `apt.kubernetes.io` and `yum.kubernetes.io`.
The
[announcement blog post for `pkgs.k8s.io`](/blog/2023/08/15/pkgs-k8s-io-introduction/)
highlighted that we will stop publishing packages to the legacy repositories in
the future.
--&gt;
&lt;p&gt;2023 年 8 月 15 日，Kubernetes 项目宣布社区拥有的 Debian 和 RPM
软件包仓库在 &lt;code&gt;pkgs.k8s.io&lt;/code&gt; 上正式提供。新的软件包仓库将取代旧的由
Google 托管的软件包仓库：&lt;code&gt;apt.kubernetes.io&lt;/code&gt; 和 &lt;code&gt;yum.kubernetes.io&lt;/code&gt;。
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/08/15/pkgs-k8s-io-introduction/&#34;&gt;&lt;code&gt;pkgs.k8s.io&lt;/code&gt; 的公告博客文章&lt;/a&gt;强调我们未来将停止将软件包发布到旧仓库。&lt;/p&gt;
&lt;!--
Today, we&#39;re formally deprecating the legacy package repositories (`apt.kubernetes.io`
and `yum.kubernetes.io`), and we&#39;re announcing our plans to freeze the contents of
the repositories as of **September 13, 2023**.
--&gt;
&lt;p&gt;今天，我们正式弃用旧软件包仓库（&lt;code&gt;apt.kubernetes.io&lt;/code&gt; 和 &lt;code&gt;yum.kubernetes.io&lt;/code&gt;），
并且宣布我们计划在 &lt;strong&gt;2023 年 9 月 13 日&lt;/strong&gt; 冻结仓库的内容。&lt;/p&gt;
&lt;!--
Please continue reading in order to learn what does this mean for you as an user or
distributor, and what steps you may need to take.
--&gt;
&lt;p&gt;请继续阅读以了解这对于作为用户或分发商的你意味着什么，
以及你可能需要采取哪些步骤。&lt;/p&gt;
&lt;!--
## How does this affect me as a Kubernetes end user?

This change affects users **directly installing upstream versions of Kubernetes**,
either manually by following the official
[installation](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/) and
[upgrade](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/) instructions, or
by **using a Kubernetes installer** that&#39;s using packages provided by the Kubernetes
project.
--&gt;
&lt;h2 id=&#34;how-does-this-affect-me-as-a-kubernetes-end-user&#34;&gt;作为 Kubernetes 最终用户，这对我有何影响？&lt;/h2&gt;
&lt;p&gt;此更改影响&lt;strong&gt;直接安装 Kubernetes 上游版本&lt;/strong&gt;的用户：
无论是按照官方的&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/setup/%E7%94%9F%E4%BA%A7%E7%8E%AF%E5%A2%83/%E5%B7%A5%E5%85%B7/kubeadm/install-kubeadm/&#34;&gt;安装&lt;/a&gt;
和&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/&#34;&gt;升级&lt;/a&gt;说明手动进行，
还是&lt;strong&gt;使用 Kubernetes 安装工具&lt;/strong&gt;，而该工具使用 Kubernetes 项目提供的软件包。&lt;/p&gt;
&lt;!--
**This change also affects you if you run Linux on your own PC and have installed `kubectl` using the legacy package repositories**.
We&#39;ll explain later on how to [check](#check-if-affected) if you&#39;re affected.
--&gt;
&lt;p&gt;&lt;strong&gt;如果你在自己的 PC 上运行 Linux 并使用旧软件包仓库安装了 &lt;code&gt;kubectl&lt;/code&gt;，则此更改也会影响你&lt;/strong&gt;。
我们稍后将解释如何&lt;a href=&#34;#check-if-affected&#34;&gt;检查&lt;/a&gt;你是否会受到影响。&lt;/p&gt;
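&lt;p&gt;作为示意，在 Debian/Ubuntu 或 RPM 系发行版上，可以用类似下面的命令快速检查是否配置了旧仓库（文件路径为常见位置，具体以你的系统为准）：&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Debian/Ubuntu：在 APT 源中查找旧仓库地址
grep -r &#34;apt.kubernetes.io\|packages.cloud.google.com&#34; /etc/apt/sources.list /etc/apt/sources.list.d/

# RHEL/CentOS 等：在 yum 仓库配置中查找
grep -r &#34;yum.kubernetes.io\|packages.cloud.google.com&#34; /etc/yum.repos.d/
&lt;/code&gt;&lt;/pre&gt;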
&lt;!--
If you use **fully managed** Kubernetes, for example through a service from a cloud
provider, you would only be affected by this change if you also installed `kubectl`
on your Linux PC using packages from the legacy repositories. Cloud providers are
generally using their own Kubernetes distributions and therefore they don&#39;t use
packages provided by the Kubernetes project; more importantly, if someone else is
managing Kubernetes for you, then they would usually take responsibility for that check.
--&gt;
&lt;p&gt;如果你使用&lt;strong&gt;完全托管的&lt;/strong&gt; Kubernetes，例如从云提供商获取服务，
那么只有在你还使用旧仓库中的软件包在你的 Linux PC 上安装 &lt;code&gt;kubectl&lt;/code&gt; 时，
你才会受到此更改的影响。云提供商通常使用他们自己的 Kubernetes 发行版，
因此他们不使用 Kubernetes 项目提供的软件包；更重要的是，如果有其他人为你管理 Kubernetes，
那么他们通常会负责该检查。&lt;/p&gt;
&lt;!--
If you have a managed [control plane](/docs/concepts/overview/components/#control-plane-components)
but you are responsible for **managing the nodes yourself**, and any of those nodes run Linux,
you should [check](#check-if-affected) whether you are affected.
--&gt;
&lt;p&gt;如果你使用的是托管的&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/concepts/overview/components/#control-plane-components&#34;&gt;控制平面&lt;/a&gt;
但你负责&lt;strong&gt;自行管理节点&lt;/strong&gt;，并且每个节点都运行 Linux，
你应该&lt;a href=&#34;#check-if-affected&#34;&gt;检查&lt;/a&gt;你是否会受到影响。&lt;/p&gt;
&lt;!--
If you&#39;re managing your clusters on your own by following the official installation
and upgrade instructions, please follow the instructions in this blog post to migrate
to the (new) community-owned package repositories.
--&gt;
&lt;p&gt;如果你按照官方的安装和升级说明自己管理你的集群，
请按照本博客文章中的说明迁移到（新的）社区拥有的软件包仓库。&lt;/p&gt;
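&lt;p&gt;以 Debian/Ubuntu 上的 v1.28 软件包为例，迁移到新仓库大致如下（版本号与路径仅为示例，请以官方安装文档为准）：&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# 下载新仓库的公钥（示例使用 v1.28 分支）
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key |
  sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

# 用新的社区仓库条目替换旧的 apt.kubernetes.io 条目
echo &#39;deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /&#39; |
  sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt-get update
&lt;/code&gt;&lt;/pre&gt;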
&lt;!--
If you&#39;re using a Kubernetes installer that&#39;s using packages provided by the
Kubernetes project, please check the installer tool&#39;s communication channels for
information about what steps you need to take, and eventually if needed, follow up
with maintainers to let them know about this change.
--&gt;
&lt;p&gt;如果你使用的 Kubernetes 安装程序使用 Kubernetes 项目提供的软件包，
请查看该安装程序工具的沟通渠道，了解你需要采取哪些步骤；如有需要，
请与维护者联系，让他们了解此更改。&lt;/p&gt;
&lt;!--
The following diagram shows who&#39;s affected by this change in a visual form
(click on diagram for the larger version):



&lt;figure class=&#34;diagram-large &#34;&gt;&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2023/08/31/legacy-package-repository-deprecation/flow.svg&#34;&gt;
    &lt;img src=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2023/08/31/legacy-package-repository-deprecation/flow.svg&#34;
         alt=&#34;Visual explanation of who&amp;#39;s affected by the legacy repositories being deprecated and frozen. Textual explanation is available above this diagram.&#34;/&gt; &lt;/a&gt;
&lt;/figure&gt;
--&gt;
&lt;p&gt;下图以可视化形式显示了谁受到此更改的影响（单击图表可查看大图）：&lt;/p&gt;


&lt;figure class=&#34;diagram-large &#34;&gt;&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/08/31/legacy-package-repository-deprecation/flow.svg&#34;&gt;
    &lt;img src=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/08/31/legacy-package-repository-deprecation/flow.svg&#34;
         alt=&#34;直观地解释谁受到弃用和冻结的遗留仓库的影响。图上提供了文字解释。&#34;/&gt; &lt;/a&gt;
&lt;/figure&gt;
&lt;!--
## How does this affect me as a Kubernetes distributor?

If you&#39;re using the legacy repositories as part of your project (e.g. a Kubernetes
installer tool), you should migrate to the community-owned repositories as soon as
possible and inform your users about this change and what steps they need to take.
--&gt;
&lt;h2 id=&#34;how-does-this-affect-me-as-a-kubernetes-distributor&#34;&gt;这对我作为 Kubernetes 分发商有何影响？ &lt;/h2&gt;
&lt;p&gt;如果你将旧仓库用作项目的一部分（例如 Kubernetes 安装程序工具），
则应尽快迁移到社区拥有的仓库，并告知用户此更改以及他们需要采取哪些步骤。&lt;/p&gt;
&lt;!--
## Timeline of changes

- **15th August 2023:**  
  Kubernetes announces a new, community-managed source for Linux software packages of Kubernetes components
- **31st August 2023:**  
  _(this announcement)_ Kubernetes formally deprecates the legacy
  package repositories
- **13th September 2023** (approximately):  
  Kubernetes will freeze the legacy package repositories,
  (`apt.kubernetes.io` and `yum.kubernetes.io`).
  The freeze will happen immediately following the patch releases that are scheduled for September, 2023.
--&gt;
&lt;h2 id=&#34;timeline-of-changes&#34;&gt;变更时间表 &lt;/h2&gt;
&lt;!-- note to maintainers - the trailing whitespace is significant --&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;2023 年 8 月 15 日：&lt;/strong&gt;&lt;br&gt;
Kubernetes 宣布推出一个新的社区管理的 Kubernetes 组件 Linux 软件包源&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;2023 年 8 月 31 日：&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;（本公告）&lt;/strong&gt; Kubernetes 正式弃用旧版软件包仓库&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;2023 年 9 月 13 日&lt;/strong&gt;（左右）：&lt;br&gt;
Kubernetes 将冻结旧软件包仓库（&lt;code&gt;apt.kubernetes.io&lt;/code&gt; 和 &lt;code&gt;yum.kubernetes.io&lt;/code&gt;）。
冻结将在 2023 年 9 月计划发布的补丁版本发布之后立即进行。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
The Kubernetes patch releases scheduled for September 2023 (v1.28.2, v1.27.6,
v1.26.9, v1.25.14) will have packages published **both** to the community-owned and
the legacy repositories.
--&gt;
&lt;p&gt;计划于 2023 年 9 月发布的 Kubernetes 补丁版本（v1.28.2、v1.27.6、v1.26.9、v1.25.14）
将把软件包&lt;strong&gt;同时&lt;/strong&gt;发布到社区拥有的仓库和旧仓库。&lt;/p&gt;
&lt;!--
We&#39;ll freeze the legacy repositories after cutting the patch releases for September
which means that we&#39;ll completely stop publishing packages to the legacy repositories
at that point.
--&gt;
&lt;p&gt;在发布 9 月份的补丁版本后，我们将冻结旧仓库，这意味着届时我们将完全停止向旧仓库发布软件包。&lt;/p&gt;
&lt;!--
For the v1.28, v1.27, v1.26, and v1.25 patch releases from October 2023 and onwards,
we&#39;ll only publish packages to the new package repositories (`pkgs.k8s.io`).
--&gt;
&lt;p&gt;对于 2023 年 10 月及以后的 v1.28、v1.27、v1.26 和 v1.25 补丁版本，
我们仅将软件包发布到新的软件包仓库 (&lt;code&gt;pkgs.k8s.io&lt;/code&gt;)。&lt;/p&gt;
&lt;!--
### What about future minor releases?

Kubernetes 1.29 and onwards will have packages published **only** to the
community-owned repositories (`pkgs.k8s.io`).
--&gt;
&lt;h3 id=&#34;what-about-future-minor-releases&#34;&gt;未来的次要版本怎么样？ &lt;/h3&gt;
&lt;p&gt;Kubernetes 1.29 及以后的版本将&lt;strong&gt;仅&lt;/strong&gt;发布软件包到社区拥有的仓库（&lt;code&gt;pkgs.k8s.io&lt;/code&gt;）。&lt;/p&gt;
&lt;!--
## Can I continue to use the legacy package repositories?

The existing packages in the legacy repositories will be available for the foreseeable
future. However, the Kubernetes project can&#39;t provide _any_ guarantees on how long
is that going to be. The deprecated legacy repositories, and their contents, might
be removed at any time in the future and without a further notice period.

**UPDATE**: The legacy packages are expected to go away in January 2024.
--&gt;
&lt;h2 id=&#34;can-i-continue-to-use-the-legacy-package-repositories&#34;&gt;我可以继续使用旧软件包仓库吗？&lt;/h2&gt;
&lt;p&gt;&lt;del&gt;旧仓库中的现有软件包将在可预见的未来内保持可用。然而，
Kubernetes 项目无法对这会持续多久提供&lt;strong&gt;任何&lt;/strong&gt;保证。
已弃用的旧仓库及其内容可能会在未来随时删除，恕不另行通知。&lt;/del&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;更新：&lt;/strong&gt;旧版软件包预计将于 2024 年 1 月被移除。&lt;/p&gt;
&lt;!--
The Kubernetes project **strongly recommends** migrating to the new community-owned
repositories **as soon as possible**.
--&gt;
&lt;p&gt;Kubernetes 项目&lt;strong&gt;强烈建议尽快&lt;/strong&gt;迁移到新的社区拥有的仓库。&lt;/p&gt;
&lt;!--
Given that no new releases will be published to the legacy repositories **after the September 13, 2023**
cut-off point, **you will not be able to upgrade to any patch or minor release made from that date onwards.**
--&gt;
&lt;p&gt;鉴于&lt;strong&gt;在 2023 年 9 月 13 日&lt;/strong&gt;截止时间点之后不会向旧仓库发布任何新版本，
&lt;strong&gt;你将无法升级到自该日期起发布的任何补丁或次要版本。&lt;/strong&gt;&lt;/p&gt;
&lt;!--
Whilst the project makes every effort to release secure software, there may one
day be a high-severity vulnerability in Kubernetes, and consequently an important
release to upgrade to. The advice we&#39;re announcing will help you be as prepared for
any future security update, whether trivial or urgent.
--&gt;
&lt;p&gt;尽管本项目会尽一切努力发布安全的软件，但有一天 Kubernetes 可能会出现高危漏洞，
从而需要升级到某个重要的版本。我们发布的这些建议将帮助你为未来的任何安全更新（无论是常规的还是紧急的）做好准备。&lt;/p&gt;
&lt;!--
## How can I check if I&#39;m using the legacy repositories? {#check-if-affected}

The steps to check if you&#39;re using the legacy repositories depend on whether you&#39;re
using Debian-based distributions (Debian, Ubuntu, and more) or RPM-based distributions
(CentOS, RHEL, Rocky Linux, and more) in your cluster.

Run these instructions on one of your nodes in the cluster.
--&gt;
&lt;h2 id=&#34;check-if-affected&#34;&gt;如何检查我是否正在使用旧仓库？&lt;/h2&gt;
&lt;p&gt;检查你是否使用旧仓库的步骤取决于你在集群中使用的是基于
Debian 的发行版（Debian、Ubuntu 等）还是基于 RPM
的发行版（CentOS、RHEL、Rocky Linux 等）。&lt;/p&gt;
&lt;p&gt;请在集群中的某个节点上执行下面的检查步骤。&lt;/p&gt;
&lt;!--
### Debian-based Linux distributions

The repository definitions (sources) are located in `/etc/apt/sources.list` and `/etc/apt/sources.list.d/`
on Debian-based distributions. Inspect these two locations and try to locate a
package repository definition that looks like:
--&gt;
&lt;h3 id=&#34;debian-based-linux-distributions&#34;&gt;基于 Debian 的 Linux 发行版 &lt;/h3&gt;
&lt;p&gt;在基于 Debian 的发行版上，仓库定义（源）位于 &lt;code&gt;/etc/apt/sources.list&lt;/code&gt;
和 &lt;code&gt;/etc/apt/sources.list.d/&lt;/code&gt; 中。检查这两个位置并尝试找到如下所示的软件包仓库定义：&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main
&lt;/code&gt;&lt;/pre&gt;&lt;!--
**If you find a repository definition that looks like this, you&#39;re using the legacy repository and you need to migrate.**

If the repository definition uses `pkgs.k8s.io`, you&#39;re already using the
community-hosted repositories and you don&#39;t need to take any action.
--&gt;
&lt;p&gt;&lt;strong&gt;如果你发现像这样的仓库定义，则你正在使用旧仓库并且需要迁移。&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;如果仓库定义使用 &lt;code&gt;pkgs.k8s.io&lt;/code&gt;，则你已经在使用社区托管的仓库，无需执行任何操作。&lt;/p&gt;
&lt;!--
On most systems, this repository definition should be located in
`/etc/apt/sources.list.d/kubernetes.list` (as recommended by the Kubernetes
documentation), but on some systems it might be in a different location.
--&gt;
&lt;p&gt;在大多数系统上，此仓库定义应位于 &lt;code&gt;/etc/apt/sources.list.d/kubernetes.list&lt;/code&gt;
（按照 Kubernetes 文档的建议），但在某些系统上它可能位于不同的位置。&lt;/p&gt;
&lt;!--
If you can&#39;t find a repository definition related to Kubernetes, it&#39;s likely that you
don&#39;t use package managers to install Kubernetes and you don&#39;t need to take any action.
--&gt;
&lt;p&gt;如果你找不到与 Kubernetes 相关的仓库定义，
则很可能你没有使用软件包管理器来安装 Kubernetes，因此不需要执行任何操作。&lt;/p&gt;
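&lt;p&gt;作为示意（假设源定义位于上述默认位置），可以用一条 &lt;code&gt;grep&lt;/code&gt; 命令快速完成这项检查：&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;# 在 APT 源定义中搜索旧仓库域名；如果有输出，说明你需要迁移
grep -rn &#34;apt.kubernetes.io&#34; /etc/apt/sources.list /etc/apt/sources.list.d/
&lt;/code&gt;&lt;/pre&gt;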
&lt;!--
### RPM-based Linux distributions

The repository definitions are located in `/etc/yum.repos.d` if you&#39;re using the
`yum` package manager, or `/etc/dnf/dnf.conf` and `/etc/dnf/repos.d/` if you&#39;re using
`dnf` package manager. Inspect those locations and try to locate a package repository
definition that looks like this:
--&gt;
&lt;h3 id=&#34;rpm-based-linux-distributions&#34;&gt;基于 RPM 的 Linux 发行版 &lt;/h3&gt;
&lt;p&gt;如果你使用的是 &lt;code&gt;yum&lt;/code&gt; 软件包管理器，仓库定义位于
&lt;code&gt;/etc/yum.repos.d&lt;/code&gt;；如果你使用的是 &lt;code&gt;dnf&lt;/code&gt; 软件包管理器，
则位于 &lt;code&gt;/etc/dnf/dnf.conf&lt;/code&gt; 和 &lt;code&gt;/etc/dnf/repos.d/&lt;/code&gt;。检查这些位置并尝试找到如下所示的软件包仓库定义：&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
&lt;/code&gt;&lt;/pre&gt;&lt;!--
**If you find a repository definition that looks like this, you&#39;re using the legacy repository and you need to migrate.**

If the repository definition uses `pkgs.k8s.io`, you&#39;re already using the
community-hosted repositories and you don&#39;t need to take any action.
--&gt;
&lt;p&gt;&lt;strong&gt;如果你发现像这样的仓库定义，则你正在使用旧仓库并且需要迁移。&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;如果仓库定义使用 &lt;code&gt;pkgs.k8s.io&lt;/code&gt;，则你已经在使用社区托管的仓库，无需执行任何操作。&lt;/p&gt;
&lt;!--
On most systems, that repository definition should be located in `/etc/yum.repos.d/kubernetes.repo`
(as recommended by the Kubernetes documentation), but on some systems it might be
in a different location.
--&gt;
&lt;p&gt;在大多数系统上，该仓库定义应位于 &lt;code&gt;/etc/yum.repos.d/kubernetes.repo&lt;/code&gt;
（按照 Kubernetes 文档的建议），但在某些系统上它可能位于不同的位置。&lt;/p&gt;
&lt;!--
If you can&#39;t find a repository definition related to Kubernetes, it&#39;s likely that you
don&#39;t use package managers to install Kubernetes and you don&#39;t need to take any action.
--&gt;
&lt;p&gt;如果你找不到与 Kubernetes 相关的仓库定义，则很可能你没有使用软件包管理器来安装
Kubernetes，那么你不需要执行任何操作。&lt;/p&gt;
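&lt;p&gt;作为示意（假设仓库定义位于上述默认目录），同样可以用 &lt;code&gt;grep&lt;/code&gt; 快速完成这项检查：&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;# 在仓库定义中搜索旧仓库地址；如果有输出，说明你需要迁移
grep -rn &#34;packages.cloud.google.com&#34; /etc/yum.repos.d/
&lt;/code&gt;&lt;/pre&gt;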
&lt;!--
## How can I migrate to the new community-operated repositories?

For more information on how to migrate to the new community
managed packages, please refer to the
[announcement blog post for `pkgs.k8s.io`](/blog/2023/08/15/pkgs-k8s-io-introduction/).
--&gt;
&lt;h2 id=&#34;how-can-i-migrate-to-the-new-community-operated-repositories&#34;&gt;我如何迁移到新的社区运营的仓库？ &lt;/h2&gt;
&lt;p&gt;有关如何迁移到新的社区管理软件包的更多信息，请参阅
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/08/15/pkgs-k8s-io-introduction/&#34;&gt;&lt;code&gt;pkgs.k8s.io&lt;/code&gt; 的公告博客文章&lt;/a&gt;。&lt;/p&gt;
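&lt;p&gt;简单示意（此处以基于 Debian 的发行版和 v1.28 次要版本为例，具体步骤请以上述公告博客文章为准）：迁移完成后，仓库定义大致如下：&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;请注意，新仓库是按 Kubernetes 次要版本划分的，因此在升级到新的次要版本时需要相应更新仓库定义。&lt;/p&gt;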
&lt;!--
## Why is the Kubernetes project making this change?

Kubernetes has been publishing packages solely to the Google-hosted repository
since Kubernetes v1.5, or the past **seven** years! Following in the footsteps of
migrating to our community-managed registry, `registry.k8s.io`, we are now migrating the
Kubernetes package repositories to our own community-managed infrastructure. We’re
thankful to Google for their continuous hosting and support all these years, but
this transition marks another big milestone for the project’s goal of migrating
to complete community-owned infrastructure.
--&gt;
&lt;h2 id=&#34;why-is-the-kubernetes-project-making-this-change&#34;&gt;为什么 Kubernetes 项目要做出这样的改变？ &lt;/h2&gt;
&lt;p&gt;自 Kubernetes v1.5 以来，也就是过去的&lt;strong&gt;七&lt;/strong&gt;年里，Kubernetes 一直只将软件包发布到
Google 托管的仓库！继迁移到社区管理的镜像仓库 &lt;code&gt;registry.k8s.io&lt;/code&gt; 之后，
我们现在正在将 Kubernetes 软件包仓库迁移到我们自己的社区管理的基础设施上。
我们感谢 Google 这些年来持续的托管和支持，
但这一转变标志着本项目在迈向完全由社区拥有的基础设施这一目标上又达成了一个重要里程碑。&lt;/p&gt;
&lt;!--
## Is there a Kubernetes tool to help me migrate?

We don&#39;t have any announcement to make about tooling there. As a Kubernetes user, you
have to manually modify your configuration to use the new repositories. Automating
the migration from the legacy to the community-owned repositories is technically
challenging and we want to avoid any potential risks associated with this.
--&gt;
&lt;h2 id=&#34;is-there-a-kubernetes-tool-to-help-me-migrate&#34;&gt;有 Kubernetes 工具可以帮助我迁移吗？&lt;/h2&gt;
&lt;p&gt;关于迁移工具方面，我们目前没有任何公告。作为 Kubernetes 用户，
你必须手动修改配置才能使用新仓库。自动从旧仓库迁移到社区拥有的仓库在技术上具有挑战性，
我们希望避免与此相关的任何潜在风险。&lt;/p&gt;
&lt;!--
## Acknowledgments

First of all, we want to acknowledge the contributions from Alphabet. Staff at Google
have provided their time; Google as a business has provided both the infrastructure
to serve packages, and the security context for giving those packages trustworthy
digital signatures.
These have been important to the adoption and growth of Kubernetes.
--&gt;
&lt;h2 id=&#34;acknowledgments&#34;&gt;致谢 &lt;/h2&gt;
&lt;p&gt;首先，我们要感谢 Alphabet 的贡献。Google 的员工投入了他们的时间；
Google 作为一家企业，既提供了分发软件包的基础设施，也提供了为这些软件包赋予可信数字签名的安全环境。
这些对于 Kubernetes 的采用和成长都非常重要。&lt;/p&gt;
&lt;!--
Releasing software might not be glamorous but it&#39;s important. Many people within
the Kubernetes contributor community have contributed to the new way that we, as a
project, have for building and publishing packages.
--&gt;
&lt;p&gt;发布软件可能并不那么引人注目，但很重要。Kubernetes
贡献者社区中的许多人都为我们作为一个项目构建和发布软件包的新方法做出了贡献。&lt;/p&gt;
&lt;!--
And finally, we want to once again acknowledge the help from SUSE. OpenBuildService,
from SUSE, is the technology that the powers the new community-managed package repositories.
--&gt;
&lt;p&gt;最后，我们要再次感谢 SUSE 的帮助。SUSE 的 OpenBuildService
是为新的社区管理的软件包仓库提供支持的技术。&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Gateway API v0.8.0：引入服务网格支持</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/08/29/gateway-api-v0-8/</link>
      <pubDate>Tue, 29 Aug 2023 10:00:00 -0800</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/08/29/gateway-api-v0-8/</guid>
      <description>
        
        
        &lt;!--
layout: blog
title: &#34;Gateway API v0.8.0: Introducing Service Mesh Support&#34;
date: 2023-08-29T10:00:00-08:00
slug: gateway-api-v0-8
--&gt;
&lt;!--
***Authors:*** Flynn (Buoyant), John Howard (Google), Keith Mattix (Microsoft), Michael Beaumont (Kong), Mike Morris (independent), Rob Scott (Google)
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Flynn (Buoyant), John Howard (Google), Keith Mattix (Microsoft), Michael Beaumont (Kong), Mike Morris (independent), Rob Scott (Google)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者：&lt;/strong&gt; Xin Li (Daocloud)&lt;/p&gt;
&lt;!--
We are thrilled to announce the v0.8.0 release of Gateway API! With this
release, Gateway API support for service mesh has reached [Experimental
status][status]. We look forward to your feedback!

We&#39;re especially delighted to announce that Kuma 2.3+, Linkerd 2.14+, and Istio
1.16+ are all fully-conformant implementations of Gateway API service mesh
support.
--&gt;
&lt;p&gt;我们很高兴地宣布 Gateway API 的 v0.8.0 版本发布了！
通过此版本，Gateway API 对服务网格的支持已达到&lt;a href=&#34;https://gateway-api.sigs.k8s.io/geps/overview/#status&#34;&gt;实验性（Experimental）状态&lt;/a&gt;。
我们期待你的反馈！&lt;/p&gt;
&lt;p&gt;我们尤其高兴地宣布，Kuma 2.3+、Linkerd 2.14+ 和 Istio 1.16+ 都是完全符合 Gateway API
服务网格支持一致性要求的实现。&lt;/p&gt;
&lt;!--
## Service mesh support in Gateway API

While the initial focus of Gateway API was always ingress (north-south)
traffic, it was clear almost from the beginning that the same basic routing
concepts should also be applicable to service mesh (east-west) traffic. In
2022, the Gateway API subproject started the [GAMMA initiative][gamma], a
dedicated vendor-neutral workstream, specifically to examine how best to fit
service mesh support into the framework of the Gateway API resources, without
requiring users of Gateway API to relearn everything they understand about the
API.
--&gt;
&lt;h2 id=&#34;gateway-api-中的服务网格支持&#34;&gt;Gateway API 中的服务网格支持&lt;/h2&gt;
&lt;p&gt;虽然 Gateway API 最初的重点一直是入站（南北向）流量，但几乎从一开始就很明确，
相同的基本路由概念也应适用于服务网格（东西向）流量。2022 年，Gateway API
子项目启动了 &lt;a href=&#34;https://gateway-api.sigs.k8s.io/concepts/gamma/&#34;&gt;GAMMA 计划&lt;/a&gt;，
这是一个专门的、供应商中立的工作方向，旨在研究如何最好地将服务网格支持纳入 Gateway API 资源的框架中，
而不需要 Gateway API 的用户重新学习他们对这套 API 的已有理解。&lt;/p&gt;
&lt;!--
Over the last year, GAMMA has dug deeply into the challenges and possible
solutions around using Gateway API for service mesh. The end result is a small
number of [enhancement proposals][geps] that subsume many hours of thought and
debate, and provide a minimum viable path to allow Gateway API to be used for
service mesh.
--&gt;
&lt;p&gt;在过去的一年中，GAMMA 深入研究了将 Gateway API 用于服务网格所面临的挑战和可能的解决方案。
最终结果是为数不多的几个&lt;a href=&#34;https://gateway-api.sigs.k8s.io/contributing/enhancement-requests/&#34;&gt;增强提案&lt;/a&gt;，
它们凝聚了大量的思考和辩论，并提供了让 Gateway API 可用于服务网格的最小可行路径。&lt;/p&gt;
&lt;!--
### How will mesh routing work when using Gateway API?

You can find all the details in the [Gateway API Mesh routing
documentation][mesh-routing] and [GEP-1426], but the short version for Gateway
API v0.8.0 is that an HTTPRoute can now have a `parentRef` that is a Service,
rather than just a Gateway. We anticipate future GEPs in this area as we gain
more experience with service mesh use cases -- binding to a Service makes it
possible to use the Gateway API with a service mesh, but there are several
interesting use cases that remain difficult to cover.

As an example, you might use an HTTPRoute to do an A-B test in the mesh as
follows:
--&gt;
&lt;h3 id=&#34;当使用-gateway-api-时-网格路由将如何工作&#34;&gt;当使用 Gateway API 时，网格路由将如何工作？&lt;/h3&gt;
&lt;p&gt;你可以在 &lt;a href=&#34;https://gateway-api.sigs.k8s.io/concepts/gamma/#how-the-gateway-api-works-for-service-mesh&#34;&gt;Gateway API Mesh 路由文档&lt;/a&gt;和 &lt;a href=&#34;https://gateway-api.sigs.k8s.io/geps/gep-1426/&#34;&gt;GEP-1426&lt;/a&gt; 中找到所有详细信息，
但简而言之，在 Gateway API v0.8.0 中，HTTPRoute 的 &lt;code&gt;parentRef&lt;/code&gt; 现在可以是一个 Service，
而不再只能是一个 Gateway。随着我们在服务网格用例上的经验不断丰富，我们预计这个领域将来还会出现更多
GEP -- 绑定到 Service 使得将 Gateway API 与服务网格结合使用成为可能，但仍有若干有趣的用例难以覆盖。&lt;/p&gt;
&lt;p&gt;例如，你可以使用 HTTPRoute 在网格中进行 A-B 测试，如下所示：&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;gateway.networking.k8s.io/v1beta1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;HTTPRoute&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;bar-route&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;parentRefs&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;group&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Service&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;demo-app&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;port&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;5000&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;rules&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;matches&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;headers&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;type&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Exact&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;env&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;value&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;backendRefs&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;demo-app-v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;port&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;5000&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;backendRefs&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;demo-app-v2&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;port&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;5000&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
Any request to port 5000 of the `demo-app` Service that has the header `env:
v1` will be routed to `demo-app-v1`, while any request without that header
will be routed to `demo-app-v2` -- and since this is being handled by the
service mesh, not the ingress controller, the A/B test can happen anywhere in
the application&#39;s call graph.
--&gt;
&lt;p&gt;任何发往 &lt;code&gt;demo-app&lt;/code&gt; Service 5000 端口且带有 &lt;code&gt;env: v1&lt;/code&gt; 标头的请求都将被路由到 &lt;code&gt;demo-app-v1&lt;/code&gt;，
而没有该标头的请求都将被路由到 &lt;code&gt;demo-app-v2&lt;/code&gt; -- 并且由于这是由服务网格而不是
Ingress 控制器处理的，A/B 测试可以发生在应用程序调用图中的任何位置。&lt;/p&gt;
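&lt;p&gt;作为示意（假设网格内的某个 Pod 可以直接访问 &lt;code&gt;demo-app&lt;/code&gt; Service），可以这样验证基于标头的路由行为：&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;# 带有 env: v1 标头的请求应被路由到 demo-app-v1
curl -H &#34;env: v1&#34; http://demo-app:5000/
# 不带该标头的请求应被路由到 demo-app-v2
curl http://demo-app:5000/
&lt;/code&gt;&lt;/pre&gt;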
&lt;!--
### How do I know this will be truly portable?

Gateway API has been investing heavily in conformance tests across all
features it supports, and mesh is no exception. One of the challenges that the
GAMMA initiative ran into is that many of these tests were strongly tied to
the idea that a given implementation provides an ingress controller. Many
service meshes don&#39;t, and requiring a GAMMA-conformant mesh to also implement
an ingress controller seemed impractical at best. This resulted in work
restarting on Gateway API _conformance profiles_, as discussed in [GEP-1709].
--&gt;
&lt;h3 id=&#34;如何确定这种方案的可移植性是真的&#34;&gt;我如何知道这真的具有可移植性？&lt;/h3&gt;
&lt;p&gt;Gateway API 一直在为其支持的所有功能的一致性测试投入大量资源，网格也不例外。
GAMMA 计划遇到的挑战之一是，许多测试都与“给定实现会提供 Ingress 控制器”这一假设紧密绑定。
许多服务网格并不提供 Ingress 控制器，而要求符合 GAMMA 标准的网格同时实现 Ingress 控制器往好了说也并不切实际。
这促使 Gateway API 的&lt;strong&gt;一致性配置文件&lt;/strong&gt;工作重新启动，如 &lt;a href=&#34;https://gateway-api.sigs.k8s.io/geps/gep-1709/&#34;&gt;GEP-1709&lt;/a&gt; 中所述。&lt;/p&gt;
&lt;!--
The basic idea of conformance profiles is that we can define subsets of the
Gateway API, and allow implementations to choose (and document) which subsets
they conform to. GAMMA is adding a new profile, named `Mesh` and described in
[GEP-1686], which checks only the mesh functionality as defined by GAMMA. At
this point, Kuma 2.3+, Linkerd 2.14+, and Istio 1.16+ are all conformant with
the `Mesh` profile.
--&gt;
&lt;p&gt;一致性配置文件的基本思想是，我们可以定义 Gateway API 的若干子集，并允许各实现选择（并记录）它们符合哪些子集。
GAMMA 正在添加一个名为 &lt;code&gt;Mesh&lt;/code&gt; 的新配置文件（在 &lt;a href=&#34;https://gateway-api.sigs.k8s.io/geps/gep-1686/&#34;&gt;GEP-1686&lt;/a&gt; 中描述），它仅检查由 GAMMA 定义的网格功能。
目前，Kuma 2.3+、Linkerd 2.14+ 和 Istio 1.16+ 均已符合 &lt;code&gt;Mesh&lt;/code&gt; 配置文件。&lt;/p&gt;
&lt;!--
## What else is in Gateway API v0.8.0?

This release is all about preparing Gateway API for the upcoming v1.0 release
where HTTPRoute, Gateway, and GatewayClass will graduate to GA. There are two
main changes related to this: CEL validation and API version changes.
--&gt;
&lt;h2 id=&#34;gateway-api-v0-8-0-中还有什么&#34;&gt;Gateway API v0.8.0 中还有什么？&lt;/h2&gt;
&lt;p&gt;此版本完全是为即将到来的 v1.0 版本做准备，届时
HTTPRoute、Gateway 和 GatewayClass 将毕业进入 GA。与此相关的有两个主要更改：
CEL 验证和 API 版本更改。&lt;/p&gt;
&lt;!--
### CEL Validation

The first major change is that Gateway API v0.8.0 is the start of a transition
from webhook validation to [CEL validation][cel] using information built into
the CRDs. That will mean different things depending on the version of
Kubernetes you&#39;re using:
--&gt;
&lt;h3 id=&#34;cel-验证&#34;&gt;CEL 验证&lt;/h3&gt;
&lt;p&gt;第一个重大变化是，从 Gateway API v0.8.0 开始，验证逻辑将逐步从 Webhook 验证转向使用内置于
CRD 中的信息进行的 &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/reference/using-api/cel/&#34;&gt;CEL 验证&lt;/a&gt;。取决于你所使用的 Kubernetes 版本，这一转变的影响有所不同：&lt;/p&gt;
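&lt;p&gt;作为示意（字段取值仅为假设的示例），内置于 CRD 模式中的 CEL 验证规则大致如下所示：&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;# 仅为示意：CRD 的 OpenAPI 模式中内嵌的 CEL 验证规则
x-kubernetes-validations:
- rule: &#34;self.replicas in [1, 3, 5]&#34;
  message: &#34;replicas 必须是 1、3 或 5&#34;
&lt;/code&gt;&lt;/pre&gt;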
&lt;!--
#### Kubernetes 1.25+

CEL validation is fully supported, and almost all validation is implemented in
CEL. (The sole exception is that header names in header modifier filters can
only do case-insensitive validation. There is more information in [issue
2277].)

We recommend _not_ using the validating webhook on these Kubernetes versions.
--&gt;
&lt;h4 id=&#34;kubernetes-1-25&#34;&gt;Kubernetes 1.25+&lt;/h4&gt;
&lt;p&gt;CEL 验证得到了完全支持，并且几乎所有验证都是在 CEL 中实现的。
（唯一的例外是，标头修饰符过滤器中的标头名称只能进行不区分大小写的验证，
更多的相关信息，请参见 &lt;a href=&#34;https://github.com/kubernetes-sigs/gateway-api/issues/2277&#34;&gt;issue 2277&lt;/a&gt;。）&lt;/p&gt;
&lt;p&gt;我们建议在这些 Kubernetes 版本上不使用验证 Webhook。&lt;/p&gt;
&lt;!--
#### Kubernetes 1.23 and 1.24

CEL validation is not supported, but Gateway API v0.8.0 CRDs can still be
installed. When you upgrade to Kubernetes 1.25+, the validation included in
these CRDs will automatically take effect.

We recommend continuing to use the validating webhook on these Kubernetes
versions.
--&gt;
&lt;h4 id=&#34;kubernetes-1-23-和-1-24&#34;&gt;Kubernetes 1.23 和 1.24&lt;/h4&gt;
&lt;p&gt;不支持 CEL 验证，但仍可以安装 Gateway API v0.8.0 CRD。
当你升级到 Kubernetes 1.25+ 时，这些 CRD 中包含的验证将自动生效。&lt;/p&gt;
&lt;p&gt;我们建议在这些 Kubernetes 版本上继续使用验证 Webhook。&lt;/p&gt;
&lt;!--
#### Kubernetes 1.22 and older

Gateway API only commits to support for [5 most recent versions of
Kubernetes][supported-versions]. As such, these versions are no longer
supported by Gateway API, and unfortunately Gateway API v0.8.0 cannot be
installed on them, since CRDs containing CEL validation will be rejected.
--&gt;
&lt;h4 id=&#34;kubernetes-1-22-及更早版本&#34;&gt;Kubernetes 1.22 及更早版本&lt;/h4&gt;
&lt;p&gt;Gateway API 只承诺支持&lt;a href=&#34;https://gateway-api.sigs.k8s.io/concepts/versioning/#supported-versions&#34;&gt;最新的 5 个 Kubernetes 版本&lt;/a&gt;。
因此，Gateway API 不再支持这些版本，不幸的是，在这些集群版本中无法安装 Gateway API v0.8.0，
因为包含 CEL 验证的 CRD 将被拒绝。&lt;/p&gt;
&lt;!--
### API Version Changes

As we prepare for a v1.0 release that will graduate Gateway, GatewayClass, and
HTTPRoute to the `v1` API Version from `v1beta1`, we are continuing the process
of moving away from `v1alpha2` for resources that have graduated to `v1beta1`.
For more information on this change and everything else included in this
release, refer to the [v0.8.0 release notes][v0.8.0 release notes].
--&gt;
&lt;h3 id=&#34;api-版本更改&#34;&gt;API 版本更改&lt;/h3&gt;
&lt;p&gt;在为 v1.0 版本做准备的过程中（届时 Gateway、GatewayClass 和 HTTPRoute 将从
&lt;code&gt;v1beta1&lt;/code&gt; 毕业到 &lt;code&gt;v1&lt;/code&gt; API 版本），对于已经毕业到 &lt;code&gt;v1beta1&lt;/code&gt; 的资源，我们将继续推进弃用 &lt;code&gt;v1alpha2&lt;/code&gt; 的过程。&lt;/p&gt;
&lt;p&gt;有关此更改以及此版本中包含的所有其他内容的更多信息，请参阅 &lt;a href=&#34;https://github.com/kubernetes-sigs/gateway-api/releases/tag/v0.8.0&#34;&gt;v0.8.0 发布说明&lt;/a&gt;。&lt;/p&gt;
&lt;!--
## How can I get started with Gateway API?

Gateway API represents the future of load balancing, routing, and service mesh
APIs in Kubernetes. There are already more than 20 [implementations][impl]
available (including both ingress controllers and service meshes) and the list
keeps growing.
--&gt;
&lt;h2 id=&#34;如何开始使用-gateway-api&#34;&gt;如何开始使用 Gateway API？&lt;/h2&gt;
&lt;p&gt;Gateway API 代表了 Kubernetes 中负载均衡、路由和服务网格 API 的未来。
目前已有超过 20 个&lt;a href=&#34;https://gateway-api.sigs.k8s.io/implementations/&#34;&gt;实现&lt;/a&gt;可用（既包括 Ingress 控制器也包括服务网格），而且这一列表还在不断增长。&lt;/p&gt;
&lt;!--
If you&#39;re interested in getting started with Gateway API, take a look at the
[API concepts documentation][concepts] and check out some of the
[Guides][guides] to try it out. Because this is a CRD-based API, you can
install the latest version on any Kubernetes 1.23+ cluster.
--&gt;
&lt;p&gt;如果你有兴趣开始使用 Gateway API，请查阅 &lt;a href=&#34;https://gateway-api.sigs.k8s.io/concepts/api-overview/&#34;&gt;API 概念文档&lt;/a&gt; 和一些&lt;a href=&#34;https://gateway-api.sigs.k8s.io/guides/getting-started/&#34;&gt;指南&lt;/a&gt;以尝试使用它。
因为这是一个基于 CRD 的 API，所以你可以在任何 Kubernetes 1.23+ 集群上安装最新版本。&lt;/p&gt;
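作为入门的示意性例子（其中的名称、主机名和端口均为假设值，并非来自本文），一个最小的 HTTPRoute 大致如下，它将发往某个主机名的流量转发到一个后端 Service：

```yaml
# 示意性示例：名称、主机名、端口均为假设值
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: example-route          # 假设的路由名称
spec:
  parentRefs:
  - name: example-gateway      # 假设集群中已存在的 Gateway
  hostnames:
  - "example.com"              # 假设的主机名
  rules:
  - backendRefs:
    - name: example-svc        # 假设的后端 Service
      port: 8080
```

由于 Gateway API 基于 CRD，这样的清单可以直接通过 `kubectl apply` 提交到任何已安装 Gateway API CRD 的集群。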
&lt;!--
If you&#39;re specifically interested in helping to contribute to Gateway API, we
would love to have you! Please feel free to [open a new issue][issue] on the
repository, or join in the [discussions][disc]. Also check out the [community
page][community] which includes links to the Slack channel and community
meetings. We look forward to seeing you!!
--&gt;
&lt;p&gt;如果你有兴趣为 Gateway API 做出贡献，我们非常欢迎你！
请随时在仓库中&lt;a href=&#34;https://github.com/kubernetes-sigs/gateway-api/issues/new/choose&#34;&gt;报告问题&lt;/a&gt;，或加入&lt;a href=&#34;https://github.com/kubernetes-sigs/gateway-api/discussions&#34;&gt;讨论&lt;/a&gt;。
另请查看&lt;a href=&#34;https://gateway-api.sigs.k8s.io/contributing/community/&#34;&gt;社区页面&lt;/a&gt;，其中包含 Slack 频道和社区会议的链接。
我们期待你的加入！&lt;/p&gt;
&lt;!--
## Further Reading:

- [GEP-1324] provides an overview of the GAMMA goals and some important
  definitions. This GEP is well worth a read for its discussion of the problem
  space.
- [GEP-1426] defines how to use Gateway API route resources, such as
  HTTPRoute, to manage traffic within a service mesh.
- [GEP-1686] builds on the work of [GEP-1709] to define a _conformance
  profile_ for service meshes to be declared conformant with Gateway API.
--&gt;
&lt;h2 id=&#34;进一步阅读&#34;&gt;进一步阅读：&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://gateway-api.sigs.k8s.io/geps/gep-1324/&#34;&gt;GEP-1324&lt;/a&gt; 提供了 GAMMA 目标和一些重要定义的概述。这个 GEP 值得一读，因为它讨论了问题空间。&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://gateway-api.sigs.k8s.io/geps/gep-1426/&#34;&gt;GEP-1426&lt;/a&gt; 定义了如何使用 Gateway API 路由资源（如 HTTPRoute）管理服务网格内的流量。&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://gateway-api.sigs.k8s.io/geps/gep-1686/&#34;&gt;GEP-1686&lt;/a&gt; 在 &lt;a href=&#34;https://gateway-api.sigs.k8s.io/geps/gep-1709/&#34;&gt;GEP-1709&lt;/a&gt; 的工作基础上，为声明符合 Gateway API 的服务网格定义了一个一致性配置文件。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
Although these are [Experimental][status] patterns, note that they are available
in the [`standard` release channel][ch], since the GAMMA initiative has not
needed to introduce new resources or fields to date.
--&gt;
&lt;p&gt;虽然这些都是&lt;a href=&#34;https://gateway-api.sigs.k8s.io/geps/overview/#status&#34;&gt;实验特性&lt;/a&gt;，但请注意，它们可在 &lt;a href=&#34;https://gateway-api.sigs.k8s.io/concepts/versioning/#release-channels-eg-experimental-standard&#34;&gt;standard 发布频道&lt;/a&gt;使用，
因为 GAMMA 计划迄今为止不需要引入新的资源或字段。&lt;/p&gt;
&lt;!--
[gamma]:https://gateway-api.sigs.k8s.io/concepts/gamma/
[status]:https://gateway-api.sigs.k8s.io/geps/overview/#status
[ch]:https://gateway-api.sigs.k8s.io/concepts/versioning/#release-channels-eg-experimental-standard
[cel]:/docs/reference/using-api/cel/
[crd]:/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/
[concepts]:https://gateway-api.sigs.k8s.io/concepts/api-overview/
[geps]:https://gateway-api.sigs.k8s.io/contributing/enhancement-requests/
[guides]:https://gateway-api.sigs.k8s.io/guides/getting-started/
[impl]:https://gateway-api.sigs.k8s.io/implementations/
[install-crds]:https://gateway-api.sigs.k8s.io/guides/getting-started/#install-the-crds
[issue]:https://github.com/kubernetes-sigs/gateway-api/issues/new/choose
[disc]:https://github.com/kubernetes-sigs/gateway-api/discussions
[community]:https://gateway-api.sigs.k8s.io/contributing/community/
[mesh-routing]:https://gateway-api.sigs.k8s.io/concepts/gamma/#how-the-gateway-api-works-for-service-mesh
[GEP-1426]:https://gateway-api.sigs.k8s.io/geps/gep-1426/
[GEP-1324]:https://gateway-api.sigs.k8s.io/geps/gep-1324/
[GEP-1686]:https://gateway-api.sigs.k8s.io/geps/gep-1686/
[GEP-1709]:https://gateway-api.sigs.k8s.io/geps/gep-1709/
[issue 2277]:https://github.com/kubernetes-sigs/gateway-api/issues/2277
[supported-versions]:https://gateway-api.sigs.k8s.io/concepts/versioning/#supported-versions
[v0.8.0 release notes]:https://github.com/kubernetes-sigs/gateway-api/releases/tag/v0.8.0
[versioning docs]:https://gateway-api.sigs.k8s.io/concepts/versioning/
--&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes 1.28：用于改进集群安全升级的新（Alpha）机制</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/08/28/kubernetes-1-28-feature-mixed-version-proxy-alpha/</link>
      <pubDate>Mon, 28 Aug 2023 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/08/28/kubernetes-1-28-feature-mixed-version-proxy-alpha/</guid>
      <description>
        
        
        &lt;!--
layout: blog
title: &#34;Kubernetes 1.28: A New (alpha) Mechanism For Safer Cluster Upgrades&#34;
date: 2023-08-28
slug: kubernetes-1-28-feature-mixed-version-proxy-alpha
--&gt;
&lt;!--
**Author:** Richa Banker (Google)
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Richa Banker (Google)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者：&lt;/strong&gt; Xin Li (DaoCloud)&lt;/p&gt;
&lt;!--
This blog describes the _mixed version proxy_, a new alpha feature in Kubernetes 1.28. The
mixed version proxy enables an HTTP request for a resource to be served by the correct API server
in cases where there are multiple API servers at varied versions in a cluster. For example,
this is useful during a cluster upgrade, or when you&#39;re rolling out the runtime configuration of
the cluster&#39;s control plane.
--&gt;
&lt;p&gt;本博客介绍了&lt;strong&gt;混合版本代理（Mixed Version Proxy）&lt;/strong&gt;，这是 Kubernetes 1.28 中的一个新的
Alpha 级别特性。当集群中存在多个不同版本的 API 服务器时，混合版本代理使对资源的 HTTP 请求能够被正确的
API 服务器处理。例如，在集群升级期间，或在推出集群控制平面的运行时配置变更时，此特性非常有用。&lt;/p&gt;
&lt;!--
## What problem does this solve?
When a cluster undergoes an upgrade, the kube-apiservers existing at different
versions in that scenario can serve different sets (groups, versions, resources)
of built-in resources. A resource request made in this scenario may be served by
any of the available apiservers, potentially resulting in the request ending up
at an apiserver that may not be aware of the requested resource; consequently it
being served a 404 not found error which is incorrect. Furthermore, incorrect serving
of the 404 errors can lead to serious consequences such as namespace deletion being
blocked incorrectly or objects being garbage collected mistakenly.
--&gt;
&lt;h2 id=&#34;这解决了什么问题&#34;&gt;这解决了什么问题？&lt;/h2&gt;
&lt;p&gt;当集群升级时，处于不同版本的各个 kube-apiserver 可能服务于不同的内置资源集（组、版本、资源）。
在这种情况下发出的资源请求可能由任一可用的 apiserver 处理，
因此请求可能最终到达一个并不知道所请求资源的 apiserver，
进而收到不正确的 404（&amp;quot;Not Found&amp;quot;）错误响应。
此外，错误地返回 404 可能导致严重后果，例如命名空间删除被错误阻止，或对象被错误地垃圾回收。&lt;/p&gt;
&lt;!--
## How do we solve the problem?



&lt;figure class=&#34;diagram-large &#34;&gt;
    &lt;img src=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/images/blog/2023-08-28-a-new-alpha-mechanism-for-safer-cluster-upgrades/mvp-flow-diagram.svg&#34;/&gt; 
&lt;/figure&gt;
--&gt;
&lt;h2 id=&#34;如何解决此问题&#34;&gt;如何解决此问题？&lt;/h2&gt;


&lt;figure class=&#34;diagram-large &#34;&gt;
    &lt;img src=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/images/blog/2023-08-28-a-new-alpha-mechanism-for-safer-cluster-upgrades/mvp-flow-diagram_zh.svg&#34;/&gt; 
&lt;/figure&gt;
&lt;!--
The new feature “Mixed Version Proxy” provides the kube-apiserver with the capability to proxy a request to a peer kube-apiserver which is aware of the requested resource and hence can serve the request. To do this, a new filter has been added to the handler chain in the API server&#39;s aggregation layer.
--&gt;
&lt;p&gt;&amp;quot;混合版本代理&amp;quot;这一新特性使 kube-apiserver 能够将请求代理到能够感知所请求资源、
因此可以为该请求提供服务的对等 kube-apiserver。
为此，API 服务器聚合层的处理程序链中新增了一个过滤器。&lt;/p&gt;
&lt;!--
1. The new filter in the handler chain checks if the request is for a group/version/resource
   that the apiserver doesn&#39;t know about (using the existing
   [StorageVersion API](https://github.com/kubernetes/kubernetes/blob/release-1.28/pkg/apis/apiserverinternal/types.go#L25-L37)).
   If so, it proxies the request to one of the apiservers that is listed in the ServerStorageVersion object.
   If the identified peer apiserver fails to respond (due to reasons like network connectivity,
   race between the request being received and the controller registering the apiserver-resource info
   in ServerStorageVersion object), then error 503(&#34;Service Unavailable&#34;) is served.
2. To prevent indefinite proxying of the request, a (new for v1.28) HTTP header
   `X-Kubernetes-APIServer-Rerouted: true` is added to the original request once
   it is determined that the request cannot be served by the original API server.
   Setting that to true marks that the original API server couldn&#39;t handle the request
   and it should therefore be proxied. If a destination peer API server sees this header,
   it never proxies the request further.
3. To set the network location of a kube-apiserver that peers will use to proxy requests,
   the value passed in `--advertise-address` or (when `--advertise-address` is unspecified)
   the `--bind-address` flag is used. For users with network configurations that would not
   allow communication between peer kube-apiservers using the addresses specified in these flags,
   there is an option to pass in the correct peer address as `--peer-advertise-ip` and
   `--peer-advertise-port` flags that are introduced in this feature.
--&gt;
&lt;ol&gt;
&lt;li&gt;处理程序链中的新过滤器检查请求是否为 apiserver 无法解析的 API 组/版本/资源（使用现有的
&lt;a href=&#34;https://github.com/kubernetes/kubernetes/blob/release-1.28/pkg/apis/apiserverinternal/types.go#L25-L37&#34;&gt;StorageVersion API&lt;/a&gt;）。
如果是，它会将请求代理到 ServerStorageVersion 对象中列出的 apiserver 之一。
如果所选的对等 apiserver 无法响应（由于网络连接、收到的请求与在 ServerStorageVersion
对象中注册 apiserver-resource 信息的控制器之间的竞争等原因），则会出现 503（&amp;quot;Service Unavailable&amp;quot;）错误响应。&lt;/li&gt;
&lt;li&gt;为了防止无限期地代理请求，一旦最初的 API 服务器确定无法处理该请求，就会在原始请求中添加一个
（v1.28 新增）HTTP 请求头 &lt;code&gt;X-Kubernetes-APIServer-Rerouted: true&lt;/code&gt;。将其设置为 true 意味着原始
API 服务器无法处理该请求，需要对其进行代理。如果目标侧对等 API 服务器看到此标头，则不会对该请求做进一步的代理操作。&lt;/li&gt;
&lt;li&gt;要设置 kube-apiserver 的网络位置，以供对等服务器来代理请求，将使用 &lt;code&gt;--advertise-address&lt;/code&gt;
或（当未指定&lt;code&gt;--advertise-address&lt;/code&gt;时）&lt;code&gt;--bind-address&lt;/code&gt; 标志所设置的值。
如果网络配置中不允许用户在对等 kube-apiserver 之间使用这些标志中指定的地址进行通信，
可以选择将正确的对等地址配置在此特性引入的 &lt;code&gt;--peer-advertise-ip&lt;/code&gt; 和 &lt;code&gt;--peer-advertise-port&lt;/code&gt;
参数中。&lt;/li&gt;
&lt;/ol&gt;
&lt;!--
## How do I enable this feature?
Following are the required steps to enable the feature:
--&gt;
&lt;h2 id=&#34;如何启用此特性&#34;&gt;如何启用此特性？&lt;/h2&gt;
&lt;p&gt;以下是启用此特性的步骤：&lt;/p&gt;
&lt;!--
* Download the [latest Kubernetes project](/releases/download/) (version `v1.28.0` or later)  
* Switch on the feature gate with the command line flag `--feature-gates=UnknownVersionInteroperabilityProxy=true`
  on the kube-apiservers
* Pass the CA bundle that will be used by source kube-apiserver to authenticate
  destination kube-apiserver&#39;s serving certs using the flag `--peer-ca-file`
  on the kube-apiservers. Note: this is a required flag for this feature to work.
  There is no default value enabled for this flag.
* Pass the correct ip and port of the local kube-apiserver that will be used by
  peers to connect to this kube-apiserver while proxying a request.
  Use the flags `--peer-advertise-ip` and `peer-advertise-port` to the kube-apiservers
  upon startup. If unset, the value passed to either `--advertise-address` or `--bind-address`
  is used. If those too, are unset, the host&#39;s default interface will be used.
--&gt;
&lt;ul&gt;
&lt;li&gt;下载&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/releases/download/&#34;&gt;Kubernetes 项目的最新版本&lt;/a&gt;（版本 &lt;code&gt;v1.28.0&lt;/code&gt; 或更高版本）&lt;/li&gt;
&lt;li&gt;在 kube-apiserver 上使用命令行标志 &lt;code&gt;--feature-gates=UnknownVersionInteroperabilityProxy=true&lt;/code&gt;
打开特性门控&lt;/li&gt;
&lt;li&gt;使用 kube-apiserver 的 &lt;code&gt;--peer-ca-file&lt;/code&gt; 参数为源 kube-apiserver 提供 CA 证书，
用以验证目标 kube-apiserver 的服务证书。注意：这是此功能正常工作所必需的参数。
此参数没有默认值。&lt;/li&gt;
&lt;li&gt;为本地 kube-apiserver 设置正确的 IP 和端口，在代理请求时，对等方将使用该 IP 和端口连接到此
kube-apiserver。为此，在 kube-apiserver 启动时使用 &lt;code&gt;--peer-advertise-ip&lt;/code&gt; 和
&lt;code&gt;--peer-advertise-port&lt;/code&gt; 命令行参数。
如果未设置这两个参数，则默认使用 &lt;code&gt;--advertise-address&lt;/code&gt; 或 &lt;code&gt;--bind-address&lt;/code&gt; 命令行参数的值。
如果这些也未设置，则将使用主机的默认接口。&lt;/li&gt;
&lt;/ul&gt;
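综合上述各项，一个示意性的 kube-apiserver 静态 Pod 清单片段大致如下（仅展示与此特性相关的参数；其中的文件路径与 IP 地址均为假设值，并非来自本文）：

```yaml
# 示意性片段：路径与地址均为假设值，实际取值取决于你的集群配置
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - name: kube-apiserver
    image: registry.k8s.io/kube-apiserver:v1.28.0
    command:
    - kube-apiserver
    # 打开此特性的特性门控
    - --feature-gates=UnknownVersionInteroperabilityProxy=true
    # 必需参数：用于校验对等 apiserver 服务证书的 CA 证书（假设的路径）
    - --peer-ca-file=/etc/kubernetes/pki/ca.crt
    # 可选：当 --advertise-address/--bind-address 不适用于对等通信时设置（假设的地址）
    - --peer-advertise-ip=192.0.2.10
    - --peer-advertise-port=6443
```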
&lt;!--
## What’s missing?
Currently we only proxy resource requests to a peer kube-apiserver when its determined to do so.
Next we need to address how to work discovery requests in such scenarios. Right now we are planning
to have the following capabilities for beta
--&gt;
&lt;h2 id=&#34;少了什么东西&#34;&gt;还缺少什么？&lt;/h2&gt;
&lt;p&gt;目前，我们仅在确定需要代理时才将资源请求代理到对等 kube-apiserver。
接下来我们需要解决在这种场景下如何处理发现（Discovery）请求。
目前我们计划在 Beta 阶段提供以下能力：&lt;/p&gt;
&lt;!--
* Merged discovery across all kube-apiservers
* Use an egress dialer for network connections made to peer kube-apiservers
--&gt;
&lt;ul&gt;
&lt;li&gt;合并所有 kube-apiserver 的发现数据&lt;/li&gt;
&lt;li&gt;使用出口拨号器（egress dialer）与对等 kube-apiserver 进行网络连接&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
## How can I learn more?

- Read the [Mixed Version Proxy documentation](/docs/concepts/architecture/mixed-version-proxy)
- Read [KEP-4020: Unknown Version Interoperability Proxy](https://github.com/kubernetes/enhancements/tree/master/keps/sig-api-machinery/4020-unknown-version-interoperability-proxy)
--&gt;
&lt;h2 id=&#34;如何进一步了解&#34;&gt;如何进一步了解？&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;阅读&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/concepts/architecture/mixed-version-proxy&#34;&gt;混合版本代理文档&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;阅读 &lt;a href=&#34;https://github.com/kubernetes/enhancements/tree/master/keps/sig-api-machinery/4020-unknown-version-interoperability-proxy&#34;&gt;KEP-4020：未知版本互操作代理&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
## How can I get involved?
Reach us on [Slack](https://slack.k8s.io/): [#sig-api-machinery](https://kubernetes.slack.com/messages/sig-api-machinery), or through the [mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-api-machinery). 

Huge thanks to the contributors that have helped in the design, implementation, and review of this feature: Daniel Smith, Han Kang, Joe Betz, Jordan Liggit, Antonio Ojea, David Eads and Ben Luddy!
--&gt;
&lt;h2 id=&#34;如何参与其中&#34;&gt;如何参与其中？&lt;/h2&gt;
&lt;p&gt;通过 &lt;a href=&#34;https://slack.k8s.io/&#34;&gt;Slack&lt;/a&gt;：&lt;a href=&#34;https://kubernetes.slack.com/messages/sig-api-machinery&#34;&gt;#sig-api-machinery&lt;/a&gt;
或&lt;a href=&#34;https://groups.google.com/forum/#!forum/kubernetes-sig-api-machinery&#34;&gt;邮件列表&lt;/a&gt;
联系我们。&lt;/p&gt;
&lt;p&gt;非常感谢帮助设计、实施和评审此特性的贡献者：
Daniel Smith、Han Kang、Joe Betz、Jordan Liggit、Antonio Ojea、David Eads 和 Ben Luddy！&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes v1.28：介绍原生边车容器</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/08/25/native-sidecar-containers/</link>
      <pubDate>Fri, 25 Aug 2023 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/08/25/native-sidecar-containers/</guid>
      <description>
        
        
        &lt;!--
layout: blog
title: &#34;Kubernetes v1.28: Introducing native sidecar containers&#34;
date: 2023-08-25
slug: native-sidecar-containers
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Todd Neal (AWS), Matthias Bertschy (ARMO), Sergey Kanzhelev (Google), Gunju Kim (NAVER), Shannon Kularathna (Google)&lt;/p&gt;
&lt;!--
***Authors:*** Todd Neal (AWS), Matthias Bertschy (ARMO), Sergey Kanzhelev (Google), Gunju Kim (NAVER), Shannon Kularathna (Google)
--&gt;
&lt;!--
This post explains how to use the new sidecar feature, which enables restartable init containers and is available in alpha in Kubernetes 1.28. We want your feedback so that we can graduate this feature as soon as possible.
--&gt;
&lt;p&gt;本文介绍了如何使用新的边车（Sidecar）功能，该功能支持可重新启动的 Init 容器，
并且在 Kubernetes 1.28 中以 Alpha 版本发布。我们希望得到你的反馈，以便此特性能够尽快毕业。&lt;/p&gt;
&lt;!--
The concept of a “sidecar” has been part of Kubernetes since nearly the very beginning. In 2015, sidecars were described in a [blog post](/blog/2015/06/the-distributed-system-toolkit-patterns/) about composite containers as additional containers that “extend and enhance the ‘main’ container”. Sidecar containers have become a common Kubernetes deployment pattern and are often used for network proxies or as part of a logging system. Until now, sidecars were a concept that Kubernetes users applied without native support. The lack of native support has caused some usage friction, which this enhancement aims to resolve.
--&gt;
&lt;p&gt;“边车”的概念几乎从一开始就是 Kubernetes 的一部分。在 2015 年，
一篇关于复合容器的&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2015/06/the-distributed-system-toolkit-patterns/&#34;&gt;博客文章（英文）&lt;/a&gt;将边车描述为“扩展和增强‘主（main）’容器”的附加容器。
边车容器已成为一种常见的 Kubernetes 部署模式，通常用于网络代理或作为日志系统的一部分。
在此之前，边车一直是 Kubernetes 用户在没有原生支持的情况下所应用的一种概念。
缺乏原生支持造成了一些使用上的不便，此增强功能旨在解决这些问题。&lt;/p&gt;
&lt;!--
## What are sidecar containers in 1.28?
--&gt;
&lt;h2 id=&#34;what-are-sidecar-containers-in-1-28&#34;&gt;Kubernetes 1.28 中的边车容器是什么？&lt;/h2&gt;
&lt;!--
Kubernetes 1.28 adds a new `restartPolicy` field to [init containers](/docs/concepts/workloads/pods/init-containers/) that is available when the `SidecarContainers` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is enabled.
--&gt;
&lt;p&gt;Kubernetes 1.28 在 &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/concepts/workloads/pods/init-containers/&#34;&gt;Init 容器&lt;/a&gt;中添加了一个新的 &lt;code&gt;restartPolicy&lt;/code&gt; 字段，
该字段在 &lt;code&gt;SidecarContainers&lt;/code&gt; &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/reference/command-line-tools-reference/feature-gates/&#34;&gt;特性门控&lt;/a&gt;启用时可用。&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Pod&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;initContainers&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;secret-fetch&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;image&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;secret-fetch:1.0&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;network-proxy&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;image&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;network-proxy:1.0&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;restartPolicy&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Always&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;containers&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;...&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
The field is optional and, if set, the only valid value is Always. Setting this field changes the behavior of init containers as follows:
--&gt;
&lt;p&gt;该字段是可选的；如果设置，唯一有效的值是 &lt;code&gt;Always&lt;/code&gt;。设置此字段会更改 Init 容器的行为，如下所示：&lt;/p&gt;
&lt;!--
- The container restarts if it exits
- Any subsequent init container starts immediately after the [startupProbe](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-startup-probes) has successfully completed instead of waiting for the restartable init container to exit
- The resource usage calculation changes for the pod as restartable init container resources are now added to the sum of the resource requests by the main containers
--&gt;
&lt;ul&gt;
&lt;li&gt;如果容器退出则会重新启动&lt;/li&gt;
&lt;li&gt;任何后续的 Init 容器在 &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-startup-probes&#34;&gt;startupProbe&lt;/a&gt;
成功完成后立即启动，而不是等待可重新启动的 Init 容器退出&lt;/li&gt;
&lt;li&gt;Pod 的资源用量计算方式发生变化：可重新启动的 Init 容器的资源现在会计入主容器资源请求的总和&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
[Pod termination](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination) continues to only depend on the main containers. An init container with a `restartPolicy` of `Always` (named a sidecar) won&#39;t prevent the pod from terminating after the main containers exit.
--&gt;
&lt;p&gt;&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination&#34;&gt;Pod 终止&lt;/a&gt;继续仅依赖于主容器。
&lt;code&gt;restartPolicy&lt;/code&gt; 为 &lt;code&gt;Always&lt;/code&gt; 的 Init 容器（称为边车）不会阻止 Pod 在主容器退出后终止。&lt;/p&gt;
&lt;!--
The following properties of restartable init containers make them ideal for the sidecar deployment pattern:
--&gt;
&lt;p&gt;可重新启动的 Init 容器的以下属性使其非常适合边车部署模式：&lt;/p&gt;
&lt;!--
- Init containers have a well-defined startup order regardless of whether you set a `restartPolicy`, so you can ensure that your sidecar starts before any container declarations that come after the sidecar declaration in your manifest.
- Sidecar containers don&#39;t extend the lifetime of the Pod, so you can use them in short-lived Pods with no changes to the Pod lifecycle.
- Sidecar containers are restarted on exit, which improves resilience and lets you use sidecars to provide services that your main containers can more reliably consume.
--&gt;
&lt;ul&gt;
&lt;li&gt;无论你是否设置 &lt;code&gt;restartPolicy&lt;/code&gt;，Init 容器都有明确定义的启动顺序，
因此你可以确保你的边车先于清单中在其之后声明的任何容器启动&lt;/li&gt;
&lt;li&gt;边车容器不会延长 Pod 的生命周期，因此你可以在短生命周期的 Pod 中使用它们，而不会对 Pod 生命周期产生改变。&lt;/li&gt;
&lt;li&gt;边车容器在退出时将被重新启动，这提高了韧性，并允许你使用边车提供主容器可以更可靠地使用的服务&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
## When to use sidecar containers
--&gt;
&lt;h2 id=&#34;when-to-use-sidecar-containers&#34;&gt;何时要使用边车容器&lt;/h2&gt;
&lt;!--
You might find built-in sidecar containers useful for workloads such as the following:
--&gt;
&lt;p&gt;你可能会发现内置边车容器对于以下工作负载很有用：&lt;/p&gt;
&lt;!--
- **Batch or AI/ML workloads**, or other Pods that run to completion. These workloads will experience the most significant benefits.
- **Network proxies** that start up before any other container in the manifest. Every other container that runs can use the proxy container&#39;s services. For instructions, see the [Kubernetes Native sidecars in Istio blog post](https://istio.io/latest/blog/2023/native-sidecars/).
- **Log collection containers**, which can now start before any other container and run until the Pod terminates. This improves the reliability of log collection in your Pods.
- **Jobs**, which can use sidecars for any purpose without Job completion being blocked by the running sidecar. No additional configuration is required to ensure this behavior.
--&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;批处理或 AI/ML 工作负载&lt;/strong&gt;，或其他运行至完成的 Pod。这些工作负载将获得最显著的收益。&lt;/li&gt;
&lt;li&gt;先于清单中所有其他容器启动的&lt;strong&gt;网络代理&lt;/strong&gt;。所有其他运行的容器都可以使用此代理容器的服务。
有关说明，请参阅&lt;a href=&#34;https://istio.io/latest/blog/2023/native-sidecars/&#34;&gt;在 Istio 中使用 Kubernetes 原生 Sidecar&lt;/a&gt;。&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;日志收集容器&lt;/strong&gt;，现在可以在任何其他容器之前启动并运行直到 Pod 终止。这提高了 Pod 中日志收集的可靠性。&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Job&lt;/strong&gt;，可以将边车用于任何目的，而 Job 完成不会被正在运行的边车阻止。无需额外配置即可确保此行为。&lt;/li&gt;
&lt;/ul&gt;
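以上面 Job 的场景为例（示意性示例，其中的名称与镜像均为假设值，并非来自本文），一个带有日志收集边车的 Job 大致可以这样声明，Job 的完成不会被仍在运行的边车阻塞：

```yaml
# 示意性示例：名称与镜像均为假设值
apiVersion: batch/v1
kind: Job
metadata:
  name: example-job
spec:
  template:
    spec:
      initContainers:
      - name: log-collector
        image: log-collector:1.0   # 假设的日志收集镜像
        restartPolicy: Always      # 将其声明为边车：不会阻止 Job 完成
      containers:
      - name: worker
        image: batch-worker:1.0    # 假设的批处理工作负载镜像
      restartPolicy: Never         # Pod 级别的重启策略，与边车字段互不影响
```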
&lt;!--
## How did users get sidecar behavior before 1.28?
--&gt;
&lt;h2 id=&#34;how-did-users-get-sidecar-behavior-before-1-28&#34;&gt;1.28 之前用户如何获得 Sidecar 行为？&lt;/h2&gt;
&lt;!--
Prior to the sidecar feature, the following options were available for implementing sidecar behavior depending on the desired lifetime of the sidecar container:
--&gt;
&lt;p&gt;在边车功能出现之前，可以使用以下选项来根据边车容器的所需生命周期来实现边车行为：&lt;/p&gt;
&lt;!--
- **Lifetime of sidecar less than Pod lifetime**: Use an init container, which provides well-defined startup order. However, the sidecar has to exit for other init containers and main Pod containers to start.
- **Lifetime of sidecar equal to Pod lifetime**: Use a main container that runs alongside your workload containers in the Pod. This method doesn&#39;t give you control over startup order, and lets the sidecar container potentially block Pod termination after the workload containers exit.
--&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;边车的生命周期小于 Pod 生命周期&lt;/strong&gt;：使用 Init 容器，它提供明确定义的启动顺序。
然而，边车必须退出才能让其他 Init 容器和主 Pod 容器启动。&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;边车的生命周期等于 Pod 生命周期&lt;/strong&gt;：使用与 Pod 中的工作负载容器一起运行的主容器。
此方法无法让你控制启动顺序，并且边车容器可能会在工作负载容器退出后阻止 Pod 终止。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
The built-in sidecar feature solves for the use case of having a lifetime equal to the Pod lifetime and has the following additional benefits:
--&gt;
&lt;p&gt;内置的边车功能解决了其生命周期与 Pod 生命周期相同的用例，并具有以下额外优势：&lt;/p&gt;
&lt;!--
- Provides control over startup order
- Doesn’t block Pod termination
--&gt;
&lt;ul&gt;
&lt;li&gt;提供对启动顺序的控制&lt;/li&gt;
&lt;li&gt;不阻碍 Pod 终止&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
## Transitioning existing sidecars to the new model
--&gt;
&lt;h2 id=&#34;transitioning-existing-sidecars-to-the-new-model&#34;&gt;将现有边车过渡到新模式&lt;/h2&gt;
&lt;!--
We recommend only using the sidecars feature gate in [short lived testing clusters](/docs/reference/command-line-tools-reference/feature-gates/#feature-stages) at the alpha stage. If you have an existing sidecar that is configured as a main container so it can run for the lifetime of the pod, it can be moved to the `initContainers` section of the pod spec and given a `restartPolicy` of `Always`. In many cases, the sidecar should work as before with the added benefit of having a defined startup ordering and not prolonging the pod lifetime.
--&gt;
&lt;p&gt;在 Alpha 阶段，我们建议仅在&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/reference/command-line-tools-reference/feature-gates/#feature-stages&#34;&gt;短期测试集群&lt;/a&gt;中启用边车特性门控。
如果你有一个现有的边车，被配置为主容器，以便它可以在 Pod 的生命周期内运行，
则可以将其移至 Pod 规范的 &lt;code&gt;initContainers&lt;/code&gt; 部分，并将 &lt;code&gt;restartPolicy&lt;/code&gt; 指定为 &lt;code&gt;Always&lt;/code&gt;。
在许多情况下，边车应该像以前一样工作，并具有定义启动顺序且不会延长 Pod 生命周期的额外好处。&lt;/p&gt;
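作为示意（其中的容器与镜像名均为假设值，并非来自本文），这一迁移大致如下：将原先声明在 `containers` 中的边车移至 `initContainers`，并设置 `restartPolicy: Always`：

```yaml
# 迁移前（示意）：边车作为普通容器与工作负载容器并列运行
spec:
  containers:
  - name: app
    image: app:1.0               # 假设的工作负载镜像
  - name: proxy-sidecar
    image: proxy:1.0             # 假设的边车镜像

# 迁移后（示意）：边车成为可重新启动的 Init 容器
spec:
  initContainers:
  - name: proxy-sidecar
    image: proxy:1.0
    restartPolicy: Always        # 使其在 Pod 的整个生命周期内持续运行
  containers:
  - name: app
    image: app:1.0
```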
&lt;!--
## Known issues
--&gt;
&lt;h2 id=&#34;known-issues&#34;&gt;已知问题&lt;/h2&gt;
&lt;!--
The alpha release of built-in sidecar containers has the following known issues, which we&#39;ll resolve before graduating the feature to beta:
--&gt;
&lt;p&gt;内置边车容器的 Alpha 版本具有以下已知问题，我们将在该功能升级为 Beta 之前解决这些问题：&lt;/p&gt;
&lt;!--
- The CPU, memory, device, and topology manager are unaware of the sidecar container lifetime and additional resource usage, and will operate as if the Pod had lower resource requests than it actually does.
- The output of `kubectl describe node` is incorrect when sidecars are in use. The output shows resource usage that&#39;s lower than the actual usage because it doesn&#39;t use the new resource usage calculation for sidecar containers.
--&gt;
&lt;ul&gt;
&lt;li&gt;CPU、内存、设备和拓扑管理器不感知边车容器的生命周期及其额外的资源使用情况，会按照 Pod 资源请求低于实际值的方式运行。&lt;/li&gt;
&lt;li&gt;使用边车时，&lt;code&gt;kubectl describe node&lt;/code&gt; 的输出不正确。输出显示的资源使用量低于实际使用量，
因为它没有对边车容器使用新的资源使用计算方式。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
## We need your feedback!
--&gt;
&lt;h2 id=&#34;we-need-your-feedback&#34;&gt;我们需要你的反馈！&lt;/h2&gt;
&lt;!--
In the alpha stage, we want you to try out sidecar containers in your environments and open issues if you encounter bugs or friction points. We&#39;re especially interested in feedback about the following:
--&gt;
&lt;p&gt;在 Alpha 阶段，我们希望你在自己的环境中尝试边车容器，并在遇到 Bug 或不顺畅之处时提交 Issue。我们对以下方面的反馈特别感兴趣：&lt;/p&gt;
&lt;!--
- The shutdown sequence, especially with multiple sidecars running 
- The backoff timeout adjustment for crashing sidecars 
- The behavior of Pod readiness and liveness probes when sidecars are running
--&gt;
&lt;ul&gt;
&lt;li&gt;关闭顺序，尤其是多个边车运行时&lt;/li&gt;
&lt;li&gt;针对崩溃边车的退避超时调整&lt;/li&gt;
&lt;li&gt;边车运行时 Pod 就绪性和活性探测的行为&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
To open an issue, see the [Kubernetes GitHub repository](https://github.com/kubernetes/kubernetes/issues/new/choose).
--&gt;
&lt;p&gt;要提出问题，请参阅 &lt;a href=&#34;https://github.com/kubernetes/kubernetes/issues/new/choose&#34;&gt;Kubernetes GitHub 存储库&lt;/a&gt;。&lt;/p&gt;
&lt;!--
## What’s next?
--&gt;
&lt;h2 id=&#34;what-s-next&#34;&gt;接下来是什么？&lt;/h2&gt;
&lt;!--
In addition to the known issues that will be resolved, we&#39;re working on adding termination ordering for sidecar and main containers. This will ensure that sidecar containers only terminate after the Pod&#39;s main containers have exited.
--&gt;
&lt;p&gt;除了将要解决的已知问题之外，我们正在努力为边车和主容器添加终止顺序。这将确保边车容器仅在 Pod 主容器退出后终止。&lt;/p&gt;
&lt;!--
We’re excited to see the sidecar feature come to Kubernetes and are interested in feedback.
--&gt;
&lt;p&gt;我们很高兴看到 Kubernetes 引入了边车功能，并期望得到反馈。&lt;/p&gt;
&lt;!--
## Acknowledgements
--&gt;
&lt;h2 id=&#34;acknowledgements&#34;&gt;致谢&lt;/h2&gt;
&lt;!--
Many years have passed since the original KEP was written, so we apologize if we omit anyone who worked on this feature over the years. This is a best-effort attempt to recognize the people involved in this effort.
--&gt;
&lt;p&gt;自最初的 KEP 撰写以来已经过去了很多年，如果我们遗漏了多年来为此功能做出贡献的任何人，我们深表歉意。
以下名单是我们尽最大努力对参与这项工作的人员所做的认可。&lt;/p&gt;
&lt;!--
- [mrunalp](https://github.com/mrunalp/) for design discussions and reviews
- [thockin](https://github.com/thockin/) for API discussions and support thru years
- [bobbypage](https://github.com/bobbypage) for reviews
- [smarterclayton](https://github.com/smarterclayton) for detailed review and feedback
- [howardjohn](https://github.com/howardjohn) for feedback over years and trying it early during implementation
- [derekwaynecarr](https://github.com/derekwaynecarr) and [dchen1107](https://github.com/dchen1107) for leadership
- [jpbetz](https://github.com/Jpbetz) for API and termination ordering designs as well as code reviews
- [Joseph-Irving](https://github.com/Joseph-Irving) and [rata](https://github.com/rata) for the early iterations design and reviews years back
- [swatisehgal](https://github.com/swatisehgal) and [ffromani](https://github.com/ffromani) for early feedback on resource managers impact
- [alculquicondor](https://github.com/Alculquicondor) for feedback on addressing the version skew of the scheduler
- [wojtek-t](https://github.com/Wojtek-t) for PRR review of a KEP
- [ahg-g](https://github.com/ahg-g) for reviewing the scheduler portion of a KEP
- [adisky](https://github.com/Adisky) for the Job completion issue
--&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/mrunalp/&#34;&gt;mrunalp&lt;/a&gt; 参与设计讨论和评审&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/thockin/&#34;&gt;thockin&lt;/a&gt; 多年来对于 API 的讨论和支持&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/bobbypage&#34;&gt;bobbypage&lt;/a&gt; 的审查工作&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/smarterclayton&#34;&gt;smarterclayton&lt;/a&gt; 进行详细审查和反馈&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/howardjohn&#34;&gt;howardjohn&lt;/a&gt; 多年来进行的反馈以及在实施过程中的早期尝试&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/derekwaynecarr&#34;&gt;derekwaynecarr&lt;/a&gt; 和 &lt;a href=&#34;https://github.com/dchen1107&#34;&gt;dchen1107&lt;/a&gt; 所发挥的领导作用&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/Jpbetz&#34;&gt;jpbetz&lt;/a&gt; 对 API 和终止排序的设计以及代码审查&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/Joseph-Irving&#34;&gt;Joseph-Irving&lt;/a&gt; 和 &lt;a href=&#34;https://github.com/rata&#34;&gt;rata&lt;/a&gt; 对于多年前的早期迭代设计和审查&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/swatisehgal&#34;&gt;swatisehgal&lt;/a&gt; 和 &lt;a href=&#34;https://github.com/ffromani&#34;&gt;ffromani&lt;/a&gt;
对于有关资源管理器影响的早期反馈&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/Alculquicondor&#34;&gt;alculquicondor&lt;/a&gt; 对于解决调度程序版本偏差的相关反馈&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/Wojtek-t&#34;&gt;wojtek-t&lt;/a&gt; 对于 KEP 的 PRR 进行审查&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/ahg-g&#34;&gt;ahg-g&lt;/a&gt; 对于 KEP 的调度程序部分进行审查&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/Adisky&#34;&gt;adisky&lt;/a&gt; 处理了 Job 完成问题&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
## More Information
--&gt;
&lt;h2 id=&#34;more-information&#34;&gt;更多内容&lt;/h2&gt;
&lt;!--
- Read [API for sidecar containers](/docs/concepts/workloads/pods/init-containers/#api-for-sidecar-containers) in the Kubernetes documentation
- Read the [Sidecar KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/753-sidecar-containers/README.md)
--&gt;
&lt;ul&gt;
&lt;li&gt;阅读 Kubernetes 文档中的&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/concepts/workloads/pods/init-containers/#api-for-sidecar-containers&#34;&gt;边车容器 API&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;阅读&lt;a href=&#34;https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/753-sidecar-containers/README.md&#34;&gt;边车 KEP&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes 1.28：在 Linux 上使用交换内存的 Beta 支持</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/08/24/swap-linux-beta/</link>
      <pubDate>Thu, 24 Aug 2023 10:00:00 -0800</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/08/24/swap-linux-beta/</guid>
      <description>
        
        
        &lt;!--
layout: blog
title: &#34;Kubernetes 1.28: Beta support for using swap on Linux&#34;
date: 2023-08-24T10:00:00-08:00
slug: swap-linux-beta
--&gt;
&lt;!--
**Author:** Itamar Holder (Red Hat)
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Itamar Holder (Red Hat)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者&lt;/strong&gt;：Wilson Wu (DaoCloud)&lt;/p&gt;
&lt;!--
The 1.22 release [introduced Alpha support](/blog/2021/08/09/run-nodes-with-swap-alpha/) for configuring swap memory usage for Kubernetes workloads running on Linux on a per-node basis. Now, in release 1.28, support for swap on Linux nodes has graduated to Beta, along with many new improvements.
--&gt;
&lt;p&gt;Kubernetes 1.22 版本为交换内存&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2021/08/09/run-nodes-with-swap-alpha/&#34;&gt;引入了一项 Alpha 支持&lt;/a&gt;，
用于为在 Linux 节点上运行的 Kubernetes 工作负载逐个节点地配置交换内存使用。
现在，在 1.28 版中，对 Linux 节点上的交换内存的支持已升级为 Beta 版，并有许多新的改进。&lt;/p&gt;
&lt;!--
Prior to version 1.22, Kubernetes did not provide support for swap memory on Linux systems. This was due to the inherent difficulty in guaranteeing and accounting for pod memory utilization when swap memory was involved. As a result, swap support was deemed out of scope in the initial design of Kubernetes, and the default behavior of a kubelet was to fail to start if swap memory was detected on a node.
--&gt;
&lt;p&gt;在 1.22 版之前，Kubernetes 不支持在 Linux 系统上使用交换内存。
这是因为一旦涉及交换内存，就很难保证并核算 Pod 的内存利用率。
因此，交换内存支持被认为超出了 Kubernetes 的最初设计范围，
并且 kubelet 的默认行为是：一旦在节点上检测到交换内存，便启动失败。&lt;/p&gt;
&lt;!--
In version 1.22, the swap feature for Linux was initially introduced in its Alpha stage. This represented a significant advancement, providing Linux users with the opportunity to experiment with the swap feature for the first time. However, as an Alpha version, it was not fully developed and had several issues, including inadequate support for cgroup v2, insufficient metrics and summary API statistics, inadequate testing, and more.
--&gt;
&lt;p&gt;在 1.22 版中，Linux 的交换内存特性以 Alpha 阶段初次引入。
这是一项重大进步，首次为 Linux 用户提供了尝试交换内存特性的机会。
然而，作为 Alpha 版本，它尚不完善，并存在一些问题，
包括对 cgroup v2 的支持不足、指标和摘要 API 统计信息不足、测试不充分等。&lt;/p&gt;
&lt;!--
Swap in Kubernetes has numerous [use cases](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/2400-node-swap/README.md#user-stories) for a wide range of users. As a result, the node special interest group within the Kubernetes project has invested significant effort into supporting swap on Linux nodes for beta. Compared to the alpha, the kubelet&#39;s support for running with swap enabled is more stable and robust, more user-friendly, and addresses many known shortcomings. This graduation to beta represents a crucial step towards achieving the goal of fully supporting swap in Kubernetes.
--&gt;
&lt;p&gt;Kubernetes 中的交换内存有许多&lt;a href=&#34;https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/2400-node-swap/README.md#user-stories&#34;&gt;用例&lt;/a&gt;，
并适用于大量用户。因此，Kubernetes 项目内的节点特别兴趣小组投入了大量精力来支持
Linux 节点上的交换内存特性的 Beta 版本。
与 Alpha 版本相比，启用交换内存后 kubelet 的运行更加稳定和健壮，更加用户友好，并且解决了许多已知缺陷。
这次升级到 Beta 版代表朝着实现在 Kubernetes 中完全支持交换内存的目标迈出了关键一步。&lt;/p&gt;
&lt;!--
## How do I use it?
--&gt;
&lt;h2 id=&#34;how-do-i-use-it&#34;&gt;如何使用此特性？&lt;/h2&gt;
&lt;!--
The utilization of swap memory on a node where it has already been provisioned can be facilitated by the activation of the `NodeSwap` feature gate on the kubelet. Additionally, you must disable the `failSwapOn` configuration setting, or the deprecated `--fail-swap-on` command line flag must be deactivated.
--&gt;
&lt;p&gt;通过启用 kubelet 上的 &lt;code&gt;NodeSwap&lt;/code&gt; 特性门控，可以在已制备交换内存的节点上使用此特性。
此外，你必须将 &lt;code&gt;failSwapOn&lt;/code&gt; 配置设置为 false，或者停用已被弃用的 &lt;code&gt;--fail-swap-on&lt;/code&gt; 命令行标志。&lt;/p&gt;
&lt;!--
It is possible to configure the `memorySwap.swapBehavior` option to define the manner in which a node utilizes swap memory. For instance,
--&gt;
&lt;p&gt;可以配置 &lt;code&gt;memorySwap.swapBehavior&lt;/code&gt; 选项来定义节点使用交换内存的方式。例如：&lt;/p&gt;
&lt;!--
```yaml
# this fragment goes into the kubelet&#39;s configuration file
memorySwap:
  swapBehavior: UnlimitedSwap
```
--&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# 将此段内容放入 kubelet 配置文件&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;memorySwap&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;swapBehavior&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;UnlimitedSwap&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
The available configuration options for `swapBehavior` are:
--&gt;
&lt;p&gt;&lt;code&gt;swapBehavior&lt;/code&gt; 的可用配置选项有：&lt;/p&gt;
&lt;!--
- `UnlimitedSwap` (default): Kubernetes workloads can use as much swap memory as they request, up to the system limit.
- `LimitedSwap`: The utilization of swap memory by Kubernetes workloads is subject to limitations. Only Pods of [Burstable](/docs/concepts/workloads/pods/pod-qos/#burstable) QoS are permitted to employ swap.
--&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;UnlimitedSwap&lt;/code&gt;（默认）：Kubernetes 工作负载可以根据请求使用尽可能多的交换内存，最多可达到系统限制。&lt;/li&gt;
&lt;li&gt;&lt;code&gt;LimitedSwap&lt;/code&gt;：Kubernetes 工作负载对交换内存的使用受到限制。
只有 &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/concepts/workloads/pods/pod-qos/#burstable&#34;&gt;Burstable&lt;/a&gt; QoS Pod 才允许使用交换内存。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
If configuration for `memorySwap` is not specified and the feature gate is enabled, by default the kubelet will apply the same behaviour as the `UnlimitedSwap` setting.
--&gt;
&lt;p&gt;如果未指定 &lt;code&gt;memorySwap&lt;/code&gt; 的配置并且启用了特性门控，则默认情况下，
kubelet 将应用与 &lt;code&gt;UnlimitedSwap&lt;/code&gt; 设置相同的行为。&lt;/p&gt;
&lt;!--
Note that `NodeSwap` is supported for **cgroup v2** only. For Kubernetes v1.28, using swap along with cgroup v1 is no longer supported.
--&gt;
&lt;p&gt;请注意，仅 &lt;strong&gt;cgroup v2&lt;/strong&gt; 支持 &lt;code&gt;NodeSwap&lt;/code&gt;。针对 Kubernetes v1.28，不再支持将交换内存与 cgroup v1 一起使用。&lt;/p&gt;
&lt;!--
## Install a swap-enabled cluster with kubeadm
--&gt;
&lt;h2 id=&#34;install-a-swap-enabled-cluster-with-kubeadm&#34;&gt;使用 kubeadm 安装支持交换内存的集群&lt;/h2&gt;
&lt;!--
### Before you begin
--&gt;
&lt;h3 id=&#34;before-you-begin&#34;&gt;开始之前&lt;/h3&gt;
&lt;!--
It is required for this demo that the kubeadm tool be installed, following the steps outlined in the [kubeadm installation guide](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm). If swap is already enabled on the node, cluster creation may proceed. If swap is not enabled, please refer to the provided instructions for enabling swap.
--&gt;
&lt;p&gt;此演示需要安装 kubeadm 工具，
安装过程按照 &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm&#34;&gt;kubeadm 安装指南&lt;/a&gt;中描述的步�骤进行操作。
如果节点上已启用交换内存，则可以继续创建集群。如果未启用交换内存，请参阅提供的启用交换内存说明。&lt;/p&gt;
&lt;!--
### Create a swap file and turn swap on
--&gt;
&lt;h3 id=&#34;create-a-swap-file-and-turn-swap-on&#34;&gt;创建交换内存文件并开启交换内存功能&lt;/h3&gt;
&lt;!--
I&#39;ll demonstrate creating 4GiB of unencrypted swap.
--&gt;
&lt;p&gt;我将演示创建 4GiB 的未加密交换内存。&lt;/p&gt;
&lt;!--
```bash
dd if=/dev/zero of=/swapfile bs=128M count=32
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
swapon -s # enable the swap file only until this node is rebooted
```
--&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-bash&#34; data-lang=&#34;bash&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;dd &lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;if&lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;=&lt;/span&gt;/dev/zero &lt;span style=&#34;color:#b8860b&#34;&gt;of&lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;=&lt;/span&gt;/swapfile &lt;span style=&#34;color:#b8860b&#34;&gt;bs&lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;=&lt;/span&gt;128M &lt;span style=&#34;color:#b8860b&#34;&gt;count&lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;=&lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;32&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;chmod &lt;span style=&#34;color:#666&#34;&gt;600&lt;/span&gt; /swapfile
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;mkswap /swapfile
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;swapon /swapfile
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;swapon -s &lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# 仅临时启用该交换文件，节点重启后即失效&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
To start the swap file at boot time, add line like `/swapfile swap swap defaults 0 0` to `/etc/fstab` file.
--&gt;
&lt;p&gt;要在引导时启动交换内存文件，请将诸如 &lt;code&gt;/swapfile swap swap defaults 0 0&lt;/code&gt; 的内容添加到 &lt;code&gt;/etc/fstab&lt;/code&gt; 文件中。&lt;/p&gt;
&lt;!--
### Set up a Kubernetes cluster that uses swap-enabled nodes
--&gt;
&lt;h3 id=&#34;set-up-a-kubernetes-cluster-that-uses-swap-enabled-nodes&#34;&gt;搭建使用启用交换内存节点的 Kubernetes 集群&lt;/h3&gt;
&lt;!--
To make things clearer, here is an example kubeadm configuration file `kubeadm-config.yaml` for the swap enabled cluster.
--&gt;
&lt;p&gt;清晰起见，这里给出启用交换内存特性的集群的 kubeadm 配置文件示例 &lt;code&gt;kubeadm-config.yaml&lt;/code&gt;。&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#00f;font-weight:bold&#34;&gt;---&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;kubeadm.k8s.io/v1beta3&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;InitConfiguration&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#00f;font-weight:bold&#34;&gt;---&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;kubelet.config.k8s.io/v1beta1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;KubeletConfiguration&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;failSwapOn&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;false&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;featureGates&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;NodeSwap&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#a2f;font-weight:bold&#34;&gt;true&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;memorySwap&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;swapBehavior&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;LimitedSwap&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
Then create a single-node cluster using `kubeadm init --config kubeadm-config.yaml`. During init, there is a warning that swap is enabled on the node and in case the kubelet `failSwapOn` is set to true. We plan to remove this warning in a future release.
--&gt;
&lt;p&gt;接下来使用 &lt;code&gt;kubeadm init --config kubeadm-config.yaml&lt;/code&gt; 创建单节点集群。
在初始化过程中，如果 kubelet &lt;code&gt;failSwapOn&lt;/code&gt; 设置为 true，则会出现一条警告，告知节点上启用了交换内存特性。
我们计划在未来的版本中删除此警告。&lt;/p&gt;
&lt;!--
## How is the swap limit being determined with LimitedSwap?
--&gt;
&lt;h2 id=&#34;how-is-the-swap-limit-being-determined-with-limitedswap&#34;&gt;如何通过 LimitedSwap 确定交换内存限额？&lt;/h2&gt;
&lt;!--
The configuration of swap memory, including its limitations, presents a significant challenge. Not only is it prone to misconfiguration, but as a system-level property, any misconfiguration could potentially compromise the entire node rather than just a specific workload. To mitigate this risk and ensure the health of the node, we have implemented Swap in Beta with automatic configuration of limitations.
--&gt;
&lt;p&gt;交换内存的配置（包括对其用量的限制）是一项重大挑战。它不仅容易出现配置错误，而且作为系统级属性，
任何错误配置都可能危及整个节点，而不仅仅是某个特定的工作负载。
为了减轻这种风险并确保节点的健康，我们在 Beta 版交换内存特性中实现了对限制的自动配置。&lt;/p&gt;
&lt;!--
With `LimitedSwap`, Pods that do not fall under the Burstable QoS classification (i.e. `BestEffort`/`Guaranteed` Qos Pods) are prohibited from utilizing swap memory. `BestEffort` QoS Pods exhibit unpredictable memory consumption patterns and lack information regarding their memory usage, making it difficult to determine a safe allocation of swap memory. Conversely, `Guaranteed` QoS Pods are typically employed for applications that rely on the precise allocation of resources specified by the workload, with memory being immediately available. To maintain the aforementioned security and node health guarantees, these Pods are not permitted to use swap memory when `LimitedSwap` is in effect.
--&gt;
&lt;p&gt;使用 &lt;code&gt;LimitedSwap&lt;/code&gt;，不属于 Burstable QoS 类别的 Pod（即 &lt;code&gt;BestEffort&lt;/code&gt;/&lt;code&gt;Guaranteed&lt;/code&gt; QoS Pod）被禁止使用交换内存。
&lt;code&gt;BestEffort&lt;/code&gt; QoS Pod 表现出不可预测的内存消耗模式，并且缺乏有关其内存使用情况的信息，
因此很难完成交换内存的安全分配。相反，&lt;code&gt;Guaranteed&lt;/code&gt; QoS Pod 通常用于根据工作负载的设置精确分配资源的应用，
其中的内存资源立即可用。
为了维持上述安全和节点健康保证，当 &lt;code&gt;LimitedSwap&lt;/code&gt; 生效时，这些 Pod 将不允许使用交换内存。&lt;/p&gt;
&lt;!--
Prior to detailing the calculation of the swap limit, it is necessary to define the following terms:
--&gt;
&lt;p&gt;在详细计算交换内存限制之前，有必要定义以下术语：&lt;/p&gt;
&lt;!--
* `nodeTotalMemory`: The total amount of physical memory available on the node.
* `totalPodsSwapAvailable`: The total amount of swap memory on the node that is available for use by Pods (some swap memory may be reserved for system use).
* `containerMemoryRequest`: The container&#39;s memory request.
--&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;nodeTotalMemory&lt;/code&gt;：节点上可用的物理内存总量。&lt;/li&gt;
&lt;li&gt;&lt;code&gt;totalPodsSwapAvailable&lt;/code&gt;：节点上可供 Pod 使用的交换内存总量（可以保留一些交换内存供系统使用）。&lt;/li&gt;
&lt;li&gt;&lt;code&gt;containerMemoryRequest&lt;/code&gt;：容器的内存请求。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
Swap limitation is configured as: `(containerMemoryRequest / nodeTotalMemory) × totalPodsSwapAvailable`
--&gt;
&lt;p&gt;交换内存限制配置为：&lt;code&gt;(containerMemoryRequest / nodeTotalMemory) × totalPodsSwapAvailable&lt;/code&gt;&lt;/p&gt;
&lt;!--
In other words, the amount of swap that a container is able to use is proportionate to its memory request, the node&#39;s total physical memory and the total amount of swap memory on the node that is available for use by Pods.
--&gt;
&lt;p&gt;换句话说，容器能够使用的交换内存量与其内存请求、节点的总物理内存以及节点上可供 Pod
使用的交换内存总量呈比例关系。&lt;/p&gt;
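&lt;p&gt;上述公式的计算方式可以用一小段 Python 代码来示意（这只是一个说明比例关系的草图，其中的数值均为假设值，并非 kubelet 的真实实现）：&lt;/p&gt;

```python
# 按博文中给出的公式计算容器的交换内存限额：
# (containerMemoryRequest / nodeTotalMemory) × totalPodsSwapAvailable

def container_swap_limit(container_memory_request: int,
                         node_total_memory: int,
                         total_pods_swap_available: int) -> int:
    """按容器内存请求占节点物理内存的比例来分配可供 Pod 使用的交换内存。"""
    return container_memory_request * total_pods_swap_available // node_total_memory

GiB = 1024 ** 3
# 假设场景：节点有 32 GiB 物理内存、8 GiB 可供 Pod 使用的交换内存，
# 某容器请求 4 GiB 内存，则该容器可使用 1 GiB 交换内存。
print(container_swap_limit(4 * GiB, 32 * GiB, 8 * GiB) // GiB)  # 输出：1
```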
&lt;!--
It is important to note that, for containers within Burstable QoS Pods, it is possible to opt-out of swap usage by specifying memory requests that are equal to memory limits. Containers configured in this manner will not have access to swap memory.
--&gt;
&lt;p&gt;值得注意的是，对于 Burstable QoS Pod 中的容器，可以通过设置内存限制与内存请求相同来选择不使用交换内存。
以这种方式配置的容器将无法访问交换内存。&lt;/p&gt;
&lt;!--
## How does it work?
--&gt;
&lt;h2 id=&#34;how-does-it-work&#34;&gt;此特性如何工作？&lt;/h2&gt;
&lt;!--
There are a number of possible ways that one could envision swap use on a node. When swap is already provisioned and available on a node, SIG Node have [proposed](https://github.com/kubernetes/enhancements/blob/9d127347773ad19894ca488ee04f1cd3af5774fc/keps/sig-node/2400-node-swap/README.md#proposal) the kubelet should be able to be configured so that:
--&gt;
&lt;p&gt;可以设想在节点上使用交换内存的多种可能方式。当节点上已制备交换内存并可用时，
SIG Node &lt;a href=&#34;https://github.com/kubernetes/enhancements/blob/9d127347773ad19894ca488ee04f1cd3af5774fc/keps/sig-node/2400-node-swap/README.md#proposal&#34;&gt;提议&lt;/a&gt;
kubelet 应该能够被配置为：&lt;/p&gt;
&lt;!--
- It can start with swap on.
- It will direct the Container Runtime Interface to allocate zero swap memory to Kubernetes workloads by default.
--&gt;
&lt;ul&gt;
&lt;li&gt;在交换内存特性被启用时能够启动。&lt;/li&gt;
&lt;li&gt;默认情况下，kubelet 将指示容器运行时接口（CRI）不为 Kubernetes 工作负载分配交换内存。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
Swap configuration on a node is exposed to a cluster admin via the [`memorySwap` in the KubeletConfiguration](/docs/reference/config-api/kubelet-config.v1). As a cluster administrator, you can specify the node&#39;s behaviour in the presence of swap memory by setting `memorySwap.swapBehavior`.
--&gt;
&lt;p&gt;节点上的交换内存配置通过 &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/reference/config-api/kubelet-config.v1&#34;&gt;KubeletConfiguration 中的 &lt;code&gt;memorySwap&lt;/code&gt;&lt;/a&gt; 向集群管理员公开。
作为集群管理员，你可以通过设置 &lt;code&gt;memorySwap.swapBehavior&lt;/code&gt; 来指定存在交换内存时节点的行为。&lt;/p&gt;
&lt;!--
The kubelet [employs the CRI](/docs/concepts/architecture/cri/) (container runtime interface) API to direct the CRI to configure specific cgroup v2 parameters (such as `memory.swap.max`) in a manner that will enable the desired swap configuration for a container. The CRI is then responsible to write these settings to the container-level cgroup.
--&gt;
&lt;p&gt;kubelet &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/concepts/architecture/cri/&#34;&gt;使用 CRI&lt;/a&gt;
（容器运行时接口）API 来指示 CRI 配置特定的 cgroup v2 参数（例如 &lt;code&gt;memory.swap.max&lt;/code&gt;），
配置方式要支持容器所期望的交换内存配置。接下来，CRI 负责将这些设置写入容器级的 cgroup。&lt;/p&gt;
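&lt;p&gt;作为示意（以下 cgroup 路径和两个变量均为假设的占位值，实际路径取决于 cgroup 驱动、容器运行时以及具体的 Pod/容器 ID），可以在节点上直接查看 CRI 写入容器级 cgroup 的交换内存上限：&lt;/p&gt;

```shell
# 示意：查看某个 Burstable Pod 中容器的 cgroup v2 交换内存上限（memory.swap.max）。
POD_CGROUP="kubepods-burstable-podEXAMPLE.slice"   # 假设值
CTR_CGROUP="cri-containerd-EXAMPLE.scope"          # 假设值
cat "/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/${POD_CGROUP}/${CTR_CGROUP}/memory.swap.max"
```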
&lt;!--
## How can I monitor swap?
--&gt;
&lt;h2 id=&#34;how-can-i-monitor-swap&#34;&gt;如何对交换内存进行监控？&lt;/h2&gt;
&lt;!--
A notable deficiency in the Alpha version was the inability to monitor and introspect swap usage. This issue has been addressed in the Beta version introduced in Kubernetes 1.28, which now provides the capability to monitor swap usage through several different methods.
--&gt;
&lt;p&gt;Alpha 版本的一个显著缺陷是无法监控或检视交换内存的使用情况。
这个问题已在 Kubernetes 1.28 引入的 Beta 版本中得到解决，该版本现在提供了通过多种不同方法监控交换内存使用情况的能力。&lt;/p&gt;
&lt;!--
The beta version of kubelet now collects [node-level metric statistics](/docs/reference/instrumentation/node-metrics/), which can be accessed at the `/metrics/resource` and `/stats/summary` kubelet HTTP endpoints. This allows clients who can directly interrogate the kubelet to monitor swap usage and remaining swap memory when using LimitedSwap. Additionally, a `machine_swap_bytes` metric has been added to cadvisor to show the total physical swap capacity of the machine.
--&gt;
&lt;p&gt;kubelet 的 Beta 版本现在支持收集&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/reference/instrumentation/node-metrics/&#34;&gt;节点级指标统计信息&lt;/a&gt;，
可以通过 &lt;code&gt;/metrics/resource&lt;/code&gt; 和 &lt;code&gt;/stats/summary&lt;/code&gt; kubelet HTTP 端点进行访问。
这些信息使得客户端能够在使用 LimitedSwap 时直接访问 kubelet 来监控交换内存使用情况和剩余交换内存情况。
此外，cadvisor 中还添加了 &lt;code&gt;machine_swap_bytes&lt;/code&gt; 指标，以显示机器上总的物理交换内存容量。&lt;/p&gt;
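&lt;p&gt;作为示意（假定在节点本机执行，并且拥有访问 kubelet 安全端口 10250 所需的客户端凭据，下面的 client.crt 和 client.key 为假设的文件名），可以像这样查询上述端点：&lt;/p&gt;

```shell
# 示意：从 kubelet 的 HTTP 端点抓取与交换内存相关的统计信息。
curl -sk --cert client.crt --key client.key \
  https://localhost:10250/metrics/resource | grep -i swap
curl -sk --cert client.crt --key client.key \
  https://localhost:10250/stats/summary | grep -i swap
```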
&lt;!--
## Caveats
--&gt;
&lt;h2 id=&#34;caveats&#34;&gt;注意事项&lt;/h2&gt;
&lt;!--
Having swap available on a system reduces predictability. Swap&#39;s performance is worse than regular memory, sometimes by many orders of magnitude, which can cause unexpected performance regressions. Furthermore, swap changes a system&#39;s behaviour under memory pressure. Since enabling swap permits greater memory usage for workloads in Kubernetes that cannot be predictably accounted for, it also increases the risk of noisy neighbours and unexpected packing configurations, as the scheduler cannot account for swap memory usage.
--&gt;
&lt;p&gt;在系统上提供可用交换内存会降低可预测性。由于交换内存的性能比常规内存差，
有时差距甚至在多个数量级，因而可能会导致意外的性能下降。此外，交换内存会改变系统在内存压力下的行为。
由于启用交换内存允许 Kubernetes 中的工作负载使用更大的内存量，而这一用量是无法预测的，
因此也会增加嘈杂邻居和非预期的装箱配置的风险，因为调度程序无法考虑交换内存使用情况。&lt;/p&gt;
&lt;!--
The performance of a node with swap memory enabled depends on the underlying physical storage. When swap memory is in use, performance will be significantly worse in an I/O operations per second (IOPS) constrained environment, such as a cloud VM with I/O throttling, when compared to faster storage mediums like solid-state drives or NVMe.
--&gt;
&lt;p&gt;启用交换内存的节点的性能取决于底层物理存储。当使用交换内存时，与固态硬盘或 NVMe 等较快的存储介质相比，
在每秒 I/O 操作数（IOPS）受限的环境（例如具有 I/O 限制的云虚拟机）中，性能会明显变差。&lt;/p&gt;
&lt;!--
As such, we do not advocate the utilization of swap memory for workloads or environments that are subject to performance constraints. Furthermore, it is recommended to employ `LimitedSwap`, as this significantly mitigates the risks posed to the node.
--&gt;
&lt;p&gt;因此，我们不提倡针对有性能约束的工作负载或环境使用交换内存。
此外，建议使用 &lt;code&gt;LimitedSwap&lt;/code&gt;，因为这可以显著减轻给节点带来的风险。&lt;/p&gt;
&lt;!--
Cluster administrators and developers should benchmark their nodes and applications before using swap in production scenarios, and [we need your help](#how-do-i-get-involved) with that!
--&gt;
&lt;p&gt;集群管理员和开发人员应该在生产场景中使用交换内存之前对其节点和应用进行基准测试，
在这方面&lt;a href=&#34;#how-do-i-get-involved&#34;&gt;我们需要你的帮助&lt;/a&gt;！&lt;/p&gt;
&lt;!--
### Security risk
--&gt;
&lt;h3 id=&#34;security-risk&#34;&gt;安全风险&lt;/h3&gt;
&lt;!--
Enabling swap on a system without encryption poses a security risk, as critical information, such as volumes that represent Kubernetes Secrets, [may be swapped out to the disk](/docs/concepts/configuration/secret/#information-security-for-secrets). If an unauthorized individual gains access to the disk, they could potentially obtain these confidential data. To mitigate this risk, the Kubernetes project strongly recommends that you encrypt your swap space. However, handling encrypted swap is not within the scope of kubelet; rather, it is a general OS configuration concern and should be addressed at that level. It is the administrator&#39;s responsibility to provision encrypted swap to mitigate this risk.
--&gt;
&lt;p&gt;在没有加密的系统上启用交换内存会带来安全风险，因为关键信息（例如代表 Kubernetes Secret 的卷）
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/concepts/configuration/secret/#information-security-for-secrets&#34;&gt;可能会被交换到磁盘&lt;/a&gt;。
如果未经授权的个人访问磁盘，他们就有可能获得这些机密数据。为了减轻这种风险，
Kubernetes 项目强烈建议你对交换内存空间进行加密。但是，处理加密交换内存不属于 kubelet 的职责范围；
它是一个通用的操作系统配置问题，应在操作系统层面解决。管理员有责任制备加密的交换空间来减轻这种风险。&lt;/p&gt;
&lt;!--
Furthermore, as previously mentioned, with `LimitedSwap` the user has the option to completely disable swap usage for a container by specifying memory requests that are equal to memory limits. This will prevent the corresponding containers from accessing swap memory.
--&gt;
&lt;p&gt;此外，如前所述，启用 &lt;code&gt;LimitedSwap&lt;/code&gt; 模式时，用户可以选择通过设置内存限制与内存请求相同来完全禁止容器使用交换内存。
这种设置会阻止相应的容器访问交换内存。&lt;/p&gt;
&lt;!--
## Looking ahead
--&gt;
&lt;h2 id=&#34;looking-ahead&#34;&gt;展望未来&lt;/h2&gt;
&lt;!--
The Kubernetes 1.28 release introduced Beta support for swap memory on Linux nodes, and we will continue to work towards [general availability](/docs/reference/command-line-tools-reference/feature-gates/#feature-stages) for this feature. I hope that this will include:
--&gt;
&lt;p&gt;Kubernetes 1.28 版本引入了对 Linux 节点上交换内存的 Beta 支持，
我们将继续为这项特性的&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/reference/command-line-tools-reference/feature-gates/#feature-stages&#34;&gt;正式发布&lt;/a&gt;而努力。
我希望这将包括：&lt;/p&gt;
&lt;!--
* Add the ability to set a system-reserved quantity of swap from what kubelet detects on the host.
* Adding support for controlling swap consumption at the Pod level via cgroups.
  * This point is still under discussion.
* Collecting feedback from test user cases.
  * We will consider introducing new configuration modes for swap, such as a node-wide swap limit for workloads.
--&gt;
&lt;ul&gt;
&lt;li&gt;增加根据 kubelet 在主机上检测到的交换内存总量设置系统预留交换内存量的能力。&lt;/li&gt;
&lt;li&gt;添加对通过 cgroup 在 Pod 级别控制交换内存用量的支持。
&lt;ul&gt;
&lt;li&gt;这一点仍在讨论中。&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;收集测试用例的反馈。
&lt;ul&gt;
&lt;li&gt;我们将考虑引入新的交换内存配置模式，例如在节点层面为工作负载设置交换内存限制。&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
## How can I learn more?
--&gt;
&lt;h2 id=&#34;how-can-i-learn-more&#34;&gt;如何进一步学习？&lt;/h2&gt;
&lt;!--
You can review the current [documentation](/docs/concepts/architecture/nodes/#swap-memory) for using swap with Kubernetes.
--&gt;
&lt;p&gt;你可以查看当前&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/concepts/architecture/nodes/#swap-memory&#34;&gt;文档&lt;/a&gt;以了解如何在 Kubernetes 中使用交换内存。&lt;/p&gt;
&lt;!--
For more information, and to assist with testing and provide feedback, please see [KEP-2400](https://github.com/kubernetes/enhancements/issues/4128) and its [design proposal](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/2400-node-swap/README.md).
--&gt;
&lt;p&gt;如需了解更多信息，以及协助测试和提供反馈，请参阅 &lt;a href=&#34;https://github.com/kubernetes/enhancements/issues/4128&#34;&gt;KEP-2400&lt;/a&gt;
及其&lt;a href=&#34;https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/2400-node-swap/README.md&#34;&gt;设计提案&lt;/a&gt;。&lt;/p&gt;
&lt;!--
## How do I get involved?
--&gt;
&lt;h2 id=&#34;how-do-i-get-involved&#34;&gt;参与其中&lt;/h2&gt;
&lt;!--
Your feedback is always welcome! SIG Node [meets regularly](https://github.com/kubernetes/community/tree/master/sig-node#meetings) and [can be reached](https://github.com/kubernetes/community/tree/master/sig-node#contact) via [Slack](https://slack.k8s.io/) (channel **#sig-node**), or the SIG&#39;s [mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-node). A Slack channel dedicated to swap is also available at **#sig-node-swap**.
--&gt;
&lt;p&gt;随时欢迎你的反馈！SIG Node &lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-node#meetings&#34;&gt;定期举行会议&lt;/a&gt;并可以通过
&lt;a href=&#34;https://slack.k8s.io/&#34;&gt;Slack&lt;/a&gt;（&lt;strong&gt;#sig-node&lt;/strong&gt; 频道）
或 SIG 的 &lt;a href=&#34;https://groups.google.com/forum/#!forum/kubernetes-sig-node&#34;&gt;邮件列表&lt;/a&gt;
&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-node#contact&#34;&gt;进行联系&lt;/a&gt;。
Slack 还提供了专门讨论交换内存的 &lt;strong&gt;#sig-node-swap&lt;/strong&gt; 频道。&lt;/p&gt;
&lt;!--
Feel free to reach out to me, Itamar Holder (**@iholder101** on Slack and GitHub) if you&#39;d like to help or ask further questions.
--&gt;
&lt;p&gt;如果你想提供帮助或提出进一步的问题，请随时联系 Itamar Holder（Slack 和 GitHub 账号为 &lt;strong&gt;@iholder101&lt;/strong&gt;）。&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes 1.28：节点 podresources API 正式发布</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/08/23/kubelet-podresources-api-ga/</link>
      <pubDate>Wed, 23 Aug 2023 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/08/23/kubelet-podresources-api-ga/</guid>
      <description>
        
        
        &lt;!--
layout: blog
title: &#39;Kubernetes 1.28: Node podresources API Graduates to GA&#39;
date: 2023-08-23
slug: kubelet-podresources-api-GA
--&gt;
&lt;!--
**Author:**
Francesco Romani (Red Hat)
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Francesco Romani (Red Hat)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者&lt;/strong&gt;：Wilson Wu (DaoCloud)&lt;/p&gt;
&lt;!--
The podresources API is an API served by the kubelet locally on the node, which exposes the compute resources exclusively allocated to containers. With the release of Kubernetes 1.28, that API is now Generally Available.
--&gt;
&lt;p&gt;podresources API 是由 kubelet 在节点本地提供的 API，用于公开独占分配给容器的计算资源。
随着 Kubernetes 1.28 的发布，该 API 现已正式发布（GA）。&lt;/p&gt;
&lt;!--
## What problem does it solve?
--&gt;
&lt;h2 id=&#34;what-problem-does-it-solve&#34;&gt;它解决了什么问题？&lt;/h2&gt;
&lt;!--
The kubelet can allocate exclusive resources to containers, like [CPUs, granting exclusive access to full cores](https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/) or [memory, either regions or hugepages](https://kubernetes.io/docs/tasks/administer-cluster/memory-manager/). Workloads which require high performance, or low latency (or both) leverage these features. The kubelet also can assign [devices to containers](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/). Collectively, these features which enable exclusive assignments are known as &#34;resource managers&#34;.
--&gt;
&lt;p&gt;kubelet 可以向容器分配独占资源，例如
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/tasks/administer-cluster/cpu-management-policies/&#34;&gt;CPU，授予对完整核心的独占访问权限&lt;/a&gt;或&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/tasks/administer-cluster/memory-manager/&#34;&gt;内存，包括内存区域或巨页&lt;/a&gt;。
需要高性能或低延迟（或者两者都需要）的工作负载可以利用这些特性。
kubelet 还可以将&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/&#34;&gt;设备分配给容器&lt;/a&gt;。
总的来说，这些支持独占分配的特性被称为“资源管理器（Resource Managers）”。&lt;/p&gt;
&lt;!--
Without an API like podresources, the only possible option to learn about resource assignment was to read the state files the resource managers use. While done out of necessity, the problem with this approach is the path and the format of these file are both internal implementation details. Albeit very stable, the project reserves the right to change them freely. Consuming the content of the state files is thus fragile and unsupported, and projects doing this are recommended to consider moving to podresources API or to other supported APIs.
--&gt;
&lt;p&gt;如果没有像 podresources 这样的 API，了解资源分配的唯一可能选择就是读取资源管理器使用的状态文件。
虽然这样做是出于必要，但这种方法的问题是这些文件的路径和格式都是内部实现细节。
尽管非常稳定，但项目保留自由更改它们的权利。因此，使用状态文件内容的做法是不可靠的且不受支持的，
建议这样做的项目考虑迁移到使用 podresources API 或其他受支持的 API。&lt;/p&gt;
&lt;!--
## Overview of the API
--&gt;
&lt;h2 id=&#34;overview-of-the-api&#34;&gt;API 概述&lt;/h2&gt;
&lt;!--
The podresources API was [initially proposed to enable device monitoring](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#monitoring-device-plugin-resources). In order to enable monitoring agents, a key prerequisite is to enable introspection of device assignment, which is performed by the kubelet. Serving this purpose was the initial goal of the API. The first iteration of the API only had a single function implemented, `List`, to return information about the assignment of devices to containers. The API is used by [multus CNI](https://github.com/k8snetworkplumbingwg/multus-cni) and by [GPU monitoring tools](https://github.com/NVIDIA/dcgm-exporter).
--&gt;
&lt;p&gt;podresources API &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#monitoring-device-plugin-resources&#34;&gt;最初被提出是为了实现设备监控&lt;/a&gt;。
为了支持监控代理，一个关键的先决条件是启用由 kubelet 执行的设备分配自省（Introspection）。
API 的最初目标就是服务于此目的。API 的第一次迭代仅实现了一个函数 &lt;code&gt;List&lt;/code&gt;，用于返回有关设备分配给容器的信息。
该 API 由 &lt;a href=&#34;https://github.com/k8snetworkplumbingwg/multus-cni&#34;&gt;multus CNI&lt;/a&gt;
和 &lt;a href=&#34;https://github.com/NVIDIA/dcgm-exporter&#34;&gt;GPU 监控工具&lt;/a&gt;使用。&lt;/p&gt;
&lt;!--
Since its inception, the podresources API increased its scope to cover other resource managers than device manager. Starting from Kubernetes 1.20, the `List` API reports also CPU cores and memory regions (including hugepages); the API also reports the NUMA locality of the devices, while the locality of CPUs and memory can be inferred from the system.
--&gt;
&lt;p&gt;自推出以来，podresources API 扩大了其范围，涵盖了设备管理器之外的其他资源管理器。
从 Kubernetes 1.20 开始，&lt;code&gt;List&lt;/code&gt; API 还报告 CPU 核心和内存区域（包括巨页）；
API 还会报告设备的 NUMA 位置，而 CPU 和内存的位置则可以从系统信息中推断出来。&lt;/p&gt;
&lt;!--
In Kubernetes 1.21, the API [gained](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/2403-pod-resources-allocatable-resources/README.md) the `GetAllocatableResources` function. This newer API complements the existing `List` API and enables monitoring agents to determine the unallocated resources, thus enabling new features built on top of the podresources API like a [NUMA-aware scheduler plugin](https://github.com/kubernetes-sigs/scheduler-plugins/blob/master/pkg/noderesourcetopology/README.md).
--&gt;
&lt;p&gt;在 Kubernetes 1.21 中，API &lt;a href=&#34;https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/2403-pod-resources-allocatable-resources/README.md&#34;&gt;增加了&lt;/a&gt;
&lt;code&gt;GetAllocatableResources&lt;/code&gt; 函数。这个较新的 API 补充了现有的 &lt;code&gt;List&lt;/code&gt; API，
并使监控代理能够辨识尚未分配的资源，从而支持在 podresources API 之上构建新的特性，
例如 &lt;a href=&#34;https://github.com/kubernetes-sigs/scheduler-plugins/blob/master/pkg/noderesourcetopology/README.md&#34;&gt;NUMA 感知的调度器插件&lt;/a&gt;。&lt;/p&gt;
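&lt;p&gt;下面是一个简单的示意，演示 &lt;code&gt;GetAllocatableResources&lt;/code&gt; 如何与 &lt;code&gt;List&lt;/code&gt; 互补：用可分配的 CPU 减去已独占分配的 CPU，即可得到尚未分配的 CPU。这里用普通字典模拟两个调用的响应，并非真实的 gRPC 类型，字段名仅作演示：&lt;/p&gt;

```python
# 用可分配资源（GetAllocatableResources）减去已分配资源（List），
# 得到尚未分配的独占 CPU。响应用普通字典模拟，仅作演示。
def unallocated_cpus(allocatable_resp: dict, list_resp: dict) -> set:
    allocatable = set(allocatable_resp["cpu_ids"])
    allocated = set()
    for pod in list_resp["pod_resources"]:
        for container in pod["containers"]:
            allocated.update(container.get("cpu_ids", []))
    return allocatable - allocated

allocatable = {"cpu_ids": [0, 1, 2, 3]}
pods = {"pod_resources": [
    {"containers": [{"cpu_ids": [0, 1]}]},
    {"containers": [{"cpu_ids": [2]}]},
]}
print(sorted(unallocated_cpus(allocatable, pods)))  # [3]
```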
&lt;!--
Finally, in Kubernetes 1.27, another function, `Get` was introduced to be more friendly with CNI meta-plugins, to make it simpler to access resources allocated to a specific pod, rather than having to filter through resources for all pods on the node. The `Get` function is currently alpha level.
--&gt;
&lt;p&gt;最后，在 Kubernetes 1.27 中，引入了另一个函数 &lt;code&gt;Get&lt;/code&gt;，以便对 CNI 元插件（Meta-Plugins）更加友好，
简化对已分配给特定 Pod 的资源的访问，而不必过滤节点上所有 Pod 的资源。&lt;code&gt;Get&lt;/code&gt; 函数目前处于 Alpha 级别。&lt;/p&gt;
&lt;!--
## Consuming the API
--&gt;
&lt;h2 id=&#34;consuming-the-api&#34;&gt;使用 API&lt;/h2&gt;
&lt;!--
The podresources API is served by the kubelet locally, on the same node on which is running. On unix flavors, the endpoint is served over a unix domain socket; the default path is `/var/lib/kubelet/pod-resources/kubelet.sock`. On windows, the endpoint is served over a named pipe; the default path is `npipe://\\.\pipe\kubelet-pod-resources`.
--&gt;
&lt;p&gt;podresources API 由本地 kubelet 提供，位于 kubelet 运行所在的同一节点上。
在 Unix 风格的系统上，通过 Unix 域套接字提供端点；默认路径是 &lt;code&gt;/var/lib/kubelet/pod-resources/kubelet.sock&lt;/code&gt;。
在 Windows 上，通过命名管道提供端点；默认路径是 &lt;code&gt;npipe://\\.\pipe\kubelet-pod-resources&lt;/code&gt;。&lt;/p&gt;
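&lt;p&gt;可以用一个小的辅助函数来示意上述默认端点地址的拼写规则（函数名 &lt;code&gt;default_podresources_endpoint&lt;/code&gt; 为本文虚构，不属于任何官方客户端）：&lt;/p&gt;

```python
# 根据操作系统返回 podresources API 的默认端点地址。
# 路径取自正文给出的 kubelet 默认值；此函数仅作演示。
def default_podresources_endpoint(os_name: str) -> str:
    if os_name == "windows":
        # Windows 上通过命名管道提供服务
        return r"npipe://\\.\pipe\kubelet-pod-resources"
    # Unix 风格系统上通过 Unix 域套接字提供服务
    return "unix:///var/lib/kubelet/pod-resources/kubelet.sock"

print(default_podresources_endpoint("linux"))
```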
&lt;!--
In order for the containerized monitoring application consume the API, the socket should be mounted inside the container. A good practice is to mount the directory on which the podresources socket endpoint sits rather than the socket directly. This will ensure that after a kubelet restart, the containerized monitor application will be able to re-connect to the socket.
--&gt;
&lt;p&gt;为了让容器化监控应用使用 API，套接字应挂载到容器内。
一个好的做法是挂载 podresources 套接字端点所在的目录，而不是直接挂载套接字。
这种做法将确保 kubelet 重新启动后，容器化监视器应用能够重新连接到套接字。&lt;/p&gt;
&lt;!--
An example manifest for a hypothetical monitoring agent consuming the podresources API and deployed as a DaemonSet could look like:
--&gt;
&lt;p&gt;在下面的 DaemonSet 示例清单中，包含一个假想的使用 podresources API 的监控代理：&lt;/p&gt;
&lt;!--
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: podresources-monitoring-app
  namespace: monitoring
spec:
  selector:
    matchLabels:
      name: podresources-monitoring
  template:
    metadata:
      labels:
        name: podresources-monitoring
    spec:
      containers:
      - args:
        - --podresources-socket=unix:///host-podresources/kubelet.sock
        command:
        - /bin/podresources-monitor
        image: podresources-monitor:latest  # just for an example
        volumeMounts:
        - mountPath: /host-podresources
          name: host-podresources
      serviceAccountName: podresources-monitor
      volumes:
      - hostPath:
          path: /var/lib/kubelet/pod-resources
          type: Directory
        name: host-podresources
```
--&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;apps/v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;DaemonSet&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;podresources-monitoring-app&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;namespace&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;monitoring&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;selector&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;matchLabels&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;podresources-monitoring&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;template&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;labels&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;podresources-monitoring&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;containers&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;args&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;- --podresources-socket=unix:///host-podresources/kubelet.sock&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;command&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;- /bin/podresources-monitor&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;image&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;podresources-monitor:latest &lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# 仅作为样例&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;volumeMounts&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;mountPath&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;/host-podresources&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;          &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;host-podresources&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;serviceAccountName&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;podresources-monitor&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;volumes&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;hostPath&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;          &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;path&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;/var/lib/kubelet/pod-resources&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;          &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;type&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Directory&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;host-podresources&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
I hope you find it straightforward to consume the podresources API  programmatically. The kubelet API package provides the protocol file and the go type definitions; however, a client package is not yet available from the project, and the existing code should not be used directly. The [recommended](https://github.com/kubernetes/kubernetes/blob/v1.28.0-rc.0/pkg/kubelet/apis/podresources/client.go#L32) approach is to reimplement the client in your projects, copying and pasting the related functions like for example the multus project is [doing](https://github.com/k8snetworkplumbingwg/multus-cni/blob/v4.0.2/pkg/kubeletclient/kubeletclient.go).
--&gt;
&lt;p&gt;希望你会发现以编程方式使用 podresources API 并不复杂。kubelet API 包提供了协议文件和 Go 类型定义；
但是，该项目尚未提供客户端包，现有代码也不应被直接使用。
&lt;a href=&#34;https://github.com/kubernetes/kubernetes/blob/v1.28.0-rc.0/pkg/kubelet/apis/podresources/client.go#L32&#34;&gt;推荐&lt;/a&gt;的方法是在你自己的项目中重新实现客户端，
复制并粘贴相关函数，就像 multus 项目&lt;a href=&#34;https://github.com/k8snetworkplumbingwg/multus-cni/blob/v4.0.2/pkg/kubeletclient/kubeletclient.go&#34;&gt;所做的那样&lt;/a&gt;。&lt;/p&gt;
&lt;!--
When operating the containerized monitoring application consuming the podresources API, few points are worth highlighting to prevent &#34;gotcha&#34; moments:
--&gt;
&lt;p&gt;在操作使用 podresources API 的容器化监控应用程序时，有几点值得强调，以防止出现“陷阱”：&lt;/p&gt;
&lt;!--
- Even though the API only exposes data, and doesn&#39;t allow by design clients to mutate the kubelet state, the gRPC request/response model requires read-write access to the podresources API socket. In other words, it is not possible to limit the container mount to `ReadOnly`.
- Multiple clients are allowed to connect to the podresources socket and consume the API, since it is stateless.
- The kubelet has [built-in rate limits](https://github.com/kubernetes/kubernetes/pull/116459) to mitigate local Denial of Service attacks from misbehaving or malicious consumers. The consumers of the API must tolerate rate limit errors returned by the server. The rate limit is currently hardcoded and global, so misbehaving clients can consume all the quota and potentially starve correctly behaving clients.
--&gt;
&lt;ul&gt;
&lt;li&gt;尽管 API 仅公开数据，并且设计上不允许客户端改变 kubelet 状态，
但 gRPC 请求/响应模型要求能对 podresources API 套接字进行读写访问。
换句话说，将容器挂载限制为 &lt;code&gt;ReadOnly&lt;/code&gt; 是不可能的。&lt;/li&gt;
&lt;li&gt;让多个客户端连接到 podresources 套接字并使用此 API 是允许的，因为 API 是无状态的。&lt;/li&gt;
&lt;li&gt;kubelet 具有&lt;a href=&#34;https://github.com/kubernetes/kubernetes/pull/116459&#34;&gt;内置限速机制&lt;/a&gt;，
用以缓解来自行为不当或恶意使用者的本地拒绝服务攻击。API 的使用者必须容忍服务器返回的限流错误。
速率限制目前是硬编码的且全局生效，因此行为不当的客户端可能会耗尽全部配额，进而使行为正常的客户端陷入饥饿状态。&lt;/li&gt;
&lt;/ul&gt;
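&lt;p&gt;对于上面提到的限流错误，一个常见的容忍方式是带指数退避的重试。下面是一个最小示意（&lt;code&gt;RateLimitError&lt;/code&gt; 与 &lt;code&gt;flaky_list&lt;/code&gt; 均为演示用的模拟对象，并非真实的 podresources 客户端 API）：&lt;/p&gt;

```python
import time

# 演示客户端如何容忍服务器返回的限流错误：简单的指数退避重试。
class RateLimitError(Exception):
    pass

def call_with_backoff(fn, max_retries=5, base_delay=0.01):
    for attempt in range(max_retries):
        try:
            return fn()
        except RateLimitError:
            # 被限流：等待后重试，等待时间按尝试次数指数增长
            time.sleep(base_delay * (2 ** attempt))
    raise RuntimeError("rate limited too many times")

calls = {"n": 0}
def flaky_list():
    calls["n"] += 1
    if calls["n"] < 3:  # 模拟前两次调用被限流
        raise RateLimitError()
    return ["pod-a", "pod-b"]

print(call_with_backoff(flaky_list))  # ['pod-a', 'pod-b']
```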
&lt;!--
## Future enhancements
--&gt;
&lt;h2 id=&#34;future-enhancements&#34;&gt;未来的增强&lt;/h2&gt;
&lt;!--
For historical reasons, the podresources API has a less precise specification than typical kubernetes APIs (such as the Kubernetes HTTP API, or the container runtime interface). This leads to unspecified behavior in corner cases. An [effort](https://issues.k8s.io/119423) is ongoing to rectify this state and to have a more precise specification.
--&gt;
&lt;p&gt;由于历史原因，podresources API 的规范不如典型的 Kubernetes API（例如 Kubernetes HTTP API 或容器运行时接口）精确。
这会导致在极端情况下出现未指定的行为。我们正在&lt;a href=&#34;https://issues.k8s.io/119423&#34;&gt;努力&lt;/a&gt;纠正这种状态并制定更精确的规范。&lt;/p&gt;
&lt;!--
The [Dynamic Resource Allocation (DRA)](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/3063-dynamic-resource-allocation) infrastructure is a major overhaul of the resource management. The [integration](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/3695-pod-resources-for-dra) with the podresources API is already ongoing.
--&gt;
&lt;p&gt;&lt;a href=&#34;https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/3063-dynamic-resource-allocation&#34;&gt;动态资源分配（DRA）&lt;/a&gt;基础设施是对资源管理的重大改革。
与 podresources API 的&lt;a href=&#34;https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/3695-pod-resources-for-dra&#34;&gt;集成&lt;/a&gt;已经在进行中。&lt;/p&gt;
&lt;!--
An [effort](https://issues.k8s.io/119817) is ongoing to recommend or create a reference client package ready to be consumed.
--&gt;
&lt;p&gt;我们正在&lt;a href=&#34;https://issues.k8s.io/119817&#34;&gt;努力&lt;/a&gt;推荐或创建可供使用的参考客户端包。&lt;/p&gt;
&lt;!--
## Getting involved
--&gt;
&lt;h2 id=&#34;getting-involved&#34;&gt;参与其中&lt;/h2&gt;
&lt;!--
This feature is driven by [SIG Node](https://github.com/Kubernetes/community/blob/master/sig-node/README.md). Please join us to connect with the community and share your ideas and feedback around the above feature and beyond. We look forward to hearing from you!
--&gt;
&lt;p&gt;此功能由 &lt;a href=&#34;https://github.com/Kubernetes/community/blob/master/sig-node/README.md&#34;&gt;SIG Node&lt;/a&gt; 驱动。
请加入我们，与社区建立联系，并分享你对上述功能及其他功能的想法和反馈。我们期待你的回音！&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes 1.28：Job 失效处理的改进</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/08/21/kubernetes-1-28-jobapi-update/</link>
      <pubDate>Mon, 21 Aug 2023 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/08/21/kubernetes-1-28-jobapi-update/</guid>
      <description>
        
        
        &lt;!--
layout: blog
title: &#34;Kubernetes 1.28: Improved failure handling for Jobs&#34;
date: 2023-08-21
slug: kubernetes-1-28-jobapi-update
--&gt;
&lt;!--
**Authors:** Kevin Hannon (G-Research), Michał Woźniak (Google)
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Kevin Hannon (G-Research), Michał Woźniak (Google)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者：&lt;/strong&gt; Xin Li (Daocloud)&lt;/p&gt;
&lt;!--
This blog discusses two new features in Kubernetes 1.28 to improve Jobs for batch
users: [Pod replacement policy](/docs/concepts/workloads/controllers/job/#pod-replacement-policy)
and [Backoff limit per index](/docs/concepts/workloads/controllers/job/#backoff-limit-per-index).
--&gt;
&lt;p&gt;本博客讨论 Kubernetes 1.28 中的两个新特性，用于为批处理用户改进 Job：
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/concepts/workloads/controllers/job/#pod-replacement-policy&#34;&gt;Pod 更换策略&lt;/a&gt;
和&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/concepts/workloads/controllers/job/#backoff-limit-per-index&#34;&gt;基于索引的回退限制&lt;/a&gt;。&lt;/p&gt;
&lt;!--
These features continue the effort started by the
[Pod failure policy](/docs/concepts/workloads/controllers/job/#pod-failure-policy)
to improve the handling of Pod failures in a Job.
--&gt;
&lt;p&gt;这些特性延续了 &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/concepts/workloads/controllers/job/#pod-failure-policy&#34;&gt;Pod 失效策略&lt;/a&gt;
为开端的工作，用来改进对 Job 中 Pod 失效的处理。&lt;/p&gt;
&lt;!--
## Pod replacement policy {#pod-replacement-policy}

By default, when a pod enters a terminating state (e.g. due to preemption or
eviction), Kubernetes immediately creates a replacement Pod. Therefore, both Pods are running
at the same time. In API terms, a pod is considered terminating when it has a
`deletionTimestamp` and it has a phase `Pending` or `Running`.
--&gt;
&lt;h2 id=&#34;pod-replacement-policy&#34;&gt;Pod 更换策略 &lt;/h2&gt;
&lt;p&gt;默认情况下，当 Pod 进入终止（Terminating）状态（例如由于抢占或驱逐机制）时，Kubernetes
会立即创建一个替换的 Pod，因此这时会有两个 Pod 同时运行。就 API 而言，当 Pod 具有
&lt;code&gt;deletionTimestamp&lt;/code&gt; 字段并且处于 &lt;code&gt;Pending&lt;/code&gt; 或 &lt;code&gt;Running&lt;/code&gt; 阶段时，会被视为正在终止。&lt;/p&gt;
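&lt;p&gt;按上述 API 定义，“正在终止”可以写成一个简单的判断函数。下面的示意中 Pod 用普通字典表示，仅为说明这一定义，并非真实的客户端代码：&lt;/p&gt;

```python
# 判断一个 Pod 是否处于终止过程中：
# 设置了 deletionTimestamp 且 phase 为 Pending 或 Running。
def is_terminating(pod: dict) -> bool:
    has_deletion_ts = pod["metadata"].get("deletionTimestamp") is not None
    phase = pod["status"]["phase"]
    return has_deletion_ts and phase in ("Pending", "Running")

pod = {
    "metadata": {"deletionTimestamp": "2023-08-21T00:00:00Z"},
    "status": {"phase": "Running"},
}
print(is_terminating(pod))  # True
```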
&lt;!--
The scenario when two Pods are running at a given time is problematic for
some popular machine learning frameworks, such as
TensorFlow and [JAX](https://jax.readthedocs.io/en/latest/), which require at most one Pod running at the same time,
for a given index. 
Tensorflow gives the following error if two pods are running for a given index.
--&gt;
&lt;p&gt;对于一些流行的机器学习框架来说，在给定时间运行两个 Pod 的情况是有问题的，
例如 TensorFlow 和 &lt;a href=&#34;https://jax.readthedocs.io/en/latest/&#34;&gt;JAX&lt;/a&gt;，
它们要求对于给定的索引，同一时间最多只有一个 Pod 在运行。如果同一索引有两个 Pod 同时运行，
TensorFlow 会报告以下错误：&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt; /job:worker/task:4: Duplicate task registration with task_name=/job:worker/replica:0/task:4
&lt;/code&gt;&lt;/pre&gt;&lt;!--
See more details in the ([issue](https://github.com/kubernetes/kubernetes/issues/115844)).

Creating the replacement Pod before the previous one fully terminates can also
cause problems in clusters with scarce resources or with tight budgets, such as:
* cluster resources can be difficult to obtain for Pods pending to be scheduled,
  as Kubernetes might take a long time to find available nodes until the existing
  Pods are fully terminated.
* if cluster autoscaler is enabled, the replacement Pods might produce undesired
  scale ups.
--&gt;
&lt;p&gt;可参考&lt;a href=&#34;https://github.com/kubernetes/kubernetes/issues/115844&#34;&gt;问题报告&lt;/a&gt;进一步了解细节。&lt;/p&gt;
&lt;p&gt;在前一个 Pod 完全终止之前创建替换的 Pod 也可能会导致资源或预算紧张的集群出现问题，例如：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;待调度的 Pod 很难获得集群资源，因为在现有 Pod 完全终止之前，
Kubernetes 可能需要很长时间才能找到可用节点。&lt;/li&gt;
&lt;li&gt;如果启用了集群自动扩缩器（Cluster Autoscaler），可能会产生不必要的集群规模扩增。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
### How can you use it? {#pod-replacement-policy-how-to-use}

This is an alpha feature, which you can enable by turning on `JobPodReplacementPolicy`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) in
your cluster.

Once the feature is enabled in your cluster, you can use it by creating a new Job that specifies a
`podReplacementPolicy` field as shown here:
--&gt;
&lt;h3 id=&#34;pod-replacement-policy-how-to-use&#34;&gt;如何使用？ &lt;/h3&gt;
&lt;p&gt;这是一项 Alpha 级别特性，你可以通过在集群中打开 &lt;code&gt;JobPodReplacementPolicy&lt;/code&gt;
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/reference/command-line-tools-reference/feature-gates/&#34;&gt;特性门控&lt;/a&gt;来启用它。&lt;/p&gt;
&lt;p&gt;在集群中启用此特性后，你可以创建一个新的 Job 并像下面这样指定 &lt;code&gt;podReplacementPolicy&lt;/code&gt; 字段来使用它：&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Job&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;new&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;...&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;podReplacementPolicy&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Failed&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;...&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
In that Job, the Pods would only be replaced once they reached the `Failed` phase,
and not when they are terminating.

Additionally, you can inspect the `.status.terminating` field of a Job. The value
of the field is the number of Pods owned by the Job that are currently terminating.
--&gt;
&lt;p&gt;在此 Job 中，Pod 仅在达到 &lt;code&gt;Failed&lt;/code&gt; 阶段时才会被替换，而不是在它们处于终止过程中（Terminating）时被替换。&lt;/p&gt;
&lt;p&gt;此外，你可以检查 Job 的 &lt;code&gt;.status.terminating&lt;/code&gt; 字段。该字段的值是该
Job 所拥有的、当前正处于终止过程中的 Pod 数量。&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;kubectl get jobs/myjob -o&lt;span style=&#34;color:#666&#34;&gt;=&lt;/span&gt;&lt;span style=&#34;color:#b8860b&#34;&gt;jsonpath&lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;=&lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;{.items[*].status.terminating}&amp;#39;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;3 # three Pods are terminating and have not yet reached the Failed phase
&lt;/code&gt;&lt;/pre&gt;&lt;!--
This can be particularly useful for external queueing controllers, such as
[Kueue](https://github.com/kubernetes-sigs/kueue), that tracks quota
from running Pods of a Job until the resources are reclaimed from
the currently terminating Job.

Note that the `podReplacementPolicy: Failed` is the default when using a custom
[Pod failure policy](/docs/concepts/workloads/controllers/job/#pod-failure-policy).
--&gt;
&lt;p&gt;这一特性对于外部排队控制器（例如 &lt;a href=&#34;https://github.com/kubernetes-sigs/kueue&#34;&gt;Kueue&lt;/a&gt;）特别有用，
这类控制器会一直将 Job 中正在运行的 Pod 计入配额，直到当前终止中的 Pod 的资源被回收为止。&lt;/p&gt;
&lt;p&gt;请注意，使用自定义 &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/concepts/workloads/controllers/job/#pod-failure-policy&#34;&gt;Pod 失败策略&lt;/a&gt;时，
&lt;code&gt;podReplacementPolicy: Failed&lt;/code&gt; 是默认值。&lt;/p&gt;
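&lt;p&gt;下面是一个示意性的清单（非本文原有示例，其中的名称与退出码均为假设），展示将 Pod 失败策略与
&lt;code&gt;podReplacementPolicy: Failed&lt;/code&gt; 结合使用的大致写法：&lt;/p&gt;

```yaml
# 示意：使用 podFailurePolicy 时，podReplacementPolicy 默认即为 Failed，
# 这里显式写出以便说明（Job 名称与退出码均为假设）
apiVersion: batch/v1
kind: Job
metadata:
  name: job-with-pod-failure-policy
spec:
  podReplacementPolicy: Failed
  podFailurePolicy:
    rules:
    - action: FailJob        # 容器以退出码 42 结束时，直接将整个 Job 标记为失败
      onExitCodes:
        operator: In
        values: [42]
  template:
    spec:
      restartPolicy: Never   # 使用 podFailurePolicy 时必须设置为 Never
      containers:
      - name: main
        image: docker.io/library/bash:5
        command: ["bash", "-c", "exit 0"]
```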
&lt;!--
## Backoff limit per index {#backoff-limit-per-index}

By default, Pod failures for [Indexed Jobs](/docs/concepts/workloads/controllers/job/#completion-mode)
are counted towards the global limit of retries, represented by `.spec.backoffLimit`.
This means, that if there is a consistently failing index, it is restarted
repeatedly until it exhausts the limit. Once the limit is reached the entire
Job is marked failed and some indexes may never be even started.
--&gt;
&lt;h2 id=&#34;backoff-limit-per-index&#34;&gt;逐索引的回退限制 &lt;/h2&gt;
&lt;p&gt;默认情况下，&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/concepts/workloads/controllers/job/#completion-mode&#34;&gt;带索引的 Job（Indexed Job）&lt;/a&gt;的
Pod 失败情况会被统计下来，受 &lt;code&gt;.spec.backoffLimit&lt;/code&gt; 字段所设置的全局重试次数限制。
这意味着，如果存在某个索引值的 Pod 一直持续失败，则对应的 Pod 会被重新启动，直到重试次数达到限制值。
一旦达到限制值，整个 Job 将被标记为失败，并且对应某些索引的 Pod 甚至可能从不曾被启动。&lt;/p&gt;
&lt;!--
This is problematic for use cases where you want to handle Pod failures for
every index independently. For example, if you use Indexed Jobs for running
integration tests where each index corresponds to a testing suite. In that case,
you may want to account for possible flake tests allowing for 1 or 2 retries per
suite. There might be some buggy suites, making the corresponding
indexes fail consistently. In that case you may prefer to limit retries for
the buggy suites, yet allowing other suites to complete.
--&gt;
&lt;p&gt;对于你想要独立处理不同索引值的 Pod 的失败的场景而言，这是有问题的。
例如，如果你使用带索引的 Job（Indexed Job）来运行集成测试，其中每个索引值对应一个测试套件。
在这种情况下，你可能需要考虑可能发生的脆弱测试（Flake Test），允许每个套件重试 1 次或 2 次。
可能存在一些有缺陷的套件，导致对应索引的 Pod 始终失败。在这种情况下，
你或许更希望限制有问题的套件的重试，而允许其他套件完成。&lt;/p&gt;
&lt;!--
The feature allows you to:
* complete execution of all indexes, despite some indexes failing.
* better utilize the computational resources by avoiding unnecessary retries of consistently failing indexes.
--&gt;
&lt;p&gt;此特性允许你：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;尽管某些索引值的 Pod 失败，但仍完成执行所有索引值的 Pod。&lt;/li&gt;
&lt;li&gt;通过避免对持续失败的、特定索引值的 Pod 进行不必要的重试，更好地利用计算资源。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
### How can you use it? {#backoff-limit-per-index-how-to-use}

This is an alpha feature, which you can enable by turning on the
`JobBackoffLimitPerIndex`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
in your cluster.

Once the feature is enabled in your cluster, you can create an Indexed Job with the
`.spec.backoffLimitPerIndex` field specified.
--&gt;
&lt;h3 id=&#34;backoff-limit-per-index-how-to-use&#34;&gt;可以如何使用它？ &lt;/h3&gt;
&lt;p&gt;这是一个 Alpha 特性，你可以通过在集群中打开 &lt;code&gt;JobBackoffLimitPerIndex&lt;/code&gt;
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/reference/command-line-tools-reference/feature-gates/&#34;&gt;特性门控&lt;/a&gt;来启用此特性。&lt;/p&gt;
&lt;p&gt;在集群中启用该特性后，你可以在创建带索引的 Job（Indexed Job）时指定 &lt;code&gt;.spec.backoffLimitPerIndex&lt;/code&gt; 字段。&lt;/p&gt;
&lt;!--
#### Example

The following example demonstrates how to use this feature to make sure the
Job executes all indexes (provided there is no other reason for the early Job
termination, such as reaching the `activeDeadlineSeconds` timeout, or being
manually deleted by the user), and the number of failures is controlled per index.
--&gt;
&lt;h4 id=&#34;示例&#34;&gt;示例&lt;/h4&gt;
&lt;p&gt;下面的示例演示如何使用此功能来确保 Job 执行所有索引值的 Pod（前提是没有其他原因导致 Job 提前终止，
例如达到 &lt;code&gt;activeDeadlineSeconds&lt;/code&gt; 超时，或者被用户手动删除），以及按索引控制失败次数。&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;batch/v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Job&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;job-backoff-limit-per-index-execute-all&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;completions&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;8&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;parallelism&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;2&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;completionMode&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Indexed&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;backoffLimitPerIndex&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;1&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;template&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;restartPolicy&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Never&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;containers&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;example&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#080;font-style:italic&#34;&gt;# 当此示例容器作为任何 Job 中的第二个或第三个索引运行时（即使在重试之后），它会返回错误并失败&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;image&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;python&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;command&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;- python3&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;- -c&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;- |&lt;span style=&#34;color:#b44;font-style:italic&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44;font-style:italic&#34;&gt;          import os, sys, time
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44;font-style:italic&#34;&gt;          id = int(os.environ.get(&amp;#34;JOB_COMPLETION_INDEX&amp;#34;))
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44;font-style:italic&#34;&gt;          if id == 1 or id == 2:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44;font-style:italic&#34;&gt;            sys.exit(1)
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44;font-style:italic&#34;&gt;          time.sleep(1)&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;          
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
Now, inspect the Pods after the job is finished:
--&gt;
&lt;p&gt;现在，在 Job 完成后检查 Pod：&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-sh&#34; data-lang=&#34;sh&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;kubectl get pods -l job-name&lt;span style=&#34;color:#666&#34;&gt;=&lt;/span&gt;job-backoff-limit-per-index-execute-all
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
Returns output similar to this:
--&gt;
&lt;p&gt;返回的输出类似于：&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;NAME                                              READY   STATUS      RESTARTS   AGE
job-backoff-limit-per-index-execute-all-0-b26vc   0/1     Completed   0          49s
job-backoff-limit-per-index-execute-all-1-6j5gd   0/1     Error       0          49s
job-backoff-limit-per-index-execute-all-1-6wd82   0/1     Error       0          37s
job-backoff-limit-per-index-execute-all-2-c66hg   0/1     Error       0          32s
job-backoff-limit-per-index-execute-all-2-nf982   0/1     Error       0          43s
job-backoff-limit-per-index-execute-all-3-cxmhf   0/1     Completed   0          33s
job-backoff-limit-per-index-execute-all-4-9q6kq   0/1     Completed   0          28s
job-backoff-limit-per-index-execute-all-5-z9hqf   0/1     Completed   0          28s
job-backoff-limit-per-index-execute-all-6-tbkr8   0/1     Completed   0          23s
job-backoff-limit-per-index-execute-all-7-hxjsq   0/1     Completed   0          22s
&lt;/code&gt;&lt;/pre&gt;&lt;!--
Additionally, you can take a look at the status for that Job:
--&gt;
&lt;p&gt;此外，你可以查看该 Job 的状态：&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-sh&#34; data-lang=&#34;sh&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;kubectl get &lt;span style=&#34;color:#a2f&#34;&gt;jobs&lt;/span&gt; job-backoff-limit-per-index-execute-all -o yaml
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
The output ends with a `status` similar to:
--&gt;
&lt;p&gt;输出内容以 &lt;code&gt;status&lt;/code&gt; 结尾，类似于：&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;status&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;completedIndexes&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;0&lt;/span&gt;,&lt;span style=&#34;color:#666&#34;&gt;3-7&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;failedIndexes&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;1&lt;/span&gt;,&lt;span style=&#34;color:#666&#34;&gt;2&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;succeeded&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;6&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;failed&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;4&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;conditions&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;message&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Job has failed indexes&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;reason&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;FailedIndexes&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;status&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;True&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;type&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Failed&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
Here, indexes `1`  and `2` were both retried once. After the second failure,
in each of them, the specified `.spec.backoffLimitPerIndex` was exceeded, so
the retries were stopped. For comparison, if the per-index backoff was disabled,
then the buggy indexes would retry until the global `backoffLimit` was exceeded,
and then the entire Job would be marked failed, before some of the higher
indexes are started.
--&gt;
&lt;p&gt;这里，索引为 &lt;code&gt;1&lt;/code&gt; 和 &lt;code&gt;2&lt;/code&gt; 的 Pod 都被重试了一次。二者在第二次失败后都超出了所指定的
&lt;code&gt;.spec.backoffLimitPerIndex&lt;/code&gt;，因此不再被重试。相比之下，如果禁用了逐索引的回退限制，
那么有问题的、特定索引的 Pod 将被一直重试，直到超出全局 &lt;code&gt;backoffLimit&lt;/code&gt;，
之后整个 Job 将被标记为失败，而某些索引值较高的 Pod 可能尚未被启动。&lt;/p&gt;
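&lt;p&gt;如果只想查看失败的索引，也可以直接读取上面状态中的 &lt;code&gt;failedIndexes&lt;/code&gt; 字段（示意性命令）：&lt;/p&gt;

```shell
kubectl get job job-backoff-limit-per-index-execute-all \
  -o jsonpath='{.status.failedIndexes}'
# 对应上面 status 输出中的 failedIndexes，即 1,2
```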
&lt;!--
## How can you learn more?

- Read the user-facing documentation for [Pod replacement policy](/docs/concepts/workloads/controllers/job/#pod-replacement-policy),
[Backoff limit per index](/docs/concepts/workloads/controllers/job/#backoff-limit-per-index), and
[Pod failure policy](/docs/concepts/workloads/controllers/job/#pod-failure-policy)
- Read the KEPs for [Pod Replacement Policy](https://github.com/kubernetes/enhancements/tree/master/keps/sig-apps/3939-allow-replacement-when-fully-terminated),
[Backoff limit per index](https://github.com/kubernetes/enhancements/tree/master/keps/sig-apps/3850-backoff-limits-per-index-for-indexed-jobs), and
[Pod failure policy](https://github.com/kubernetes/enhancements/tree/master/keps/sig-apps/3329-retriable-and-non-retriable-failures).
--&gt;
&lt;h2 id=&#34;how-can-you-learn-more&#34;&gt;如何进一步了解&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;阅读面向用户的 &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/concepts/workloads/controllers/job/#pod-replacement-policy&#34;&gt;Pod 替换策略&lt;/a&gt;文档、
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/concepts/workloads/controllers/job/#backoff-limit-per-index&#34;&gt;逐索引的回退限制&lt;/a&gt;和
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/concepts/workloads/controllers/job/#pod-failure-policy&#34;&gt;Pod 失效策略&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;阅读 &lt;a href=&#34;https://github.com/kubernetes/enhancements/tree/master/keps/sig-apps/3939-allow-replacement-when-fully-terminated&#34;&gt;Pod 替换策略&lt;/a&gt;、
&lt;a href=&#34;https://github.com/kubernetes/enhancements/tree/master/keps/sig-apps/3850-backoff-limits-per-index-for-indexed-jobs&#34;&gt;逐索引的回退限制&lt;/a&gt;和
&lt;a href=&#34;https://github.com/kubernetes/enhancements/tree/master/keps/sig-apps/3329-retriable-and-non-retriable-failures&#34;&gt;Pod 失效策略&lt;/a&gt;的 KEP。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
## Getting Involved

These features were sponsored by [SIG Apps](https://github.com/kubernetes/community/tree/master/sig-apps).  Batch use cases are actively
being improved for Kubernetes users in the
[batch working group](https://github.com/kubernetes/community/tree/master/wg-batch).
Working groups are relatively short-lived initiatives focused on specific goals.
The goal of the WG Batch is to improve experience for batch workload users, offer support for
batch processing use cases, and enhance the
Job API for common use cases.  If that interests you, please join the working
group either by subscriping to our
[mailing list](https://groups.google.com/a/kubernetes.io/g/wg-batch) or on
[Slack](https://kubernetes.slack.com/messages/wg-batch).
--&gt;
&lt;h2 id=&#34;getting-Involved&#34;&gt;参与其中&lt;/h2&gt;
&lt;p&gt;这些功能由 &lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-apps&#34;&gt;SIG Apps&lt;/a&gt; 赞助。
社区正在为&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/wg-batch&#34;&gt;批处理工作组&lt;/a&gt;中的
Kubernetes 用户积极改进批处理场景。
工作组是相对短暂的举措，专注于特定目标。WG Batch 的目标是改善批处理工作负载的用户体验、
提供对批处理场景的支持并增强常见场景下的 Job API。
如果你对此感兴趣，请通过订阅我们的&lt;a href=&#34;https://groups.google.com/a/kubernetes.io/g/wg-batch&#34;&gt;邮件列表&lt;/a&gt;或通过
&lt;a href=&#34;https://kubernetes.slack.com/messages/wg-batch&#34;&gt;Slack&lt;/a&gt; 加入进来。&lt;/p&gt;
&lt;!--
## Acknowledgments

As with any Kubernetes feature, multiple people contributed to getting this
done, from testing and filing bugs to reviewing code.

We would not have been able to achieve either of these features without Aldo
Culquicondor (Google) providing excellent domain knowledge and expertise
throughout the Kubernetes ecosystem.
--&gt;
&lt;h2 id=&#34;acknowledgments&#34;&gt;致谢&lt;/h2&gt;
&lt;p&gt;与其他 Kubernetes 特性一样，从测试、报告缺陷到代码审查，很多人为此特性做出了贡献。&lt;/p&gt;
&lt;p&gt;如果没有 Aldo Culquicondor（Google）在整个 Kubernetes 生态系统中提供的出色领域知识与专业经验，
我们将无法实现这两个特性。&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes v1.28：可追溯的默认 StorageClass 进阶至 GA</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/08/18/retroactive-default-storage-class-ga/</link>
      <pubDate>Fri, 18 Aug 2023 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/08/18/retroactive-default-storage-class-ga/</guid>
      <description>
        
        
        &lt;!--
layout: blog
title: &#34;Kubernetes v1.28: Retroactive Default StorageClass move to GA&#34;
date: 2023-08-18
slug: retroactive-default-storage-class-ga
--&gt;
&lt;!--
**Author:** Roman Bednář (Red Hat)
--&gt;
&lt;p&gt;&lt;strong&gt;作者:&lt;/strong&gt; Roman Bednář (Red Hat)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者:&lt;/strong&gt; &lt;a href=&#34;https://github.com/windsonsea&#34;&gt;Michael Yao&lt;/a&gt; (DaoCloud)&lt;/p&gt;
&lt;!--
Announcing graduation to General Availability (GA) - Retroactive Default StorageClass Assignment
in Kubernetes v1.28!
--&gt;
&lt;p&gt;可追溯的默认 StorageClass 赋值（Retroactive Default StorageClass Assignment）在
Kubernetes v1.28 中宣布进阶至正式发布（GA）！&lt;/p&gt;
&lt;!--
Kubernetes SIG Storage team is thrilled to announce that the
&#34;Retroactive Default StorageClass Assignment&#34; feature,
introduced as an alpha in Kubernetes v1.25, has now graduated to GA
and is officially part of the Kubernetes v1.28 release.
This enhancement brings a significant improvement to how default
[StorageClasses](/docs/concepts/storage/storage-classes/) are assigned
to PersistentVolumeClaims (PVCs).
--&gt;
&lt;p&gt;Kubernetes SIG Storage 团队非常高兴地宣布，在 Kubernetes v1.25 中作为
Alpha 特性引入的 “可追溯默认 StorageClass 赋值” 现已进阶至 GA，
并正式成为 Kubernetes v1.28 发行版的一部分。
这项增强特性显著改进了默认 &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/concepts/storage/storage-classes/&#34;&gt;StorageClass&lt;/a&gt;
被赋予 PersistentVolumeClaim (PVC) 的方式。&lt;/p&gt;
&lt;!--
With this feature enabled, you no longer need to create a default StorageClass
first and then a PVC to assign the class. Instead, any PVCs without a StorageClass
assigned will now be retroactively updated to include the default StorageClass.
This enhancement ensures that PVCs no longer get stuck in an unbound state,
and storage provisioning works seamlessly,
even when a default StorageClass is not defined at the time of PVC creation.
--&gt;
&lt;p&gt;启用此特性后，你不再需要先创建默认的 StorageClass，再创建 PVC 来指定存储类。
现在，所有未指定 StorageClass 的 PVC 都将被可追溯地更新为使用默认的 StorageClass。
此项增强特性确保即使默认的 StorageClass 在 PVC 创建时未被定义，
PVC 也不会再滞留在未绑定状态，存储制备工作可以无缝进行。&lt;/p&gt;
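&lt;p&gt;举例来说（以下为示意性清单，各名称均为假设）：即使 PVC 先于默认 StorageClass 创建，
该 PVC 也会被可追溯地更新为使用后来创建的默认类：&lt;/p&gt;

```yaml
# 1. 先创建、未指定 storageClassName 的 PVC（名称为假设）
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
# 2. 之后创建的默认 StorageClass：通过注解标记为默认（制备器为假设）
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-default-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: example.com/provisioner
```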
&lt;!--
## What changed?

The PersistentVolume (PV) controller has been modified to automatically assign
a default StorageClass to any unbound PersistentVolumeClaim with the `storageClassName` not set.
Additionally, the PersistentVolumeClaim admission validation mechanism within
the API server has been adjusted to allow changing values from an unset state
to an actual StorageClass name.
--&gt;
&lt;h2 id=&#34;what-changed&#34;&gt;有什么变化？  &lt;/h2&gt;
&lt;p&gt;PersistentVolume (PV) 控制器已修改为：当未设置 &lt;code&gt;storageClassName&lt;/code&gt; 时，自动向任何未绑定的
PersistentVolumeClaim 分配一个默认的 StorageClass。此外，API 服务器中的 PersistentVolumeClaim
准入验证机制也已调整为允许将值从未设置状态更改为实际的 StorageClass 名称。&lt;/p&gt;
&lt;!--
## How to use it?

As this feature has graduated to GA, there&#39;s no need to enable a feature gate anymore.
Simply make sure you are running Kubernetes v1.28 or later, and the feature will be
available for use.
--&gt;
&lt;h2 id=&#34;how-to-use-it&#34;&gt;如何使用？ &lt;/h2&gt;
&lt;p&gt;由于此特性已进阶至 GA，所以不再需要启用特性门控。
只需确保你运行的是 Kubernetes v1.28 或更高版本，此特性即可供使用。&lt;/p&gt;
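&lt;p&gt;你可以用下面的命令确认集群中是否已有默认 StorageClass（默认类的名称后会标注
&lt;code&gt;(default)&lt;/code&gt;）：&lt;/p&gt;

```shell
kubectl get storageclass
```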
&lt;!--
For more details, read about
[default StorageClass assignment](/docs/concepts/storage/persistent-volumes/#retroactive-default-storageclass-assignment)
in the Kubernetes documentation. You can also read the previous
[blog post](/blog/2023/01/05/retroactive-default-storage-class/)
announcing beta graduation in v1.26.
--&gt;
&lt;p&gt;有关更多细节，可以查阅 Kubernetes
文档中的&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/concepts/storage/persistent-volumes/#retroactive-default-storageclass-assignment&#34;&gt;默认 StorageClass 赋值&lt;/a&gt;。
你也可以阅读以前在 v1.26 中宣布进阶至 Beta
的&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/01/05/retroactive-default-storage-class/&#34;&gt;博客文章&lt;/a&gt;。&lt;/p&gt;
&lt;!--
To provide feedback, join our [Kubernetes Storage Special-Interest-Group](https://github.com/kubernetes/community/tree/master/sig-storage) (SIG)
or participate in discussions on our [public Slack channel](https://app.slack.com/client/T09NY5SBT/C09QZFCE5).
--&gt;
&lt;p&gt;要提供反馈，请加入我们的
&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-storage&#34;&gt;Kubernetes 存储特别兴趣小组&lt;/a&gt; (SIG)
或参与&lt;a href=&#34;https://app.slack.com/client/T09NY5SBT/C09QZFCE5&#34;&gt;公共 Slack 频道&lt;/a&gt;上的讨论。&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes 1.28: 节点非体面关闭进入 GA 阶段（正式发布）</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/08/16/kubernetes-1-28-non-graceful-node-shutdown-ga/</link>
      <pubDate>Wed, 16 Aug 2023 10:00:00 -0800</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/08/16/kubernetes-1-28-non-graceful-node-shutdown-ga/</guid>
      <description>
        
        
        &lt;!--
layout: blog
title: &#34;Kubernetes 1.28: Non-Graceful Node Shutdown Moves to GA&#34;
date: 2023-08-16T10:00:00-08:00
slug: kubernetes-1-28-non-graceful-node-shutdown-GA
--&gt;
&lt;!--
**Authors:** Xing Yang (VMware) and Ashutosh Kumar (Elastic)
--&gt;
&lt;p&gt;&lt;strong&gt;作者：&lt;/strong&gt; Xing Yang (VMware) 和 Ashutosh Kumar (Elastic)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者：&lt;/strong&gt; Xin Li (Daocloud)&lt;/p&gt;
&lt;!--
The Kubernetes Non-Graceful Node Shutdown feature is now GA in Kubernetes v1.28.
It was introduced as
[alpha](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/2268-non-graceful-shutdown)
in Kubernetes v1.24, and promoted to
[beta](https://kubernetes.io/blog/2022/12/16/kubernetes-1-26-non-graceful-node-shutdown-beta/)
in Kubernetes v1.26.
This feature allows stateful workloads to restart on a different node if the
original node is shutdown unexpectedly or ends up in a non-recoverable state
such as the hardware failure or unresponsive OS.
--&gt;
&lt;p&gt;Kubernetes 节点非体面关闭特性现已在 Kubernetes v1.28 中正式发布。&lt;/p&gt;
&lt;p&gt;此特性在 Kubernetes v1.24 中作为 &lt;a href=&#34;https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/2268-non-graceful-shutdown&#34;&gt;Alpha&lt;/a&gt;
特性引入，并在 Kubernetes v1.26 中转入 &lt;a href=&#34;https://kubernetes.io/blog/2022/12/16/kubernetes-1-26-non-graceful-node-shutdown-beta/&#34;&gt;Beta&lt;/a&gt;
阶段。如果原始节点意外关闭或最终处于不可恢复状态（例如硬件故障或操作系统无响应），
此特性允许有状态工作负载在不同节点上重新启动。&lt;/p&gt;
&lt;!--
## What is a Non-Graceful Node Shutdown

In a Kubernetes cluster, a node can be shutdown in a planned graceful way or
unexpectedly because of reasons such as power outage or something else external.
A node shutdown could lead to workload failure if the node is not drained
before the shutdown. A node shutdown can be either graceful or non-graceful.
--&gt;
&lt;h2 id=&#34;什么是节点非体面关闭&#34;&gt;什么是节点非体面关闭&lt;/h2&gt;
&lt;p&gt;在 Kubernetes 集群中，节点可能会按计划体面地关闭，也可能由于断电或其他外部原因而意外关闭。
如果节点在关闭之前未被腾空，则节点关闭可能会导致工作负载失败。节点关闭可以是体面的，也可以是非体面的。&lt;/p&gt;
&lt;!--
The [Graceful Node Shutdown](https://kubernetes.io/blog/2021/04/21/graceful-node-shutdown-beta/)
feature allows Kubelet to detect a node shutdown event, properly terminate the pods,
and release resources, before the actual shutdown.
--&gt;
&lt;p&gt;&lt;a href=&#34;https://kubernetes.io/blog/2021/04/21/graceful-node-shutdown-beta/&#34;&gt;节点体面关闭&lt;/a&gt;特性允许
kubelet 在实际关闭之前检测节点关闭事件、正确终止该节点上的 Pod 并释放资源。&lt;/p&gt;
&lt;!--
When a node is shutdown but not detected by Kubelet&#39;s Node Shutdown Manager,
this becomes a non-graceful node shutdown.
Non-graceful node shutdown is usually not a problem for stateless apps, however,
it is a problem for stateful apps.
The stateful application cannot function properly if the pods are stuck on the
shutdown node and are not restarting on a running node.
--&gt;
&lt;p&gt;当节点关闭但 kubelet 的节点关闭管理器未检测到时，将造成节点非体面关闭。
对于无状态应用程序来说，节点非体面关闭通常不是问题，但是对于有状态应用程序来说，这是一个问题。
如果 Pod 停留在关闭节点上并且未在正在运行的节点上重新启动，则有状态应用程序将无法正常运行。&lt;/p&gt;
&lt;!--
In the case of a non-graceful node shutdown, you can manually add an `out-of-service` taint on the Node.
--&gt;
&lt;p&gt;在节点非体面关闭的情况下，你可以在 Node 上手动添加 &lt;code&gt;out-of-service&lt;/code&gt; 污点。&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;kubectl taint nodes &amp;lt;node-name&amp;gt; node.kubernetes.io/out-of-service=nodeshutdown:NoExecute
&lt;/code&gt;&lt;/pre&gt;&lt;!--
This taint triggers pods on the node to be forcefully deleted if there are no
matching tolerations on the pods. Persistent volumes attached to the shutdown node
will be detached, and new pods will be created successfully on a different running
node.
--&gt;
&lt;p&gt;如果 Pod 上没有与之匹配的容忍规则，则此污点会触发强制删除该节点上的 Pod。
挂接到该已关闭节点的持久卷将被解除挂接，新的 Pod 将在另一个正在运行的节点上成功创建。&lt;/p&gt;
&lt;!--
**Note:** Before applying the out-of-service taint, you must verify that a node is
already in shutdown or power-off state (not in the middle of restarting).

Once all the workload pods that are linked to the out-of-service node are moved to
a new running node, and the shutdown node has been recovered, you should remove that
taint on the affected node after the node is recovered.
--&gt;
&lt;p&gt;&lt;strong&gt;注意：&lt;/strong&gt; 在应用 out-of-service 污点之前，你必须验证节点是否已经处于关闭或断电状态（而不是在重新启动中）。&lt;/p&gt;
&lt;p&gt;当与 out-of-service 节点关联的所有工作负载 Pod 都已移动到新的运行节点，
并且所关闭的节点已恢复之后，你应该删除受影响节点上的该污点。&lt;/p&gt;
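&lt;p&gt;作为参考（示例命令，&lt;code&gt;&amp;lt;node-name&amp;gt;&lt;/code&gt; 为占位符），可以通过在污点末尾追加 &lt;code&gt;-&lt;/code&gt; 来删除该污点：&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;kubectl taint nodes &amp;lt;node-name&amp;gt; node.kubernetes.io/out-of-service=nodeshutdown:NoExecute-
&lt;/code&gt;&lt;/pre&gt;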
&lt;!--
## What’s new in stable

With the promotion of the Non-Graceful Node Shutdown feature to stable, the
feature gate  `NodeOutOfServiceVolumeDetach` is locked to true on
`kube-controller-manager` and cannot be disabled.
--&gt;
&lt;h2 id=&#34;稳定版中有哪些新内容&#34;&gt;稳定版中有哪些新内容&lt;/h2&gt;
&lt;p&gt;随着节点非体面关闭特性提升至稳定状态，特性门控
&lt;code&gt;NodeOutOfServiceVolumeDetach&lt;/code&gt; 在 &lt;code&gt;kube-controller-manager&lt;/code&gt; 上被锁定为 true，并且无法禁用。&lt;/p&gt;
&lt;!--
Metrics `force_delete_pods_total` and `force_delete_pod_errors_total` in the
Pod GC Controller are enhanced to account for all forceful pods deletion.
A reason is added to the metric to indicate whether the pod is forcefully deleted
because it is terminated, orphaned, terminating with the `out-of-service` taint,
or terminating and unscheduled.
--&gt;
&lt;p&gt;Pod GC 控制器中的指标 &lt;code&gt;force_delete_pods_total&lt;/code&gt; 和 &lt;code&gt;force_delete_pod_errors_total&lt;/code&gt;
得到增强，以统计所有强制删除 Pod 的情况。
指标中添加了一个 &amp;quot;reason&amp;quot;，用以指示 Pod 被强制删除的原因：Pod 已终止、成为孤儿、
带有 &lt;code&gt;out-of-service&lt;/code&gt; 污点而处于终止中，或处于终止中且未被调度。&lt;/p&gt;
&lt;!--
A &#34;reason&#34; is also added to the metric `attachdetach_controller_forced_detaches`
in the Attach Detach Controller to indicate whether the force detach is caused by
the `out-of-service` taint or a timeout.
--&gt;
&lt;p&gt;Attach Detach Controller 中的指标 &lt;code&gt;attachdetach_controller_forced_detaches&lt;/code&gt;
中还会添加一个 &amp;quot;reason&amp;quot;，以指示强制解除挂接是由 &lt;code&gt;out-of-service&lt;/code&gt; 污点还是超时引起的。&lt;/p&gt;
&lt;!--
## What’s next?

This feature requires a user to manually add a taint to the node to trigger
workloads failover and remove the taint after the node is recovered.
In the future, we plan to find ways to automatically detect and fence nodes
that are shutdown/failed and automatically failover workloads to another node.
--&gt;
&lt;h2 id=&#34;接下来&#34;&gt;接下来&lt;/h2&gt;
&lt;p&gt;此特性要求用户手动向节点添加污点以触发工作负载故障转移，并在节点恢复后删除污点。
未来，我们计划找到方法来自动检测和隔离关闭/失败的节点，并自动将工作负载故障转移到另一个节点。&lt;/p&gt;
&lt;!--
## How can I learn more?

Check out additional documentation on this feature
[here](https://kubernetes.io/docs/concepts/architecture/nodes/#non-graceful-node-shutdown).
--&gt;
&lt;h2 id=&#34;如何了解更多&#34;&gt;如何了解更多？&lt;/h2&gt;
&lt;p&gt;在&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/concepts/architecture/nodes/#non-graceful-node-shutdown&#34;&gt;此处&lt;/a&gt;可以查看有关此特性的其他文档。&lt;/p&gt;
&lt;!--
## How to get involved?

We offer a huge thank you to all the contributors who helped with design,
implementation, and review of this feature and helped move it from alpha, beta, to stable:
--&gt;
&lt;p&gt;我们非常感谢所有帮助设计、实现和审查此特性，并帮助其从 Alpha、Beta 一路走到稳定版的贡献者：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Michelle Au (&lt;a href=&#34;https://github.com/msau42&#34;&gt;msau42&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Derek Carr (&lt;a href=&#34;https://github.com/derekwaynecarr&#34;&gt;derekwaynecarr&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Danielle Endocrimes (&lt;a href=&#34;https://github.com/endocrimes&#34;&gt;endocrimes&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Baofa Fan (&lt;a href=&#34;https://github.com/carlory&#34;&gt;carlory&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Tim Hockin  (&lt;a href=&#34;https://github.com/thockin&#34;&gt;thockin&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Ashutosh Kumar (&lt;a href=&#34;https://github.com/sonasingh46&#34;&gt;sonasingh46&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Hemant Kumar (&lt;a href=&#34;https://github.com/gnufied&#34;&gt;gnufied&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Yuiko Mouri (&lt;a href=&#34;https://github.com/YuikoTakada&#34;&gt;YuikoTakada&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Mrunal Patel (&lt;a href=&#34;https://github.com/mrunalp&#34;&gt;mrunalp&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;David Porter (&lt;a href=&#34;https://github.com/bobbypage&#34;&gt;bobbypage&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Yassine Tijani (&lt;a href=&#34;https://github.com/yastij&#34;&gt;yastij&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Jing Xu (&lt;a href=&#34;https://github.com/jingxu97&#34;&gt;jingxu97&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Xing Yang (&lt;a href=&#34;https://github.com/xing-yang&#34;&gt;xing-yang&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
This feature is a collaboration between SIG Storage and SIG Node.
For those interested in getting involved with the design and development of any
part of the Kubernetes Storage system, join the Kubernetes Storage Special
Interest Group (SIG).
For those interested in getting involved with the design and development of the
components that support the controlled interactions between pods and host
resources, join the Kubernetes Node SIG.
--&gt;
&lt;p&gt;此特性是 SIG Storage 和 SIG Node 之间的协作。对于那些有兴趣参与 Kubernetes
存储系统任何部分的设计和开发的人，请加入 Kubernetes 存储特别兴趣小组（SIG）。
对于那些有兴趣参与设计和开发支持 Pod 和主机资源之间受控交互的组件的人，请加入 Kubernetes Node SIG。&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>pkgs.k8s.io：介绍 Kubernetes 社区自有的包仓库</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/08/15/pkgs-k8s-io-introduction/</link>
      <pubDate>Tue, 15 Aug 2023 20:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/08/15/pkgs-k8s-io-introduction/</guid>
      <description>
        
        
        &lt;!--
layout: blog
title: &#34;pkgs.k8s.io: Introducing Kubernetes Community-Owned Package Repositories&#34;
date: 2023-08-15T20:00:00+0000
slug: pkgs-k8s-io-introduction
--&gt;
&lt;!--
**Author**: Marko Mudrinić (Kubermatic)
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Marko Mudrinić (Kubermatic)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者&lt;/strong&gt;：Wilson Wu (DaoCloud)&lt;/p&gt;
&lt;!--
On behalf of Kubernetes SIG Release, I am very excited to introduce the
Kubernetes community-owned software
repositories for Debian and RPM packages: `pkgs.k8s.io`! The new package
repositories are replacement for the Google-hosted package repositories
(`apt.kubernetes.io` and `yum.kubernetes.io`) that we&#39;ve been using since
Kubernetes v1.5.
--&gt;
&lt;p&gt;我很高兴代表 Kubernetes SIG Release 介绍 Kubernetes
社区自有的 Debian 和 RPM 软件仓库：&lt;code&gt;pkgs.k8s.io&lt;/code&gt;！
这些全新的仓库取代了我们自 Kubernetes v1.5 以来一直使用的托管在
Google 的仓库（&lt;code&gt;apt.kubernetes.io&lt;/code&gt; 和 &lt;code&gt;yum.kubernetes.io&lt;/code&gt;）。&lt;/p&gt;
&lt;!--
This blog post contains information about these new package repositories,
what does it mean to you as an end user, and how to migrate to the new
repositories.
--&gt;
&lt;p&gt;这篇博文包含关于这些新的包仓库的信息、它对最终用户意味着什么以及如何迁移到新仓库。&lt;/p&gt;
&lt;!--
**ℹ️  Update (January 12, 2024):** the _**legacy Google-hosted repositories are going
away in January 2024.**_
Check out [the deprecation announcement](/blog/2023/08/31/legacy-package-repository-deprecation/)
for more details about this change.
--&gt;
&lt;p&gt;&lt;strong&gt;ℹ️ 更新（2024 年 1 月 12 日）：旧版托管在 Google 的仓库将于 2024 年 1 月停用。&lt;/strong&gt;
查看&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/08/31/legacy-package-repository-deprecation/&#34;&gt;弃用公告&lt;/a&gt;了解有关此更改的更多详细信息。&lt;/p&gt;
&lt;!--
## What you need to know about the new package repositories?
--&gt;
&lt;h2 id=&#34;what-you-need-to-know-about-the-new-package-repositories&#34;&gt;关于新的包仓库，你需要了解哪些信息？&lt;/h2&gt;
&lt;!--
_(updated on January 12, 2024)_
--&gt;
&lt;p&gt;&lt;strong&gt;（更新于 2024 年 1 月 12 日）&lt;/strong&gt;&lt;/p&gt;
&lt;!--
- This is an **opt-in change**; you&#39;re required to manually migrate from the
  Google-hosted repository to the Kubernetes community-owned repositories.
  See [how to migrate](#how-to-migrate) later in this announcement for migration information
  and instructions.
--&gt;
&lt;ul&gt;
&lt;li&gt;这是一个需要&lt;strong&gt;主动选择（opt-in）的更改&lt;/strong&gt;；你需要手动从托管在 Google 的仓库迁移到
Kubernetes 社区自有的仓库。请参阅本公告后面的&lt;a href=&#34;#how-to-migrate&#34;&gt;如何迁移&lt;/a&gt;，
了解迁移信息和说明。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
- **The legacy Google-hosted package repositories are going away in January 2024.** These repositories
  have been **deprecated as of August 31, 2023**, and **frozen as of September 13, 2023**.
  Check out the [deprecation announcement](/blog/2023/08/31/legacy-package-repository-deprecation/)
  for more details about this change.
--&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;旧版托管在 Google 的包仓库将于 2024 年 1 月停用。&lt;/strong&gt;
这些仓库&lt;strong&gt;自 2023 年 8 月 31 日起被弃用&lt;/strong&gt;，并&lt;strong&gt;自 2023 年 9 月 13 日起被冻结&lt;/strong&gt;。&lt;br&gt;
有关此变更的更多细节请查阅&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/08/31/legacy-package-repository-deprecation/&#34;&gt;弃用公告&lt;/a&gt;。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
- ~~The existing packages in the legacy repositories will be available for the foreseeable future.
  However, the Kubernetes project can&#39;t provide any guarantees on how long is that going to be.
  The deprecated legacy repositories, and their contents, might be removed at any time in the future
  and without a further notice period.~~ **The legacy package repositories are going away in
  January 2024.**
--&gt;
&lt;ul&gt;
&lt;li&gt;&lt;del&gt;旧仓库中的现有包将在可预见的未来一段时间内可用。
然而，Kubernetes 项目无法保证这会持续多久。
已弃用的旧仓库及其内容可能会在未来随时被删除，恕不另行通知。&lt;/del&gt;
&lt;strong&gt;旧版包仓库将于 2024 年 1 月停用。&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
- Given that no new releases will be published to the legacy repositories after
  the September 13, 2023 cut-off point, you will not be able to upgrade to any patch or minor
  release made from that date onwards if you don&#39;t migrate to the new Kubernetes package repositories.
  That said, we recommend migrating to the new Kubernetes package repositories **as soon as possible**.
--&gt;
&lt;ul&gt;
&lt;li&gt;鉴于 2023 年 9 月 13 日这一截止时间点之后不会再向旧仓库发布任何新版本，
如果你不迁移到新的 Kubernetes 包仓库，
你将无法升级到该日期之后发布的任何补丁版本或次要版本。
因此，我们建议&lt;strong&gt;尽快&lt;/strong&gt;迁移到新的 Kubernetes 包仓库。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
- The new Kubernetes package repositories contain packages beginning with those
  Kubernetes versions that were still under support when the community took
  over the package builds. This means that the new package repositories have Linux packages for all
  Kubernetes releases starting with v1.24.0.
--&gt;
&lt;ul&gt;
&lt;li&gt;新的 Kubernetes 包仓库所包含的包，从社区接管包构建时仍处于支持范围内的 Kubernetes 版本开始。
这意味着新的包仓库为从 v1.24.0 开始的所有 Kubernetes 版本提供 Linux 包。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
- Kubernetes does not have official Linux packages available for earlier releases of Kubernetes;
  however, your Linux distribution may provide its own packages.
--&gt;
&lt;ul&gt;
&lt;li&gt;Kubernetes 没有为早期版本提供官方的 Linux 包；然而，你的 Linux 发行版可能会提供自己的包。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
- There&#39;s a dedicated package repository for each Kubernetes minor version.
  When upgrading to a different minor release, you must bear in mind that
  the package repository details also change.
--&gt;
&lt;ul&gt;
&lt;li&gt;每个 Kubernetes 次要版本都有一个专用的仓库。
当升级到不同的次要版本时，你必须记住，仓库详细信息也会发生变化。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
## Why are we introducing new package repositories?
--&gt;
&lt;h2 id=&#34;why-are-we-introducing-new-package-repositories&#34;&gt;为什么我们要引入新的包仓库？&lt;/h2&gt;
&lt;!--
As the Kubernetes project is growing, we want to ensure the best possible
experience for the end users. The Google-hosted repository has been serving
us well for many years, but we started facing some problems that require
significant changes to how we publish packages. Another goal that we have is to
use community-owned infrastructure for all critical components and that
includes package repositories.
--&gt;
&lt;p&gt;随着 Kubernetes 项目的不断发展，我们希望确保最终用户获得最佳体验。
托管在 Google 的仓库多年来一直为我们提供良好的服务，
但我们开始面临一些问题，需要对发布包的方式进行重大变更。
我们的另一个目标是对所有关键组件使用社区拥有的基础设施，其中包括仓库。&lt;/p&gt;
&lt;!--
Publishing packages to the Google-hosted repository is a manual process that
can be done only by a team of Google employees called
[Google Build Admins](/releases/release-managers/#build-admins).
[The Kubernetes Release Managers team](/releases/release-managers/#release-managers)
is a very diverse team especially in terms of timezones that we work in.
Given this constraint, we have to do very careful planning for every release to
ensure that we have both Release Manager and Google Build Admin available to 
carry out the release.
--&gt;
&lt;p&gt;将包发布到托管在 Google 的仓库是一个手动过程，
只能由名为 &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/releases/release-managers/#build-admins&#34;&gt;Google 构建管理员&lt;/a&gt;的 Google 员工团队来完成。
&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/releases/release-managers/#release-managers&#34;&gt;Kubernetes 发布管理员团队&lt;/a&gt;是一个非常多元化的团队，
尤其是在我们工作的时区方面。考虑到这一限制，我们必须对每个版本进行非常仔细的规划，
确保我们有发布经理和 Google 构建管理员来执行发布。&lt;/p&gt;
&lt;!--
Another problem is that we only have a single package repository. Because of
this, we were not able to publish packages for prerelease versions (alpha,
beta, and rc). This made testing Kubernetes prereleases harder for anyone who
is interested to do so. The feedback that we receive from people testing these
releases is critical to ensure the best quality of releases, so we want to make
testing these releases as easy as possible. On top of that, having only one
repository limited us when it comes to publishing dependencies like `cri-tools`
and `kubernetes-cni`.
--&gt;
&lt;p&gt;另一个问题是我们只有一个包仓库。因此，我们无法为预发行版本
（Alpha、Beta 和 RC）发布包。这使得任何有兴趣测试 Kubernetes 预发布版本的人都更难进行测试。
我们从测试这些版本的人员那里收到的反馈对于确保版本的最佳质量至关重要，
因此我们希望让测试这些版本尽可能轻松。除此之外，只有一个仓库也限制了我们发布
&lt;code&gt;cri-tools&lt;/code&gt; 和 &lt;code&gt;kubernetes-cni&lt;/code&gt; 等依赖包。&lt;/p&gt;
&lt;!--
Regardless of all these issues, we&#39;re very thankful to Google and Google Build
Admins for their involvement, support, and help all these years!
--&gt;
&lt;p&gt;尽管存在这些问题，我们仍非常感谢 Google 和 Google 构建管理员这些年来的参与、支持和帮助！&lt;/p&gt;
&lt;!--
## How the new package repositories work?
--&gt;
&lt;h2 id=&#34;how-the-new-package-repositories-work&#34;&gt;新的包仓库如何工作？&lt;/h2&gt;
&lt;!--
The new package repositories are hosted at `pkgs.k8s.io` for both Debian and
RPM packages. At this time, this domain points to a CloudFront CDN backed by S3
bucket that contains repositories and packages. However, we plan on onboarding
additional mirrors in the future, giving possibility for other companies to
help us with serving packages.
--&gt;
&lt;p&gt;新的 Debian 和 RPM 包仓库托管在 &lt;code&gt;pkgs.k8s.io&lt;/code&gt;。
目前，该域名指向一个 CloudFront CDN，其后端是包含仓库和包的 S3 存储桶。
不过，我们计划在未来引入更多镜像站点，让其他公司有可能帮助我们提供包服务。&lt;/p&gt;
&lt;!--
Packages are built and published via the [OpenBuildService (OBS) platform](http://openbuildservice.org).
After a long period of evaluating different solutions, we made a decision to
use OpenBuildService as a platform to manage our repositories and packages.
First of all, OpenBuildService is an open source platform used by a large
number of open source projects and companies, like openSUSE, VideoLAN,
Dell, Intel, and more. OpenBuildService has many features making it very
flexible and easy to integrate with our existing release tooling. It also
allows us to build packages in a similar way as for the Google-hosted
repository making the migration process as seamless as possible.
--&gt;
&lt;p&gt;包通过 &lt;a href=&#34;http://openbuildservice.org&#34;&gt;OpenBuildService（OBS）平台&lt;/a&gt;构建和发布。
经过长时间评估不同的解决方案后，我们决定使用 OpenBuildService 作为管理仓库和包的平台。
首先，OpenBuildService 是一个开源平台，被大量开源项目和公司使用，
如 openSUSE、VideoLAN、Dell、Intel 等。OpenBuildService 具有许多功能，
使其非常灵活且易于与我们现有的发布工具集成。
它还允许我们以与托管在 Google 的仓库类似的方式构建包，从而使迁移过程尽可能无缝。&lt;/p&gt;
&lt;!--
SUSE sponsors the Kubernetes project with access to their reference
OpenBuildService setup ([`build.opensuse.org`](http://build.opensuse.org)) and
with technical support to integrate OBS with our release processes.
--&gt;
&lt;p&gt;SUSE 为 Kubernetes 项目提供赞助，允许我们访问其参考 OpenBuildService 环境
（&lt;a href=&#34;http://build.opensuse.org&#34;&gt;&lt;code&gt;build.opensuse.org&lt;/code&gt;&lt;/a&gt;），
并提供将 OBS 与我们的发布流程集成的技术支持。&lt;/p&gt;
&lt;!--
We use SUSE&#39;s OBS instance for building and publishing packages. Upon building
a new release, our tooling automatically pushes needed artifacts and 
package specifications to `build.opensuse.org`. That will trigger the build
process that&#39;s going to build packages for all supported architectures (AMD64,
ARM64, PPC64LE, S390X). At the end, generated packages will be automatically
pushed to our community-owned S3 bucket making them available to all users.
--&gt;
&lt;p&gt;我们使用 SUSE 的 OBS 实例来构建和发布包。构建新版本后，
我们的工具会自动将所需的制品和包规约推送到 &lt;code&gt;build.opensuse.org&lt;/code&gt;。
这将触发构建过程，为所有支持的架构（AMD64、ARM64、PPC64LE、S390X）构建包。
最后，生成的包将自动推送到我们社区拥有的 S3 存储桶，以便所有用户都可以使用它们。&lt;/p&gt;
&lt;!--
We want to take this opportunity to thank SUSE for allowing us to use
`build.opensuse.org` and their generous support to make this integration
possible!
--&gt;
&lt;p&gt;我们想借此机会感谢 SUSE 允许我们使用 &lt;code&gt;build.opensuse.org&lt;/code&gt;
以及他们的慷慨支持，使这种集成成为可能！&lt;/p&gt;
&lt;!--
## What are significant differences between the Google-hosted and Kubernetes package repositories?
--&gt;
&lt;h2 id=&#34;what-are-significant-differences-between-the-google-hosted-and-kubernetes-package-repositories&#34;&gt;托管在 Google 的仓库和 Kubernetes 仓库之间有哪些显著差异？&lt;/h2&gt;
&lt;!--
There are three significant differences that you should be aware of:
--&gt;
&lt;p&gt;你应该注意三个显著差异：&lt;/p&gt;
&lt;!--
- There&#39;s a dedicated package repository for each Kubernetes minor release.
  For example, repository called `core:/stable:/v1.28` only hosts packages for
  stable Kubernetes v1.28 releases. This means you can install v1.28.0 from
  this repository, but you can&#39;t install v1.27.0 or any other minor release
  other than v1.28. Upon upgrading to another minor version, you have to add a
  new repository and optionally remove the old one
--&gt;
&lt;ul&gt;
&lt;li&gt;每个 Kubernetes 次要版本都有一个专用的仓库。例如，
名为 &lt;code&gt;core:/stable:/v1.28&lt;/code&gt; 的仓库仅托管稳定 Kubernetes v1.28 版本的包。
这意味着你可以从此仓库安装 v1.28.0，但无法安装 v1.27.0 或 v1.28 之外的任何其他次要版本。
升级到另一个次要版本时，你必须添加新的仓库，并可以选择删除旧的仓库。&lt;/li&gt;
&lt;/ul&gt;
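&lt;p&gt;例如（示例命令，假设使用 apt 且从 v1.28 升级到 v1.29），升级次要版本时可以更新仓库定义中的版本号并刷新包索引：&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;sudo sed -i s/v1.28/v1.29/ /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
&lt;/code&gt;&lt;/pre&gt;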
&lt;!--
- There&#39;s a difference in what `cri-tools` and `kubernetes-cni` package
  versions are available in each Kubernetes repository
  - These two packages are dependencies for `kubelet` and `kubeadm`
  - Kubernetes repositories for v1.24 to v1.27 have same versions of these
    packages as the Google-hosted repository
  - Kubernetes repositories for v1.28 and onwards are going to have published
    only versions that are used by that Kubernetes minor release
    - Speaking of v1.28, only kubernetes-cni 1.2.0 and cri-tools v1.28 are going
      to be available in the repository for Kubernetes v1.28
    - Similar for v1.29, we only plan on publishing cri-tools v1.29 and
      whatever kubernetes-cni version is going to be used by Kubernetes v1.29
--&gt;
&lt;ul&gt;
&lt;li&gt;每个 Kubernetes 仓库中可用的 &lt;code&gt;cri-tools&lt;/code&gt; 和 &lt;code&gt;kubernetes-cni&lt;/code&gt; 包版本有所不同
&lt;ul&gt;
&lt;li&gt;这两个包是 &lt;code&gt;kubelet&lt;/code&gt; 和 &lt;code&gt;kubeadm&lt;/code&gt; 的依赖项&lt;/li&gt;
&lt;li&gt;v1.24 到 v1.27 的 Kubernetes 仓库与托管在 Google 的仓库具有这些包的相同版本&lt;/li&gt;
&lt;li&gt;v1.28 及更高版本的 Kubernetes 仓库将仅发布该 Kubernetes 次要版本所使用的依赖版本
&lt;ul&gt;
&lt;li&gt;就 v1.28 而言，Kubernetes v1.28 的仓库中仅提供 kubernetes-cni 1.2.0 和 cri-tools v1.28&lt;/li&gt;
&lt;li&gt;与 v1.29 类似，我们只计划发布 cri-tools v1.29 以及 Kubernetes v1.29 将使用的 kubernetes-cni 版本&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
- The revision part of the package version (the `-00` part in `1.28.0-00`) is
  now autogenerated by the OpenBuildService platform and has a different format.
  The revision is now in the format of `-x.y`, e.g. `1.28.0-1.1`
--&gt;
&lt;ul&gt;
&lt;li&gt;包版本的修订部分（&lt;code&gt;1.28.0-00&lt;/code&gt; 中的 &lt;code&gt;-00&lt;/code&gt; 部分）现在由 OpenBuildService
平台自动生成，并具有不同的格式。修订版本现在采用 &lt;code&gt;-x.y&lt;/code&gt; 格式，例如 &lt;code&gt;1.28.0-1.1&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
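&lt;p&gt;例如（示例命令，版本号仅作演示），可以先列出可用版本，再按新的修订格式指定要安装的完整版本：&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;apt-cache madison kubectl
sudo apt-get install -y kubectl=1.28.0-1.1
&lt;/code&gt;&lt;/pre&gt;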
&lt;!--
## Does this in any way affect existing Google-hosted repositories?
--&gt;
&lt;h2 id=&#34;does-this-in-any-way-affect-existing-google-hosted-repositories&#34;&gt;这是否会影响现有的托管在 Google 的仓库？&lt;/h2&gt;
&lt;!--
The Google-hosted repository and all packages published to it will continue
working in the same way as before. There are no changes in how we build and
publish packages to the Google-hosted repository, all newly-introduced changes
are only affecting packages publish to the community-owned repositories.
--&gt;
&lt;p&gt;托管在 Google 的仓库及发布到其中的所有包将继续像以前一样工作。
我们构建包并将其发布到托管在 Google 的仓库的方式没有变化，
所有新引入的更改仅影响发布到社区自有仓库的包。&lt;/p&gt;
&lt;!--
However, as mentioned at the beginning of this blog post, we plan to stop
publishing packages to the Google-hosted repository in the future.
--&gt;
&lt;p&gt;然而，正如本文开头提到的，我们计划将来停止将包发布到托管在 Google 的仓库。&lt;/p&gt;
&lt;!--
## How to migrate to the Kubernetes community-owned repositories? {#how-to-migrate}
--&gt;
&lt;h2 id=&#34;how-to-migrate&#34;&gt;如何迁移到 Kubernetes 社区自有的仓库？&lt;/h2&gt;
&lt;!--
### Debian, Ubuntu, and operating systems using `apt`/`apt-get` {#how-to-migrate-deb}
--&gt;
&lt;h3 id=&#34;how-to-migrate-deb&#34;&gt;使用 &lt;code&gt;apt&lt;/code&gt;/&lt;code&gt;apt-get&lt;/code&gt; 的 Debian、Ubuntu 以及其他操作系统&lt;/h3&gt;
&lt;!--
1. Replace the `apt` repository definition so that `apt` points to the new
   repository instead of the Google-hosted repository. Make sure to replace the
   Kubernetes minor version in the command below with the minor version
   that you&#39;re currently using:
--&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;替换 &lt;code&gt;apt&lt;/code&gt; 仓库定义，以便 &lt;code&gt;apt&lt;/code&gt; 指向新仓库而不是托管在 Google 的仓库。
确保将以下命令中的 Kubernetes 次要版本替换为你当前使用的次要版本：&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#a2f&#34;&gt;echo&lt;/span&gt; &lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /&amp;#34;&lt;/span&gt; | sudo tee /etc/apt/sources.list.d/kubernetes.list
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;!--
2. Download the public signing key for the Kubernetes package repositories.
   The same signing key is used for all repositories, so you can disregard the
   version in the URL:
--&gt;
&lt;ol start=&#34;2&#34;&gt;
&lt;li&gt;
&lt;p&gt;下载 Kubernetes 仓库的公共签名密钥。所有仓库都使用相同的签名密钥，
因此你可以忽略 URL 中的版本：&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;!--
3. Update the `apt` package index:
--&gt;
&lt;ol start=&#34;3&#34;&gt;
&lt;li&gt;
&lt;p&gt;更新 &lt;code&gt;apt&lt;/code&gt; 包索引：&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;sudo apt-get update
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;!--
### CentOS, Fedora, RHEL, and operating systems using `rpm`/`dnf` {#how-to-migrate-rpm}
--&gt;
&lt;h3 id=&#34;how-to-migrate-rpm&#34;&gt;使用 &lt;code&gt;rpm&lt;/code&gt;/&lt;code&gt;dnf&lt;/code&gt; 的 CentOS、Fedora、RHEL 以及其他操作系统&lt;/h3&gt;
&lt;!--
1. Replace the `yum` repository definition so that `yum` points to the new 
   repository instead of the Google-hosted repository. Make sure to replace the
   Kubernetes minor version in the command below with the minor version
   that you&#39;re currently using:
--&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;替换 &lt;code&gt;yum&lt;/code&gt; 仓库定义，使 &lt;code&gt;yum&lt;/code&gt; 指向新仓库而不是托管在 Google 的仓库。
确保将以下命令中的 Kubernetes 次要版本替换为你当前使用的次要版本：&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;cat &lt;span style=&#34;color:#b44&#34;&gt;&amp;lt;&amp;lt;EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;[kubernetes]
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;name=Kubernetes
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;baseurl=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;enabled=1
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;gpgcheck=1
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;gpgkey=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/repodata/repomd.xml.key
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b44&#34;&gt;EOF&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/li&gt;
&lt;/ol&gt;
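&lt;p&gt;需要注意，上述仓库定义中的 &lt;code&gt;exclude&lt;/code&gt; 行会阻止这些包被意外升级，因此安装或升级时需要显式禁用该排除规则（示例命令）：&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
&lt;/code&gt;&lt;/pre&gt;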
&lt;!--
## Can I rollback to the Google-hosted repository after migrating to the Kubernetes repositories?
--&gt;
&lt;h2 id=&#34;can-i-rollback-to-the-google-hosted-repository-after-migrating-to-the-kubernetes-repositories&#34;&gt;迁移到 Kubernetes 仓库后是否可以回滚到托管在 Google 的仓库？&lt;/h2&gt;
&lt;!--
In general, yes. Just do the same steps as when migrating, but use parameters
for the Google-hosted repository. You can find those parameters in a document
like [&#34;Installing kubeadm&#34;](/docs/setup/production-environment/tools/kubeadm/install-kubeadm).
--&gt;
&lt;p&gt;一般来说，可以。只需执行与迁移时相同的步骤，但使用托管在 Google 的仓库参数。
你可以在&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/setup/production-environment/tools/kubeadm/install-kubeadm&#34;&gt;“安装 kubeadm”&lt;/a&gt;等文档中找到这些参数。&lt;/p&gt;
&lt;!--
## Why isn’t there a stable list of domains/IPs? Why can’t I restrict package downloads?
--&gt;
&lt;h2 id=&#34;why-isn-t-there-a-stable-list-of-domains-ips-why-can-t-i-restrict-package-downloads&#34;&gt;为什么没有固定的域名/IP 列表？为什么我无法限制包下载？&lt;/h2&gt;
&lt;!--
Our plan for `pkgs.k8s.io` is to make it work as a redirector to a set of 
backends (package mirrors) based on user&#39;s location. The nature of this change
means that a user downloading a package could be redirected to any mirror at
any time. Given the architecture and our plans to onboard additional mirrors in
the near future, we can&#39;t provide a list of IP addresses or domains that you 
can add to an allow list.
--&gt;
&lt;p&gt;我们对 &lt;code&gt;pkgs.k8s.io&lt;/code&gt; 的计划是使其根据用户位置充当一组后端（包镜像）的重定向器。
此更改的本质意味着下载包的用户可以随时重定向到任何镜像。
鉴于架构和我们计划在不久的将来加入更多镜像，我们无法提供给你可以添加到允许列表中的
IP 地址或域名列表。&lt;/p&gt;
&lt;!--
Restrictive control mechanisms like man-in-the-middle proxies or network
policies that restrict access to a specific list of IPs/domains will break with
this change. For these scenarios, we encourage you to mirror the release
packages to a local package repository that you have strict control over.
--&gt;
&lt;p&gt;限制性控制机制（例如中间人代理，或将访问限制在特定 IP/域名列表的网络策略）将因此更改而失效。
对于这些场景，我们鼓励你将发布的包镜像到一个由你严格控制的本地包仓库中。&lt;/p&gt;
&lt;!--
## What should I do if I detect some abnormality with the new repositories?
--&gt;
&lt;h2 id=&#34;what-should-i-do-if-i-detect-some-abnormality-with-the-new-repositories&#34;&gt;如果我发现新的仓库有异常怎么办？&lt;/h2&gt;
&lt;!--
If you encounter any issue with new Kubernetes package repositories, please
file an issue in the
[`kubernetes/release` repository](https://github.com/kubernetes/release/issues/new/choose).
--&gt;
&lt;p&gt;如果你在新的 Kubernetes 仓库中遇到任何问题，
请在 &lt;a href=&#34;https://github.com/kubernetes/release/issues/new/choose&#34;&gt;&lt;code&gt;kubernetes/release&lt;/code&gt; 仓库&lt;/a&gt;中提交问题。&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>聚焦 SIG CLI</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/07/20/sig-cli-spotlight-2023/</link>
      <pubDate>Thu, 20 Jul 2023 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/07/20/sig-cli-spotlight-2023/</guid>
      <description>
        
        
        &lt;!--
layout: blog
title: &#34;Spotlight on SIG CLI&#34;
date: 2023-07-20
slug: sig-cli-spotlight-2023
canonicalUrl: https://www.kubernetes.dev/blog/2023/07/13/sig-cli-spotlight-2023/
--&gt;
&lt;!--
**Author**: Arpit Agrawal
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Arpit Agrawal&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者&lt;/strong&gt;：Xin Li (Daocloud)&lt;/p&gt;
&lt;!--
In the world of Kubernetes, managing containerized applications at
scale requires powerful and efficient tools. The command-line
interface (CLI) is an integral part of any developer or operator’s
toolkit, offering a convenient and flexible way to interact with a
Kubernetes cluster.
--&gt;
&lt;p&gt;在 Kubernetes 的世界中，大规模管理容器化应用程序需要强大而高效的工具。
命令行界面（CLI）是任何开发人员或操作人员工具包不可或缺的一部分，
其提供了一种方便灵活的方式与 Kubernetes 集群交互。&lt;/p&gt;
&lt;!--
SIG CLI plays a crucial role in improving the [Kubernetes
CLI](https://github.com/kubernetes/community/tree/master/sig-cli)
experience by focusing on the development and enhancement of
`kubectl`, the primary command-line tool for Kubernetes.
--&gt;
&lt;p&gt;SIG CLI 通过专注于 Kubernetes 主要命令行工具 &lt;code&gt;kubectl&lt;/code&gt; 的开发和增强，
在改善 &lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-cli&#34;&gt;Kubernetes CLI&lt;/a&gt;
体验方面发挥着至关重要的作用。&lt;/p&gt;
&lt;!--
In this SIG CLI Spotlight, Arpit Agrawal, SIG ContribEx-Comms team
member, talked with [Katrina Verey](https://github.com/KnVerey), Tech
Lead &amp; Chair of SIG CLI,and [Maciej
Szulik](https://github.com/soltysh), SIG CLI Batch Lead, about SIG
CLI, current projects, challenges and how anyone can get involved.
--&gt;
&lt;p&gt;在本次 SIG CLI 聚焦中，SIG ContribEx-Comms 团队成员 Arpit Agrawal 与
SIG CLI 技术主管兼主席 &lt;a href=&#34;https://github.com/KnVerey&#34;&gt;Katrina Verey&lt;/a&gt;
和 SIG CLI Batch 主管 &lt;a href=&#34;https://github.com/soltysh&#34;&gt;Maciej Szulik&lt;/a&gt;
讨论了 SIG CLI、当前的项目与挑战，以及任何人如何参与其中。&lt;/p&gt;
&lt;!--
So, whether you are a seasoned Kubernetes enthusiast or just getting
started, understanding the significance of SIG CLI will undoubtedly
enhance your Kubernetes journey.
--&gt;
&lt;p&gt;因此，无论你是经验丰富的 Kubernetes 爱好者还是刚刚入门，了解
SIG CLI 的重要性无疑将增强你的 Kubernetes 之旅。&lt;/p&gt;
&lt;!--
## Introductions

**Arpit**: Could you tell us a bit about yourself, your role, and how
you got involved in SIG CLI?
--&gt;
&lt;h2 id=&#34;简介&#34;&gt;简介&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Arpit&lt;/strong&gt;：你们能否向我们介绍一下你自己、你的角色以及你是如何参与 SIG CLI 的？&lt;/p&gt;
&lt;!--
**Maciej**: I’m one of the technical leads for SIG-CLI. I was working
on Kubernetes in multiple areas since 2014, and in 2018 I got
appointed a lead.
--&gt;
&lt;p&gt;&lt;strong&gt;Maciej&lt;/strong&gt;：我是 SIG-CLI 的技术负责人之一。自 2014 年以来，我一直在多个领域从事
Kubernetes 工作，并于 2018 年被任命为负责人。&lt;/p&gt;
&lt;!--
**Katrina**: I’ve been working with Kubernetes as an end-user since
2016, but it was only in late 2019 that I discovered how well SIG CLI
aligned with my experience from internal projects. I started regularly
attending meetings and made a few small PRs, and by 2021 I was working
more deeply with the
[Kustomize](https://github.com/kubernetes-sigs/kustomize) team
specifically. Later that year, I was appointed to my current roles as
subproject owner for Kustomize and KRM Functions, and as SIG CLI Tech
Lead and Chair.
--&gt;
&lt;p&gt;&lt;strong&gt;Katrina&lt;/strong&gt;：自 2016 年以来，我一直作为最终用户使用 Kubernetes，但直到 2019 年底，
我才发现 SIG CLI 与我在内部项目中的经验非常吻合。我开始定期参加会议并提交了一些小型 PR，
到 2021 年，我专门与 &lt;a href=&#34;https://github.com/kubernetes-sigs/kustomize&#34;&gt;Kustomize&lt;/a&gt;
团队进行了更深入的合作。同年晚些时候，我被任命担任目前的职务，担任 Kustomize 和
KRM Functions 的子项目 owner 以及 SIG CLI 技术主管和负责人。&lt;/p&gt;
&lt;!--
## About SIG CLI

**Arpit**: Thank you! Could you share with us the purpose and goals of SIG CLI?
--&gt;
&lt;h2 id=&#34;关于-sig-cli&#34;&gt;关于 SIG CLI&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Arpit&lt;/strong&gt;：谢谢！你们能否与我们分享一下 SIG CLI 的宗旨和目标？&lt;/p&gt;
&lt;!--
**Maciej**: Our
[charter](https://github.com/kubernetes/community/tree/master/sig-cli/)
has the most detailed description, but in few words, we handle all CLI
tooling that helps you manage your Kubernetes manifests and interact
with your Kubernetes clusters.
--&gt;
&lt;p&gt;&lt;strong&gt;Maciej&lt;/strong&gt;：我们的&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-cli/&#34;&gt;章程&lt;/a&gt;有最详细的描述，
但简而言之，我们处理所有 CLI 工具，帮助你管理 Kubernetes 资源清单以及与 Kubernetes 集群进行交互。&lt;/p&gt;
&lt;!--
**Arpit**: I see. And how does SIG CLI work to promote best-practices
for CLI development and usage in the cloud native ecosystem?
--&gt;
&lt;p&gt;&lt;strong&gt;Arpit&lt;/strong&gt;：我明白了。请问 SIG CLI 如何致力于推广云原生生态系统中 CLI 开发和使用的最佳实践？&lt;/p&gt;
&lt;!--
**Maciej**: Within `kubectl`, we have several on-going efforts that
try to encourage new contributors to align existing commands to new
standards. We publish several libraries which hopefully make it easier
to write CLIs that interact with Kubernetes APIs, such as cli-runtime
and
[kyaml](https://github.com/kubernetes-sigs/kustomize/tree/master/kyaml).
--&gt;
&lt;p&gt;&lt;strong&gt;Maciej&lt;/strong&gt;：在 &lt;code&gt;kubectl&lt;/code&gt; 中，我们正在进行多项努力，试图鼓励新的贡献者将现有命令与新标准保持一致。
我们发布了几个库，希望能够更轻松地编写与 Kubernetes API 交互的 CLI，例如 cli-runtime 和
&lt;a href=&#34;https://github.com/kubernetes-sigs/kustomize/tree/master/kyaml&#34;&gt;kyaml&lt;/a&gt;。&lt;/p&gt;
&lt;!--
**Katrina**: We also maintain some interoperability specifications for
CLI tooling, such as the [KRM Functions
Specification](https://github.com/kubernetes-sigs/kustomize/blob/master/cmd/config/docs/api-conventions/functions-spec.md)
(GA) and the new ApplySet
Specification
(alpha).
--&gt;
&lt;p&gt;&lt;strong&gt;Katrina&lt;/strong&gt;：我们还维护一些 CLI 工具的互操作性规范，例如
&lt;a href=&#34;https://github.com/kubernetes-sigs/kustomize/blob/master/cmd/config/docs/api-conventions/functions-spec.md&#34;&gt;KRM 函数规范&lt;/a&gt;（GA）
和新的 ApplySet 规范（Alpha）。&lt;/p&gt;
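&lt;!--
As a hypothetical illustration (the exact labels are defined by the alpha ApplySet
specification and may change), a member object of an ApplySet carries a label that
points back to its parent set:
--&gt;
&lt;p&gt;作为一个假设性的示意（具体标签由 Alpha 版 ApplySet 规范定义，后续可能变化），
ApplySet 的成员对象会携带一个指向其父集合的标签：&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# 假设性示例：ApplySet 成员对象上的标识标签
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
  labels:
    # 取值为父 ApplySet 的唯一 ID（此处以 PARENT-ID 作为占位符）
    applyset.kubernetes.io/part-of: PARENT-ID
&lt;/code&gt;&lt;/pre&gt;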
&lt;!--
## Current projects and challenges

**Arpit**: Going through the README file, it’s clear SIG CLI has a
number of subprojects, could you highlight some important ones?
--&gt;
&lt;h2 id=&#34;当前的项目和挑战&#34;&gt;当前的项目和挑战&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Arpit&lt;/strong&gt;：阅读了一遍 README 文件，发现 SIG CLI 有许多子项目，你能突出讲一些重要的子项目吗？&lt;/p&gt;
&lt;!--
**Maciej**: The four most active subprojects that are, in my opinion,
worthy of your time investment would be:

* [`kubectl`](https://github.com/kubernetes/kubectl):  the canonical Kubernetes CLI.
* [Kustomize](https://github.com/kubernetes-sigs/kustomize): a
  template-free customization tool for Kubernetes yaml manifest files.
* [KUI](https://kui.tools) - a GUI interface to Kubernetes, think
   `kubectl` on steroids.
* [`krew`](https://github.com/kubernetes-sigs/krew): a plugin manager for `kubectl`.
--&gt;
&lt;p&gt;&lt;strong&gt;Maciej&lt;/strong&gt;：在我看来，值得你投入时间的四个最活跃的子项目是：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/kubernetes/kubectl&#34;&gt;&lt;code&gt;kubectl&lt;/code&gt;&lt;/a&gt;：规范的 Kubernetes CLI。&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/kubernetes-sigs/kustomize&#34;&gt;Kustomize&lt;/a&gt;：Kubernetes yaml 清单文件的无模板定制工具。&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://kui.tools&#34;&gt;KUI&lt;/a&gt; - 一个针对 Kubernetes 的 GUI 界面，可以将其视为增强版的 &lt;code&gt;kubectl&lt;/code&gt;。&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/kubernetes-sigs/krew&#34;&gt;&lt;code&gt;krew&lt;/code&gt;&lt;/a&gt;：&lt;code&gt;kubectl&lt;/code&gt; 的插件管理器。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
**Arpit**: Are there any upcoming initiatives or developments that SIG
CLI is working on?

**Maciej**: There are always several initiatives we’re working on at
any given point in time. It’s best to join [one of our
calls](https://github.com/kubernetes/community/tree/master/sig-cli/#meetings)
to learn about the current ones.
--&gt;
&lt;p&gt;&lt;strong&gt;Arpit&lt;/strong&gt;：SIG CLI 是否有任何正在开展或即将开展的计划或开发工作？&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Maciej&lt;/strong&gt;：在任何给定的时间点，我们总是在开展多项举措。
最好加入&lt;a href=&#34;https://github.com/kubernetes/community/tree/master/sig-cli/#meetings&#34;&gt;我们的一个电话会议&lt;/a&gt;来了解当前的情况。&lt;/p&gt;
&lt;!--
**Katrina**: For major features, you can check out [our open
KEPs](https://www.kubernetes.dev/resources/keps/). For instance, in
1.27 we introduced alphas for [a new pruning mode in kubectl
apply](https://kubernetes.io/blog/2023/05/09/introducing-kubectl-applyset-pruning/),
and for kubectl create plugins. Exciting ideas that are currently
under discussion include an interactive mode for `kubectl` delete
([KEP 3895](https://github.com/kubernetes/enhancements/issues/3895))
and the `kuberc` user preferences file ([KEP
3104](https://github.com/kubernetes/enhancements/issues/3104)).
--&gt;
&lt;p&gt;对于主要功能，你可以查看&lt;a href=&#34;https://www.kubernetes.dev/resources/keps/&#34;&gt;我们的开放 KEP&lt;/a&gt;。
例如，在 1.27 中，我们为 &lt;a href=&#34;https://kubernetes.io/blog/2023/05/09/introducing-kubectl-applyset-pruning/&#34;&gt;kubectl apply 中的新裁剪模式&lt;/a&gt;
以及 kubectl create 插件引入了 Alpha 特性。
目前正在讨论的令人兴奋的想法包括 &lt;code&gt;kubectl&lt;/code&gt; delete 的交互模式（&lt;a href=&#34;https://github.com/kubernetes/enhancements/issues/3895&#34;&gt;KEP 3895&lt;/a&gt;）和
&lt;code&gt;kuberc&lt;/code&gt; 用户首选项文件（&lt;a href=&#34;https://github.com/kubernetes/enhancements/issues/3104&#34;&gt;KEP 3104&lt;/a&gt;）。&lt;/p&gt;
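&lt;!--
As a hedged sketch (this feature was alpha in v1.27 and required explicit opt-in;
the flags and environment variable may have changed in later releases), ApplySet-based
pruning could be invoked roughly as follows:
--&gt;
&lt;p&gt;作为一个带有保留的示意（该特性在 v1.27 中为 Alpha，需要显式启用，
相关标志和环境变量在后续版本中可能有所变化），基于 ApplySet 的裁剪大致可以这样调用：&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# 假设性示例：通过环境变量启用 Alpha 特性后，
# 以名为 my-set 的 ConfigMap 作为 ApplySet 的父对象进行应用和裁剪
KUBECTL_APPLYSET=true kubectl apply \
  --namespace my-namespace \
  --prune --applyset=configmap/my-set \
  -f manifests/
&lt;/code&gt;&lt;/pre&gt;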
&lt;!--
**Arpit**: Could you discuss any challenges that SIG CLI faces in its
efforts to improve CLIs for cloud-native technologies? What are the
future efforts to solve them?
--&gt;
&lt;p&gt;&lt;strong&gt;Arpit&lt;/strong&gt;：你们能否谈谈 SIG CLI 在改进云原生技术的 CLI 过程中面临的挑战？未来将采取哪些措施来解决这些问题？&lt;/p&gt;
&lt;!--
**Katrina**: The biggest challenge we’re facing with every decision is
backwards compatibility and ensuring we don’t break existing users. It
frequently happens that fixing what&#39;s on the surface may seem
straightforward, but even fixing a bug could constitute a breaking
change for some users, which means we need to go through an extended
deprecation process to change it, or in some cases we can’t change it
at all. Another challenge is the need to balance customization with
usability in the flag sets we expose on our tools. For example, we get
many proposals for new flags that would certainly be useful to some
users, but not a large enough subset to justify the increased
complexity having them in the tool entails for everyone. The `kuberc`
proposal may help with some of these problems by giving individual
users the ability to set or override default values we can’t change,
and even create custom subcommands via aliases
--&gt;
&lt;p&gt;&lt;strong&gt;Katrina&lt;/strong&gt;：我们每个决定面临的最大挑战是向后兼容性，以及确保我们不会破坏现有用户的使用。
经常发生的情况是，表面上的修复看似简单，但即使是修复一个 bug，对某些用户来说也可能构成破坏性变更，
这意味着我们需要经历一个较长的弃用过程才能更改它，或者在某些情况下我们根本无法更改它。
另一个挑战是需要在我们工具所公开的标志集中平衡定制性与可用性。例如，我们收到许多新增标志的提案，
这些标志对某些用户肯定有用，但受益的用户群不够大，不足以证明将它们加入工具后给所有人带来的额外复杂性是值得的。
&lt;code&gt;kuberc&lt;/code&gt; 提案可能有助于解决其中一些问题：它让个人用户能够设置或覆盖我们无法更改的默认值，
甚至通过别名创建自定义子命令。&lt;/p&gt;
&lt;!--
**Arpit**: With every new version release of Kubernetes, maintaining
consistency and integrity is surely challenging: how does the SIG CLI
team tackle it?
--&gt;
&lt;p&gt;&lt;strong&gt;Arpit&lt;/strong&gt;：随着 Kubernetes 的每个新版本的发布，保持一致性和完整性无疑是一项挑战：
SIG CLI 团队如何解决这个问题？&lt;/p&gt;
&lt;!--
**Maciej**: This is mostly similar to the topic mentioned in the
previous question: every new change, especially to existing commands
goes through a lot of scrutiny to ensure we don’t break existing
users. At any point in time we have to keep a reasonable balance
between features and not breaking users.
--&gt;
&lt;p&gt;&lt;strong&gt;Maciej&lt;/strong&gt;：这与上一个问题中提到的主题非常相似：每一个新的更改，尤其是对现有命令的更改，
都会经过大量的审查，以确保我们不会影响现有用户。在任何时候我们都必须在功能和不影响用户之间保持合理的平衡。&lt;/p&gt;
&lt;!--
## Future plans and contribution

**Arpit**: How do you see the role of CLI tools in the cloud-native
ecosystem evolving in the future?
--&gt;
&lt;h2 id=&#34;未来计划及贡献&#34;&gt;未来计划及贡献&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Arpit&lt;/strong&gt;：你们如何看待 CLI 工具在未来云原生生态系统中的作用？&lt;/p&gt;
&lt;!--
**Maciej**: I think that CLI tools were and will always be an
important piece of the ecosystem. Whether used by administrators on
remote machines that don’t have GUI or in every CI/CD pipeline, they
are irreplaceable.
--&gt;
&lt;p&gt;&lt;strong&gt;Maciej&lt;/strong&gt;：我认为 CLI 工具曾经并将永远是生态系统的重要组成部分。
无论是管理员在没有 GUI 的远程计算机上还是在每个 CI/CD 管道中使用，它们都是不可替代的。&lt;/p&gt;
&lt;!--
**Arpit**: Kubernetes is a community-driven project. Any
recommendation for anyone looking into getting involved in SIG CLI
work? Where should they start? Are there any prerequisites?

**Maciej**: There are no prerequisites other than a little bit of free
time on your hands and willingness to learn something new :-)
--&gt;
&lt;p&gt;&lt;strong&gt;Arpit&lt;/strong&gt;：Kubernetes 是一个社区驱动的项目。对于想要参与 SIG CLI 工作的人有什么建议吗？
他们应该从哪里开始？有什么先决条件吗？&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Maciej&lt;/strong&gt;：除了有一点空闲时间和学习新东西的意愿之外，没有任何先决条件 :-)&lt;/p&gt;
&lt;!--
**Katrina**: A working knowledge of [Go](https://go.dev/) often helps,
but we also have areas in need of non-code contributions, such as the
[Kustomize docs consolidation
project](https://github.com/kubernetes-sigs/kustomize/issues/4338).
--&gt;
&lt;p&gt;&lt;strong&gt;Katrina&lt;/strong&gt;：掌握一定的 &lt;a href=&#34;https://go.dev/&#34;&gt;Go&lt;/a&gt; 知识通常会有所帮助，但我们也有需要非代码贡献的领域，
例如 &lt;a href=&#34;https://github.com/kubernetes-sigs/kustomize/issues/4338&#34;&gt;Kustomize 文档整合项目&lt;/a&gt;。&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Kubernetes 机密：使用机密虚拟机和安全区来增强你的集群安全性</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/07/06/confidential-kubernetes/</link>
      <pubDate>Thu, 06 Jul 2023 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/07/06/confidential-kubernetes/</guid>
      <description>
        
        
        &lt;!--
layout: blog
title: &#34;Confidential Kubernetes: Use Confidential Virtual Machines and Enclaves to improve your cluster security&#34;
date: 2023-07-06
slug: &#34;confidential-kubernetes&#34;
--&gt;
&lt;!--
**Authors:** Fabian Kammel (Edgeless Systems), Mikko Ylinen (Intel), Tobin Feldman-Fitzthum (IBM)
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Fabian Kammel (Edgeless Systems), Mikko Ylinen (Intel), Tobin Feldman-Fitzthum (IBM)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者&lt;/strong&gt;：&lt;a href=&#34;https://github.com/asa3311&#34;&gt;顾欣&lt;/a&gt;&lt;/p&gt;
&lt;!--
In this blog post, we will introduce the concept of Confidential Computing (CC) to improve any computing environment&#39;s security and privacy properties. Further, we will show how
the Cloud-Native ecosystem, particularly Kubernetes, can benefit from the new compute paradigm.

Confidential Computing is a concept that has been introduced previously in the cloud-native world. The
[Confidential Computing Consortium](https://confidentialcomputing.io/) (CCC) is a project community in the Linux Foundation
that already worked on
[Defining and Enabling Confidential Computing](https://confidentialcomputing.io/wp-content/uploads/sites/85/2019/12/CCC_Overview.pdf).
In the [Whitepaper](https://confidentialcomputing.io/wp-content/uploads/sites/85/2023/01/CCC-A-Technical-Analysis-of-Confidential-Computing-v1.3_Updated_November_2022.pdf),
they provide a great motivation for the use of Confidential Computing:

   &gt; Data exists in three states: in transit, at rest, and in use. …Protecting sensitive data
   &gt; in all of its states is more critical than ever. Cryptography is now commonly deployed
   &gt; to provide both data confidentiality (stopping unauthorized viewing) and data integrity
   &gt; (preventing or detecting unauthorized changes). While techniques to protect data in transit
   &gt; and at rest are now commonly deployed, the third state - protecting data in use - is the new frontier.

Confidential Computing aims to primarily solve the problem of **protecting data in use**
by introducing a hardware-enforced Trusted Execution Environment (TEE).
--&gt;
&lt;p&gt;在这篇博客文章中，我们将介绍机密计算（Confidential Computing，简称 CC）的概念，
它可用于增强任何计算环境的安全和隐私属性。此外，我们将展示云原生生态系统，
特别是 Kubernetes，如何从这种新的计算范式中受益。&lt;/p&gt;
&lt;p&gt;机密计算是一个先前在云原生领域中引入的概念。
&lt;a href=&#34;https://confidentialcomputing.io/&#34;&gt;机密计算联盟&lt;/a&gt;(Confidential Computing Consortium，简称 CCC)
是 Linux 基金会中的一个项目社区，
致力于&lt;a href=&#34;https://confidentialcomputing.io/wp-content/uploads/sites/85/2019/12/CCC_Overview.pdf&#34;&gt;定义和启用机密计算&lt;/a&gt;。
在&lt;a href=&#34;https://confidentialcomputing.io/wp-content/uploads/sites/85/2023/01/CCC-A-Technical-Analysis-of-Confidential-Computing-v1.3_Updated_November_2022.pdf&#34;&gt;白皮书&lt;/a&gt;中，
他们为使用机密计算提供了很好的动机：&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;数据存在于三种状态：传输中、静态存储和使用中。保护所有状态下的敏感数据比以往任何时候都更加关键。
现在加密技术常被部署以提供数据机密性（阻止未经授权的查看）和数据完整性（防止或检测未经授权的更改）。
虽然现在通常部署了保护传输中和静态存储中的数据的技术，但保护使用中的数据是新的前沿。&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;机密计算主要通过引入硬件强制执行的可信执行环境（TEE）来解决&lt;strong&gt;保护使用中的数据&lt;/strong&gt;的问题。&lt;/p&gt;
&lt;!--
## Trusted Execution Environments

For more than a decade, Trusted Execution Environments (TEEs) have been available in commercial
computing hardware in the form of [Hardware Security Modules](https://en.wikipedia.org/wiki/Hardware_security_module)
(HSMs) and [Trusted Platform Modules](https://www.iso.org/standard/50970.html) (TPMs). These
technologies provide trusted environments for shielded computations. They can
store highly sensitive cryptographic keys and carry out critical cryptographic operations
such as signing or encrypting data.
--&gt;
&lt;h2 id=&#34;trusted-execution-environments&#34;&gt;可信执行环境 &lt;/h2&gt;
&lt;p&gt;在过去的十多年里，可信执行环境（Trusted Execution Environments，简称 TEEs）
以&lt;a href=&#34;https://zh.wikipedia.org/zh-cn/%E7%A1%AC%E4%BB%B6%E5%AE%89%E5%85%A8%E6%A8%A1%E5%9D%97&#34;&gt;硬件安全模块&lt;/a&gt;（Hardware Security Modules，简称 HSMs）
和&lt;a href=&#34;https://www.iso.org/standard/50970.html&#34;&gt;可信平台模块&lt;/a&gt;（Trusted Platform Modules，简称 TPMs）
的形式在商业计算硬件中得以应用。这些技术提供了可信的环境来进行受保护的计算。
它们可以存储高度敏感的加密密钥，并执行关键的加密操作，如签名或加密数据。&lt;/p&gt;
&lt;!--
TPMs are optimized for low cost, allowing them to be integrated into mainboards and act as a
system&#39;s physical root of trust. To keep the cost low, TPMs are limited in scope, i.e., they
provide storage for only a few keys and are capable of just a small subset of cryptographic operations.

In contrast, HSMs are optimized for high performance, providing secure storage for far
more keys and offering advanced physical attack detection mechanisms. Additionally, high-end HSMs
can be programmed so that arbitrary code can be compiled and executed. The downside
is that they are very costly. A managed CloudHSM from AWS costs
[around $1.50 / hour](https://aws.amazon.com/cloudhsm/pricing/) or ~$13,500 / year.
--&gt;
&lt;p&gt;TPMs 针对低成本进行了优化，使它们能够集成到主板中并充当系统的物理信任根。
为了保持低成本，TPMs 的功能范围受到限制，即它们只能存储少量的密钥，并且仅能执行一小部分的加密操作。&lt;/p&gt;
&lt;p&gt;相比之下，HSMs 针对高性能进行了优化，为多得多的密钥提供安全存储，并提供高级的物理攻击检测机制。
此外，高端 HSMs 可以编程，以便编译和执行任意代码。缺点是它们的成本非常高。
来自 AWS 的托管 CloudHSM 的费用大约是&lt;a href=&#34;https://aws.amazon.com/cloudhsm/pricing/&#34;&gt;每小时 1.50 美元&lt;/a&gt;，
或者约每年 13,500 美元。&lt;/p&gt;
&lt;!--
In recent years, a new kind of TEE has gained popularity. Technologies like
[AMD SEV](https://developer.amd.com/sev/),
[Intel SGX](https://www.intel.com/content/www/us/en/developer/tools/software-guard-extensions/overview.html),
and [Intel TDX](https://www.intel.com/content/www/us/en/developer/articles/technical/intel-trust-domain-extensions.html)
provide TEEs that are closely integrated with userspace. Rather than low-power or high-performance
devices that support specific use cases, these TEEs shield normal processes or virtual machines
and can do so with relatively low overhead. These technologies each have different design goals,
advantages, and limitations, and they are available in different environments, including consumer
laptops, servers, and mobile devices.
--&gt;
&lt;p&gt;近年来，一种新型的 TEE 已经变得流行。
像 &lt;a href=&#34;https://developer.amd.com/sev/&#34;&gt;AMD SEV&lt;/a&gt;、
&lt;a href=&#34;https://www.intel.com/content/www/us/en/developer/tools/software-guard-extensions/overview.html&#34;&gt;Intel SGX&lt;/a&gt;
和 &lt;a href=&#34;https://www.intel.com/content/www/us/en/developer/articles/technical/intel-trust-domain-extensions.html&#34;&gt;Intel TDX&lt;/a&gt;
这样的技术提供了与用户空间紧密集成的 TEE。不同于面向特定使用场景的低功耗或高性能设备，
这些 TEE 保护的是普通进程或虚拟机，并且可以以相对较低的开销做到这一点。
这些技术各有不同的设计目标、优点和局限性，
并且在不同的环境中可用，包括消费者笔记本电脑、服务器和移动设备。&lt;/p&gt;
&lt;!--
Additionally, we should mention
[ARM TrustZone](https://www.arm.com/technologies/trustzone-for-cortex-a), which is optimized
for embedded devices such as smartphones, tablets, and smart TVs, as well as
[AWS Nitro Enclaves](https://aws.amazon.com/ec2/nitro/nitro-enclaves/), which are only available
on [Amazon Web Services](https://aws.amazon.com/) and have a different threat model compared
to the CPU-based solutions by Intel and AMD.
--&gt;
&lt;p&gt;此外，我们应该提及 &lt;a href=&#34;https://www.arm.com/technologies/trustzone-for-cortex-a&#34;&gt;ARM TrustZone&lt;/a&gt;，
它针对智能手机、平板电脑和智能电视等嵌入式设备进行了优化，
以及 &lt;a href=&#34;https://aws.amazon.com/ec2/nitro/nitro-enclaves/&#34;&gt;AWS Nitro Enclaves&lt;/a&gt;，
它们只在 &lt;a href=&#34;https://aws.amazon.com/&#34;&gt;Amazon Web Services&lt;/a&gt; 上可用，
并且与 Intel 和 AMD 的基于 CPU 的解决方案相比，具有不同的威胁模型。&lt;/p&gt;
&lt;!--
[IBM Secure Execution for Linux](https://www.ibm.com/docs/en/linux-on-systems?topic=virtualization-secure-execution)
lets you run your Kubernetes cluster&#39;s nodes as KVM guests within a trusted execution environment on
IBM Z series hardware. You can use this hardware-enhanced virtual machine isolation to
provide strong isolation between tenants in a cluster, with hardware attestation about the (virtual) node&#39;s integrity.
--&gt;
&lt;p&gt;&lt;a href=&#34;https://www.ibm.com/docs/en/linux-on-systems?topic=virtualization-secure-execution&#34;&gt;IBM Secure Execution for Linux&lt;/a&gt;
允许你在 IBM Z 系列硬件上的可信执行环境内以 KVM 客户机的形式运行 Kubernetes 集群的节点。
你可以使用这种硬件增强的虚拟机隔离机制为集群中的租户之间提供稳固的隔离，
并通过硬件验证提供关于（虚拟）节点完整性的信息。&lt;/p&gt;
&lt;!--
### Security properties and feature set

In the following sections, we will review the security properties and additional features
these new technologies bring to the table. Only some solutions will provide all properties;
we will discuss each technology in further detail in their respective section.
--&gt;
&lt;h3 id=&#34;security-properties-and-feature-set&#34;&gt;安全属性和功能集 &lt;/h3&gt;
&lt;p&gt;下文将回顾这些新技术所带来的安全属性和额外功能。
只有部分解决方案会提供所有属性；我们将在各自的小节中更详细地讨论每项技术。&lt;/p&gt;
&lt;!--
The **Confidentiality** property ensures that information cannot be viewed while it is
in use in the TEE. This provides us with the highly desired feature to secure
**data in use**. Depending on the specific TEE used, both code and data may be protected
from outside viewers. The differences in TEE architectures and how their use
in a cloud native context are important considerations when designing end-to-end security
for sensitive workloads with a minimal **Trusted Computing Base** (TCB) in mind. CCC has recently
worked on a [common vocabulary and supporting material](https://confidentialcomputing.io/wp-content/uploads/sites/85/2023/01/Common-Terminology-for-Confidential-Computing.pdf)
that helps to explain where confidentiality boundaries are drawn with the different TEE
architectures and how that impacts the TCB size.
--&gt;
&lt;p&gt;&lt;strong&gt;机密性&lt;/strong&gt;属性确保信息在 TEE 中使用时无法被查看。这为我们提供了非常需要的功能以保护&lt;strong&gt;使用中的数据&lt;/strong&gt;。
根据使用的特定 TEE，代码和数据都可能受到保护，免遭外部查看。
在以最小&lt;strong&gt;可信计算基础&lt;/strong&gt;（Trusted Computing Base，简称 TCB）为目标、为敏感工作负载设计端到端安全方案时，
TEE 架构之间的差异及其在云原生环境中的使用方式是重要的考虑因素。
CCC 最近制定了&lt;a href=&#34;https://confidentialcomputing.io/wp-content/uploads/sites/85/2023/01/Common-Terminology-for-Confidential-Computing.pdf&#34;&gt;通用术语和支持材料&lt;/a&gt;，
以帮助解释在不同的 TEE 架构下机密性边界的划分，以及这如何影响 TCB 的大小。&lt;/p&gt;
&lt;!--
Confidentiality is a great feature, but an attacker can still manipulate
or inject arbitrary code and data for the TEE to execute and, therefore, easily leak critical
information. **Integrity** guarantees a TEE owner that neither code nor data can be
tampered with while running critical computations.
--&gt;
&lt;p&gt;机密性是一个很好的特性，但攻击者仍然可以操纵或注入任意代码和数据供 TEE 执行，
因此，很容易泄露关键信息。&lt;strong&gt;完整性&lt;/strong&gt;保证 TEE 拥有者在运行关键计算时，代码和数据都不能被篡改。&lt;/p&gt;
&lt;!--
**Availability** is a basic property often discussed in the context of information
security. However, this property is outside the scope of most TEEs. Usually, they can be controlled
(shut down, restarted, …) by some higher level abstraction. This could be the CPU itself, the
hypervisor, or the kernel. This is to preserve the overall system&#39;s availability,
not the TEE itself. When running in the cloud, availability is usually guaranteed by
the cloud provider in terms of Service Level Agreements (SLAs) and is not cryptographically enforceable.
--&gt;
&lt;p&gt;&lt;strong&gt;可用性&lt;/strong&gt;是在信息安全背景下经常讨论的一项基本属性。然而，这一属性超出了大多数 TEE 的范围。
通常，它们可以被一些更高级别的抽象控制（关闭、重启...）。这可以是 CPU 本身、虚拟机监视器或内核。
这是为了保持整个系统的可用性，而不是 TEE 本身。在云环境中运行时，
可用性通常由云提供商以服务级别协议（Service Level Agreements，简称 SLAs）的形式保证，
并且不能通过加密强制执行。&lt;/p&gt;
&lt;!--
Confidentiality and Integrity by themselves are only helpful in some cases. For example,
consider a TEE running in a remote cloud. How would you know the TEE is genuine and running
your intended software? It could be an imposter stealing your data as soon as you send it over.
This fundamental problem is addressed by **Attestability**. Attestation allows us to verify
the identity, confidentiality, and integrity of TEEs based on cryptographic certificates issued
from the hardware itself. This feature can also be made available to clients outside of the
confidential computing hardware in the form of remote attestation.
--&gt;
&lt;p&gt;仅凭机密性和完整性只在某些情况下有帮助。例如，考虑一个在远程云中运行的 TEE。
你如何知道该 TEE 是真实的，并且正在运行你预期的软件？它可能是一个冒名顶替者，
在你发送数据的那一刻就将其窃取。这个根本问题通过&lt;strong&gt;可验证性&lt;/strong&gt;得到解决。
验证允许我们基于硬件本身签发的加密证书来验证 TEE 的身份、机密性和完整性。
这个功能也可以以远程验证的形式提供给机密计算硬件之外的客户端使用。&lt;/p&gt;
&lt;!--
TEEs can hold and process information that predates or outlives the trusted environment. That
could mean across restarts, different versions, or platform migrations. Therefore **Recoverability**
is an important feature. Data and the state of a TEE need to be sealed before they are written
to persistent storage to maintain confidentiality and integrity guarantees. The access to such
sealed data needs to be well-defined. In most cases, the unsealing is bound to a TEE&#39;s identity.
Hence, making sure the recovery can only happen in the same confidential context.
--&gt;
&lt;p&gt;TEEs 可以保存和处理早于可信环境产生或比其存活更久的信息。这可能意味着信息要跨越重启、不同版本或平台迁移而保留。
因此，&lt;strong&gt;可恢复性&lt;/strong&gt;是一个重要的特性。数据和 TEE 的状态在写入持久性存储之前需要先封装，
以维持机密性和完整性保证。对这类封装数据的访问需要有明确定义。在大多数情况下，
解封操作与 TEE 的身份绑定，从而确保恢复只能在相同的机密环境中进行。&lt;/p&gt;
&lt;!--
This does not have to limit the flexibility of the overall system.
[AMD SEV-SNP&#39;s migration agent (MA)](https://www.amd.com/system/files/TechDocs/SEV-SNP-strengthening-vm-isolation-with-integrity-protection-and-more.pdf)
allows users to migrate a confidential virtual machine to a different host system
while keeping the security properties of the TEE intact.
--&gt;
&lt;p&gt;这不必限制整个系统的灵活性。
&lt;a href=&#34;https://www.amd.com/system/files/TechDocs/SEV-SNP-strengthening-vm-isolation-with-integrity-protection-and-more.pdf&#34;&gt;AMD SEV-SNP 的迁移代理 (MA)&lt;/a&gt;
允许用户将机密虚拟机迁移到不同的主机系统，同时保持 TEE 的安全属性不变。&lt;/p&gt;
&lt;!--
## Feature comparison

These sections of the article will dive a little bit deeper into the specific implementations,
compare supported features and analyze their security properties.
--&gt;
&lt;h2 id=&#34;feature-comparison&#34;&gt;功能比较 &lt;/h2&gt;
&lt;p&gt;本文的这部分将更深入地探讨具体的实现，比较支持的功能并分析它们的安全属性。&lt;/p&gt;
&lt;!--
### AMD SEV

AMD&#39;s [Secure Encrypted Virtualization (SEV)](https://developer.amd.com/sev/) technologies
are a set of features to enhance the security of virtual machines on AMD&#39;s server CPUs. SEV
transparently encrypts the memory of each VM with a unique key. SEV can also calculate a
signature of the memory contents, which can be sent to the VM&#39;s owner as an attestation that
the initial guest memory was not manipulated.
--&gt;
&lt;h3 id=&#34;amd-sev&#34;&gt;AMD SEV &lt;/h3&gt;
&lt;p&gt;AMD 的&lt;a href=&#34;https://developer.amd.com/sev/&#34;&gt;安全加密虚拟化 (SEV)&lt;/a&gt;技术是一组功能，
用于增强 AMD 服务器 CPU 上虚拟机的安全性。SEV 透明地用唯一密钥加密每个 VM 的内存。
SEV 还可以计算内存内容的签名，该签名可以作为证明初始客户机内存没有被篡改的依据发送给 VM 的所有者。&lt;/p&gt;
&lt;!--
The second generation of SEV, known as
[Encrypted State](https://www.amd.com/content/dam/amd/en/documents/epyc-business-docs/white-papers/Protecting-VM-Register-State-with-SEV-ES.pdf)
or SEV-ES, provides additional protection from the hypervisor by encrypting all
CPU register contents when a context switch occurs.
--&gt;
&lt;p&gt;SEV 的第二代，称为&lt;a href=&#34;https://www.amd.com/content/dam/amd/en/documents/epyc-business-docs/white-papers/Protecting-VM-Register-State-with-SEV-ES.pdf&#34;&gt;加密状态&lt;/a&gt;
或 SEV-ES，通过在发生上下文切换时加密所有 CPU 寄存器内容，提供了对虚拟机管理程序的额外保护。&lt;/p&gt;
&lt;!--
The third generation of SEV,
[Secure Nested Paging](https://www.amd.com/system/files/TechDocs/SEV-SNP-strengthening-vm-isolation-with-integrity-protection-and-more.pdf)
or SEV-SNP, is designed to prevent software-based integrity attacks and reduce the risk associated with
compromised memory integrity. The basic principle of SEV-SNP integrity is that if a VM can read
a private (encrypted) memory page, it must always read the value it last wrote.

Additionally, by allowing the guest to obtain remote attestation statements dynamically,
SNP enhances the remote attestation capabilities of SEV.
--&gt;
&lt;p&gt;SEV 的第三代，&lt;a href=&#34;https://www.amd.com/system/files/TechDocs/SEV-SNP-strengthening-vm-isolation-with-integrity-protection-and-more.pdf&#34;&gt;安全嵌套分页&lt;/a&gt;
或 SEV-SNP，旨在防止基于软件的完整性攻击并降低受损内存完整性相关的风险。
SEV-SNP 完整性的基本原则是，如果虚拟机可以读取私有（加密）内存页，
那么它必须始终读取它最后写入的值。&lt;/p&gt;
&lt;p&gt;此外，通过允许客户机动态获取远程验证声明，SNP 增强了 SEV 的远程验证能力。&lt;/p&gt;
&lt;!--
AMD SEV has been implemented incrementally. New features and improvements have been added with
each new CPU generation. The Linux community makes these features available as part of the KVM hypervisor
and for host and guest kernels. The first SEV features were discussed and implemented in 2016 - see
[AMD x86 Memory Encryption Technologies](https://www.usenix.org/conference/usenixsecurity16/technical-sessions/presentation/kaplan)
from the 2016 Usenix Security Symposium. The latest big addition was
[SEV-SNP guest support in Linux 5.19](https://www.phoronix.com/news/AMD-SEV-SNP-Arrives-Linux-5.19).

[Confidential VMs based on AMD SEV-SNP](https://azure.microsoft.com/en-us/updates/azureconfidentialvm/)
are available in Microsoft Azure since July 2022. Similarly, Google Cloud Platform (GCP) offers
[confidential VMs based on AMD SEV-ES](https://cloud.google.com/compute/confidential-vm/docs/about-cvm).
--&gt;
&lt;p&gt;AMD SEV 是以增量方式实现的。每一代新 CPU 都会增加新功能和改进。
Linux 社区将这些功能作为 KVM 虚拟机管理程序的一部分提供，并支持主机和客户机内核。
第一批 SEV 功能在 2016 年被讨论并实现 - 参见 2016 年 Usenix 安全研讨会的
&lt;a href=&#34;https://www.usenix.org/conference/usenixsecurity16/technical-sessions/presentation/kaplan&#34;&gt;AMD x86 内存加密技术&lt;/a&gt;。
最新的重大补充是 &lt;a href=&#34;https://www.phoronix.com/news/AMD-SEV-SNP-Arrives-Linux-5.19&#34;&gt;Linux 5.19 中的 SEV-SNP 客户机支持&lt;/a&gt;。&lt;/p&gt;
&lt;p&gt;自 2022 年 7 月以来，Microsoft Azure 提供基于
&lt;a href=&#34;https://azure.microsoft.com/en-us/updates/azureconfidentialvm/&#34;&gt;AMD SEV-SNP 的机密虚拟机&lt;/a&gt;。
类似地，Google Cloud Platform (GCP) 提供基于
&lt;a href=&#34;https://cloud.google.com/compute/confidential-vm/docs/about-cvm&#34;&gt;AMD SEV-ES 的机密虚拟机&lt;/a&gt;。&lt;/p&gt;
&lt;!--
### Intel SGX

Intel&#39;s
[Software Guard Extensions](https://www.intel.com/content/www/us/en/developer/tools/software-guard-extensions/overview.html)
has been available since 2015 and were introduced with the Skylake architecture.
--&gt;
&lt;h3 id=&#34;intel-sgx&#34;&gt;Intel SGX &lt;/h3&gt;
&lt;p&gt;Intel 的&lt;a href=&#34;https://www.intel.com/content/www/us/en/developer/tools/software-guard-extensions/overview.html&#34;&gt;软件防护扩展&lt;/a&gt;
自 2015 年起便已推出，并在 Skylake 架构中首次亮相。&lt;/p&gt;
&lt;!--
SGX is an instruction set that enables users to create a protected and isolated process called
an *enclave*. It provides a reverse sandbox that protects enclaves from the operating system,
firmware, and any other privileged execution context.
--&gt;
&lt;p&gt;SGX 是一套指令集，它使用户能够创建一个叫做 &lt;em&gt;Enclave&lt;/em&gt; 的受保护且隔离的进程。
它提供了一个反沙箱机制，保护 Enclave 不受操作系统、固件以及任何其他特权执行上下文的影响。&lt;/p&gt;
&lt;!--
The enclave memory cannot be read or written from outside the enclave, regardless of
the current privilege level and CPU mode. The only way to call an enclave function is
through a new instruction that performs several protection checks. Its memory is encrypted.
Tapping the memory or connecting the DRAM modules to another system will yield only encrypted
data. The memory encryption key randomly changes every power cycle. The key is stored
within the CPU and is not accessible.
--&gt;
&lt;p&gt;Enclave 内存无法从 Enclave 外部读取或写入，无论当前的权限级别和 CPU 模式如何。
调用 Enclave 功能的唯一方式是通过一条执行多个保护检查的新指令。Enclave 的内存是加密的。
窃听内存或将 DRAM 模块连接到另一个系统只会得到加密数据。内存加密密钥在每次上电周期时随机更改。
密钥存储在 CPU 内部，无法访问。&lt;/p&gt;
&lt;!--
Since the enclaves are process isolated, the operating system&#39;s libraries are not usable as is;
therefore, SGX enclave SDKs are required to compile programs for SGX. This also implies applications
need to be designed and implemented to consider the trusted/untrusted isolation boundaries.
On the other hand, applications get built with very minimal TCB.
--&gt;
&lt;p&gt;由于 Enclave 是进程隔离的，操作系统的库不能直接使用；
因此，需要 SGX Enclave SDK 来编译针对 SGX 的程序。
这也意味着应用程序需要在设计和实现时考虑受信任/不受信任的隔离边界。
另一方面，这样构建出来的应用程序具有非常小的 TCB。&lt;/p&gt;
&lt;!--
An emerging approach to easily transition to process-based confidential computing
and avoid the need to build custom applications is to utilize library OSes. These OSes
facilitate running native, unmodified Linux applications inside SGX enclaves.
A library OS intercepts all application requests to the host OS and processes them securely
without the application knowing it&#39;s running a TEE.
--&gt;
&lt;p&gt;一种新兴的方法是利用库操作系统（library OS）来轻松过渡到基于进程的机密计算，并避免构建自定义应用程序。
这类操作系统有助于在 SGX Enclave 内运行原生的、未经修改的 Linux 应用程序。
库操作系统会拦截应用对宿主机操作系统的所有请求，并在应用不知情的情况下安全地处理它们，
而应用实际上是在一个受信执行环境（TEE）中运行。&lt;/p&gt;
&lt;!--
The 3rd generation Xeon CPUs (aka Ice Lake Server - &#34;ICX&#34;) and later generations did switch to using a technology called
[Total Memory Encryption - Multi-Key](https://www.intel.com/content/www/us/en/developer/articles/news/runtime-encryption-of-memory-with-intel-tme-mk.html)
(TME-MK) that uses AES-XTS, moving away from the
[Memory Encryption Engine](https://eprint.iacr.org/2016/204.pdf)
that the consumer and Xeon E CPUs used. This increased the possible
[enclave page cache](https://sgx101.gitbook.io/sgx101/sgx-bootstrap/enclave#enclave-page-cache-epc)
(EPC) size (up to 512GB/CPU) and improved performance. More info
about SGX on multi-socket platforms can be found in the
[Whitepaper](https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/supporting-intel-sgx-on-mulit-socket-platforms.pdf).
--&gt;
&lt;p&gt;第三代 Xeon 处理器（又称为 Ice Lake 服务器 - &amp;quot;ICX&amp;quot;）及其后续版本改用了一种名为
&lt;a href=&#34;https://www.intel.com/content/www/us/en/developer/articles/news/runtime-encryption-of-memory-with-intel-tme-mk.html&#34;&gt;全内存加密 - 多密钥&lt;/a&gt;（TME-MK）的技术，
该技术使用 AES-XTS，不再使用消费级和 Xeon E 处理器所使用的&lt;a href=&#34;https://eprint.iacr.org/2016/204.pdf&#34;&gt;内存加密引擎&lt;/a&gt;。
这增加了可能的 &lt;a href=&#34;https://sgx101.gitbook.io/sgx101/sgx-bootstrap/enclave#enclave-page-cache-epc&#34;&gt;Enclave 页面缓存&lt;/a&gt;
（EPC）大小（每个 CPU 高达 512 GB）并提高了性能。关于多插槽平台上 SGX 的更多信息可以在
&lt;a href=&#34;https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/supporting-intel-sgx-on-mulit-socket-platforms.pdf&#34;&gt;白皮书&lt;/a&gt;中找到。&lt;/p&gt;
&lt;!--
A [list of supported platforms](https://ark.intel.com/content/www/us/en/ark/search/featurefilter.html?productType=873)
is available from Intel.

SGX is available on
[Azure](https://azure.microsoft.com/de-de/updates/intel-sgx-based-confidential-computing-vms-now-available-on-azure-dedicated-hosts/),
[Alibaba Cloud](https://www.alibabacloud.com/help/en/elastic-compute-service/latest/build-an-sgx-encrypted-computing-environment),
[IBM](https://cloud.ibm.com/docs/bare-metal?topic=bare-metal-bm-server-provision-sgx), and many more.
--&gt;
&lt;p&gt;可以从 Intel 获取&lt;a href=&#34;https://ark.intel.com/content/www/us/en/ark/search/featurefilter.html?productType=873&#34;&gt;支持的平台列表&lt;/a&gt;。&lt;/p&gt;
&lt;p&gt;SGX 在 &lt;a href=&#34;https://azure.microsoft.com/de-de/updates/intel-sgx-based-confidential-computing-vms-now-available-on-azure-dedicated-hosts/&#34;&gt;Azure&lt;/a&gt;、
&lt;a href=&#34;https://www.alibabacloud.com/help/en/elastic-compute-service/latest/build-an-sgx-encrypted-computing-environment&#34;&gt;阿里云&lt;/a&gt;、
&lt;a href=&#34;https://cloud.ibm.com/docs/bare-metal?topic=bare-metal-bm-server-provision-sgx&#34;&gt;IBM&lt;/a&gt; 以及更多平台上可用。&lt;/p&gt;
&lt;!--
### Intel TDX

Where Intel SGX aims to protect the context of a single process,
[Intel&#39;s Trusted Domain Extensions](https://www.intel.com/content/www/us/en/developer/articles/technical/intel-trust-domain-extensions.html)
protect a full virtual machine and are, therefore, most closely comparable to AMD SEV.
--&gt;
&lt;h3 id=&#34;intel-tdx&#34;&gt;Intel TDX &lt;/h3&gt;
&lt;p&gt;Intel SGX 旨在保护单个进程的上下文，而
&lt;a href=&#34;https://www.intel.com/content/www/us/en/developer/articles/technical/intel-trust-domain-extensions.html&#34;&gt;Intel 的可信域扩展&lt;/a&gt;保护整个虚拟机，
因此，它与 AMD SEV 最为相似。&lt;/p&gt;
&lt;!--
As with SEV-SNP, guest support for TDX was [merged in Linux Kernel 5.19](https://www.phoronix.com/news/Intel-TDX-For-Linux-5.19).
However, hardware support will land with [Sapphire Rapids](https://en.wikipedia.org/wiki/Sapphire_Rapids) during 2023:
[Alibaba Cloud provides](https://www.alibabacloud.com/help/en/elastic-compute-service/latest/build-a-tdx-confidential-computing-environment)
invitational preview instances, and
[Azure has announced](https://techcommunity.microsoft.com/t5/azure-confidential-computing/preview-introducing-dcesv5-and-ecesv5-series-confidential-vms/ba-p/3800718)
its TDX preview opportunity.
--&gt;
&lt;p&gt;与 SEV-SNP 一样，对 TDX 的客户机（guest）支持已经在
&lt;a href=&#34;https://www.phoronix.com/news/Intel-TDX-For-Linux-5.19&#34;&gt;Linux Kernel 5.19 版本中合并&lt;/a&gt;。
然而，硬件支持要到 2023 年才随 &lt;a href=&#34;https://en.wikipedia.org/wiki/Sapphire_Rapids&#34;&gt;Sapphire Rapids&lt;/a&gt; 一同推出：
&lt;a href=&#34;https://www.alibabacloud.com/help/en/elastic-compute-service/latest/build-a-tdx-confidential-computing-environment&#34;&gt;阿里云提供&lt;/a&gt;邀请制预览实例，
同时 &lt;a href=&#34;https://techcommunity.microsoft.com/t5/azure-confidential-computing/preview-introducing-dcesv5-and-ecesv5-series-confidential-vms/ba-p/3800718&#34;&gt;Azure 已经宣布&lt;/a&gt;其 TDX 预览计划。&lt;/p&gt;
&lt;!--
## Overhead analysis

The benefits that Confidential Computing technologies provide via strong isolation and enhanced
security to customer data and workloads are not for free. Quantifying this impact is challenging and
depends on many factors: The TEE technology, the benchmark, the metrics, and the type of workload
all have a huge impact on the expected performance overhead.
--&gt;
&lt;h2 id=&#34;overhead-analysis&#34;&gt;开销分析 &lt;/h2&gt;
&lt;p&gt;机密计算技术通过强隔离和增强的安全性为客户数据和工作负载带来的好处并非没有代价。
量化这种影响颇具挑战性，并且取决于许多因素：TEE 技术、基准测试、
度量指标以及工作负载的类型都对预期的性能开销有巨大影响。&lt;/p&gt;
&lt;!--
Intel SGX-based TEEs are hard to benchmark, as [shown](https://arxiv.org/pdf/2205.06415.pdf)
[by](https://www.ibr.cs.tu-bs.de/users/mahhouk/papers/eurosec2021.pdf)
[different papers](https://dl.acm.org/doi/fullHtml/10.1145/3533737.3535098). The chosen SDK/library
OS, the application itself, as well as the resource requirements (especially large memory requirements)
have a huge impact on performance. A single-digit percentage overhead can be expected if an application
is well suited to run inside an enclave.
--&gt;
&lt;p&gt;基于 Intel SGX 的 TEE 很难进行基准测试，
正如&lt;a href=&#34;https://dl.acm.org/doi/fullHtml/10.1145/3533737.3535098&#34;&gt;不同的论文&lt;/a&gt;所
&lt;a href=&#34;https://arxiv.org/pdf/2205.06415.pdf&#34;&gt;展示&lt;/a&gt;的&lt;a href=&#34;https://www.ibr.cs.tu-bs.de/users/mahhouk/papers/eurosec2021.pdf&#34;&gt;一样&lt;/a&gt;。
所选择的 SDK/库操作系统、应用程序本身以及资源需求（特别是大内存需求）对性能有巨大影响。
如果应用程序非常适合在 Enclave 内运行，通常可以预期只有个位数百分比的开销。&lt;/p&gt;
&lt;!--
Confidential virtual machines based on AMD SEV-SNP require no changes to the executed program
and operating system and are a lot easier to benchmark. A
[benchmark from Azure and AMD](https://community.amd.com/t5/business/microsoft-azure-confidential-computing-powered-by-3rd-gen-epyc/ba-p/497796)
shows that SEV-SNP VM overhead is &lt;10%, sometimes as low as 2%.
--&gt;
&lt;p&gt;基于 AMD SEV-SNP 的机密虚拟机不需要对执行的程序和操作系统进行任何更改，
因此更容易进行基准测试。一个来自
&lt;a href=&#34;https://community.amd.com/t5/business/microsoft-azure-confidential-computing-powered-by-3rd-gen-epyc/ba-p/497796&#34;&gt;Azure 和 AMD 的基准测试&lt;/a&gt;显示，
SEV-SNP VM 的开销 &amp;lt; 10%，有时甚至低至 2%。&lt;/p&gt;
&lt;!--
Although there is a performance overhead, it should be low enough to enable real-world workloads
to run in these protected environments and improve the security and privacy of our data.
--&gt;
&lt;p&gt;尽管存在性能开销，但它应该足够低，以便使真实世界的工作负载能够在这些受保护的环境中运行，
并提高我们数据的安全性和隐私性。&lt;/p&gt;
&lt;!--
## Confidential Computing compared to FHE, ZKP, and MPC

Fully Homomorphic Encryption (FHE), Zero Knowledge Proof/Protocol (ZKP), and Multi-Party
Computations (MPC) are all a form of encryption or cryptographic protocols that offer
similar security guarantees to Confidential Computing but do not require hardware support.
--&gt;
&lt;h2 id=&#34;confidential-computing-compared-to-fhe-zkp-and-mpc&#34;&gt;机密计算与 FHE、ZKP 和 MPC 的比较 &lt;/h2&gt;
&lt;p&gt;全同态加密（FHE）、零知识证明/协议（ZKP）和多方计算（MPC）都是某种形式的加密或密码学协议，
它们提供与机密计算类似的安全保证，但不需要硬件支持。&lt;/p&gt;
&lt;!--
Fully (also partially and somewhat) homomorphic encryption allows one to perform
computations, such as addition or multiplication, on encrypted data. This provides
the property of encryption in use but does not provide integrity protection or attestation
like confidential computing does. Therefore, these two technologies can [complement to each other](https://confidentialcomputing.io/2023/03/29/confidential-computing-and-homomorphic-encryption/).
--&gt;
&lt;p&gt;全同态加密（也包括部分和有限同态加密）允许在加密数据上执行计算，例如加法或乘法。
这提供了在使用中加密的属性，但不像机密计算那样提供完整性保护或认证。因此，这两种技术可以
&lt;a href=&#34;https://confidentialcomputing.io/2023/03/29/confidential-computing-and-homomorphic-encryption/&#34;&gt;互为补充&lt;/a&gt;。&lt;/p&gt;
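&lt;p&gt;下面用一个极简的 Python 示意来说明“在加密数据上执行计算”这一性质。这是对 Paillier 加法同态方案的教学用简化实现（使用玩具级小素数，并非任何生产库的实现；实际部署需要至少 2048 位的模数）：&lt;/p&gt;

```python
import math
import random

def keygen(p, q):
    """Generate a toy Paillier key pair from two primes (g = n+1 variant)."""
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
    mu = pow(lam, -1, n)  # modular inverse; valid because g = n + 1
    return (n,), (lam, mu)

def encrypt(pub, m):
    """E(m) = (n+1)^m * r^n mod n^2, with random r coprime to n."""
    n = pub[0]
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(n + 1, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pub, priv, c):
    """m = L(c^lam mod n^2) * mu mod n, where L(x) = (x-1)/n."""
    n, (lam, mu) = pub[0], priv
    x = pow(c, lam, n * n)
    return ((x - 1) // n * mu) % n

pub, priv = keygen(17, 19)  # toy primes, for illustration only
c1, c2 = encrypt(pub, 12), encrypt(pub, 25)
c_sum = (c1 * c2) % (pub[0] ** 2)  # multiplying ciphertexts adds plaintexts
assert decrypt(pub, priv, c_sum) == 37
```

&lt;p&gt;两个密文相乘即可得到明文之和的密文，执行计算的一方全程不接触明文，这正是“使用中加密”的含义，但其中没有机密计算所提供的完整性保护或证明。&lt;/p&gt;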
&lt;!--
Zero Knowledge Proofs or Protocols are a privacy-preserving technique (PPT) that
allows one party to prove facts about their data without revealing anything else about
the data. ZKP can be used instead of or in addition to Confidential Computing to protect
the privacy of the involved parties and their data. Similarly, Multi-Party Computation
enables multiple parties to work together on a computation, i.e., each party provides
their data to the result without leaking it to any other parties.
--&gt;
&lt;p&gt;零知识证明或协议是一种隐私保护技术（PPT），它允许一方证明其数据的事实而不泄露关于数据的任何其他信息。
ZKP 可以替代或与机密计算一起使用，以保护相关方及其数据的隐私。同样，
多方计算使多个参与方能够共同进行计算，即每个参与方提供其数据以得出结果，
但不会泄露给任何其他参与方。&lt;/p&gt;
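&lt;p&gt;多方计算的核心思想可以用一个最小的加法秘密共享（additive secret sharing）示意来说明。以下是假设性的教学代码，并非任何具体 MPC 框架的实现：&lt;/p&gt;

```python
import random

MOD = 2**61 - 1  # all arithmetic is done modulo a public prime

def share(secret, n_parties):
    """Split a secret into n random additive shares; any n-1 of them
    are uniformly random and reveal nothing about the secret."""
    shares = [random.randrange(MOD) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % MOD)
    return shares

def secure_sum(all_shares):
    """Party i sums the i-th share of every input locally; adding the
    partial sums reconstructs the total without exposing any input."""
    partials = [sum(column) % MOD for column in zip(*all_shares)]
    return sum(partials) % MOD

inputs = [42, 17, 99]  # three parties' private values
assert secure_sum([share(x, 3) for x in inputs]) == sum(inputs)
```

&lt;p&gt;每一方只看到其他参与方的随机份额，却能共同得到正确的求和结果，这与上文“每个参与方提供其数据以得出结果，但不泄露给任何其他参与方”的描述相对应。&lt;/p&gt;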
&lt;!--
## Use cases of Confidential Computing

The presented Confidential Computing platforms show that both the isolation of a single container
process and, therefore, minimization of the trusted computing base and the isolation of a
full virtual machine are possible. This has already enabled a lot of interesting and secure
projects to emerge:
--&gt;
&lt;h2 id=&#34;use-cases-of-confidential-computing&#34;&gt;机密计算的应用场景 &lt;/h2&gt;
&lt;p&gt;前面介绍的机密计算平台表明，既可以实现单个容器进程的隔离（从而最小化可信计算基），
也可以实现整个虚拟机的隔离。这已经促使很多有趣且安全的项目涌现：&lt;/p&gt;
&lt;!--
### Confidential Containers

[Confidential Containers](https://github.com/confidential-containers) (CoCo) is a
CNCF sandbox project that isolates Kubernetes pods inside of confidential virtual machines.
--&gt;
&lt;h3 id=&#34;confidential-containers&#34;&gt;机密容器 &lt;/h3&gt;
&lt;p&gt;&lt;a href=&#34;https://github.com/confidential-containers&#34;&gt;机密容器&lt;/a&gt; (CoCo) 是一个 CNCF 沙箱项目，它在机密虚拟机内隔离 Kubernetes Pod。&lt;/p&gt;
&lt;!--
CoCo can be installed on a Kubernetes cluster with an operator.
The operator will create a set of runtime classes that can be used to deploy
pods inside an enclave on several different platforms, including
AMD SEV, Intel TDX, Secure Execution for IBM Z, and Intel SGX.
--&gt;
&lt;p&gt;CoCo 可以通过 operator 安装在 Kubernetes 集群上。operator 将创建一组运行时类，
这些类可以用于在多个不同的平台上的 Enclave 内部署 Pod，
包括 AMD SEV、Intel TDX、IBM Z 的安全执行和 Intel SGX。&lt;/p&gt;
&lt;!--
CoCo is typically used with signed and/or encrypted container images
which are pulled, verified, and decrypted inside the enclave.
Secrets, such as image decryption keys, are conditionally provisioned
to the enclave by a trusted Key Broker Service that validates the
hardware evidence of the TEE prior to releasing any sensitive information.
--&gt;
&lt;p&gt;CoCo 通常与签名和/或加密的容器镜像一起使用，这些镜像在 Enclave 内部被拉取、验证和解密。
密钥信息（比如镜像解密密钥）由受信任的 Key Broker 服务有条件地提供给 Enclave，
该服务在发布任何敏感信息之前会先验证 TEE 的硬件证据。&lt;/p&gt;
&lt;!--
CoCo has several deployment models. Since the Kubernetes control plane
is outside the TCB, CoCo is suitable for managed environments. CoCo can
be run in virtual environments that don&#39;t support nesting with the help of an
API adaptor that starts pod VMs in the cloud. CoCo can also be run on
bare metal, providing strong isolation even in multi-tenant environments.
--&gt;
&lt;p&gt;CoCo 有几种部署模型。由于 Kubernetes 控制平面在 TCB 之外，因此 CoCo 适合于受管理的环境。
在不支持嵌套的虚拟环境中，CoCo 可以借助 API 适配器运行，该适配器在云中启动 Pod VM。
CoCo 还可以在裸机上运行，在多租户环境中提供强大的隔离。&lt;/p&gt;
&lt;!--
### Managed confidential Kubernetes

[Azure](https://learn.microsoft.com/en-us/azure/confidential-computing/confidential-node-pool-aks) and
[GCP](https://cloud.google.com/blog/products/identity-security/announcing-general-availability-of-confidential-gke-nodes)
both support the use of confidential virtual machines as worker nodes for their managed Kubernetes offerings.
--&gt;
&lt;h3 id=&#34;managed-confidential-kubernetes&#34;&gt;受管理的机密 Kubernetes &lt;/h3&gt;
&lt;p&gt;&lt;a href=&#34;https://learn.microsoft.com/en-us/azure/confidential-computing/confidential-node-pool-aks&#34;&gt;Azure&lt;/a&gt;
和 &lt;a href=&#34;https://cloud.google.com/blog/products/identity-security/announcing-general-availability-of-confidential-gke-nodes&#34;&gt;GCP&lt;/a&gt;
都支持将机密虚拟机用作其受管理的 Kubernetes 的工作节点。&lt;/p&gt;
&lt;!--
Both services aim for better workload protection and security guarantees by enabling memory encryption
for container workloads. However, they don&#39;t seek to fully isolate the cluster or workloads against
the service provider or infrastructure. Specifically, they don&#39;t offer a dedicated confidential control
plane or expose attestation capabilities for the confidential cluster/nodes.
--&gt;
&lt;p&gt;这两项服务通过启用容器工作负载的内存加密，旨在提供更好的工作负载保护和安全保证。
然而，它们并没有寻求完全隔离集群或工作负载以防止服务提供者或基础设施的访问。
具体来说，它们不提供专用的机密控制平面，也不为机密集群/节点暴露认证（attestation）能力。&lt;/p&gt;
&lt;!--
Azure also enables
[Confidential Containers](https://learn.microsoft.com/en-us/azure/confidential-computing/confidential-nodes-aks-overview)
in their managed Kubernetes offering. They support the creation based on
[Intel SGX enclaves](https://learn.microsoft.com/en-us/azure/confidential-computing/confidential-containers-enclaves)
and [AMD SEV-based VMs](https://techcommunity.microsoft.com/t5/azure-confidential-computing/microsoft-introduces-preview-of-confidential-containers-on-azure/ba-p/3410394).
--&gt;
&lt;p&gt;Azure 在其托管的 Kubernetes 服务中也启用了
&lt;a href=&#34;https://learn.microsoft.com/en-us/azure/confidential-computing/confidential-nodes-aks-overview&#34;&gt;机密容器&lt;/a&gt;。
他们支持基于 &lt;a href=&#34;https://learn.microsoft.com/en-us/azure/confidential-computing/confidential-containers-enclaves&#34;&gt;Intel SGX Enclave&lt;/a&gt;
和基于 &lt;a href=&#34;https://techcommunity.microsoft.com/t5/azure-confidential-computing/microsoft-introduces-preview-of-confidential-containers-on-azure/ba-p/3410394&#34;&gt;AMD SEV 虚拟机&lt;/a&gt;
创建的机密容器。&lt;/p&gt;
&lt;!--
### Constellation

[Constellation](https://github.com/edgelesssys/constellation) is a Kubernetes engine that aims to
provide the best possible data security. Constellation wraps your entire Kubernetes cluster into
a single confidential context that is shielded from the underlying cloud infrastructure. Everything
inside is always encrypted, including at runtime in memory. It shields both the worker and control
plane nodes. In addition, it already integrates with popular CNCF software such as Cilium for
secure networking and provides extended CSI drivers to write data securely.
--&gt;
&lt;h3 id=&#34;constellation&#34;&gt;Constellation &lt;/h3&gt;
&lt;p&gt;&lt;a href=&#34;https://github.com/edgelesssys/constellation&#34;&gt;Constellation&lt;/a&gt;
是一个旨在提供最佳数据安全的 Kubernetes 引擎。
Constellation 将整个 Kubernetes 集群包装到一个机密上下文中，使其与底层云基础设施相隔离。
其中的所有内容始终是加密的，包括运行时的内存数据。它同时保护工作节点和控制平面节点。
此外，它已经与 Cilium 等流行的 CNCF 软件集成以实现安全网络，
并提供扩展的 CSI 驱动程序来安全地写入数据。&lt;/p&gt;
&lt;!--
### Occlum and Gramine

[Occlum](https://occlum.io/) and [Gramine](https://gramineproject.io/) are examples of open source
library OS projects that can be used to run unmodified applications in SGX enclaves. They
are member projects under the CCC, but similar projects and products maintained by companies
also exist. With these libOS projects, existing containerized applications can be
easily converted into confidential computing enabled containers. Many curated prebuilt
containers are also available.
--&gt;
&lt;h3 id=&#34;occlum-and-gramine&#34;&gt;Occlum 和 Gramine &lt;/h3&gt;
&lt;p&gt;&lt;a href=&#34;https://occlum.io/&#34;&gt;Occlum&lt;/a&gt; 和 &lt;a href=&#34;https://gramineproject.io/&#34;&gt;Gramine&lt;/a&gt;
是两个开源的库操作系统（library OS）项目，它们可用于在 SGX Enclave 中运行未经修改的应用程序。
它们是 CCC（Confidential Computing Consortium）下的成员项目，
但也存在由公司维护的类似项目和产品。借助这些库操作系统项目，
现有的容器化应用可以轻松转换为支持机密计算的容器。此外，还有许多经过筛选的预构建容器可供使用。&lt;/p&gt;
&lt;!--
## Where are we today? Vendors, limitations, and FOSS landscape

As we hope you have seen from the previous sections, Confidential Computing is a powerful new concept
to improve security, but we are still in the (early) adoption phase. New products are
starting to emerge to take advantage of the unique properties.
--&gt;
&lt;h2 id=&#34;where-are-we-today-vendors-limitations-and-foss-landscape&#34;&gt;我们现在处于哪个阶段？供应商、局限性和开源软件生态 &lt;/h2&gt;
&lt;p&gt;正如我们希望你从前面的章节中看到的，机密计算是一种用于提高安全性的强大的新概念，
但我们仍处于（早期）采用阶段。利用这些独特属性的新产品正开始涌现。&lt;/p&gt;
&lt;!--
Google and Microsoft are the first major cloud providers to have confidential offerings that
can run unmodified applications inside a protected boundary.
Still, these offerings are limited to compute, while end-to-end solutions for confidential
databases, cluster networking, and load balancers have to be self-managed.
--&gt;
&lt;p&gt;谷歌和微软是首批推出机密计算服务的主要云提供商，这些服务能够在受保护的边界内运行未经修改的应用程序。
然而，这些服务仅限于计算，而机密数据库、集群网络和负载均衡器的端到端解决方案仍需自行管理。&lt;/p&gt;
&lt;!--
These technologies provide opportunities to bring even the most
sensitive workloads into the cloud and enables them to leverage all the
tools in the CNCF landscape.
--&gt;
&lt;p&gt;这些技术为极其敏感的工作负载部署到云中提供了可能，并使其能够充分利用 CNCF 领域中的各种工具。&lt;/p&gt;
&lt;!--
## Call to action

If you are currently working on a high-security product that struggles to run in the
public cloud due to legal requirements or are looking to bring the privacy and security
of your cloud-native project to the next level: Reach out to all the great projects
we have highlighted! Everyone is keen to improve the security of our ecosystem, and you can
play a vital role in that journey.
--&gt;
&lt;h2 id=&#34;call-to-action&#34;&gt;行动号召 &lt;/h2&gt;
&lt;p&gt;如果你目前正在开发一个高安全性的产品，但由于法律要求而难以在公有云中运行，
或者你希望将你的云原生项目的隐私和安全性提升到新的水平：请联系我们重点介绍的这些出色项目！
每个人都渴望提高我们生态系统的安全性，而你可以在这一过程中发挥至关重要的作用。&lt;/p&gt;
&lt;!--
* [Confidential Containers](https://github.com/confidential-containers)
* [Constellation: Always Encrypted Kubernetes](https://github.com/edgelesssys/constellation)
* [Occlum](https://occlum.io/)
* [Gramine](https://gramineproject.io/)
* CCC also maintains a [list of projects](https://confidentialcomputing.io/projects/)
--&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/confidential-containers&#34;&gt;机密容器&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/edgelesssys/constellation&#34;&gt;Constellation：始终加密的 Kubernetes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://occlum.io/&#34;&gt;Occlum&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://gramineproject.io/&#34;&gt;Gramine&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;CCC 还维护了一个&lt;a href=&#34;https://confidentialcomputing.io/projects/&#34;&gt;项目列表&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

      </description>
    </item>
    
    <item>
      <title>在 CRI 运行时内验证容器镜像签名</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/06/29/container-image-signature-verification/</link>
      <pubDate>Thu, 29 Jun 2023 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/06/29/container-image-signature-verification/</guid>
      <description>
        
        
        &lt;!--
layout: blog
title: &#34;Verifying Container Image Signatures Within CRI Runtimes&#34;
date: 2023-06-29
slug: container-image-signature-verification
--&gt;
&lt;!--
**Author**: Sascha Grunert
--&gt;
&lt;p&gt;&lt;strong&gt;作者:&lt;/strong&gt; Sascha Grunert&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者:&lt;/strong&gt; &lt;a href=&#34;https://github.com/windsonsea&#34;&gt;Michael Yao&lt;/a&gt; (DaoCloud)&lt;/p&gt;
&lt;!--
The Kubernetes community has been signing their container image-based artifacts
since release v1.24. While the graduation of the [corresponding enhancement][kep]
from `alpha` to `beta` in v1.26 introduced signatures for the binary artifacts,
other projects followed the approach by providing image signatures for their
releases, too. This means that they either create the signatures within their
own CI/CD pipelines, for example by using GitHub actions, or rely on the
Kubernetes [image promotion][promo] process to automatically sign the images by
proposing pull requests to the [k/k8s.io][k8s.io] repository. A requirement for
using this process is that the project is part of the `kubernetes` or
`kubernetes-sigs` GitHub organization, so that they can utilize the community
infrastructure for pushing images into staging buckets.
--&gt;
&lt;p&gt;Kubernetes 社区自 v1.24 版本开始对基于容器镜像的工件进行签名。在 v1.26 中，
&lt;a href=&#34;https://github.com/kubernetes/enhancements/issues/3031&#34;&gt;相应的增强特性&lt;/a&gt;从 &lt;code&gt;alpha&lt;/code&gt; 进阶至 &lt;code&gt;beta&lt;/code&gt;，引入了针对二进制工件的签名。
其他项目也采用了类似的方法，为其发布版本提供镜像签名。这意味着这些项目要么使用 GitHub actions
在自己的 CI/CD 流程中创建签名，要么依赖于 Kubernetes 的&lt;a href=&#34;https://github.com/kubernetes-sigs/promo-tools/blob/e2b96dd/docs/image-promotion.md&#34;&gt;镜像推广&lt;/a&gt;流程，
通过向 &lt;a href=&#34;https://github.com/kubernetes/k8s.io/tree/4b95cc2/k8s.gcr.io&#34;&gt;k/k8s.io&lt;/a&gt; 仓库提交 PR 来自动签名镜像。
使用此流程的前提要求是项目必须属于 &lt;code&gt;kubernetes&lt;/code&gt; 或 &lt;code&gt;kubernetes-sigs&lt;/code&gt; GitHub 组织，
这样能够利用社区基础设施将镜像推送到暂存桶中。&lt;/p&gt;
&lt;!--
Assuming that a project now produces signed container image artifacts, how can
one actually verify the signatures? It is possible to do it manually like
outlined in the [official Kubernetes documentation][docs]. The problem with this
approach is that it involves no automation at all and should be only done for
testing purposes. In production environments, tools like the [sigstore
policy-controller][policy-controller] can help with the automation. These tools
provide a higher level API by using [Custom Resource Definitions (CRD)][crd] as
well as an integrated [admission controller and webhook][admission] to verify
the signatures.
--&gt;
&lt;p&gt;假设一个项目现在生成了已签名的容器镜像工件，那么如何实际验证这些签名呢？
你可以按照 &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/tasks/administer-cluster/verify-signed-artifacts/#verifying-image-signatures&#34;&gt;Kubernetes 官方文档&lt;/a&gt;所述来手动验证。但是这种方式的问题在于完全没有自动化，
应仅用于测试目的。在生产环境中，&lt;a href=&#34;https://docs.sigstore.dev/policy-controller/overview&#34;&gt;sigstore policy-controller&lt;/a&gt;
这样的工具有助于进行自动化处理。这些工具使用&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/concepts/extend-kubernetes/api-extension/custom-resources&#34;&gt;自定义资源定义（CRD）&lt;/a&gt;提供了更高级别的 API，
并且利用集成的&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/docs/reference/access-authn-authz/admission-controllers&#34;&gt;准入控制器和 Webhook&lt;/a&gt;来验证签名。&lt;/p&gt;
&lt;!--
The general usage flow for an admission controller based verification is:
--&gt;
&lt;p&gt;基于准入控制器的验证的一般使用流程如下：&lt;/p&gt;
&lt;!--


&lt;figure&gt;
    &lt;img src=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/blog/2023/06/29/container-image-signature-verification/flow.svg&#34;
         alt=&#34;Create an instance of the policy and annotate the namespace to validate the signatures. Then create the pod. The controller evaluates the policy and if it passes, then it does the image pull if necessary. If the policy evaluation fails, then it will not admit the pod.&#34;/&gt; 
&lt;/figure&gt;
--&gt;


&lt;figure&gt;
    &lt;img src=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/06/29/container-image-signature-verification/flow.svg&#34;
         alt=&#34;创建一个策略的实例，并对命名空间添加注解以验证签名。然后创建 Pod。控制器会评估策略，如果评估通过，则根据需要执行镜像拉取。如果策略评估失败，则不允许该 Pod 运行。&#34;/&gt; 
&lt;/figure&gt;
&lt;!--
A key benefit of this architecture is simplicity: A single instance within the
cluster validates the signatures before any image pull can happen in the
container runtime on the nodes, which gets initiated by the kubelet. This
benefit also brings along the issue of separation: The node which should pull
the container image is not necessarily the same node that performs the admission. This
means that if the controller is compromised, then a cluster-wide policy
enforcement can no longer be possible.

One way to solve this issue is doing the policy evaluation directly within the
[Container Runtime Interface (CRI)][cri] compatible container runtime. The
runtime is directly connected to the [kubelet][kubelet] on a node and does all
the tasks like pulling images. [CRI-O][cri-o] is one of those available runtimes
and will feature full support for container image signature verification in v1.28.
--&gt;
&lt;p&gt;这种架构的一个主要优点是简单：集群中的单个实例会先验证签名，然后才在节点上的容器运行时中执行镜像拉取操作，
镜像拉取是由 kubelet 触发的。这个优点也带来了分离的问题：应拉取容器镜像的节点不一定是执行准入控制的节点。
这意味着一旦控制器被攻破，就无法再在整个集群范围内强制执行策略。&lt;/p&gt;
&lt;p&gt;解决此问题的一种方式是直接在与&lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/concepts/architecture/cri&#34;&gt;容器运行时接口（CRI）&lt;/a&gt;兼容的容器运行时中进行策略评估。
这种运行时直接连接到节点上的 &lt;a href=&#34;https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/docs/reference/command-line-tools-reference/kubelet&#34;&gt;kubelet&lt;/a&gt;，执行拉取镜像等所有任务。
&lt;a href=&#34;https://github.com/cri-o/cri-o&#34;&gt;CRI-O&lt;/a&gt; 是可用的运行时之一，将在 v1.28 中完全支持容器镜像签名验证。&lt;/p&gt;
&lt;!--
How does it work? CRI-O reads a file called [`policy.json`][policy.json], which
contains all the rules defined for container images. For example, you can define a
policy which only allows signed images `quay.io/crio/signed` for any tag or
digest like this:
--&gt;
&lt;p&gt;容器运行时是如何工作的呢？CRI-O 会读取一个名为 &lt;a href=&#34;https://github.com/containers/image/blob/b3e0ba2/docs/containers-policy.json.5.md#sigstoresigned&#34;&gt;&lt;code&gt;policy.json&lt;/code&gt;&lt;/a&gt; 的文件，
其中包含了为容器镜像定义的所有规则。例如，你可以定义这样一个策略：
对于任意标签或摘要，只允许已签名的镜像 &lt;code&gt;quay.io/crio/signed&lt;/code&gt;：&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-json&#34; data-lang=&#34;json&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;{
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;default&amp;#34;&lt;/span&gt;: [{ &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;type&amp;#34;&lt;/span&gt;: &lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;reject&amp;#34;&lt;/span&gt; }],
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;transports&amp;#34;&lt;/span&gt;: {
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;docker&amp;#34;&lt;/span&gt;: {
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;quay.io/crio/signed&amp;#34;&lt;/span&gt;: [
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;        {
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;          &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;type&amp;#34;&lt;/span&gt;: &lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;sigstoreSigned&amp;#34;&lt;/span&gt;,
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;          &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;signedIdentity&amp;#34;&lt;/span&gt;: { &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;type&amp;#34;&lt;/span&gt;: &lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;matchRepository&amp;#34;&lt;/span&gt; },
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;          &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;fulcio&amp;#34;&lt;/span&gt;: {
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;            &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;oidcIssuer&amp;#34;&lt;/span&gt;: &lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;https://github.com/login/oauth&amp;#34;&lt;/span&gt;,
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;            &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;subjectEmail&amp;#34;&lt;/span&gt;: &lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;sgrunert@redhat.com&amp;#34;&lt;/span&gt;,
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;            &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;caData&amp;#34;&lt;/span&gt;: &lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUI5ekNDQVh5Z0F3SUJBZ0lVQUxaTkFQRmR4SFB3amVEbG9Ed3lZQ2hBTy80d0NnWUlLb1pJemowRUF3TXcKS2pFVk1CTUdBMVVFQ2hNTWMybG5jM1J2Y21VdVpHVjJNUkV3RHdZRFZRUURFd2h6YVdkemRHOXlaVEFlRncweQpNVEV3TURjeE16VTJOVGxhRncwek1URXdNRFV4TXpVMk5UaGFNQ294RlRBVEJnTlZCQW9UREhOcFozTjBiM0psCkxtUmxkakVSTUE4R0ExVUVBeE1JYzJsbmMzUnZjbVV3ZGpBUUJnY3Foa2pPUFFJQkJnVXJnUVFBSWdOaUFBVDcKWGVGVDRyYjNQUUd3UzRJYWp0TGszL09sbnBnYW5nYUJjbFlwc1lCcjVpKzR5bkIwN2NlYjNMUDBPSU9aZHhleApYNjljNWlWdXlKUlErSHowNXlpK1VGM3VCV0FsSHBpUzVzaDArSDJHSEU3U1hyazFFQzVtMVRyMTlMOWdnOTJqCll6QmhNQTRHQTFVZER3RUIvd1FFQXdJQkJqQVBCZ05WSFJNQkFmOEVCVEFEQVFIL01CMEdBMVVkRGdRV0JCUlkKd0I1ZmtVV2xacWw2ekpDaGt5TFFLc1hGK2pBZkJnTlZIU01FR0RBV2dCUll3QjVma1VXbFpxbDZ6SkNoa3lMUQpLc1hGK2pBS0JnZ3Foa2pPUFFRREF3TnBBREJtQWpFQWoxbkhlWFpwKzEzTldCTmErRURzRFA4RzFXV2cxdENNCldQL1dIUHFwYVZvMGpoc3dlTkZaZ1NzMGVFN3dZSTRxQWpFQTJXQjlvdDk4c0lrb0YzdlpZZGQzL1Z0V0I1YjkKVE5NZWE3SXgvc3RKNVRmY0xMZUFCTEU0Qk5KT3NRNHZuQkhKCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0=&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;          },
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;          &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;&amp;#34;rekorPublicKeyData&amp;#34;&lt;/span&gt;: &lt;span style=&#34;color:#b44&#34;&gt;&amp;#34;LS0tLS1CRUdJTiBQVUJMSUMgS0VZLS0tLS0KTUZrd0V3WUhLb1pJemowQ0FRWUlLb1pJemowREFRY0RRZ0FFMkcyWSsydGFiZFRWNUJjR2lCSXgwYTlmQUZ3cgprQmJtTFNHdGtzNEwzcVg2eVlZMHp1ZkJuaEM4VXIvaXk1NUdoV1AvOUEvYlkyTGhDMzBNOStSWXR3PT0KLS0tLS1FTkQgUFVCTElDIEtFWS0tLS0tCg==&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;        }
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      ]
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    }
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  }
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;}
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
CRI-O has to be started to use that policy as the global source of truth:
--&gt;
&lt;p&gt;必须在启动 CRI-O 时指定该策略，以将其用作全局的可信源：&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-console&#34; data-lang=&#34;console&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#000080;font-weight:bold&#34;&gt;&amp;gt;&lt;/span&gt; sudo crio --log-level debug --signature-policy ./policy.json
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
CRI-O is now able to pull the image while verifying its signatures. This can be
done by using [`crictl` (cri-tools)][cri-tools], for example:
--&gt;
&lt;p&gt;CRI-O 现在可以在验证镜像签名的同时拉取镜像。例如，可以使用 &lt;a href=&#34;https://github.com/kubernetes-sigs/cri-tools&#34;&gt;&lt;code&gt;crictl&lt;/code&gt;（cri-tools）&lt;/a&gt;
来完成此操作：&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-console&#34; data-lang=&#34;console&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#000080;font-weight:bold&#34;&gt;&amp;gt;&lt;/span&gt; sudo crictl -D pull quay.io/crio/signed
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;DEBU[…] get image connection
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;DEBU[…] PullImageRequest: &amp;amp;PullImageRequest{Image:&amp;amp;ImageSpec{Image:quay.io/crio/signed,Annotations:map[string]string{},},Auth:nil,SandboxConfig:nil,}
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;DEBU[…] PullImageResponse: &amp;amp;PullImageResponse{ImageRef:quay.io/crio/signed@sha256:18b42e8ea347780f35d979a829affa178593a8e31d90644466396e1187a07f3a,}
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;Image is up to date for quay.io/crio/signed@sha256:18b42e8ea347780f35d979a829affa178593a8e31d90644466396e1187a07f3a
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
The CRI-O debug logs will also indicate that the signature got successfully
validated:
--&gt;
&lt;p&gt;CRI-O 的调试日志也会表明签名已成功验证：&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-console&#34; data-lang=&#34;console&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;DEBU[…] IsRunningImageAllowed for image docker:quay.io/crio/signed:latest
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;DEBU[…]  Using transport &amp;#34;docker&amp;#34; specific policy section quay.io/crio/signed
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;DEBU[…] Reading /var/lib/containers/sigstore/crio/signed@sha256=18b42e8ea347780f35d979a829affa178593a8e31d90644466396e1187a07f3a/signature-1
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;DEBU[…] Looking for sigstore attachments in quay.io/crio/signed:sha256-18b42e8ea347780f35d979a829affa178593a8e31d90644466396e1187a07f3a.sig
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;DEBU[…] GET https://quay.io/v2/crio/signed/manifests/sha256-18b42e8ea347780f35d979a829affa178593a8e31d90644466396e1187a07f3a.sig
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;DEBU[…] Content-Type from manifest GET is &amp;#34;application/vnd.oci.image.manifest.v1+json&amp;#34;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;DEBU[…] Found a sigstore attachment manifest with 1 layers
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;DEBU[…] Fetching sigstore attachment 1/1: sha256:8276724a208087e73ae5d9d6e8f872f67808c08b0acdfdc73019278807197c45
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;DEBU[…] Downloading /v2/crio/signed/blobs/sha256:8276724a208087e73ae5d9d6e8f872f67808c08b0acdfdc73019278807197c45
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;DEBU[…] GET https://quay.io/v2/crio/signed/blobs/sha256:8276724a208087e73ae5d9d6e8f872f67808c08b0acdfdc73019278807197c45
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;DEBU[…]  Requirement 0: allowed
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;DEBU[…] Overall: allowed
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
All of the defined fields like `oidcIssuer` and `subjectEmail` in the policy
have to match, while `fulcio.caData` and `rekorPublicKeyData` are the public
keys from the upstream [fulcio (OIDC PKI)][fulcio] and [rekor
(transparency log)][rekor] instances.
--&gt;
&lt;p&gt;策略中定义的 &lt;code&gt;oidcIssuer&lt;/code&gt; 和 &lt;code&gt;subjectEmail&lt;/code&gt; 等所有字段都必须匹配，
而 &lt;code&gt;fulcio.caData&lt;/code&gt; 和 &lt;code&gt;rekorPublicKeyData&lt;/code&gt; 是来自上游 &lt;a href=&#34;https://github.com/sigstore/fulcio&#34;&gt;fulcio（OIDC PKI）&lt;/a&gt;
和 &lt;a href=&#34;https://github.com/sigstore/rekor&#34;&gt;rekor（透明日志）&lt;/a&gt; 实例的公钥。&lt;/p&gt;
&lt;!--
This means that if you now invalidate the `subjectEmail` of the policy, for example to
`wrong@mail.com`:
--&gt;
&lt;p&gt;这意味着如果你现在将策略中的 &lt;code&gt;subjectEmail&lt;/code&gt; 作废，例如更改为 &lt;code&gt;wrong@mail.com&lt;/code&gt;：&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-console&#34; data-lang=&#34;console&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#000080;font-weight:bold&#34;&gt;&amp;gt;&lt;/span&gt; jq &lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;.transports.docker.&amp;#34;quay.io/crio/signed&amp;#34;[0].fulcio.subjectEmail = &amp;#34;wrong@mail.com&amp;#34;&amp;#39;&lt;/span&gt; policy.json &amp;gt; new-policy.json
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#000080;font-weight:bold&#34;&gt;&amp;gt;&lt;/span&gt; mv new-policy.json policy.json
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
Then remove the image, since it already exists locally:
--&gt;
&lt;p&gt;然后移除镜像，因为此镜像已存在于本地：&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-console&#34; data-lang=&#34;console&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#000080;font-weight:bold&#34;&gt;&amp;gt;&lt;/span&gt; sudo crictl rmi quay.io/crio/signed
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
Now when you pull the image, CRI-O complains that the required email is wrong:
--&gt;
&lt;p&gt;现在当你拉取镜像时，CRI-O 会报错，提示所要求的 email 不正确：&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-console&#34; data-lang=&#34;console&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#000080;font-weight:bold&#34;&gt;&amp;gt;&lt;/span&gt; sudo crictl pull quay.io/crio/signed
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;FATA[…] pulling image: rpc error: code = Unknown desc = Source image rejected: Required email wrong@mail.com not found (got []string{&amp;#34;sgrunert@redhat.com&amp;#34;})
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
It is also possible to test an unsigned image against the policy. For that you
have to modify the key `quay.io/crio/signed` to something like
`quay.io/crio/unsigned`:
--&gt;
&lt;p&gt;你还可以对未签名的镜像进行策略测试。为此，你需要将键 &lt;code&gt;quay.io/crio/signed&lt;/code&gt;
修改为类似 &lt;code&gt;quay.io/crio/unsigned&lt;/code&gt;：&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-console&#34; data-lang=&#34;console&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#000080;font-weight:bold&#34;&gt;&amp;gt;&lt;/span&gt; sed -i &lt;span style=&#34;color:#b44&#34;&gt;&amp;#39;s;quay.io/crio/signed;quay.io/crio/unsigned;&amp;#39;&lt;/span&gt; policy.json
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
If you now pull the container image, CRI-O will complain that no signature exists
for it:
--&gt;
&lt;p&gt;如果你现在拉取容器镜像，CRI-O 将报错此镜像不存在签名：&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-console&#34; data-lang=&#34;console&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#000080;font-weight:bold&#34;&gt;&amp;gt;&lt;/span&gt; sudo crictl pull quay.io/crio/unsigned
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;FATA[…] pulling image: rpc error: code = Unknown desc = SignatureValidationFailed: Source image rejected: A signature was required, but no signature exists
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
It is important to mention that CRI-O will match the
`.critical.identity.docker-reference` field within the signature to match with
the image repository. For example, if you verify the image
`registry.k8s.io/kube-apiserver-amd64:v1.28.0-alpha.3`, then the corresponding
`docker-reference` should be `registry.k8s.io/kube-apiserver-amd64`:
--&gt;
&lt;p&gt;需要强调的是，CRI-O 将签名中的 &lt;code&gt;.critical.identity.docker-reference&lt;/code&gt; 字段与镜像仓库进行匹配。
例如，如果你要验证镜像 &lt;code&gt;registry.k8s.io/kube-apiserver-amd64:v1.28.0-alpha.3&lt;/code&gt;，
则相应的 &lt;code&gt;docker-reference&lt;/code&gt; 须是 &lt;code&gt;registry.k8s.io/kube-apiserver-amd64&lt;/code&gt;：&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-console&#34; data-lang=&#34;console&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#000080;font-weight:bold&#34;&gt;&amp;gt;&lt;/span&gt; cosign verify registry.k8s.io/kube-apiserver-amd64:v1.28.0-alpha.3 &lt;span style=&#34;color:#b62;font-weight:bold&#34;&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b62;font-weight:bold&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#888&#34;&gt;    --certificate-identity krel-trust@k8s-releng-prod.iam.gserviceaccount.com \
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;    --certificate-oidc-issuer https://accounts.google.com \
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;    | jq -r &amp;#39;.[0].critical.identity.&amp;#34;docker-reference&amp;#34;&amp;#39;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;…
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;&lt;/span&gt;&lt;span style=&#34;&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#888&#34;&gt;registry.k8s.io/kubernetes/kube-apiserver-amd64
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
The Kubernetes community introduced `registry.k8s.io` as proxy mirror for
various registries. Before the release of [kpromo v4.0.2][kpromo], images
had been signed with the actual mirror rather than `registry.k8s.io`:
--&gt;
&lt;p&gt;Kubernetes 社区引入了 &lt;code&gt;registry.k8s.io&lt;/code&gt;，作为多个镜像仓库的代理镜像站。
在 &lt;a href=&#34;https://github.com/kubernetes-sigs/promo-tools/releases/tag/v4.0.2&#34;&gt;kpromo v4.0.2&lt;/a&gt; 发布之前，镜像是使用实际的镜像仓库而非
&lt;code&gt;registry.k8s.io&lt;/code&gt; 进行签名的：&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-console&#34; data-lang=&#34;console&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#000080;font-weight:bold&#34;&gt;&amp;gt;&lt;/span&gt; cosign verify registry.k8s.io/kube-apiserver-amd64:v1.28.0-alpha.2 &lt;span style=&#34;color:#b62;font-weight:bold&#34;&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#b62;font-weight:bold&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#888&#34;&gt;    --certificate-identity krel-trust@k8s-releng-prod.iam.gserviceaccount.com \
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;    --certificate-oidc-issuer https://accounts.google.com \
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;    | jq -r &amp;#39;.[0].critical.identity.&amp;#34;docker-reference&amp;#34;&amp;#39;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;…
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;&lt;/span&gt;&lt;span style=&#34;&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#888&#34;&gt;asia-northeast2-docker.pkg.dev/k8s-artifacts-prod/images/kubernetes/kube-apiserver-amd64
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
The change of the `docker-reference` to `registry.k8s.io` makes it easier for
end users to validate the signatures, because they cannot know anything about the
underlying infrastructure being used. The feature to set the identity on image
signing has been added to [cosign][cosign-pr] via the flag `sign
--sign-container-identity` as well and will be part of its upcoming release.
--&gt;
&lt;p&gt;将 &lt;code&gt;docker-reference&lt;/code&gt; 更改为 &lt;code&gt;registry.k8s.io&lt;/code&gt; 使最终用户更容易验证签名，
因为他们本来就无从知晓所使用的底层基础设施。设置镜像签名身份的特性也已通过
&lt;code&gt;sign --sign-container-identity&lt;/code&gt; 标志添加到 &lt;code&gt;cosign&lt;/code&gt;，并将包含在其即将发布的版本中。&lt;/p&gt;
&lt;!--
The Kubernetes image pull error code `SignatureValidationFailed` got [recently added to
Kubernetes][pr-117717] and will be available from v1.28. This error code allows
end-users to understand image pull failures directly from the kubectl CLI. For
example, if you run CRI-O together with Kubernetes using the policy which requires
`quay.io/crio/unsigned` to be signed, then a pod definition like this:
--&gt;
&lt;p&gt;&lt;a href=&#34;https://github.com/kubernetes/kubernetes/pull/117717&#34;&gt;最近在 Kubernetes 中添加了&lt;/a&gt;镜像拉取错误码 &lt;code&gt;SignatureValidationFailed&lt;/code&gt;，
将从 v1.28 版本开始可用。这个错误码允许最终用户直接从 kubectl CLI 了解镜像拉取失败的原因。
例如，如果你在运行 CRI-O 和 Kubernetes 时使用了要求对 &lt;code&gt;quay.io/crio/unsigned&lt;/code&gt;
进行签名的策略，那么类似下面这样的 Pod 定义：&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;v1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;Pod&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;pod&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;containers&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;container&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;image&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;quay.io/crio/unsigned&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
Will cause the `SignatureValidationFailed` error when applying the pod manifest:
--&gt;
&lt;p&gt;将在应用 Pod 清单时导致 &lt;code&gt;SignatureValidationFailed&lt;/code&gt; 错误：&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-console&#34; data-lang=&#34;console&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#000080;font-weight:bold&#34;&gt;&amp;gt;&lt;/span&gt; kubectl apply -f pod.yaml
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;pod/pod created
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-console&#34; data-lang=&#34;console&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#000080;font-weight:bold&#34;&gt;&amp;gt;&lt;/span&gt; kubectl get pods
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;NAME   READY   STATUS                      RESTARTS   AGE
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;pod    0/1     SignatureValidationFailed   0          4s
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-console&#34; data-lang=&#34;console&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#000080;font-weight:bold&#34;&gt;&amp;gt;&lt;/span&gt; kubectl describe pod pod | tail -n8
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;  Type     Reason     Age                From               Message
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;  ----     ------     ----               ----               -------
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;  Normal   Scheduled  58s                default-scheduler  Successfully assigned default/pod to 127.0.0.1
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;  Normal   BackOff    22s (x2 over 55s)  kubelet            Back-off pulling image &amp;#34;quay.io/crio/unsigned&amp;#34;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;  Warning  Failed     22s (x2 over 55s)  kubelet            Error: ImagePullBackOff
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;  Normal   Pulling    9s (x3 over 58s)   kubelet            Pulling image &amp;#34;quay.io/crio/unsigned&amp;#34;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;  Warning  Failed     6s (x3 over 55s)   kubelet            Failed to pull image &amp;#34;quay.io/crio/unsigned&amp;#34;: SignatureValidationFailed: Source image rejected: A signature was required, but no signature exists
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;  Warning  Failed     6s (x3 over 55s)   kubelet            Error: SignatureValidationFailed
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
This overall behavior provides a more Kubernetes native experience and does not
rely on third party software to be installed in the cluster.
--&gt;
&lt;p&gt;这种整体行为提供了更贴近 Kubernetes 原生的体验，并且不依赖在集群中安装第三方软件。&lt;/p&gt;
&lt;!--
There are still a few corner cases to consider: For example, what if you want to
allow policies per namespace in the same way the policy-controller supports it?
Well, there is an upcoming CRI-O feature in v1.28 for that! CRI-O will support
the `--signature-policy-dir` / `signature_policy_dir` option, which defines the
root path for pod namespace-separated signature policies. This means that CRI-O
will lookup that path and assemble a policy like `&lt;SIGNATURE_POLICY_DIR&gt;/&lt;NAMESPACE&gt;.json`,
which will be used on image pull if existing. If no pod namespace is
provided on image pull ([via the sandbox config][sandbox-config]), or the
concatenated path is non-existent, then CRI-O&#39;s global policy will be used as
fallback.
--&gt;
&lt;p&gt;还有一些特殊情况需要考虑：例如，如果你希望像策略控制器（policy-controller）那样支持按命名空间设置策略，该怎么办？
好消息是，CRI-O 在 v1.28 版本中即将推出这一特性！CRI-O 将支持 &lt;code&gt;--signature-policy-dir&lt;/code&gt; /
&lt;code&gt;signature_policy_dir&lt;/code&gt; 选项，用来定义按 Pod 命名空间隔离的签名策略的根路径。
这意味着 CRI-O 会在该路径下组装出形如 &lt;code&gt;&amp;lt;SIGNATURE_POLICY_DIR&amp;gt;/&amp;lt;NAMESPACE&amp;gt;.json&lt;/code&gt;
的策略文件路径，如果该文件存在，则在镜像拉取时使用它。如果在镜像拉取时（&lt;a href=&#34;https://github.com/kubernetes/cri-api/blob/e5515a5/pkg/apis/runtime/v1/api.proto#L1448&#34;&gt;通过沙盒配置&lt;/a&gt;）
未提供 Pod 命名空间，或者拼接出的路径不存在，则 CRI-O 会回退使用全局策略。&lt;/p&gt;
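&lt;p&gt;上述按命名空间查找策略并回退到全局策略的逻辑，可以用一个简短的 shell 草图来示意（仅为根据上文描述的示意性草图，其中的目录路径与命名空间名称均为假设值，实际行为以 CRI-O 的实现为准）：&lt;/p&gt;

```shell
# 示意：CRI-O 按命名空间查找签名策略并回退到全局策略的逻辑（路径与命名空间均为假设值）
SIGNATURE_POLICY_DIR=/etc/crio/policies   # 对应假设的 --signature-policy-dir 配置
NAMESPACE=team-a                          # 镜像拉取时通过沙盒配置提供的 Pod 命名空间
candidate="$SIGNATURE_POLICY_DIR/$NAMESPACE.json"
if [ -f "$candidate" ]; then
  policy="$candidate"                     # 存在命名空间专属策略时使用它
else
  policy=/etc/containers/policy.json     # 否则回退到全局策略
fi
echo "$policy"
```

&lt;p&gt;在典型节点上，只有为相应命名空间放置了策略文件时，才会命中命名空间专属策略。&lt;/p&gt;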
&lt;!--
Another corner case to consider is critical for the correct signature
verification within container runtimes: The kubelet only invokes container image
pulls if the image does not already exist on disk. This means that an
unrestricted policy from Kubernetes namespace A can allow pulling an image,
while namespace B is not able to enforce the policy because it already exists on
the node. Finally, CRI-O has to verify the policy not only on image pull, but
also on container creation. This fact makes things even a bit more complicated,
because the CRI does not really pass down the user specified image reference on
container creation, but an already resolved image ID, or digest. A [small
change to the CRI][pr-118652] can help with that.
--&gt;
&lt;p&gt;另一个需要考虑的特殊情况对于容器运行时中正确的签名验证至关重要：kubelet
仅在磁盘上不存在镜像时才调用容器镜像拉取。这意味着来自 Kubernetes 命名空间 A
的不受限策略可以允许拉取一个镜像，而命名空间 B 则无法强制执行该策略，
因为它已经存在于节点上了。最后，CRI-O 必须在容器创建时验证策略，而不仅仅是在镜像拉取时。
这一事实使情况变得更加复杂，因为 CRI 在容器创建时并没有真正传递用户指定的镜像引用，
而是传递已经解析过的镜像 ID 或摘要。&lt;a href=&#34;https://github.com/kubernetes/kubernetes/pull/118652&#34;&gt;对 CRI 进行小改动&lt;/a&gt; 有助于解决这个问题。&lt;/p&gt;
&lt;!--
Now that everything happens within the container runtime, someone has to
maintain and define the policies to provide a good user experience around that
feature. The CRDs of the policy-controller are great, while we could imagine that
a daemon within the cluster can write the policies for CRI-O per namespace. This
would make any additional hook obsolete and moves the responsibility of
verifying the image signature to the actual instance which pulls the image. [I
evaluated][thread] other possible paths toward a better container image
signature verification within plain Kubernetes, but I could not find a great fit
for a native API. This means that I believe that a CRD is the way to go, but
users still need an instance which actually serves it.
--&gt;
&lt;p&gt;现在一切都发生在容器运行时中，必须有人来维护和定义这些策略，以便围绕该特性提供良好的用户体验。
策略控制器的 CRD 非常出色，我们可以设想由集群中的一个守护进程按命名空间为 CRI-O 编写策略。
这将使任何额外的挂钩机制变得多余，并把验证镜像签名的责任移交给实际拉取镜像的实例。
我评估了在纯 Kubernetes 中实现更好的容器镜像签名验证的其他可能途径，
但没有找到适合原生 API 的好方案。这意味着我认为 CRD 是正确的方向，
但用户仍然需要一个实际提供该 CRD 服务的实例。&lt;/p&gt;
&lt;!--
Thank you for reading this blog post! If you&#39;re interested in more, providing
feedback or asking for help, then feel free to get in touch with me directly via
[Slack (#crio)][slack] or the [SIG Node mailing list][mail].
--&gt;
&lt;p&gt;感谢阅读这篇博文！如果你对更多内容感兴趣，想提供反馈或寻求帮助，请随时通过
&lt;a href=&#34;https://kubernetes.slack.com/messages/crio&#34;&gt;Slack (#crio)&lt;/a&gt; 或 &lt;a href=&#34;https://groups.google.com/forum/#!forum/kubernetes-sig-node&#34;&gt;SIG Node 邮件列表&lt;/a&gt;直接联系我。&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>dl.k8s.io 采用内容分发网络（CDN）</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/06/09/dl-adopt-cdn/</link>
      <pubDate>Fri, 09 Jun 2023 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/06/09/dl-adopt-cdn/</guid>
      <description>
        
        
        &lt;!--
layout: blog
title: &#34;dl.k8s.io to adopt a Content Delivery Network&#34;
date: 2023-06-09
slug: dl-adopt-cdn
--&gt;
&lt;!--
**Authors**: Arnaud Meukam (VMware), Hannah Aubry (Fast Forward), Frederico
Muñoz (SAS Institute)
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;：Arnaud Meukam (VMware), Hannah Aubry (Fast Forward), Frederico Muñoz (SAS Institute)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者&lt;/strong&gt;：&lt;a href=&#34;https://github.com/my-git9&#34;&gt;Xin Li&lt;/a&gt; (DaoCloud)&lt;/p&gt;
&lt;!--
We&#39;re happy to announce that dl.k8s.io, home of the official Kubernetes
binaries, will soon be powered by [Fastly](https://www.fastly.com).

Fastly is known for its high-performance content delivery network (CDN) designed
to deliver content quickly and reliably around the world. With its powerful
network, Fastly will help us deliver official Kubernetes binaries to users
faster and more reliably than ever before.
--&gt;
&lt;p&gt;我们很高兴地宣布，官方 Kubernetes 二进制文件的主页 dl.k8s.io 很快将由
&lt;a href=&#34;https://www.fastly.com&#34;&gt;Fastly&lt;/a&gt; 提供支持。&lt;/p&gt;
&lt;p&gt;Fastly 以其高性能内容分发网络（CDN）而闻名，该网络旨在全球范围内快速、可靠地分发内容。
凭借其强大的网络，Fastly 将帮助我们比以往更快、更可靠地向用户分发官方
Kubernetes 二进制文件。&lt;/p&gt;
&lt;!--
The decision to use Fastly was made after an extensive evaluation process in
which we carefully evaluated several potential content delivery network
providers. Ultimately, we chose Fastly because of their commitment to the open
internet and proven track record of delivering fast and secure digital
experiences to some of the most known open source projects (through their [Fast
Forward](https://www.fastly.com/fast-forward) program).
--&gt;
&lt;p&gt;使用 Fastly 是在经过广泛的评估过程后做出的决定，
在该过程中我们仔细评估了几个潜在的内容分发网络提供商。最终，我们选择
Fastly 是因为他们对开放互联网的承诺以及在为一些著名的开源项目（通过他们的
&lt;a href=&#34;https://www.fastly.com/fast-forward&#34;&gt;Fast Forward&lt;/a&gt;
计划）提供快速和安全的数字体验方面的良好记录。&lt;/p&gt;
&lt;!--
## What you need to know about this change

- On Monday, July 24th, the IP addresses and backend storage associated with the
  dl.k8s.io domain name will change.
- The change will not impact the vast majority of users since the domain
  name will remain the same.
- If you restrict access to specific IP ranges, access to the dl.k8s.io domain
  could stop working.
--&gt;
&lt;h2 id=&#34;关于本次更改你需要了解的信息&#34;&gt;关于本次更改你需要了解的信息&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;7 月 24 日星期一，与 dl.k8s.io 域名关联的 IP 地址和后端存储将发生变化。&lt;/li&gt;
&lt;li&gt;由于域名将保持不变，因此更改不会影响绝大多数用户。&lt;/li&gt;
&lt;li&gt;如果你限制对特定 IP 范围的访问，则对 dl.k8s.io 域的访问可能会停止工作。&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
If you think you may be impacted or want to know more about this change,
please keep reading.
--&gt;
&lt;p&gt;如果你认为你可能会受到影响或想了解有关此次更改的更多信息，请继续阅读。&lt;/p&gt;
&lt;!--
## Why are we making this change

The official Kubernetes binaries site, dl.k8s.io, is used by thousands of users
all over the world, and currently serves _more than 5 petabytes of binaries each
month_. This change will allow us to improve access to those resources by
leveraging a world-wide CDN.
--&gt;
&lt;h2 id=&#34;我们为什么要进行此更改&#34;&gt;我们为什么要进行此更改&lt;/h2&gt;
&lt;p&gt;官方 Kubernetes 二进制文件网站 dl.k8s.io 被全世界成千上万的用户使用，
目前&lt;strong&gt;每月提供超过 5 PB 的二进制文件服务&lt;/strong&gt;。本次更改将通过充分利用全球
CDN 来改善对这些资源的访问。&lt;/p&gt;
&lt;!--
## Does this affect dl.k8s.io only, or are other domains also affected?

Only dl.k8s.io will be affected by this change.
--&gt;
&lt;h2 id=&#34;这只影响-dl-k8s-io-还是其他域也受到影响&#34;&gt;这只影响 dl.k8s.io，还是其他域也受到影响？&lt;/h2&gt;
&lt;p&gt;只有 dl.k8s.io 会受到本次变更的影响。&lt;/p&gt;
&lt;!--
## My company specifies the domain names that we are allowed to be accessed. Will this change affect the domain name?

No, the domain name (`dl.k8s.io`) will remain the same: no change will be
necessary, and access to the Kubernetes release binaries site should not be
affected.
--&gt;
&lt;h2 id=&#34;我公司规定了允许我们访问的域名-此更改会影响域名吗&#34;&gt;我公司规定了允许我们访问的域名，此更改会影响域名吗？&lt;/h2&gt;
&lt;p&gt;不，域名（&lt;code&gt;dl.k8s.io&lt;/code&gt;）将保持不变：无需更改，不会影响对 Kubernetes
发布二进制文件站点的访问。&lt;/p&gt;
&lt;!--
## My company uses some form of IP filtering. Will this change affect access to the site?

If IP-based filtering is in place, it’s possible that access to the site will be
affected when the new IP addresses become active.
--&gt;
&lt;h2 id=&#34;我的公司使用某种形式的-ip-过滤-此更改会影响对站点的访问吗&#34;&gt;我的公司使用某种形式的 IP 过滤，此更改会影响对站点的访问吗？&lt;/h2&gt;
&lt;p&gt;如果已经存在基于 IP 的过滤，则当新 IP 地址启用时，对该站点的访问可能会受到影响。&lt;/p&gt;
&lt;!--
## If my company doesn’t use IP addresses to restrict network traffic, do we need to do anything?

No, the switch to the CDN should be transparent.
--&gt;
&lt;h2 id=&#34;如果我的公司不使用-ip-地址来限制网络流量-我们需要做些什么吗&#34;&gt;如果我的公司不使用 IP 地址来限制网络流量，我们需要做些什么吗？&lt;/h2&gt;
&lt;p&gt;不，切换到 CDN 的过程应该是透明的。&lt;/p&gt;
&lt;!--
## Will there be a dual running period?

**No, it is a cutover.** You can, however, test your networks right now to check
if they can route to the new public IP addresses from Fastly.  You should add
the new IPs to your network&#39;s `allowlist` before July 24th. Once the transfer is
complete, ensure your networks use the new IP addresses to connect to
the `dl.k8s.io` service.
--&gt;
&lt;h2 id=&#34;会有双运行期吗&#34;&gt;会有双运行期吗？&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;不，这是一次性切换。&lt;/strong&gt; 但是，你现在就可以测试你的网络，检查它们是否可以路由到
Fastly 的新公共 IP 地址。你应该在 7 月 24 日之前将新 IP 添加到你网络的 &lt;code&gt;allowlist&lt;/code&gt;（白名单）中。
切换完成后，确保你的网络使用新的 IP 地址连接到 &lt;code&gt;dl.k8s.io&lt;/code&gt; 服务。&lt;/p&gt;
&lt;!--
## What are the new IP addresses?

If you need to manage an allow list for downloads, you can get the ranges to
match from the Fastly API, in JSON: [public IP address
ranges](https://api.fastly.com/public-ip-list).  You don&#39;t need any credentials
to download that list of ranges.
--&gt;
&lt;h2 id=&#34;新-ip-地址是什么&#34;&gt;新 IP 地址是什么？&lt;/h2&gt;
&lt;p&gt;如果你需要为下载管理允许列表，你可以从 Fastly API 以 JSON 格式获取需要匹配的范围：
&lt;a href=&#34;https://api.fastly.com/public-ip-list&#34;&gt;公共 IP 地址范围&lt;/a&gt;。&lt;/p&gt;
&lt;p&gt;下载该范围列表不需要任何凭据。&lt;/p&gt;
&lt;!--
## What next steps would you recommend?

If you have IP-based filtering in place, we recommend the following course of
action **before July, 24th**:
--&gt;
&lt;h2 id=&#34;推荐哪些后续操作&#34;&gt;推荐哪些后续操作？&lt;/h2&gt;
&lt;p&gt;如果你已经有了基于 IP 的过滤，我们建议你&lt;strong&gt;在 7 月 24 日之前&lt;/strong&gt;采取以下行动：&lt;/p&gt;
&lt;!--
- Add the new IP addresses to your allowlist.
- Conduct tests with your networks/firewall to ensure your networks can route to
  the new IP addresses.

After the change is made, we recommend double-checking that HTTP calls are
accessing dl.k8s.io with the new IP addresses.
--&gt;
&lt;ul&gt;
&lt;li&gt;将新的 IP 地址添加到你的白名单。&lt;/li&gt;
&lt;li&gt;对你的网络/防火墙进行测试，以确保你的网络可以路由到新的 IP 地址。&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;进行更改后，我们建议仔细检查 HTTP 调用是否正在使用新 IP 地址访问 dl.k8s.io。&lt;/p&gt;
&lt;!--
## What should I do if I detect some abnormality after the cutover date?

If you encounter any weirdness during binaries download, please [open an
issue](https://github.com/kubernetes/k8s.io/issues/new/choose).
--&gt;
&lt;h2 id=&#34;切换后发现异常怎么办&#34;&gt;切换后发现异常怎么办？&lt;/h2&gt;
&lt;p&gt;如果你在二进制文件下载过程中遇到任何异常，
请&lt;a href=&#34;https://github.com/kubernetes/k8s.io/issues/new/choose&#34;&gt;提交 Issue&lt;/a&gt;。&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>使用 OCI 工件为 seccomp、SELinux 和 AppArmor 分发安全配置文件</title>
      <link>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/05/24/oci-security-profiles/</link>
      <pubDate>Wed, 24 May 2023 00:00:00 +0000</pubDate>
      
      <guid>https://deploy-preview-55344--kubernetes-io-main-staging.netlify.app/zh-cn/blog/2023/05/24/oci-security-profiles/</guid>
      <description>
        
        
        &lt;!--
layout: blog
title: &#34;Using OCI artifacts to distribute security profiles for seccomp, SELinux and AppArmor&#34;
date: 2023-05-24
slug: oci-security-profiles
--&gt;
&lt;!--
**Author**: Sascha Grunert
--&gt;
&lt;p&gt;&lt;strong&gt;作者&lt;/strong&gt;: Sascha Grunert&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;译者&lt;/strong&gt;: &lt;a href=&#34;https://github.com/windsonsea&#34;&gt;Michael Yao&lt;/a&gt; (DaoCloud)&lt;/p&gt;
&lt;!--
The [Security Profiles Operator (SPO)][spo] makes managing seccomp, SELinux and
AppArmor profiles within Kubernetes easier than ever. It allows cluster
administrators to define the profiles in a predefined custom resource YAML,
which then gets distributed by the SPO into the whole cluster. Modification and
removal of the security profiles are managed by the operator in the same way,
but that’s a small subset of its capabilities.
--&gt;
&lt;p&gt;&lt;a href=&#34;https://github.com/kubernetes-sigs/security-profiles-operator&#34;&gt;Security Profiles Operator (SPO)&lt;/a&gt; 使得在 Kubernetes 中管理
seccomp、SELinux 和 AppArmor 配置文件变得更加容易。
它允许集群管理员在预定义的自定义资源 YAML 中定义配置文件，然后由 SPO 分发到整个集群中。
安全配置文件的修改和移除也由 Operator 以同样的方式进行管理，但这只是其能力的一小部分。&lt;/p&gt;
&lt;!--
Another core feature of the SPO is being able to stack seccomp profiles. This
means that users can define a `baseProfileName` in the YAML specification, which
then gets automatically resolved by the operator and combines the syscall rules.
If a base profile has another `baseProfileName`, then the operator will
recursively resolve the profiles up to a certain depth. A common use case is to
define base profiles for low level container runtimes (like [runc][runc] or
[crun][crun]) which then contain syscalls which are required in any case to run
the container. Alternatively, application developers can define seccomp base
profiles for their standard distribution containers and stack dedicated profiles
for the application logic on top. This way developers can focus on maintaining
seccomp profiles which are way simpler and scoped to the application logic,
without having a need to take the whole infrastructure setup into account.
--&gt;
&lt;p&gt;SPO 的另一个核心特性是能够组合 seccomp 配置文件。这意味着用户可以在 YAML
规约中定义 &lt;code&gt;baseProfileName&lt;/code&gt;，然后 Operator 会自动解析并组合系统调用规则。
如果基本配置文件有另一个 &lt;code&gt;baseProfileName&lt;/code&gt;，那么 Operator 将以递归方式解析配置文件到一定深度。
常见的使用场景是为低级容器运行时（例如 &lt;a href=&#34;https://github.com/opencontainers/runc&#34;&gt;runc&lt;/a&gt; 或 &lt;a href=&#34;https://github.com/containers/crun&#34;&gt;crun&lt;/a&gt;）定义基本配置文件，
在这些配置文件中包含各种情况下运行容器所需的系统调用。另外，应用开发人员可以为其标准分发容器定义
seccomp 基本配置文件，并在其上组合针对应用逻辑的专用配置文件。
这样开发人员就可以专注于维护更简单且范围限制为应用逻辑的 seccomp 配置文件，
而不需要考虑整个基础设施的设置。&lt;/p&gt;
&lt;!--
But how to maintain those base profiles? For example, the amount of required
syscalls for a runtime can change over its release cycle in the same way it can
change for the main application. Base profiles have to be available in the same
cluster, otherwise the main seccomp profile will fail to deploy. This means that
they’re tightly coupled to the main application profiles, which acts against the
main idea of base profiles. Distributing and managing them as plain files feels
like an additional burden to solve.
--&gt;
&lt;p&gt;但是如何维护这些基本配置文件呢？
例如，运行时所需的系统调用数量可能会像主应用一样在其发布周期内发生变化。
基本配置文件必须在同一集群中可用，否则主 seccomp 配置文件将无法部署。
这意味着这些基本配置文件与主应用配置文件紧密耦合，因此违背了基本配置文件的核心理念。
将基本配置文件作为普通文件分发和管理感觉像是需要解决的额外负担。&lt;/p&gt;
&lt;!--
## OCI artifacts to the rescue

The [v0.8.0][spo-latest] release of the Security Profiles Operator supports
managing base profiles as OCI artifacts! Imagine OCI artifacts as lightweight
container images, storing files in layers in the same way images do, but without
a process to be executed. Those artifacts can be used to store security profiles
like regular container images in compatible registries. This means they can be
versioned, namespaced and annotated similar to regular container images.
--&gt;
&lt;h2 id=&#34;oci-artifacts-to-rescue&#34;&gt;OCI 工件成为救命良方&lt;/h2&gt;
&lt;p&gt;Security Profiles Operator 的 &lt;a href=&#34;https://github.com/kubernetes-sigs/security-profiles-operator/releases/v0.8.0&#34;&gt;v0.8.0&lt;/a&gt; 版本支持将基本配置文件作为
OCI 工件进行管理！可以将 OCI 工件想象为轻量级容器镜像：它们采用与镜像相同的方式在各层中存储文件，
但没有要执行的进程。这些工件可以像普通容器镜像一样存储在兼容的镜像仓库中，用来存放安全配置文件。
这意味着它们可以像常规容器镜像一样被版本化、划分命名空间和添加注解。&lt;/p&gt;
&lt;!--
To see how that works in action, specify a `baseProfileName` prefixed with
`oci://` within a seccomp profile CRD, for example:
--&gt;
&lt;p&gt;若要查看具体的工作方式，可以在 seccomp 配置文件 CRD 内以前缀 &lt;code&gt;oci://&lt;/code&gt;
指定 &lt;code&gt;baseProfileName&lt;/code&gt;，例如：&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;apiVersion&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;security-profiles-operator.x-k8s.io/v1beta1&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;kind&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;SeccompProfile&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;metadata&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;name&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;test&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;spec&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;defaultAction&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;SCMP_ACT_ERRNO&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;baseProfileName&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;oci://ghcr.io/security-profiles/runc:v1.1.5&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;syscalls&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;action&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt; &lt;/span&gt;SCMP_ACT_ALLOW&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#008000;font-weight:bold&#34;&gt;names&lt;/span&gt;:&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#bbb&#34;&gt;        &lt;/span&gt;- uname&lt;span style=&#34;color:#bbb&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
The operator will take care of pulling the content by using [oras][oras], as
well as verifying the [sigstore (cosign)][cosign] signatures of the artifact. If
the artifacts are not signed, then the SPO will reject them. The resulting
profile `test` will then contain all base syscalls from the remote `runc`
profile plus the additional allowed `uname` one. It is also possible to
reference the base profile by its digest (SHA256) making the artifact to be
pulled more specific, for example by referencing
`oci://ghcr.io/security-profiles/runc@sha256:380…`.
--&gt;
&lt;p&gt;Operator 将负责使用 &lt;a href=&#34;https://oras.land&#34;&gt;oras&lt;/a&gt; 拉取内容，并验证工件的 &lt;a href=&#34;https://github.com/sigstore/cosign&#34;&gt;sigstore (cosign)&lt;/a&gt; 签名。
如果某些工件未经签名，则 SPO 将拒绝它们。随后生成的配置文件 &lt;code&gt;test&lt;/code&gt; 将包含来自远程
&lt;code&gt;runc&lt;/code&gt; 配置文件的所有基本系统调用加上额外允许的 &lt;code&gt;uname&lt;/code&gt; 系统调用。
你还可以通过摘要（SHA256）来引用基本配置文件，使要拉取的工件更加确定，
例如通过引用 &lt;code&gt;oci://ghcr.io/security-profiles/runc@sha256:380…&lt;/code&gt;。&lt;/p&gt;
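&lt;p&gt;按摘要引用时，YAML 写法与按标签引用时相同，只是将标签替换为摘要（下面的 &lt;code&gt;REPLACE_WITH_REAL_DIGEST&lt;/code&gt; 是假设性占位符，使用时需替换为真实的 SHA256 摘要值）：&lt;/p&gt;

```yaml
apiVersion: security-profiles-operator.x-k8s.io/v1beta1
kind: SeccompProfile
metadata:
  name: test
spec:
  defaultAction: SCMP_ACT_ERRNO
  # 按摘要（而非标签）固定基本配置文件；占位符仅作演示
  baseProfileName: oci://ghcr.io/security-profiles/runc@sha256:REPLACE_WITH_REAL_DIGEST
  syscalls:
    - action: SCMP_ACT_ALLOW
      names:
        - uname
```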
&lt;!--
The operator internally caches pulled artifacts up to 24 hours for 1000
profiles, meaning that they will be refreshed after that time period, if the
cache is full or the operator daemon gets restarted.
--&gt;
&lt;p&gt;Operator 在内部缓存已拉取的工件，最多缓存 1000 个配置文件、每个最长 24 小时，
这意味着在超过该时段、缓存已满或 Operator 守护进程重启后，这些工件将被刷新。&lt;/p&gt;
&lt;!--
Because the overall resulting syscalls are hidden from the user (I only have the
`baseProfileName` listed in the SeccompProfile, and not the syscalls themselves), I&#39;ll additionally
annotate that SeccompProfile with the final `syscalls`.

Here&#39;s how the SeccompProfile looks after I annotate it:
--&gt;
&lt;p&gt;因为总体生成的系统调用对用户不可见
（我在 SeccompProfile 中只列出了 &lt;code&gt;baseProfileName&lt;/code&gt;，而没有列出系统调用本身），
所以我额外为该 SeccompProfile 添加了包含最终 &lt;code&gt;syscalls&lt;/code&gt; 的注解。&lt;/p&gt;
&lt;p&gt;以下是我添加注解后的 SeccompProfile：&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-console&#34; data-lang=&#34;console&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#000080;font-weight:bold&#34;&gt;&amp;gt;&lt;/span&gt; kubectl describe seccompprofile &lt;span style=&#34;color:#a2f&#34;&gt;test&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;Name:         test
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;Namespace:    security-profiles-operator
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;Labels:       spo.x-k8s.io/profile-id=SeccompProfile-test
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;Annotations:  syscalls:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;                [{&amp;#34;names&amp;#34;:[&amp;#34;arch_prctl&amp;#34;,&amp;#34;brk&amp;#34;,&amp;#34;capget&amp;#34;,&amp;#34;capset&amp;#34;,&amp;#34;chdir&amp;#34;,&amp;#34;clone&amp;#34;,&amp;#34;close&amp;#34;,...
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;API Version:  security-profiles-operator.x-k8s.io/v1beta1
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
The SPO maintainers provide all public base profiles as part of the [“Security
Profiles” GitHub organization][org].
--&gt;
&lt;p&gt;SPO 维护者们在 &lt;a href=&#34;https://github.com/orgs/security-profiles/packages&#34;&gt;“Security Profiles” GitHub 组织&lt;/a&gt;中提供了所有公开的基本配置文件。&lt;/p&gt;
&lt;!--
## Managing OCI security profiles

Alright, now the official SPO provides a bunch of base profiles, but how can I
define my own? Well, first of all we have to choose a working registry. There
are a bunch of registries that already supports OCI artifacts:
--&gt;
&lt;h2 id=&#34;managing-oci-security-profiles&#34;&gt;管理 OCI 安全配置文件&lt;/h2&gt;
&lt;p&gt;好的，官方的 SPO 提供了许多基本配置文件，但是我如何定义自己的配置文件呢？
首先，我们必须选择一个可用的镜像仓库。有许多镜像仓库都已支持 OCI 工件：&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/distribution/distribution&#34;&gt;CNCF Distribution&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://aka.ms/acr&#34;&gt;Azure Container Registry&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://aws.amazon.com/ecr&#34;&gt;Amazon Elastic Container Registry&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://cloud.google.com/artifact-registry&#34;&gt;Google Artifact Registry&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://docs.github.com/en/packages/guides/about-github-container-registry&#34;&gt;GitHub Packages container registry&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://hub.docker.com&#34;&gt;Docker Hub&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://zotregistry.io&#34;&gt;Zot Registry&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;!--
The Security Profiles Operator ships a new command line interface called `spoc`,
which is a little helper tool for managing OCI profiles among doing various other
things which are out of scope of this blog post. But, the command `spoc push`
can be used to push a security profile to a registry:
--&gt;
&lt;p&gt;Security Profiles Operator 随附了一个名为 &lt;code&gt;spoc&lt;/code&gt; 的新命令行工具，
它是用于管理 OCI 配置文件的小型辅助工具，此外还能完成其他一些超出本文讨论范围的工作。
而 &lt;code&gt;spoc push&lt;/code&gt; 命令可以用于将安全配置文件推送到镜像仓库：&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-console&#34; data-lang=&#34;console&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#000080;font-weight:bold&#34;&gt;&amp;gt;&lt;/span&gt; &lt;span style=&#34;color:#a2f&#34;&gt;export&lt;/span&gt; &lt;span style=&#34;color:#b8860b&#34;&gt;USERNAME&lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;=&lt;/span&gt;my-user
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#000080;font-weight:bold&#34;&gt;&amp;gt;&lt;/span&gt; &lt;span style=&#34;color:#a2f&#34;&gt;export&lt;/span&gt; &lt;span style=&#34;color:#b8860b&#34;&gt;PASSWORD&lt;/span&gt;&lt;span style=&#34;color:#666&#34;&gt;=&lt;/span&gt;my-pass
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#000080;font-weight:bold&#34;&gt;&amp;gt;&lt;/span&gt; spoc push -f ./examples/baseprofile-crun.yaml ghcr.io/security-profiles/crun:v1.8.3
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;16:35:43.899886 Pushing profile ./examples/baseprofile-crun.yaml to: ghcr.io/security-profiles/crun:v1.8.3
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;16:35:43.899939 Creating file store in: /tmp/push-3618165827
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;16:35:43.899947 Adding profile to store: ./examples/baseprofile-crun.yaml
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;16:35:43.900061 Packing files
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;16:35:43.900282 Verifying reference: ghcr.io/security-profiles/crun:v1.8.3
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;16:35:43.900310 Using tag: v1.8.3
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;16:35:43.900313 Creating repository for ghcr.io/security-profiles/crun
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;16:35:43.900319 Using username and password
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;16:35:43.900321 Copying profile to repository
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;16:35:46.976108 Signing container image
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;Generating ephemeral keys...
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;Retrieving signed certificate...
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;&lt;/span&gt;&lt;span style=&#34;&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#888&#34;&gt;        Note that there may be personally identifiable information associated with this signed artifact.
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;        This may include the email address associated with the account with which you authenticate.
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;        This information will be used for signing this artifact and will be stored in public transparency logs and cannot be removed later.
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;&lt;/span&gt;&lt;span style=&#34;&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#888&#34;&gt;By typing &amp;#39;y&amp;#39;, you attest that you grant (or have permission to grant) and agree to have this information stored permanently in transparency logs.
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;Your browser will now be opened to:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;https://oauth2.sigstore.dev/auth/auth?access_type=…
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;Successfully verified SCT...
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;tlog entry created with index: 16520520
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;Pushing signature to: ghcr.io/security-profiles/crun
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
You can see that the tool automatically signs the artifact and pushes the
`./examples/baseprofile-crun.yaml` to the registry, which is then directly ready
for usage within the SPO. If username and password authentication is required,
either use the `--username`, `-u` flag or export the `USERNAME` environment
variable. To set the password, export the `PASSWORD` environment variable.
--&gt;
&lt;p&gt;你可以看到该工具自动签署工件并将 &lt;code&gt;./examples/baseprofile-crun.yaml&lt;/code&gt; 推送到镜像仓库中，
随后即可直接在 SPO 中使用。如果需要用户名和密码身份认证，可以使用 &lt;code&gt;--username&lt;/code&gt; 或
&lt;code&gt;-u&lt;/code&gt; 标志，或导出 &lt;code&gt;USERNAME&lt;/code&gt; 环境变量。要设置密码，可以导出 &lt;code&gt;PASSWORD&lt;/code&gt; 环境变量。&lt;/p&gt;
&lt;!--
It is possible to add custom annotations to the security profile by using the
`--annotations` / `-a` flag multiple times in `KEY:VALUE` format. Those have no
effect for now, but at some later point additional features of the operator may
rely them.

The `spoc` client is also able to pull security profiles from OCI artifact
compatible registries. To do that, just run `spoc pull`:
--&gt;
&lt;p&gt;采用 &lt;code&gt;KEY:VALUE&lt;/code&gt; 的格式多次使用 &lt;code&gt;--annotations&lt;/code&gt; / &lt;code&gt;-a&lt;/code&gt; 标志，
可以为安全配置文件添加自定义注解。目前这些对安全配置文件没有影响，
但是在后续某个阶段，Operator 的其他特性可能会依赖于它们。&lt;/p&gt;
&lt;p&gt;&lt;code&gt;spoc&lt;/code&gt; 客户端还可以从兼容 OCI 工件的镜像仓库中拉取安全配置文件。
要执行此操作，只需运行 &lt;code&gt;spoc pull&lt;/code&gt;：&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-console&#34; data-lang=&#34;console&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#000080;font-weight:bold&#34;&gt;&amp;gt;&lt;/span&gt; spoc pull ghcr.io/security-profiles/runc:v1.1.5
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;16:32:29.795597 Pulling profile from: ghcr.io/security-profiles/runc:v1.1.5
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;16:32:29.795610 Verifying signature
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;&lt;/span&gt;&lt;span style=&#34;&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#888&#34;&gt;Verification for ghcr.io/security-profiles/runc:v1.1.5 --
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;The following checks were performed on each of these signatures:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;  - Existence of the claims in the transparency log was verified offline
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;  - The code-signing certificate was verified using trusted certificate authority certificates
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;&lt;/span&gt;&lt;span style=&#34;&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;&#34;&gt;&lt;/span&gt;&lt;span style=&#34;color:#888&#34;&gt;[{&amp;#34;critical&amp;#34;:{&amp;#34;identity&amp;#34;:{&amp;#34;docker-reference&amp;#34;:&amp;#34;ghcr.io/security-profiles/runc&amp;#34;},…}}]
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;16:32:33.208695 Creating file store in: /tmp/pull-3199397214
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;16:32:33.208713 Verifying reference: ghcr.io/security-profiles/runc:v1.1.5
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;16:32:33.208718 Creating repository for ghcr.io/security-profiles/runc
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;16:32:33.208742 Using tag: v1.1.5
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;16:32:33.208743 Copying profile from repository
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;16:32:34.119652 Reading profile
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;16:32:34.119677 Trying to unmarshal seccomp profile
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;16:32:34.120114 Got SeccompProfile: runc-v1.1.5
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#888&#34;&gt;16:32:34.120119 Saving profile in: /tmp/profile.yaml
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
The profile can be now found in `/tmp/profile.yaml` or the specified output file
`--output-file` / `-o`. We can specify an username and password in the same way
as for `spoc push`.

`spoc` makes it easy to manage security profiles as OCI artifacts, which can be
then consumed directly by the operator itself.
--&gt;
&lt;p&gt;现在可以在 &lt;code&gt;/tmp/profile.yaml&lt;/code&gt; 或 &lt;code&gt;--output-file&lt;/code&gt; / &lt;code&gt;-o&lt;/code&gt; 所指定的输出文件中找到该配置文件。
我们可以像 &lt;code&gt;spoc push&lt;/code&gt; 一样指定用户名和密码。&lt;/p&gt;
&lt;p&gt;&lt;code&gt;spoc&lt;/code&gt; 使得以 OCI 工件的形式管理安全配置文件变得非常容易，这些 OCI 工件可以由 Operator 本身直接使用。&lt;/p&gt;
&lt;!--
That was our compact journey through the latest possibilities of the Security
Profiles Operator! If you&#39;re interested in more, providing feedback or asking
for help, then feel free to get in touch with us directly via [Slack
(#security-profiles-operator)][slack] or [the mailing list][mail].
--&gt;
&lt;p&gt;本文简要介绍了通过 Security Profiles Operator 能够达成的各种最新可能性！
如果你有兴趣了解更多，无论是提出反馈还是寻求帮助，
请通过 &lt;a href=&#34;https://kubernetes.slack.com/messages/security-profiles-operator&#34;&gt;Slack (#security-profiles-operator)&lt;/a&gt; 或&lt;a href=&#34;https://groups.google.com/forum/#!forum/kubernetes-dev&#34;&gt;邮件列表&lt;/a&gt;直接与我们联系。&lt;/p&gt;

      </description>
    </item>
    
  </channel>
</rss>
