Kubernetes Provider Overview
This article describes how the Kubernetes provider works and how it differs from other providers in Spinnaker. If you’re unfamiliar with Kubernetes terminology, see the Kubernetes documentation .
The manifest-based approach
The Kubernetes provider combines the strengths of Kubernetes’s declarative infrastructure management with Spinnaker’s workflow engine for imperative steps when you need them. You can fully specify all your infrastructure in the native Kubernetes manifest format but still express, for example, a multi-region canary-driven rollout.
This is a significant departure from how deployments are managed in Spinnaker using other providers (including the legacy Kubernetes provider ). The rest of this doc explains the differences.
No restrictive naming policies
You can deploy existing manifests without rewriting them to adhere to Frigga . Resource relationships (for example between applications and clusters) are managed using Kubernetes annotations , and Spinnaker manages these using its Moniker library.
The policies and strategies are configurable per account. See Reserved Annotations for more details.
Accommodating level-based deployments
See the Kubernetes API conventions for a description of edge-based vs. level-based APIs.
Other providers in Spinnaker track operations that modify cloud resources. For example, if you run a resize operation, Spinnaker monitors that operation until the specified resize target is met. But because Kubernetes only tries to satisfy the desired state, and offers a level-based API for this purpose, the Kubernetes provider uses the concept of “manifest stability.”
A deployed manifest is considered stable when the Kubernetes controller-manager no longer needs to modify it and it’s deemed “ready.” This assessment is obviously different for different kinds of manifests: a Deployment is stable when its managed pods are updated, available, and ready (running your desired container and serving traffic). A Service is stable once it is created, unless it is of type LoadBalancer, in which case it is considered stable once the underlying load balancer has been created and bound to the Service.
This manifest stability is how Spinnaker ensures that operations have succeeded. Because there are a number of reasons why a manifest might never become stable (lack of CPU quota, failing readiness checks, no IP for a service to bind…), every stage that modifies or deploys a manifest waits until your affected manifests are stable, or times out after a configurable period (30-minute default).
Using externally stored manifests
You can store and version your manifest definitions in Git (or elsewhere outside of the Spinnaker pipeline store).
With Spinnaker’s Artifact mechanism, file modifications/creations are surfaced as artifacts in pipeline executions. For example, you can configure a pipeline that triggers either when…
- a new Docker image is uploaded, or
- your manifest file is changed in Git
Reserved annotations
Several annotations are used as metadata by Spinnaker to describe a resource. Annotations listed below that are followed by a 📝 symbol may also be written by Spinnaker.
You can always edit or apply annotations using the `kubectl annotate` command.
Moniker
moniker.spinnaker.io/application 📝
The application this resource belongs to.
This affects where the resource is accessible in the UI, and depending on your Spinnaker Authorization setup, can affect which users can read/write to this resource.
moniker.spinnaker.io/cluster 📝
The cluster this resource belongs to.
This is purely a logical grouping for rendering resources in the UI and to help with dynamic target selection in Pipeline stages. For example, some stages allow you to select “the newest workload in cluster X”. How you set up these groupings depends on your delivery needs.
moniker.spinnaker.io/stack 📝 and moniker.spinnaker.io/detail 📝
These simply provide ways to group resources using Spinnaker’s cluster filters, as well as to apply policies such as Traffic Guards.
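For example, a minimal metadata block that sets these annotations explicitly might look like the following sketch (the application, cluster, stack, and detail values are all hypothetical):

```yaml
# Illustrative example: Moniker annotations on a workload. Spinnaker groups this
# resource under the "myapp" application and the "myapp-prod" cluster for UI
# rendering and dynamic target selection; the names are placeholders.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-prod-v003
  annotations:
    moniker.spinnaker.io/application: myapp
    moniker.spinnaker.io/cluster: myapp-prod
    moniker.spinnaker.io/stack: prod
    moniker.spinnaker.io/detail: canary
# spec omitted for brevity
```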
Caching
caching.spinnaker.io/ignore
When set to 'true', tells Spinnaker to ignore this resource. The resource is not cached and does not show up in the Spinnaker UI.
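For instance, a resource managed entirely outside of Spinnaker can be hidden from the cache and the UI like this (the ConfigMap name and data are hypothetical):

```yaml
# Illustrative example: Spinnaker neither caches nor displays this resource.
apiVersion: v1
kind: ConfigMap
metadata:
  name: infra-only-config   # hypothetical name
  annotations:
    caching.spinnaker.io/ignore: 'true'
data:
  managed-by: some-other-tool
```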
Strategy
strategy.spinnaker.io/versioned
When set to 'true' or 'false', this overrides the resource’s default “version” behavior described in the resource management policies. This can be used to force a ConfigMap or Secret to be deployed without appending a new version when the contents change, for example.

strategy.spinnaker.io/use-source-capacity

When set to 'true' or 'false', this overrides the resource’s replica count with the currently deployed resource’s replica count. This is supported for Deployment, ReplicaSet, or StatefulSet. This can be used to allow resizing a resource in the Spinnaker UI or with kubectl without overriding the new size during subsequent manifest deployments.

strategy.spinnaker.io/max-version-history

When set to a non-negative integer, this configures how many versions of a resource to keep around. When more than max-version-history versions of a Kubernetes artifact exist, Spinnaker deletes all older versions. Resources are sorted by the metadata.creationTimestamp Kubernetes property rather than the version number.

Keep in mind that if you are trying to restrict how many copies of a ReplicaSet a Deployment is managing, that is configured by `spec.revisionHistoryLimit`. If instead Spinnaker is deploying ReplicaSets directly without a Deployment, this annotation does the job.

strategy.spinnaker.io/recreate

As of Spinnaker 1.13, you can force Spinnaker to delete a resource (if it already exists) before creating it again. This is useful for kinds such as `Job`, which cannot be edited once created or must be re-created to run again.

When set to 'true' for a versioned resource, this will only re-create your resource if no edits have been made since the last deployment (i.e., the same version of the resource is redeployed).

The default behavior is 'false'.

strategy.spinnaker.io/replace

As of Spinnaker 1.14, you can force Spinnaker to use replace instead of apply while deploying a Kubernetes resource. This may be useful for resources such as ConfigMap which may exceed the annotation size limit of 262144 characters.

When set to 'true' for a versioned resource, this will update your resources using replace. Refer to Kubernetes Object Management for more details on object configuration and trade-offs.

As of Spinnaker 1.35, deploy manifest stages support label selectors. However, label selectors don’t work with kubectl replace, so deploy manifest stages that specify label selectors to deploy resources with the replace strategy fail.

The default behavior is 'false'.

strategy.spinnaker.io/server-side-apply

As of Spinnaker 1.33, you can force Spinnaker to use server-side apply instead of the default client-side apply while deploying a Kubernetes resource. Server-side apply is a newer merging algorithm that calculates the final patch to update resources in the Kubernetes API server rather than in the client. This may be useful for a CustomResourceDefinition or ConfigMap which may exceed the annotation size limit and cannot tolerate the replace strategy. Additionally, it better identifies and handles conflicts during merges by analyzing the managedFields metadata instead of the last-applied-configuration annotation.

When set to 'true' for a resource, this will update your resources using server-side apply. Refer to Kubernetes Server Side Apply for more details.

When set to 'force-conflicts' for a resource, this will update your resources using server-side apply and Spinnaker becomes the sole manager. Refer to Conflicts for more details.

The server-side apply feature was introduced as beta in Kubernetes 1.18 and graduated to GA in Kubernetes 1.22.

The default behavior is 'false'.
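As a sketch of how these annotations are applied in practice, the manifests below pin a ConfigMap to unversioned, in-place deployment and let an externally resized Deployment keep its live replica count (all resource names and the container image are hypothetical):

```yaml
# Illustrative examples only; names are placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-settings
  annotations:
    strategy.spinnaker.io/versioned: 'false'    # edit in place, never append -vNNN
data:
  log-level: info
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  annotations:
    strategy.spinnaker.io/use-source-capacity: 'true'  # keep the currently deployed replica count
spec:
  replicas: 1               # ignored on redeploy if the Deployment already exists
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.27   # placeholder image
```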
Traffic
traffic.spinnaker.io/load-balancers
As of Spinnaker 1.10, you can specify which load balancers (Services) a workload is attached to at deployment time. This will automatically set the required labels on the workload’s Pods to match those of the Services’ label selectors.
This annotation must be supplied as a list of <kind> <name> pairs, where kind and name refer to the load balancer in the same namespace as the resource. For example:
- traffic.spinnaker.io/load-balancers: '["service my-service"]' attaches to the Service named my-service.
- traffic.spinnaker.io/load-balancers: '["service my-service", "service my-canary-service"]' attaches to the Services named my-service and my-canary-service.
As of Spinnaker 1.14, instead of manually adding the traffic.spinnaker.io/load-balancers annotation, you can select which load balancers to associate with a workload from the Deploy (Manifest) stage. Spinnaker will then add the appropriate annotation for you.
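A minimal sketch of a workload that attaches itself to two Services at deploy time (all names are hypothetical):

```yaml
# Illustrative example: at deploy time Spinnaker adds the label selectors of
# "my-service" and "my-canary-service" to this ReplicaSet's pod template.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-canary
  annotations:
    traffic.spinnaker.io/load-balancers: '["service my-service", "service my-canary-service"]'
# selector and pod template omitted for brevity
```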
Reserved labels
In accordance with Kubernetes’ recommendations on common labels , Spinnaker applies the following labels as of release 1.9:
app.kubernetes.io/name
This is the name of the Spinnaker application this resource is deployed to, and matches the value of the moniker.spinnaker.io/application annotation described here.
app.kubernetes.io/managed-by
Always set to "spinnaker".

This labeling behavior can be disabled by setting the property kubernetes.v2.applyAppLabels: false in clouddriver-local.yml.
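For example, assuming a standard Clouddriver configuration layout, the flattened property above corresponds to this nested YAML in clouddriver-local.yml:

```yaml
# clouddriver-local.yml — disable the automatic app.kubernetes.io/name and
# app.kubernetes.io/managed-by labels (a sketch; adjust to your config layout).
kubernetes:
  v2:
    applyAppLabels: false
```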
How Kubernetes resources are managed by Spinnaker
Resource mapping between Spinnaker and Kubernetes constructs, as well as the introduction of new types of resources, is a lot more flexible in the Kubernetes provider than in other providers, because of how many types of resources Kubernetes supports. In addition, the Kubernetes extension mechanism, Custom Resource Definitions (CRDs), makes it easy to build new types of resources, and Spinnaker accommodates this by making it simple to extend Spinnaker to support a user’s CRDs.
Terminology mapping
It is worth noting that the resource mapping exists primarily to render resources in the UI according to Spinnaker conventions. It does not affect how resources are deployed or managed.
There are three major groupings of resources in Spinnaker:
- server groups
- load balancers
- firewalls
These correspond to Kubernetes resource kinds as follows:
- Server Groups ≈ Workloads
- Load Balancers ≈ Services, Ingresses
- Firewalls ≈ NetworkPolicies
Resource management policies
How you manage the deployment and updates of a Kubernetes resource is dictated by its kind, via the policies that apply to a particular kind. Below are descriptions of these policies, followed by a mapping of kinds to policies.
Operations
There are several operations that can be implemented by each kind:
- Deploy: Can this resource be deployed and redeployed? It’s worth mentioning that all deployments are carried out using kubectl apply to capitalize on kubectl’s three-way merge on deploy. This is done so that other tools that rely on three-way-merge semantics can run against your cluster alongside Spinnaker.
- Delete: Can this resource be deleted?
- Scale: For workloads only, can this resource be scaled to a desired replica count?
- Undo Rollout: For workloads only, can this resource be rolled back/forward to an existing revision?
- Pause Rollout: For workloads only, when rolling out, can the rollout be stopped?
- Resume Rollout: For workloads only, when the rollout is paused, can it be started again?
Versioning
If a resource is “versioned”, it is always deployed with a new sequence number vNNN, unless no change has been made to it. This is important for resources like ConfigMaps and ReplicaSets, which don’t have their own built-in update policy like Deployments or StatefulSets do. Making an edit to the resource in place, rather than redeploying, can have unexpected results and can delete history. Regardless, whatever the policy is, it can be overridden during a deploy manifest stage. This policy can also be overridden per-manifest using the strategy.spinnaker.io/versioned annotation described here.
Stability
This describes under what conditions this kind is considered stable after a new spec has been submitted.
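To make versioning concrete, deploying the ConfigMap below for the first time creates my-config-v000; changing its data and redeploying creates my-config-v001 alongside it, and workloads deployed in the same pipeline that bind it as an artifact typically pick up the new name (the resource name and data are hypothetical):

```yaml
# Illustrative example: ConfigMaps are versioned by default, so Spinnaker appends
# a -vNNN suffix rather than editing the ConfigMap in place.
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config          # deployed as my-config-v000, then my-config-v001, ...
data:
  feature-flags: "canary=enabled"
```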
Workloads
Anything classified as a Spinnaker server group is rendered on the Clusters tab in Spinnaker. If possible, any pods owned by the workload are rendered as well.
Resource | Deploy | Delete | Scale | Undo Rollout | Pause Rollout | Resume Rollout | Versioned | Stability |
---|---|---|---|---|---|---|---|---|
DaemonSet | Yes | Yes | No | Yes | Yes | Yes | No | The status.currentNumberScheduled, status.updatedNumberScheduled, status.numberAvailable, and status.numberReady must all be at least the status.desiredNumberScheduled. |
Deployment | Yes | Yes | Yes | Yes | Yes | Yes | No | The status.updatedReplicas, status.availableReplicas, and status.readyReplicas must all match the desired replica count for the Deployment. |
Pod | Yes | Yes | No | No | No | No | Yes | The pod must be scheduled, and pass all probes. |
ReplicaSet | Yes | Yes | Yes | No | No | No | Yes | The status.fullyLabeledReplicas, status.availableReplicas, and status.readyReplicas must all match the desired replica count for the ReplicaSet. |
StatefulSet | Yes | Yes | Yes | Yes | Yes | Yes | No | The status.currentRevision and status.updatedRevision must match, and status.currentReplicas and status.readyReplicas must match the spec’s replica count. |
Services, ingresses
Resource | Deploy | Delete | Versioned | Stability |
---|---|---|---|---|
Service | Yes | Yes | No | The status.loadBalancer field reports that a load balancer was found if and only if the service type is LoadBalancer . |
Ingress | Yes | Yes | No | The status.loadBalancer field reports that a load balancer was bound. |
NetworkPolicies
Resource | Deploy | Delete | Versioned | Stability |
---|---|---|---|---|
NetworkPolicy | Yes | Yes | No | Automatically stable . |
ConfigMaps, secrets
Resource | Deploy | Delete | Versioned | Stability |
---|---|---|---|---|
ConfigMap | Yes | Yes | Yes | Automatically stable . |
Secret | Yes | Yes | Yes | Automatically stable . |