Clouddriver 6.4.8
- core: Only log relevant details of description (#4456) (eb210d4d)
Deck 2.13.7
- artifacts: only remove deleted expected artifacts from stages on trigger update (#8071) (ad367c62)
This release includes fixes, features, and performance improvements across a wide feature set in Spinnaker. Here we share a summary of notable improvements, followed by the comprehensive changelog.
In 1.17 we’ve added support for representing Git repositories as artifacts. The intent for this type of artifact is to enable us to build features around tools that work with a collection of files rather than a single file, like the Deploy (Manifest) stage. Currently, this artifact type is only supported by the Bake (Manifest) stage when using the Kustomize rendering engine but other areas are being explored to determine where this might make sense. Halyard support for configuring this artifact type is forthcoming. See the proposal for more details.
Support for Kustomize has been improved to utilize the Git Repo artifact type, which should make it more broadly usable. The previous implementation was limited to a small subset of artifact types, like GitHub File. This new artifact support enables use by teams on any Git hosting service, including (but not limited to) GitHub, Bitbucket, and GitLab. Be aware that if you've used Kustomize in a previous release, the stage's configuration has changed and will need to be updated.
After years of demand from the Kubernetes community, a command for initiating a rolling restart of Deployment pods landed in kubectl 1.15 with kubectl rollout restart. Spinnaker now provides first-class support for initiating rolling restarts of Deployment pods alongside Deck’s other ad-hoc infrastructure actions.
Google Compute Engine supports adding additional GPUs to VMs, and there is now first-class support in Spinnaker for configuring additional hardware for the instances of a regional server group with multiple zones explicitly selected.
In a GCE red/black deployment, we previously pinned the source server group's minimum capacity to its desired capacity before executing the rollout. As of 1.17, the GCE red/black implementation is brought to parity with the AWS implementation and no longer adjusts the source server group's minimum capacity. The potential drawback is that during the period when both the source and target server groups are taking traffic, an autoscaler may scale down the source server group, since it is only receiving 50% of the traffic; if a rollback is necessary, it will need to scale back up. However, as Netflix has experienced, the potential downsides of pinning the source server group are much worse, as there are many unpredictable ways to get into a state where the server group is never unpinned (see Netflix post-mortem here).
Clouddriver will start up significantly faster for users with many Kubernetes V2 accounts as of Spinnaker 1.17. In addition, an error communicating with one account’s cluster will not affect the functionality of other accounts; users will still be able to see resources for and deploy to unaffected accounts. Prior to this release, an error communicating with one account’s cluster would degrade functionality for other Kubernetes V2 accounts.
Fiat now accepts permissions coming from different sources. The legacy permissions for applications, for example, are stored inside the application itself in front50. However, it is possible now to provide those permissions from multiple sources (the legacy being one of those sources), and to decide how the permissions coming from those different sources are to be resolved.
The default resolution strategy of the permission sources just reads from the legacy source. To override it for applications, for example, the user must provide a value other than `default` to the parameter `auth.permissions.provider.application`. Currently, the only possible values are `default`, which only reads from the legacy source, and `aggregate`, which reads from all available sources and adds their permissions together.
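For example, to read application permissions from all available sources instead of only the legacy front50 source, one might set the parameter above in Fiat's configuration; this is a minimal sketch, and the `fiat-local.yml` file placement is an assumption:

```yaml
# fiat-local.yml (assumed location)
# Aggregate application permissions from all available sources
auth:
  permissions:
    provider:
      application: aggregate
```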
The currently available sources are the legacy sources, which are enabled by default but can be disabled by setting the following parameters to false:

- `auth.permissions.source.account.resource.enabled`
- `auth.permissions.source.application.front50.enabled`
- `auth.permissions.source.build-service.resource.enabled`
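As a sketch, disabling one of these legacy sources (here, the front50 source of application permissions) looks like this in YAML form, equivalent to setting the dotted parameter to false:

```yaml
# Disable the legacy front50 source of application permissions
auth:
  permissions:
    source:
      application:
        front50:
          enabled: false
```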
Applications also have a new source (disabled by default), which applies permissions to any application whose name starts with a given prefix. Below is a sample configuration of this permission source:

```yaml
auth.permissions.source.application.prefix:
  enabled: true
  prefixes:
    - prefix: "fooapp"
      permissions:
        READ:
          - "foo-readers-role@mycompany.org"
    - prefix: "bar*"
      permissions:
        CREATE:
          - "bar-ops-team@mycompany.org"
```
This will apply the READ restriction only to the app `fooapp`, and the CREATE restriction to all apps starting with `bar`. If multiple prefixes match a given app, they are resolved using the resolution strategy provided in `auth.permissions.source.application.prefix.resolutionStrategy`, which can be either `AGGREGATE`, meaning the permissions will be aggregated from all matching prefixes, or `MOST_SPECIFIC`, meaning that only the permissions from the most specific prefix will be applied.
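Extending the sample above, the strategy can be set alongside the prefix source; this sketch assumes the nested YAML form maps to the dotted parameter named above:

```yaml
auth.permissions.source.application.prefix:
  enabled: true
  # Apply only the permissions of the most specific matching prefix
  resolutionStrategy: MOST_SPECIFIC  # or AGGREGATE
```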
Before this version, there was no way to control who can create an application. In 1.17, users can restrict application creation by setting `fiat.restrictApplicationCreation` to `true`, and then providing `CREATE` permissions using a permission source (see above). Note that `CREATE` permissions provided by the front50 source of applications will be ignored, so currently the way to provide `CREATE` permissions is via the prefix source explained above.
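Putting these pieces together, a minimal sketch of restricting application creation might look like the following; the prefix and role name are hypothetical, chosen only for illustration:

```yaml
# Restrict who can create applications
fiat:
  restrictApplicationCreation: true

auth:
  permissions:
    provider:
      application: aggregate  # read permissions from all sources, not just front50
    source:
      application:
        prefix:
          enabled: true
          prefixes:
            - prefix: "team*"  # hypothetical prefix
              permissions:
                CREATE:
                  - "team-leads@mycompany.org"  # hypothetical role
```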
- `static` from a method (59f69b6f)
- `traverseObject` which deeply walks object properties (1873094f)
- `errorMessage(undefined)` (aad69b04)
- `app` config param for patch manifest stages (be258cd3)
- `locked` instead of `lock` when unlocking pipeline (585ac157)
- `ready to merge` label (f95f6b1b)