Smith is a Kubernetes workflow engine / resource manager.
It’s functional and under active development.
What if we build a service that allows us to manage Kubernetes’ built-in resources and other
Custom Resources (CRs) in a generic way?
Similar to how AWS CloudFormation (or Google Deployment Manager) allows us to manage any
AWS/GCE and custom resource. Then we could expose all the resources we need
to integrate as Custom Resources and manage them declaratively. This is an open architecture
with Kubernetes as its core. Other controllers can create/update/watch CRs to co-ordinate their work/lifecycle.
A group of resources is defined using a Bundle (just like a Stack for AWS CloudFormation).
The Bundle itself is also a Kubernetes CR.
Smith watches for new instances of a Bundle (and events for existing ones), picks them up and processes them.
Processing involves parsing the bundle, building a dependency graph (which is implicitly defined in the bundle),
walking the graph, and creating/updating necessary resources. Each created/referenced resource gets
a controller owner reference pointing at the origin Bundle.
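For illustration, such a controller owner reference on a created object's metadata looks roughly like the snippet below (this is the standard Kubernetes ownerReferences format; the names match the example Bundle further down and the uid is a placeholder):

metadata:
  name: db1
  ownerReferences:
  - apiVersion: smith.atlassian.com/v1
    kind: Bundle
    name: bundle1
    uid: d9607e19-f88f-11e6-a518-42010a800195  # placeholder; the real value is assigned by the API server
    controller: true                           # marks the Bundle as the managing (controller) owner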
CR definitions:
For Bundle see 0-crd.yaml.
PostgresqlResource (used in the example Bundle below):
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: postgresqlresources.smith.atlassian.com
spec:
  group: smith.atlassian.com
  names:
    kind: PostgresqlResource
    plural: postgresqlresources
    singular: postgresqlresource
  versions:
  - name: v1
    served: true
    storage: true
Bundle:
apiVersion: smith.atlassian.com/v1
kind: Bundle
metadata:
  name: bundle1
spec:
  resources:
  - name: db1
    spec:
      object:
        apiVersion: smith.atlassian.com/v1
        kind: PostgresqlResource
        metadata:
          name: db1
        spec:
          disk: 100GiB
  - name: app1
    references:
    - resource: db1
    spec:
      object:
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: app1
        spec:
          replicas: 1
          selector:
            matchLabels:
              app: app1
          template:
            metadata:
              labels:
                app: app1
            spec:
              containers:
              - name: app1
                image: quay.io/some/app1
Some resource types can have Outputs. Resources can reference outputs of other resources within the same bundle;
see the documentation for what is supported, and the illustrative sketch below.
Resources may depend on each other explicitly via object references. Resources are created in the reverse dependency order (dependencies first).
READY is the state of a Resource when it can be considered created. E.g. if it is
a DB, it means the DB was provisioned and set up as requested. State is often part of Status, but it depends on the kind of resource.
Smith does not block while waiting for a resource to reach the READY state. Instead, when walking the dependency
graph, if a resource is not in the READY state (still being created) it skips processing of that resource.
Resources that don’t have their dependencies READY are not processed.
Resources that can be created concurrently are created concurrently.
Full bundle re-processing is triggered by events about the watched resources.
Smith watches all supported resource kinds and reacts to events to determine which bundle should be re-processed.
This scales better than watching individual resources and much better than polling individual resources.
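Returning to outputs: as a purely illustrative sketch, a resource entry inside a Bundle's spec.resources that consumes another resource's output could look something like the following. The field names (name, path), the status.host output location and the !{...} substitution syntax are assumptions for illustration, not Smith's documented reference schema; check Smith's Bundle type definitions for the real format.

# Illustrative only: field names and substitution syntax below are assumptions.
- name: app1
  references:
  - name: db1Host          # assumed: a named reference that can be consumed below
    resource: db1          # the resource whose output is being referenced
    path: status.host      # assumed: path to the output value exposed by db1
  spec:
    object:
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: app1-config
      data:
        DB_HOST: "!{db1Host}"   # assumed syntax for injecting the referenced value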
The Smith controller is built according to the recommendations for writing controllers,
following the same behaviour, semantics and code “style” as native Kubernetes controllers as closely as possible.
Supported object kinds: Deployment, Service, ConfigMap, Secret, Ingress, ServiceAccount, HorizontalPodAutoscaler, PodDisruptionBudget; ServiceInstance and ServiceBinding (Service Catalog).
Mirantis App Controller (discussed here https://github.com/kubernetes/kubernetes/issues/29453) is a very similar workflow engine with a few differences.
Helm is a package manager for Kubernetes. Smith operates on a lower level; even though it can be used by a human,
that is not the main use case. Smith is built to be used as a foundation component with human-friendly tooling built
on top of it. E.g. Helm could probably use Smith under the covers to manipulate Kubernetes API objects. Another
use case is a PaaS that delegates (some) object manipulations to Smith.
Smith uses the /status subresource.
Dependencies are managed with Go modules via the go.mod and go.sum files.
Bazel is used as the build tool. Please install it. Then run:
make setup
Integration tests can be run against any Kubernetes context that is configured locally. To see which contexts are
available run:
kubectl config get-contexts
By default a context named minikube is used. If you use minikube and want to run tests against that context
then you don’t need to do anything extra. If you want to run against some other context, you may do so by setting
the KUBE_CONTEXT environment variable, which is honored by the makefile.
E.g. to run against Kubernetes-for-Docker use KUBE_CONTEXT=docker-for-desktop.
make integration-test
# or to run integration tests with Service Catalog support enabled
make integration-test-sc
The latter command assumes Service Catalog and UPS Broker are installed in the cluster. To install them, follow the Service Catalog documentation.
make run
# or to run with Service Catalog support enabled
make run-sc
make docker
This command only builds the image, which is not very useful. If you want to import it into your local Docker, run
make docker-export
Pull requests, issues and comments are welcome.
See the existing issues for things to start contributing.
For bigger changes, make sure you start a discussion first by creating an issue and explaining the intended change.
Atlassian requires contributors to sign a Contributor License Agreement, known as a CLA. This serves as a record
stating that the contributor is entitled to contribute the code/documentation/translation to the project and is willing
to have it used in distributions and derivative works (or is willing to transfer ownership).
Prior to accepting your contributions we ask that you please follow the appropriate link below to digitally sign the
CLA. The Corporate CLA is for those who are contributing as a member of an organization and the individual CLA is for
those contributing as an individual.
Copyright (c) 2016-2019 Atlassian and others. Apache 2.0 licensed, see LICENSE file.