Initial commit

Signed-off-by: Andy Goldstein <andy.goldstein@gmail.com>
pull/1/head
Andy Goldstein 2017-08-02 13:27:17 -04:00
commit 2fe501f527
2024 changed files with 948288 additions and 0 deletions

29
.gitignore vendored Normal file

@@ -0,0 +1,29 @@
# Compiled Object files, Static and Dynamic libs (Shared Objects)
*.o
*.a
*.so
# Folders
_obj
_test
_output
# Architecture specific extensions/prefixes
*.[568vq]
[568vq].out
*.cgo1.go
*.cgo2.c
_cgo_defun.c
_cgo_gotypes.go
_cgo_export.*
_testmain.go
*.exe
*.test
*.prof
debug
/ark

4
CHANGELOG.md Normal file

@@ -0,0 +1,4 @@
# Changelog
#### v0.3.0 - YYYY-MM-DD
* Initial Release

37
CODE_OF_CONDUCT.md Normal file

@@ -0,0 +1,37 @@
# Heptio Ark Community Code of Conduct
## Contributor Code of Conduct
As contributors and maintainers of this project, and in the interest of fostering
an open and welcoming community, we pledge to respect all people who contribute
through reporting issues, posting feature requests, updating documentation,
submitting pull requests or patches, and other activities.
We are committed to making participation in this project a harassment-free experience for
everyone, regardless of level of experience, gender, gender identity and expression,
sexual orientation, disability, personal appearance, body size, race, ethnicity, age,
religion, or nationality.
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery
* Personal attacks
* Trolling or insulting/derogatory comments
* Public or private harassment
* Publishing others' private information, such as physical or electronic addresses,
without explicit permission
* Other unethical or unprofessional conduct.
Project maintainers have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are not
aligned to this Code of Conduct. By adopting this Code of Conduct, project maintainers
commit themselves to fairly and consistently applying these principles to every aspect
of managing this project. Project maintainers who do not follow or enforce the Code of
Conduct may be permanently removed from the project team.
This code of conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community.
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project maintainer(s).
This Code of Conduct is adapted from the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md) and [Contributor Covenant](http://contributor-covenant.org/version/1/2/0/), version 1.2.0.

59
CONTRIBUTING.md Normal file

@@ -0,0 +1,59 @@
# Contributing
## DCO Sign off
All authors of the project retain copyright to their work. However, to ensure
that they are only submitting work that they have rights to, we require
everyone to acknowledge this by signing their work.
Any copyright notices in this repository should specify the authors as "The
heptio/ark authors".
To sign your work, just add a line like this at the end of your commit message:
```
Signed-off-by: Joe Beda <joe@heptio.com>
```
This can easily be done with the `--signoff` option to `git commit`.
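For example (the commit message is illustrative):
```
git commit --signoff -m "Add a new backup flag"
```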
By doing this you state that you can certify the following (from https://developercertificate.org/):
```
Developer Certificate of Origin
Version 1.1
Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
1 Letterman Drive
Suite D4700
San Francisco, CA, 94129
Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.
Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
```

24
Dockerfile Normal file

@@ -0,0 +1,24 @@
# Copyright 2017 Heptio Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM alpine:3.6
MAINTAINER Andy Goldstein "andy@heptio.com"
RUN apk add --no-cache ca-certificates && \
adduser -S -D -H -u 1000 ark
ADD _output/bin/ark /ark
USER ark
ENTRYPOINT ["/ark"]

1255
Godeps/Godeps.json generated Normal file

File diff suppressed because it is too large

5
Godeps/Readme generated Normal file

@@ -0,0 +1,5 @@
This directory tree is generated automatically by godep.
Please do not edit.
See https://github.com/tools/godep for more information.

201
LICENSE Normal file

@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

87
Makefile Normal file

@@ -0,0 +1,87 @@
# Copyright 2017 Heptio Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# project related vars
ROOT_DIR := $(shell dirname $(abspath $(lastword $(MAKEFILE_LIST))))
PROJECT = ark
VERSION ?= 0.3.0
GOTARGET = github.com/heptio/$(PROJECT)
OUTPUT_DIR = $(ROOT_DIR)/_output
BIN_DIR = $(OUTPUT_DIR)/bin
# docker related vars
DOCKER ?= docker
REGISTRY ?= gcr.io/heptio-images
BUILD_IMAGE ?= $(REGISTRY)/golang:1.8-alpine3.6
# go build -i installs compiled packages so they can be reused later.
# This speeds up recompiles.
BUILDCMD = go build -i -v -ldflags "-X $(GOTARGET)/pkg/buildinfo.Version=$(VERSION) -X $(GOTARGET)/pkg/buildinfo.DockerImage=$(REGISTRY)/$(PROJECT)"
BUILDMNT = /go/src/$(GOTARGET)
EXTRA_MNTS ?=
# test related vars
TESTARGS ?= -timeout 60s
TEST_PKGS ?= ./cmd/... ./pkg/...
SKIP_TESTS ?=
# what we're building
BINARIES = ark
local: $(BINARIES)

$(BINARIES):
	mkdir -p $(BIN_DIR)
	$(BUILDCMD) -o $(BIN_DIR)/$@ $(GOTARGET)/cmd/$@

test:
ifneq ($(SKIP_TESTS), 1)
	# go test -i installs compiled packages so they can be reused later.
	# This speeds up retests.
	go test -i -v $(TEST_PKGS)
	go test $(TEST_PKGS) $(TESTARGS)
endif

verify:
ifneq ($(SKIP_TESTS), 1)
	${ROOT_DIR}/hack/verify-generated-docs.sh
	${ROOT_DIR}/hack/verify-generated-clientsets.sh
	${ROOT_DIR}/hack/verify-generated-listers.sh
	${ROOT_DIR}/hack/verify-generated-informers.sh
endif

update:
	${ROOT_DIR}/hack/update-generated-docs.sh
	${ROOT_DIR}/hack/update-generated-clientsets.sh
	${ROOT_DIR}/hack/update-generated-listers.sh
	${ROOT_DIR}/hack/update-generated-informers.sh

all: cbuild container

cbuild:
	$(DOCKER) run --rm -v $(ROOT_DIR):$(BUILDMNT) $(EXTRA_MNTS) -w $(BUILDMNT) -e SKIP_TESTS=$(SKIP_TESTS) $(BUILD_IMAGE) /bin/sh -c 'make local verify test'

container: cbuild
	$(DOCKER) build -t $(REGISTRY)/$(PROJECT):latest -t $(REGISTRY)/$(PROJECT):$(VERSION) .

container-local: $(BINARIES)
	$(DOCKER) build -t $(REGISTRY)/$(PROJECT):latest -t $(REGISTRY)/$(PROJECT):$(VERSION) .

push:
	docker -- push $(REGISTRY)/$(PROJECT):$(VERSION)

.PHONY: all local container cbuild push test verify update $(BINARIES)

clean:
	rm -rf $(OUTPUT_DIR)
	$(DOCKER) rmi $(REGISTRY)/$(PROJECT):latest $(REGISTRY)/$(PROJECT):$(VERSION) 2>/dev/null || :

206
README.md Normal file

@@ -0,0 +1,206 @@
# Heptio Ark
**Maintainers:** [Heptio][0]
[![Build Status][1]][2]
## Overview
Heptio Ark is a one-stop shop for managing disaster recovery, specifically for your [Kubernetes][14] cluster resources. It provides a simple, configurable, and operationally robust way to back up and restore your applications and persistent volumes from a series of checkpoints. This allows you to better automate in the following scenarios:
* **Disaster recovery** with reduced TTR (time to respond), in the case of:
* Infrastructure loss
* Data corruption
* Service outages
* **Cross-cloud-provider migration** for Kubernetes API objects (cross-cloud-provider migration of persistent volume snapshots not yet supported)
* **Dev and testing environment setup (+ CI)**, via replication of prod environment
More concretely, Heptio Ark combines an in-cluster service with a CLI that allows you to record both:
1. *Configurable subsets of Kubernetes API objects* -- as tarballs stored in object storage
2. *Disk snapshots of Persistent Volumes* -- via the cloud provider APIs
Heptio Ark currently supports the [AWS][15], [GCP][16], and [Azure][17] cloud provider platforms.
## Quickstart
This guide gets Ark up and running on your cluster, and goes through an example using the following:
* **Minio, an S3-compatible storage service** that runs locally on your cluster. This is the storage service where backup files are uploaded. *Note that Ark is intended to run on a cloud provider--we are using Minio here to keep the example convenient and self-contained.*
* **A sample nginx app** under the `nginx-example` namespace, used to demonstrate Ark's backup and restore functionality.
Note that this example *does not* include a demonstration of PV disk snapshots, because that feature requires integration with a cloud provider API. For snapshotting examples and instructions specific to AWS, GCP, and Azure, see [Cloud Provider Specifics][23].
### 0. Prerequisites
* *You should have access to an up-and-running Kubernetes cluster (minimum version 1.7).* If you do not have a cluster, [choose a setup solution][9] from the official Kubernetes docs.
* *You will need to have a DNS server set up on your cluster for the example files to work.* You can check this with `kubectl get svc -l k8s-app=kube-dns --namespace=kube-system`. If said service does not exist, [these instructions][12] may help.
* *You should have `kubectl` installed.* If not, follow the instructions for [installing via Homebrew (MacOS)][10] or [building the binary (Linux)][11].
### 1. Download
Clone or fork the Heptio Ark repo:
```
git clone git@github.com:heptio/ark.git
```
### 2. Setup
There are two types of Ark instances that work in tandem:
1. **Ark server**: Runs persistently on the cluster.
2. **Ark client**: Launched by the user whenever they want to initiate an operation (e.g. a backup).
To get the server started on your cluster (as well as the local storage service), execute the following commands in Ark's root directory:
```
kubectl apply -f examples/common/00-prereqs.yaml
kubectl apply -f examples/minio/
kubectl apply -f examples/common/10-deployment.yaml
```
*NOTE: If you encounter an error related to Config creation, wait for a minute and run the command again. (The Config CRD does not always finish registering in time.)*
Now deploy the example nginx app:
```
kubectl apply -f examples/nginx-app/base.yaml
```
Check to see that both the Ark and nginx deployments have been successfully created:
```
kubectl get deployments -l component=ark --namespace=heptio-ark
kubectl get deployments --namespace=nginx-example
```
Finally, create an alias for the Ark client's Docker executable. (Make sure that your `KUBECONFIG` environment variable is pointing at the proper config first). This will save a lot of future typing:
```
alias ark='docker run --rm -v $(dirname $KUBECONFIG):/kubeconfig -e KUBECONFIG=/kubeconfig/$(basename $KUBECONFIG) gcr.io/heptio-images/ark:latest'
```
*NOTE*: Depending on how your kubeconfig is written--if it refers to the Kubernetes API server using the host machine's `localhost`, for instance--you may need to add an additional `--net="host"` flag to the `docker run` command.
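To confirm that the alias and the kubeconfig mount work, try a simple command such as:
```
ark version
```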
### 3. Back up and restore
First, create a backup of every object that matches the `app=nginx` label selector:
```
ark backup create nginx-backup --selector app=nginx
```
Now you can mimic a disaster with the following:
```
kubectl delete namespace nginx-example
```
Oh no! The nginx deployment and service are both gone, as you can see (though you may have to wait a minute or two for the namespace to be fully cleaned up):
```
kubectl get deployments --namespace=nginx-example
kubectl get services --namespace=nginx-example
```
Neither command should yield any results. However, because Ark has your back(up), you can run this command:
```
ark restore create nginx-backup
```
To check on the status of the Restore:
```
ark restore get
```
The output should look something like the table below:
```
NAME BACKUP STATUS WARNINGS ERRORS CREATED SELECTOR
nginx-backup-20170727200524 nginx-backup Completed 0 0 2017-07-27 20:05:24 +0000 UTC <none>
```
If the Restore's `STATUS` column is "Completed", and `WARNINGS` and `ERRORS` are both zero, the restore is a success. All of the objects in the `nginx-example` namespace should be just as they were before.
Otherwise, if there are warnings or errors indicated, you can run the following command to look at them in more detail:
```
ark restore get <RESTORE NAME> -o yaml
```
See the [debugging documentation][18] for more details.
*NOTE*: In the example files, the `storage` volume is defined via `hostPath` for better visibility. If you're curious to see the [structure of the backup files][13] firsthand, you can find the compressed results in `/tmp/minio/ark/nginx-backup`.
### 4. Tear Down
Using the following command, you can remove all Kubernetes objects associated with this example:
```
kubectl delete -f examples/common/
kubectl delete -f examples/minio/
kubectl delete -f examples/nginx-app/base.yaml
```
## Architecture
Each of Heptio Ark's operations (Backups, Schedules, and Restores) is itself a custom resource, defined using [CRDs][20]. The accompanying [custom controllers][21] handle these resources when they are submitted to the Kubernetes API server.
As mentioned before, Ark runs in two different modes:
* **Ark client**: Allows you to query, create, and delete the Ark resources as desired.
* **Ark server**: Runs all of the Ark controllers. Each controller watches its respective custom resource for API operations, performs validation, and handles the majority of the cloud API logic (e.g. interfacing with object storage and persistent volumes).
As a specific example, the command `ark backup create test-backup --snapshot-volumes` triggers the following operations:
![19]
1. The *ark client* makes a call to the Kubernetes API server, creating a `Backup` custom resource (which is stored in [etcd][22]).
2. The `BackupController` sees that a new `Backup` has been created, and validates it.
3. Once validation passes, the `BackupController` begins the backup process. It collects data by querying the Kubernetes API Server for resources.
4. Once the data has been aggregated, the `BackupController` makes a call to the object storage service (e.g. Amazon S3) to upload the backup file.
5. If the `--snapshot-volumes` flag is specified, Ark also makes disk snapshots of any persistent volumes, using the appropriate cloud service API.
## Further documentation
To learn more about Heptio Ark operations and their applications, see the [`/docs` directory][3].
## Troubleshooting
If you encounter any problems that the documentation does not address, [file an issue][4].
## Contributing
Thanks for taking the time to join our community and start contributing!
#### Before you start
* Please familiarize yourself with the [Code of Conduct][8] before contributing.
* See [CONTRIBUTING.md][5] for instructions on the developer certificate of origin that we require.
#### Pull requests
* We welcome pull requests. Feel free to dig through the [issues][4] and jump in.
## Changelog
See [the list of releases][6] to find out about feature changes.
[0]: https://github.com/heptio
[1]: https://jenkins.i.heptio.com/buildStatus/icon?job=ark-prbuilder
[2]: https://jenkins.i.heptio.com/job/ark-prbuilder/
[3]: /docs
[4]: https://github.com/heptio/ark/issues
[5]: /CONTRIBUTING.md
[6]: /CHANGELOG.md
[7]: /docs/build-from-scratch.md
[8]: /CODE_OF_CONDUCT.md
[9]: https://kubernetes.io/docs/setup/
[10]: https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-with-homebrew-on-macos
[11]: https://kubernetes.io/docs/tasks/tools/install-kubectl/#tabset-1
[12]: https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/README.md
[13]: /docs/output-file-format.md
[14]: https://github.com/kubernetes/kubernetes
[15]: https://aws.amazon.com/
[16]: https://cloud.google.com/
[17]: https://azure.microsoft.com/
[18]: /docs/debugging-restores.md
[19]: /docs/img/backup-process.png
[20]: https://kubernetes.io/docs/concepts/api-extension/custom-resources/#customresourcedefinitions
[21]: https://kubernetes.io/docs/concepts/api-extension/custom-resources/#custom-controllers
[22]: https://github.com/coreos/etcd
[23]: /docs/cloud-provider-specifics.md

36
cmd/ark/main.go Normal file

@@ -0,0 +1,36 @@
/*
Copyright 2017 Heptio Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package main

import (
	"os"
	"path/filepath"

	"github.com/golang/glog"

	"github.com/heptio/ark/pkg/cmd"
	"github.com/heptio/ark/pkg/cmd/ark"
)

func main() {
	defer glog.Flush()

	baseName := filepath.Base(os.Args[0])
	err := ark.NewCommand(baseName).Execute()
	cmd.CheckError(err)
}

29
docs/README.md Normal file

@@ -0,0 +1,29 @@
# Table of Contents
## User Guide
* [Concepts][1]
* [Build from scratch][0]
* [Cloud provider specifics][9]
* [Debugging restores][4]
## Reference
* [CLI reference][2]
* [Config definition][5]
* [Output file format][6]
* [Sample YAML files][3]
## Scenarios
* [Disaster recovery][7]
* [Cluster migration][8]
[0]: build-from-scratch.md
[1]: concepts.md
[2]: cli-reference
[3]: /examples
[4]: debugging-restores.md
[5]: config-definition.md
[6]: output-file-format.md
[7]: use-cases.md#disaster-recovery
[8]: use-cases.md#cluster-migration
[9]: cloud-provider-specifics.md

56
docs/build-from-scratch.md Normal file

@@ -0,0 +1,56 @@
# Build From Scratch
While the [README][0] pulls from the Heptio image registry, you can also build your own Heptio Ark container with the following steps:
* [0. Prerequisites][1]
* [1. Download][2]
* [2. Build][3]
* [3. Run][7]
## 0. Prerequisites
In addition to handling the prerequisites mentioned in the [Quickstart][4], you should have [Go][5] installed (minimum version 1.8).
## 1. Download
Install with go:
```
go get github.com/heptio/ark
```
The files are installed in `$GOPATH/src/github.com/heptio/ark`.
## 2. Build
Set the `$REGISTRY` environment variable (used in the `Makefile`) if you want to push the Heptio Ark images to your own registry. This allows any node in your cluster to pull your locally built image.
`$PROJECT` and `$VERSION` environment variables are also specified in the `Makefile`, and can be similarly modified as desired.
Run the following in the Ark root directory to build your container with the tag `$REGISTRY/$PROJECT:$VERSION`:
```
sudo make all
```
To push your image to a registry, use `make push`.
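For example, to build and push images under your own registry (the registry name is illustrative):
```
REGISTRY=gcr.io/my-project make container push
```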
## 3. Run
When running Heptio Ark, you will need to account for the following (all of which are handled in the [`/examples`][6] manifests):
* Appropriate RBAC permissions in the cluster
* *Read access* for all data from the source cluster and namespaces
* *Write access* to the target cluster and namespaces
* Cloud provider credentials
* *Read/write access* to volumes
* *Read/write access* to object storage for backup data
* A [Config object][8] definition for the Ark server
See [Cloud Provider Specifics][9] for a more detailed guide.
[0]: ../README.md
[1]: #0-prerequisites
[2]: #1-download
[3]: #2-build
[4]: ../README.md#quickstart
[5]: https://golang.org/doc/install
[6]: /examples
[7]: #3-run
[8]: reference.md#ark-config-definition
[9]: cloud-provider-specifics.md

21
docs/cli-reference/README.md Normal file

@@ -0,0 +1,21 @@
# Command line reference
The Ark client provides a CLI that allows you to initiate ad-hoc backups, scheduled backups, or restores.
*The files in this directory enumerate each of the possible `ark` commands and their flags. Note that you can also find this info with the CLI itself, using the `--help` flag.*
## Running the client
While it is possible to build and run the `ark` executable yourself, it is recommended to use the containerized version. Use the alias described in the quickstart:
```
alias ark='docker run --rm -v $(dirname $KUBECONFIG):/kubeconfig -e KUBECONFIG=/kubeconfig/$(basename $KUBECONFIG) gcr.io/heptio-images/ark:latest'
```
Assuming that your `KUBECONFIG` variable is set, this alias takes care of specifying the appropriate Kubernetes cluster credentials for you.
## Kubernetes cluster credentials
In general, Ark will search for your cluster credentials in the following order:
* `--kubeconfig` command line flag
* `$KUBECONFIG` environment variable
* In-cluster credentials--this only works when you are running Ark in a pod
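For example, to point the client at a specific cluster explicitly (the path is illustrative):
```
ark backup get --kubeconfig /path/to/kubeconfig
```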

32
docs/cli-reference/ark.md Normal file

@@ -0,0 +1,32 @@
## ark
Back up and restore Kubernetes cluster resources.
### Synopsis
Heptio Ark is a tool for managing disaster recovery, specifically for
Kubernetes cluster resources. It provides a simple, configurable,
and operationally robust way to back up your application state and
associated data.
### Options
```
--alsologtostderr log to standard error as well as files
--kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration
--log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (default :0)
--log_dir string If non-empty, write log files in this directory
--logtostderr log to standard error instead of files
--stderrthreshold severity logs at or above this threshold go to stderr (default 2)
-v, --v Level log level for V logs
--vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
```
### SEE ALSO
* [ark backup](ark_backup.md) - Work with backups
* [ark restore](ark_restore.md) - Work with restores
* [ark schedule](ark_schedule.md) - Work with schedules
* [ark server](ark_server.md) - Run the ark server
* [ark version](ark_version.md) - Print the ark version and associated image

27
docs/cli-reference/ark_backup.md Normal file

@@ -0,0 +1,27 @@
## ark backup
Work with backups
### Synopsis
Work with backups
### Options inherited from parent commands
```
--alsologtostderr log to standard error as well as files
--kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration
--log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (default :0)
--log_dir string If non-empty, write log files in this directory
--logtostderr log to standard error instead of files
--stderrthreshold severity logs at or above this threshold go to stderr (default 2)
-v, --v Level log level for V logs
--vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
```
### SEE ALSO
* [ark](ark.md) - Back up and restore Kubernetes cluster resources.
* [ark backup create](ark_backup_create.md) - Create a backup
* [ark backup get](ark_backup_get.md) - Get backups

45
docs/cli-reference/ark_backup_create.md Normal file

@@ -0,0 +1,45 @@
## ark backup create
Create a backup
### Synopsis
Create a backup
```
ark backup create NAME
```
### Options
```
--exclude-namespaces stringArray namespaces to exclude from the backup
--exclude-resources stringArray resources to exclude from the backup, formatted as resource.group, such as storageclasses.storage.k8s.io
--include-namespaces stringArray namespaces to include in the backup (use '*' for all namespaces) (default *)
--include-resources stringArray resources to include in the backup, formatted as resource.group, such as storageclasses.storage.k8s.io (use '*' for all resources)
--label-columns stringArray a comma-separated list of labels to be displayed as columns
--labels mapStringString labels to apply to the backup
-o, --output string Output display format. For create commands, display the object but do not send it to the server. Valid formats are 'table', 'json', and 'yaml'.
-l, --selector labelSelector only back up resources matching this label selector (default <none>)
--show-labels show labels in the last column
--snapshot-volumes take snapshots of PersistentVolumes as part of the backup
--ttl duration how long before the backup can be garbage collected (default 24h0m0s)
```
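A representative invocation combining several of the flags above (names and values are illustrative):
```
ark backup create nginx-backup --selector app=nginx --snapshot-volumes --ttl 72h0m0s
```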
### Options inherited from parent commands
```
--alsologtostderr log to standard error as well as files
--kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration
--log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (default :0)
--log_dir string If non-empty, write log files in this directory
--logtostderr log to standard error instead of files
--stderrthreshold severity logs at or above this threshold go to stderr (default 2)
-v, --v Level log level for V logs
--vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
```
### SEE ALSO
* [ark backup](ark_backup.md) - Work with backups

38
docs/cli-reference/ark_backup_get.md Normal file

@@ -0,0 +1,38 @@
## ark backup get
Get backups
### Synopsis
Get backups
```
ark backup get
```
### Options
```
--label-columns stringArray a comma-separated list of labels to be displayed as columns
-o, --output string Output display format. For create commands, display the object but do not send it to the server. Valid formats are 'table', 'json', and 'yaml'. (default "table")
-l, --selector string only show items matching this label selector
--show-labels show labels in the last column
```
### Options inherited from parent commands
```
--alsologtostderr log to standard error as well as files
--kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration
--log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (default :0)
--log_dir string If non-empty, write log files in this directory
--logtostderr log to standard error instead of files
--stderrthreshold severity logs at or above this threshold go to stderr (default 2)
-v, --v Level log level for V logs
--vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
```
### SEE ALSO
* [ark backup](ark_backup.md) - Work with backups

28
docs/cli-reference/ark_restore.md Normal file

@@ -0,0 +1,28 @@
## ark restore
Work with restores
### Synopsis
Work with restores
### Options inherited from parent commands
```
--alsologtostderr log to standard error as well as files
--kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration
--log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (default :0)
--log_dir string If non-empty, write log files in this directory
--logtostderr log to standard error instead of files
--stderrthreshold severity logs at or above this threshold go to stderr (default 2)
-v, --v Level log level for V logs
--vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
```
### SEE ALSO
* [ark](ark.md) - Back up and restore Kubernetes cluster resources.
* [ark restore create](ark_restore_create.md) - Create a restore
* [ark restore delete](ark_restore_delete.md) - Delete a restore
* [ark restore get](ark_restore_get.md) - Get restores

42
docs/cli-reference/ark_restore_create.md Normal file

@@ -0,0 +1,42 @@
## ark restore create
Create a restore
### Synopsis
Create a restore
```
ark restore create BACKUP
```
### Options
```
--label-columns stringArray a comma-separated list of labels to be displayed as columns
--labels mapStringString labels to apply to the restore
--namespace-mappings mapStringString namespace mappings from name in the backup to desired restored name in the form src1:dst1,src2:dst2,...
--namespaces stringArray comma-separated list of namespaces to restore
-o, --output string Output display format. For create commands, display the object but do not send it to the server. Valid formats are 'table', 'json', and 'yaml'.
--restore-volumes whether to restore volumes from snapshots
-l, --selector labelSelector only restore resources matching this label selector (default <none>)
--show-labels show labels in the last column
```
### Options inherited from parent commands
```
--alsologtostderr log to standard error as well as files
--kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration
--log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (default :0)
--log_dir string If non-empty, write log files in this directory
--logtostderr log to standard error instead of files
--stderrthreshold severity logs at or above this threshold go to stderr (default 2)
-v, --v Level log level for V logs
--vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
```
### SEE ALSO
* [ark restore](ark_restore.md) - Work with restores

29
docs/cli-reference/ark_restore_delete.md Normal file

@@ -0,0 +1,29 @@
## ark restore delete
Delete a restore
### Synopsis
Delete a restore
```
ark restore delete NAME
```
### Options inherited from parent commands
```
--alsologtostderr log to standard error as well as files
--kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration
--log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (default :0)
--log_dir string If non-empty, write log files in this directory
--logtostderr log to standard error instead of files
--stderrthreshold severity logs at or above this threshold go to stderr (default 2)
-v, --v Level log level for V logs
--vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
```
### SEE ALSO
* [ark restore](ark_restore.md) - Work with restores

38
docs/cli-reference/ark_restore_get.md Normal file

@@ -0,0 +1,38 @@
## ark restore get
Get restores
### Synopsis
Get restores
```
ark restore get
```
### Options
```
--label-columns stringArray a comma-separated list of labels to be displayed as columns
-o, --output string Output display format. For create commands, display the object but do not send it to the server. Valid formats are 'table', 'json', and 'yaml'. (default "table")
-l, --selector string only show items matching this label selector
--show-labels show labels in the last column
```
### Options inherited from parent commands
```
--alsologtostderr log to standard error as well as files
--kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration
--log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (default :0)
--log_dir string If non-empty, write log files in this directory
--logtostderr log to standard error instead of files
--stderrthreshold severity logs at or above this threshold go to stderr (default 2)
-v, --v Level log level for V logs
--vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
```
### SEE ALSO
* [ark restore](ark_restore.md) - Work with restores

28
docs/cli-reference/ark_schedule.md Normal file

@@ -0,0 +1,28 @@
## ark schedule
Work with schedules
### Synopsis
Work with schedules
### Options inherited from parent commands
```
--alsologtostderr log to standard error as well as files
--kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration
--log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (default :0)
--log_dir string If non-empty, write log files in this directory
--logtostderr log to standard error instead of files
--stderrthreshold severity logs at or above this threshold go to stderr (default 2)
-v, --v Level log level for V logs
--vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
```
### SEE ALSO
* [ark](ark.md) - Back up and restore Kubernetes cluster resources.
* [ark schedule create](ark_schedule_create.md) - Create a schedule
* [ark schedule delete](ark_schedule_delete.md) - Delete a schedule
* [ark schedule get](ark_schedule_get.md) - Get schedules

46
docs/cli-reference/ark_schedule_create.md Normal file

@@ -0,0 +1,46 @@
## ark schedule create
Create a schedule
### Synopsis
Create a schedule
```
ark schedule create NAME
```
### Options
```
--exclude-namespaces stringArray namespaces to exclude from the backup
--exclude-resources stringArray resources to exclude from the backup, formatted as resource.group, such as storageclasses.storage.k8s.io
--include-namespaces stringArray namespaces to include in the backup (use '*' for all namespaces) (default *)
--include-resources stringArray resources to include in the backup, formatted as resource.group, such as storageclasses.storage.k8s.io (use '*' for all resources)
--label-columns stringArray a comma-separated list of labels to be displayed as columns
--labels mapStringString labels to apply to the backup
-o, --output string Output display format. For create commands, display the object but do not send it to the server. Valid formats are 'table', 'json', and 'yaml'.
--schedule string a cron expression specifying a recurring schedule for this backup to run
-l, --selector labelSelector only back up resources matching this label selector (default <none>)
--show-labels show labels in the last column
--snapshot-volumes take snapshots of PersistentVolumes as part of the backup
--ttl duration how long before the backup can be garbage collected (default 24h0m0s)
```
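A representative invocation (the name and cron expression are illustrative; this schedules a daily 7:00 backup):
```
ark schedule create daily-nginx --schedule "0 7 * * *" --selector app=nginx
```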
### Options inherited from parent commands
```
--alsologtostderr log to standard error as well as files
--kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration
--log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (default :0)
--log_dir string If non-empty, write log files in this directory
--logtostderr log to standard error instead of files
--stderrthreshold severity logs at or above this threshold go to stderr (default 2)
-v, --v Level log level for V logs
--vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
```
### SEE ALSO
* [ark schedule](ark_schedule.md) - Work with schedules

29
docs/cli-reference/ark_schedule_delete.md Normal file

@@ -0,0 +1,29 @@
## ark schedule delete
Delete a schedule
### Synopsis
Delete a schedule
```
ark schedule delete NAME
```
### Options inherited from parent commands
```
--alsologtostderr log to standard error as well as files
--kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration
--log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (default :0)
--log_dir string If non-empty, write log files in this directory
--logtostderr log to standard error instead of files
--stderrthreshold severity logs at or above this threshold go to stderr (default 2)
-v, --v Level log level for V logs
--vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
```
### SEE ALSO
* [ark schedule](ark_schedule.md) - Work with schedules

38
docs/cli-reference/ark_schedule_get.md Normal file

@@ -0,0 +1,38 @@
## ark schedule get
Get schedules
### Synopsis
Get schedules
```
ark schedule get
```
### Options
```
--label-columns stringArray a comma-separated list of labels to be displayed as columns
-o, --output string Output display format. For create commands, display the object but do not send it to the server. Valid formats are 'table', 'json', and 'yaml'. (default "table")
-l, --selector string only show items matching this label selector
--show-labels show labels in the last column
```
### Options inherited from parent commands
```
--alsologtostderr log to standard error as well as files
--kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration
--log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (default :0)
--log_dir string If non-empty, write log files in this directory
--logtostderr log to standard error instead of files
--stderrthreshold severity logs at or above this threshold go to stderr (default 2)
-v, --v Level log level for V logs
--vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
```
### SEE ALSO
* [ark schedule](ark_schedule.md) - Work with schedules

34
docs/cli-reference/ark_server.md Normal file

@@ -0,0 +1,34 @@
## ark server
Run the ark server
### Synopsis
Run the ark server
```
ark server
```
### Options
```
--kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration
```
### Options inherited from parent commands
```
--alsologtostderr log to standard error as well as files
--log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (default :0)
--log_dir string If non-empty, write log files in this directory
--logtostderr log to standard error instead of files
--stderrthreshold severity logs at or above this threshold go to stderr (default 2)
-v, --v Level log level for V logs
--vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
```
### SEE ALSO
* [ark](ark.md) - Back up and restore Kubernetes cluster resources.

29
docs/cli-reference/ark_version.md Normal file

@@ -0,0 +1,29 @@
## ark version
Print the ark version and associated image
### Synopsis
Print the ark version and associated image
```
ark version
```
### Options inherited from parent commands
```
--alsologtostderr log to standard error as well as files
--kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration
--log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (default :0)
--log_dir string If non-empty, write log files in this directory
--logtostderr log to standard error instead of files
--stderrthreshold severity logs at or above this threshold go to stderr (default 2)
-v, --v Level log level for V logs
--vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
```
### SEE ALSO
* [ark](ark.md) - Back up and restore Kubernetes cluster resources.

358
docs/cloud-provider-specifics.md Normal file

@@ -0,0 +1,358 @@
# Cloud Provider Specifics
While the [Quickstart][0] uses a local storage service to quickly set up Heptio Ark as a demonstration, this document details additional configurations that are required when integrating with the cloud providers below:
* [Setup][12]
* [AWS][1]
* [GCP][2]
* [Azure][3]
* [Run][13]
* [Ark server][9]
* [Basic example (no PVs)][10]
* [Snapshot example (with PVs)][11]
## Setup
### AWS
#### IAM user creation
To integrate Heptio Ark with AWS, you should follow the instructions below to create an Ark-specific [IAM user][14].
1. If you do not have the AWS CLI locally installed, follow the [user guide][5] to set it up.
2. Create an IAM user:
```
aws iam create-user --user-name heptio-ark
```
3. Attach a policy to give `heptio-ark` the necessary permissions:
```
aws iam attach-user-policy \
--policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess \
--user-name heptio-ark
aws iam attach-user-policy \
--policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess \
--user-name heptio-ark
```
4. Create an access key for the user:
```
aws iam create-access-key --user-name heptio-ark
```
The result should look like:
```
{
"AccessKey": {
"UserName": "heptio-ark",
"Status": "Active",
"CreateDate": "2017-07-31T22:24:41.576Z",
"SecretAccessKey": <AWS_SECRET_ACCESS_KEY>,
"AccessKeyId": <AWS_ACCESS_KEY_ID>
}
}
```
5. Using the output from the previous command, create an Ark-specific credentials file (`credentials-ark`) in your local directory that looks like the following:
```
[default]
aws_access_key_id=<AWS_ACCESS_KEY_ID>
aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>
```
#### Credentials and configuration
In the Ark root directory, run the following to first set up namespaces, RBAC, and other scaffolding:
```
kubectl apply -f examples/common/00-prereqs.yaml
```
Create a Secret by running this command from the directory containing the credentials file you just created:
```
kubectl create secret generic cloud-credentials \
--namespace heptio-ark \
--from-file cloud=credentials-ark
```
Now that you have your IAM user credentials stored in a Secret, you need to replace some placeholder values in the template files. Specifically, you need to change the following:
* In file `examples/aws/00-ark-config.yaml`:
* Replace `<YOUR_BUCKET>`, `<YOUR_REGION>`, and `<YOUR_AVAILABILITY_ZONE>`. See the [Config definition][6] for details.
* In file `examples/common/10-deployment.yaml`:
* Make sure that `spec.template.spec.containers[*].env.name` is "AWS_SHARED_CREDENTIALS_FILE".
* (Optional) If you are running the Nginx example, in file `examples/nginx-app/with-pv.yaml`:
* Replace `<YOUR_STORAGE_CLASS_NAME>` with `gp2`. This is AWS's default `StorageClass` name.
### GCP
#### Service account creation
To integrate Heptio Ark with GCP, you should follow the instructions below to create an Ark-specific [Service Account][15].
1. If you do not have the gcloud CLI locally installed, follow the [user guide][16] to set it up.
2. View your current config settings:
```
gcloud config list
```
Store the `project` value from the results in the environment variable `$PROJECT_ID`.
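One way to capture it, assuming a gcloud CLI recent enough to support `config get-value`:
```
PROJECT_ID=$(gcloud config get-value project)
```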
3. Create a service account:
```
gcloud iam service-accounts create heptio-ark \
--display-name "Heptio Ark service account"
```
Then list all accounts and find the `heptio-ark` account you just created:
```
gcloud iam service-accounts list
```
Set the `$SERVICE_ACCOUNT_EMAIL` variable to match its `email` value.
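One way to capture it, assuming the display name used above:
```
SERVICE_ACCOUNT_EMAIL=$(gcloud iam service-accounts list \
  --filter="displayName:Heptio Ark service account" \
  --format 'value(email)')
```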
4. Attach policies to give `heptio-ark` the necessary permissions to function (replacing placeholders appropriately):
```
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member serviceAccount:$SERVICE_ACCOUNT_EMAIL \
--role roles/compute.storageAdmin
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member serviceAccount:$SERVICE_ACCOUNT_EMAIL \
--role roles/storage.admin
```
5. Create a service account key, specifying an output file (`credentials-ark`) in your local directory:
```
gcloud iam service-accounts keys create credentials-ark \
--iam-account $SERVICE_ACCOUNT_EMAIL
```
#### Credentials and configuration
In the Ark root directory, run the following to first set up namespaces, RBAC, and other scaffolding:
```
kubectl apply -f examples/common/00-prereqs.yaml
```
Create a Secret by running this command from the directory containing the credentials file you just created:
```
kubectl create secret generic cloud-credentials \
--namespace heptio-ark \
--from-file cloud=credentials-ark
```
Now that you have your Google Cloud credentials stored in a Secret, you need to replace some placeholder values in the template files. Specifically, you need to change the following:
* In file `examples/gcp/00-ark-config.yaml`:
* Replace `<YOUR_BUCKET>`, `<YOUR_PROJECT>` and `<YOUR_ZONE>`. See the [Config definition][7] for details.
* In file `examples/common/10-deployment.yaml`:
* Change `spec.template.spec.containers[*].env.name` to "GOOGLE_APPLICATION_CREDENTIALS".
* (Optional) If you are running the Nginx example, in file `examples/nginx-app/with-pv.yaml`:
* Replace `<YOUR_STORAGE_CLASS_NAME>` with `standard`. This is GCP's default `StorageClass` name.
### Azure
#### Service principal creation
To integrate Heptio Ark with Azure, you should follow the instructions below to create an Ark-specific [service principal][17].
1. If you do not have the `az` Azure CLI 2.0 locally installed, follow the [user guide][18] to set it up. Once done, run:
```
az login
```
2. There are seven environment variables that need to be set for Heptio Ark to work properly. The following steps detail how to acquire these, in the process of setting up the necessary RBAC.
3. List your account:
```
az account list
```
Save the relevant response values into environment variables: `id` corresponds to `$AZURE_SUBSCRIPTION_ID` and `tenantId` corresponds to `$AZURE_TENANT_ID`.
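For example, assuming the first listed account is the one backing your cluster:
```
AZURE_SUBSCRIPTION_ID=$(az account list --query '[0].id' -o tsv)
AZURE_TENANT_ID=$(az account list --query '[0].tenantId' -o tsv)
```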
4. Assuming that you already have a running Kubernetes cluster on Azure, you should have a corresponding resource group as well. List your current groups to find it:
```
az group list
```
Get your cluster's group `name` from the response, and use it to set `$AZURE_RESOURCE_GROUP`. (Also note the `location`--this is later used in the Azure-specific portion of the Ark Config).
5. Create a service principal with the "Contributor" role:
```
az ad sp create-for-rbac --role="Contributor" --name="heptio-ark"
```
From the response, save `appId` into `$AZURE_CLIENT_ID` and `password` into `$AZURE_CLIENT_SECRET`.
6. Log in to the `heptio-ark` service principal account:
```
az login --service-principal \
--username http://heptio-ark \
--password $AZURE_CLIENT_SECRET \
--tenant $AZURE_TENANT_ID
```
7. Specify a *globally-unique* storage account id and save it in `$AZURE_STORAGE_ACCOUNT_ID`. Then create the storage account, specifying the optional `--location` flag if you do not have defaults from `az configure`:
```
az storage account create \
--name $AZURE_STORAGE_ACCOUNT_ID \
--resource-group $AZURE_RESOURCE_GROUP \
--sku Standard_GRS
```
You will encounter an error message if the storage account ID is not unique; change it accordingly.
8. Get the keys for your storage account:
```
az storage account keys list \
--account-name $AZURE_STORAGE_ACCOUNT_ID \
--resource-group $AZURE_RESOURCE_GROUP
```
Set `$AZURE_STORAGE_KEY` to any one of the `value`s returned. (The sketch after this list shows one way to capture it.)
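For reference, a sketch that captures several of these values non-interactively using the CLI's JMESPath `--query` flag (this assumes your default subscription is the one backing your cluster):
```
AZURE_SUBSCRIPTION_ID=$(az account list --query '[?isDefault].id' -o tsv)
AZURE_TENANT_ID=$(az account list --query '[?isDefault].tenantId' -o tsv)
AZURE_STORAGE_KEY=$(az storage account keys list \
  --account-name $AZURE_STORAGE_ACCOUNT_ID \
  --resource-group $AZURE_RESOURCE_GROUP \
  --query '[0].value' -o tsv)
```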
#### Credentials and configuration
In the Ark root directory, run the following to first set up namespaces, RBAC, and other scaffolding:
```
kubectl apply -f examples/common/00-prereqs.yaml
```
Now you need to create a Secret that contains the seven environment variables you just set. The command looks like the following:
```
kubectl create secret generic cloud-credentials \
--namespace heptio-ark \
--from-literal AZURE_SUBSCRIPTION_ID=${AZURE_SUBSCRIPTION_ID} \
--from-literal AZURE_TENANT_ID=${AZURE_TENANT_ID} \
--from-literal AZURE_RESOURCE_GROUP=${AZURE_RESOURCE_GROUP} \
--from-literal AZURE_CLIENT_ID=${AZURE_CLIENT_ID} \
--from-literal AZURE_CLIENT_SECRET=${AZURE_CLIENT_SECRET} \
--from-literal AZURE_STORAGE_ACCOUNT_ID=${AZURE_STORAGE_ACCOUNT_ID} \
--from-literal AZURE_STORAGE_KEY=${AZURE_STORAGE_KEY}
```
Now that you have your Azure credentials stored in a Secret, you need to replace some placeholder values in the template files. Specifically, you need to change the following:
* In file `examples/azure/10-ark-config.yaml`:
* Replace `<YOUR_BUCKET>`, `<YOUR_LOCATION>`, and `<YOUR_TIMEOUT>`. See the [Config definition][8] for details.
## Run
### Ark server
Make sure that you have run `kubectl apply -f examples/common/00-prereqs.yaml` first (this command is incorporated in the previous setup instructions because it creates the necessary namespaces).
* **AWS and GCP**
Start the Ark server itself, using the Config from the appropriate cloud-provider-specific directory:
```
kubectl apply -f examples/common/10-deployment.yaml
kubectl apply -f examples/<CLOUD-PROVIDER>/
```
* **Azure**
Because Azure loads its credentials differently (from environment variables rather than a file), you need to run the following instead:
```
kubectl apply -f examples/azure/
```
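Regardless of provider, it is worth confirming that the server started cleanly before continuing. A basic sanity check with standard `kubectl` commands:
```
kubectl get deployments -n heptio-ark
kubectl get pods -n heptio-ark
# inspect the logs of the pod listed above
kubectl logs <ARK_POD_NAME> -n heptio-ark
```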
### Basic example (No PVs)
Start the sample nginx app:
```
kubectl apply -f examples/nginx-app/base.yaml
```
Now create a backup:
```
ark backup create nginx-backup --selector app=nginx
```
Simulate a disaster:
```
kubectl delete namespaces nginx-example
```
Now restore your lost resources:
```
ark restore create nginx-backup
```
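To verify the restore, you can check its status and confirm that the namespace's resources are back (a quick sketch):
```
ark restore get
kubectl get deployments,services -n nginx-example
```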
### Snapshot example (With PVs)
> NOTE: For Azure, your Kubernetes cluster needs to be version 1.7.2+ in order to support PV snapshotting of its managed disks.
Label a node so that all nginx pods end up on the same machine (avoiding PV binding issues):
```
nginx_node_name=$(kubectl get nodes -o jsonpath='{.items[0].metadata.name}')
kubectl label nodes $nginx_node_name app=nginx
```
Start the sample nginx app:
```
kubectl apply -f examples/nginx-app/with-pv.yaml
```
Because Kubernetes does not automatically transfer labels from PVCs to dynamically generated PVs, you need to do so manually:
```
nginx_pv_name=$(kubectl get pv -o jsonpath='{.items[?(@.spec.claimRef.name=="nginx-logs")].metadata.name}')
kubectl label pv $nginx_pv_name app=nginx
```
Now create a backup with PV snapshotting:
```
ark backup create nginx-backup --selector app=nginx --snapshot-volumes
```
Simulate a disaster:
```
kubectl delete namespaces nginx-example
kubectl delete pv $nginx_pv_name
```
Now restore your lost resources:
```
ark restore create nginx-backup
```
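As a quick check that the volume came back along with the app (exact output varies by provider), you can list the labeled PV and the claim:
```
kubectl get pv -l app=nginx
kubectl get pvc -n nginx-example
```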
[0]: /README.md#quickstart
[1]: #aws
[2]: #gcp
[3]: #azure
[4]: /examples/aws
[5]: http://docs.aws.amazon.com/cli/latest/userguide/installing.html
[6]: config-definition.md#aws
[7]: config-definition.md#gcp
[8]: config-definition.md#azure
[9]: #ark-server
[10]: #basic-example-no-pvs
[11]: #snapshot-example-with-pvs
[12]: #setup
[13]: #run
[14]: http://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html
[15]: https://cloud.google.com/compute/docs/access/service-accounts
[16]: https://cloud.google.com/compute/docs/gcloud-compute
[17]: https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-application-objects
[18]: https://docs.microsoft.com/en-us/azure/storage/storage-azure-cli

docs/concepts.md Normal file

@ -0,0 +1,66 @@
# Concepts
* [Overview][0]
* [Operation types][1]
* [1. Backups][2]
* [2. Schedules][3]
* [3. Restores][4]
* [Expired backup deletion][5]
* [Cloud storage sync][6]
## Overview
Heptio Ark provides customizable degrees of recovery for all Kubernetes API objects (Pods, Deployments, Jobs, Custom Resource Definitions, etc.), as well as for persistent volumes. This recovery can be cluster-wide, or fine-tuned according to object type, namespace, or labels.
Ark is ideal for the disaster recovery use case, as well as for snapshotting your application state, prior to performing system operations on your cluster (e.g. upgrades).
## Operation types
This section gives a quick overview of the Ark operation types.
### 1. Backups
The *backup* operation (1) uploads a tarball of copied Kubernetes resources into cloud object storage and (2) uses the cloud provider API to make disk snapshots of persistent volumes, if specified. [Annotations][8] are cleared for PVs but kept for all other object types.
Some things to be aware of:
* *Cluster backups are not strictly atomic.* If API objects are being created or edited at the time of backup, they may or may not be included in the backup. In practice, backups happen very quickly, so the odds of capturing inconsistent information are low, but it is still possible.
* *A backup usually takes no more than a few seconds.* The snapshotting process for persistent volumes is asynchronous, so the runtime of the `ark backup` command isn't dependent on disk size.
These ad-hoc backups are saved with the `<BACKUP NAME>` specified during creation.
### 2. Schedules
The *schedule* operation allows you to back up your data at recurring intervals. The first backup is performed when the schedule is first created, and subsequent backups happen at the schedule's specified interval. These intervals are specified by a Cron expression.
A Schedule acts as a wrapper for Backups; when triggered, it creates them behind the scenes.
Scheduled backups are saved with the name `<SCHEDULE NAME>-<TIMESTAMP>`, where `<TIMESTAMP>` is formatted as *YYYYMMDDhhmmss*.
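For example, a schedule named `daily-backup` (a hypothetical name) that runs every day at 7am, using standard five-field Cron syntax:
```
ark schedule create daily-backup --schedule "0 7 * * *"
```
Its backups would be named `daily-backup-<TIMESTAMP>`.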
### 3. Restores
The *restore* operation allows you to restore all of the objects and persistent volumes from a previously created Backup. Heptio Ark supports multiple namespace remapping--for example, in a single restore, objects in namespace "abc" can be recreated under namespace "def", and the ones in "123" under "456".
Kubernetes API objects that have been restored can be identified with a label that looks like `ark-restore=<BACKUP NAME>-<TIMESTAMP>`, where `<TIMESTAMP>` is formatted as *YYYYMMDDhhmmss*.
You can also run the Ark server in *restore-only* mode, which disables backup, schedule, and garbage collection functionality during disaster recovery.
## Expired backup deletion
When first creating a backup, you can specify a TTL. If Ark sees that an existing Backup resource has expired, it removes both:
* The Backup resource itself
* The actual backup file from cloud object storage
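For example, to create a backup that expires after 24 hours (the backup name here is hypothetical; `--ttl` takes a `time.Duration`-style string):
```
ark backup create my-backup --ttl 24h0m0s
```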
## Cloud storage sync
Heptio Ark treats object storage as the source of truth. It continuously checks to see that the correct Backup resources are always present. If there is a properly formatted backup file in the storage bucket, but no corresponding Backup resources in the Kubernetes API, Ark synchronizes the information from object storage to Kubernetes.
This allows *restore* functionality to work in a cluster migration scenario, where the original Backup objects do not exist in the new cluster. See the [use case guide][7] for details.
[0]: #overview
[1]: #operation-types
[2]: #1-backups
[3]: #2-schedules
[4]: #3-restores
[5]: #expired-backup-deletion
[6]: #cloud-storage-sync
[7]: use-cases.md#cluster-migration
[8]: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/

docs/config-definition.md Normal file

@ -0,0 +1,97 @@
# Ark Config definition
* [Overview][10]
* [Example][11]
* [Parameter Reference][8]
* [Main config][9]
* [AWS][0]
* [GCP][1]
* [Azure][2]
## Overview
Heptio Ark defines its own Config object (a custom resource) for specifying Ark backup and cloud provider settings. When the Ark server is first deployed, it waits until you create a Config--specifically one named `default`--in the `heptio-ark` namespace.
> *NOTE*: There is an underlying assumption that you're running the Ark server as a Kubernetes deployment. If the `default` Config is modified, the server shuts down gracefully. Once the kubelet restarts the Ark server pod, the server then uses the updated Config values.
## Example
A sample YAML `Config` looks like the following:
```
apiVersion: ark.heptio.com/v1
kind: Config
metadata:
namespace: heptio-ark
name: default
persistentVolumeProvider:
aws:
region: minio
availabilityZone: minio
s3ForcePathStyle: true
s3Url: http://minio:9000
backupStorageProvider:
bucket: ark
aws:
region: minio
availabilityZone: minio
s3ForcePathStyle: true
s3Url: http://minio:9000
backupSyncPeriod: 60m
gcSyncPeriod: 60m
scheduleSyncPeriod: 1m
restoreOnlyMode: false
```
## Parameter Reference
The configurable parameters are as follows:
### Main config parameters
| Key | Type | Default | Meaning |
| --- | --- | --- | --- |
| `persistentVolumeProvider` | CloudProviderConfig<br><br>(Supported key values are `aws`, `gcp`, and `azure`, but only one can be present. See the corresponding [AWS][0], [GCP][1], and [Azure][2]-specific configs.) | Required Field | The specification for whichever cloud provider the cluster is using for persistent volumes (to be snapshotted).<br><br> *NOTE*: For Azure, your Kubernetes cluster needs to be version 1.7.2+ in order to support PV snapshotting of its managed disks. |
| `backupStorageProvider`/(inline) | CloudProviderConfig<br><br>(Supported key values are `aws`, `gcp`, and `azure`, but only one can be present. See the corresponding [AWS][0], [GCP][1], and [Azure][2]-specific configs.) | Required Field | The specification for whichever cloud provider will be used to actually store the backups. |
| `backupStorageProvider/bucket` | String | Required Field | The storage bucket where backups are to be uploaded. |
| `backupSyncPeriod` | metav1.Duration | 60m0s | How frequently Ark queries the object storage to make sure that the appropriate Backup resources have been created for existing backup files. |
| `gcSyncPeriod` | metav1.Duration | 60m0s | How frequently Ark queries the object storage to delete backup files that have passed their TTL. |
| `scheduleSyncPeriod` | metav1.Duration | 1m0s | How frequently Ark checks its Schedule resource objects to see if a backup needs to be initiated. |
| `resourcePriorities` | []string | `[namespaces, persistentvolumes, persistentvolumeclaims, secrets, configmaps]` | An ordered list that describes the order in which Kubernetes resource objects should be restored (also specified with the `<RESOURCE>.<GROUP>` format).<br><br>If a resource is not in this list, it is restored after all other prioritized resources. |
| `restoreOnlyMode` | bool | `false` | When RestoreOnly mode is on, functionality for backups, schedules, and expired backup deletion is *turned off*. Restores are made from existing backup files in object storage. |
### AWS
**(Or other S3-compatible storage)**
| Key | Type | Default | Meaning |
| --- | --- | --- | --- |
| `region` | string | Required Field | *Example*: "us-east-1"<br><br>See [AWS documentation][3] for the full list. |
| `availabilityZone` | string | Required Field | *Example*: "us-east-1a"<br><br>See [AWS documentation][4] for details. |
| `disableSSL` | bool | `false` | Set this to `true` if you are using Minio (or another local, S3-compatible storage service) and your deployment is not secured. |
| `s3ForcePathStyle` | bool | `false` | Set this to `true` if you are using a local storage service like Minio. |
| `s3Url` | string | Required field for non-AWS-hosted storage| *Example*: http://minio:9000<br><br>You can specify the AWS S3 URL here for explicitness, but Ark can already generate it from `region`, `availabilityZone`, and `bucket`. This field is primarily for local storage services like Minio.|
### GCP
| Key | Type | Default | Meaning |
| --- | --- | --- | --- |
| `project` | string | Required Field | *Example*: "project-example-3jsn23"<br><br> See the [Project ID documentation][5] for details. |
| `zone` | string | Required Field | *Example*: "us-central1-a"<br><br>See [GCP documentation][6] for the full list. |
### Azure
| Key | Type | Default | Meaning |
| --- | --- | --- | --- |
| `location` | string | Required Field | *Example*: "Canada East"<br><br>See [the list of available locations][7] (note that this particular page refers to them as "Regions"). |
| `apiTimeout` | metav1.Duration | 1m0s | How long to wait for an Azure API request to complete before timing out. |
[0]: #aws
[1]: #gcp
[2]: #azure
[3]: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-available-regions
[4]: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-availability-zones
[5]: https://cloud.google.com/resource-manager/docs/creating-managing-projects#identifying_projects
[6]: https://cloud.google.com/compute/docs/regions-zones/regions-zones
[7]: https://azure.microsoft.com/en-us/regions/
[8]: #parameter-reference
[9]: #main-config-parameters
[10]: #overview
[11]: #example


@ -0,0 +1,51 @@
# Debugging Restores
* [Example][0]
* [Structure][1]
## Example
When Heptio Ark finishes a Restore, its status changes to "Completed" regardless of whether or not there are issues during the process. The number of warnings and errors is indicated in the output columns from `ark restore get`:
```
NAME BACKUP STATUS WARNINGS ERRORS CREATED SELECTOR
backup-test-20170726180512 backup-test Completed 155 76 2017-07-26 11:41:14 -0400 EDT <none>
backup-test-20170726180513 backup-test Completed 121 14 2017-07-26 11:48:24 -0400 EDT <none>
backup-test-2-20170726180514 backup-test-2 Completed 0 0 2017-07-26 13:31:21 -0400 EDT <none>
backup-test-2-20170726180515 backup-test-2 Completed 0 1 2017-07-26 13:32:59 -0400 EDT <none>
```
To examine the warnings and errors in more detail, you can use the `-o` option:
```
ark restore get backup-test-20170726180512 -o yaml
```
The output YAML has a `status` field which may look like the following:
```
status:
errors:
ark: null
cluster: null
namespaces: null
phase: Completed
validationErrors: null
warnings:
ark: null
cluster: null
namespaces:
cm1:
- secrets "default-token-t0slk" already exists
```
## Structure
The `status` field in a Restore's YAML has subfields for `errors` and `warnings`. `errors` appear for incomplete or partial restores. `warnings` appear for non-blocking issues (e.g. the restore looks "normal" and all resources referenced in the backup exist in some form, although some of them may have been pre-existing).
Both `errors` and `warnings` are structured in the same way:
* `ark`: A list of system-related issues encountered by the Ark server (e.g. couldn't read directory).
* `cluster`: A list of issues related to the restore of cluster-scoped resources.
* `namespaces`: A map of namespaces to the list of issues related to the restore of their respective resources.
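To inspect just the `status` block of a given restore, one option is to slice it out of the YAML output with standard tools (a sketch; substitute your restore's name):
```
ark restore get <RESTORE NAME> -o yaml | sed -n '/^status:/,$p'
```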
[0]: #example
[1]: #structure

docs/generate/ark.go Normal file

@ -0,0 +1,39 @@
/*
Copyright 2017 Heptio Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package main
import (
"log"
"os"
"github.com/spf13/cobra/doc"
"github.com/heptio/ark/pkg/cmd/ark"
)
func main() {
cmdName := os.Args[1]
outputDir := os.Args[2]
cmd := ark.NewCommand(cmdName)
// Remove auto-generated timestamps
cmd.DisableAutoGenTag = true
err := doc.GenMarkdownTree(cmd, outputDir)
if err != nil {
log.Fatal(err)
}
}

docs/img/README.md Normal file

@ -0,0 +1 @@
Some of these diagrams (for instance, backup-process.png) were created on [draw.io](https://www.draw.io) using the "Include a copy of my diagram" option. To change them, import the PNG back into draw.io, which gives you access to the original shapes and text.

docs/img/backup-process.png Normal file
Binary file not shown


@ -0,0 +1,87 @@
# Output file format
A backup is a gzip-compressed tar file whose name matches the Backup API resource's `metadata.name` (what is specified during `ark backup create <NAME>`).
In cloud object storage, *each backup file is stored in its own subdirectory* beneath the bucket specified in the Ark server configuration. This subdirectory includes an additional file called `ark-backup.json`. The JSON file explicitly lists all info about your associated Backup resource--including any default values used--so that you have a complete historical record of its configuration. It also specifies `status.version`, which corresponds to the output file format.
Altogether, the directory structure in your cloud storage may look like:
```
rootBucket/
backup1234/
ark-backup.json
backup1234.tar.gz
```
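For instance, if your backups are stored in an AWS S3 bucket, you could confirm this layout with the AWS CLI (a sketch; `rootBucket` and `backup1234` are the example names above):
```
aws s3 ls s3://rootBucket/backup1234/
# expect ark-backup.json and backup1234.tar.gz
```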
## `ark-backup.json`
An example of this file looks like the following:
```
{
"kind": "Backup",
"apiVersion": "ark.heptio.com/v1",
"metadata": {
"name": "test-backup",
"namespace": "heptio-ark",
"selfLink": "/apis/ark.heptio.com/v1/namespaces/heptio-ark/backups/testtest",
"uid": "a12345cb-75f5-11e7-b4c2-abcdef123456",
"resourceVersion": "337075",
"creationTimestamp": "2017-07-31T13:39:15Z"
},
"spec": {
"includedNamespaces": [
"*"
],
"excludedNamespaces": null,
"includedResources": [
"*"
],
"excludedResources": null,
"labelSelector": null,
"snapshotVolumes": true,
"ttl": "24h0m0s"
},
"status": {
"version": 1,
"expiration": "2017-08-01T13:39:15Z",
"phase": "Completed",
"volumeBackups": {
"pvc-e1e2d345-7583-11e7-b4c2-abcdef123456": {
"snapshotID": "snap-04b1a8e11dfb33ab0",
"type": "gp2",
"iops": 100
}
},
"validationErrors": null
}
}
```
Note that this file includes detailed info about your volume snapshots in the `status.volumeBackups` field, which can be helpful if you want to manually check them in your cloud provider GUI.
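If you have downloaded `ark-backup.json` locally, a quick way to pull out those snapshot details is `jq` (assuming it is installed):
```
jq '.status.volumeBackups' ark-backup.json
```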
## File format version: 1
When unzipped, a typical backup directory (e.g. `backup1234.tar.gz`) looks like the following:
```
cluster/
persistentvolumes/
pv01.json
...
namespaces/
namespace1/
configmaps/
myconfigmap.json
...
        pods/
            mypod.json
            ...
        jobs/
            awesome-job.json
            ...
        deployments/
            cool-deployment.json
            ...
...
namespace2/
...
...
```
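To inspect a backup's contents locally after downloading it from object storage, a minimal sketch:
```
mkdir -p backup1234
tar -xzf backup1234.tar.gz -C backup1234
find backup1234 -type f | head
```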

docs/use-cases.md Normal file

@ -0,0 +1,54 @@
# Use Cases
This doc provides sample Ark commands for the following common scenarios:
* [Disaster recovery][0]
* [Cluster migration][1]
## Disaster recovery
*Using Schedules and Restore-Only Mode*
If you periodically back up your cluster's resources, you are able to return to a previous state in case of some unexpected mishap, such as a service outage. Doing so with Heptio Ark looks like the following:
1. After you first run the Ark server on your cluster, set up a daily backup (replacing `<SCHEDULE NAME>` in the command as desired):
```
ark schedule create <SCHEDULE NAME> --schedule "0 7 * * *"
```
This creates a Backup object with the name `<SCHEDULE NAME>-<TIMESTAMP>`.
2. A disaster happens and you need to recreate your resources.
3. Update the [Ark server Config][3], setting `restoreOnlyMode` to `true`. This prevents Backup objects from being created or deleted during your Restore process.
4. Create a restore with your most recent Ark Backup:
```
ark restore create <SCHEDULE NAME>-<TIMESTAMP>
```
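To find the name of your most recent scheduled backup, one option (assuming an `ark backup get` listing subcommand, analogous to `ark restore get`) is:
```
ark backup get | grep "<SCHEDULE NAME>"
```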
## Cluster migration
*Using Backups and Restores*
Heptio Ark can help you port your resources from one cluster to another, as long as you point each Ark Config to the same cloud object storage. In this scenario, we are also assuming that your clusters are hosted by the same cloud provider. **Note that Heptio Ark does not support the migration of persistent volumes across cloud providers.**
1. *(Cluster 1)* Assuming you haven't already been checkpointing your data with the Ark `schedule` operation, you need to first back up your entire cluster (replacing `<BACKUP-NAME>` as desired):
```
ark backup create <BACKUP-NAME> --snapshot-volumes
```
The default TTL is 24 hours; you can use the `--ttl` flag to change this as necessary.
2. *(Cluster 2)* Make sure that the `persistentVolumeProvider` and `backupStorageProvider` fields in the Ark Config match the ones from *Cluster 1*, so that your new Ark server instance is pointing to the same bucket.
3. *(Cluster 2)* Make sure that the Ark Backup object has been created. Ark resources are [synced][2] with the backup files available in cloud storage.
4. *(Cluster 2)* Once you have confirmed that the right Backup (`<BACKUP-NAME>`) is now present, you can restore everything with:
```
ark restore create <BACKUP-NAME> --restore-volumes
```
[0]: #disaster-recovery
[1]: #cluster-migration
[2]: concepts.md#cloud-storage-sync
[3]: config-definition.md#main-config-parameters

examples/README.md Normal file

@ -0,0 +1,12 @@
# Examples
The YAML config files in this directory can be used to quickly deploy a containerized Ark deployment.
* `common/`: Contains manifests to set up Ark. Can be used across cloud provider platforms. (Note that Azure requires its own deployment file due to its unique way of loading credentials).
* `minio/`: Used in the [Quickstart][1] to set up [Minio][0], a local S3-compatible object storage service. It provides a convenient way to test Ark without tying you to a specific cloud provider.
* `aws/`, `azure/`, `gcp/`: Contains manifests specific to the given cloud provider's setup.
[0]: https://github.com/minio/minio
[1]: /README.md#quickstart


@ -0,0 +1,33 @@
# Copyright 2017 Heptio Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
---
apiVersion: ark.heptio.com/v1
kind: Config
metadata:
namespace: heptio-ark
name: default
persistentVolumeProvider:
aws:
region: <YOUR_REGION>
availabilityZone: <YOUR_AVAILABILITY_ZONE>
backupStorageProvider:
bucket: <YOUR_BUCKET>
aws:
region: <YOUR_REGION>
availabilityZone: <YOUR_AVAILABILITY_ZONE>
backupSyncPeriod: 30m
gcSyncPeriod: 30m
scheduleSyncPeriod: 1m
restoreOnlyMode: false


@ -0,0 +1,42 @@
# Copyright 2017 Heptio Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
namespace: heptio-ark
name: ark
spec:
replicas: 1
template:
metadata:
labels:
component: ark
spec:
restartPolicy: Always
serviceAccountName: ark
containers:
- name: ark
image: gcr.io/heptio-images/ark:latest
command:
- /ark
args:
- server
- --logtostderr
- --v
- "4"
envFrom:
- secretRef:
name: cloud-credentials


@ -0,0 +1,33 @@
# Copyright 2017 Heptio Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
---
apiVersion: ark.heptio.com/v1
kind: Config
metadata:
namespace: heptio-ark
name: default
persistentVolumeProvider:
azure:
location: <YOUR_LOCATION>
apiTimeout: <YOUR_TIMEOUT>
backupStorageProvider:
bucket: <YOUR_BUCKET>
azure:
location: <YOUR_LOCATION>
apiTimeout: <YOUR_TIMEOUT>
backupSyncPeriod: 30m
gcSyncPeriod: 30m
scheduleSyncPeriod: 1m
restoreOnlyMode: false


@ -0,0 +1,160 @@
# Copyright 2017 Heptio Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: backups.ark.heptio.com
labels:
component: ark
spec:
group: ark.heptio.com
version: v1
scope: Namespaced
names:
plural: backups
kind: Backup
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: schedules.ark.heptio.com
labels:
component: ark
spec:
group: ark.heptio.com
version: v1
scope: Namespaced
names:
plural: schedules
kind: Schedule
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: restores.ark.heptio.com
labels:
component: ark
spec:
group: ark.heptio.com
version: v1
scope: Namespaced
names:
plural: restores
kind: Restore
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: configs.ark.heptio.com
labels:
component: ark
spec:
group: ark.heptio.com
version: v1
scope: Namespaced
names:
plural: configs
kind: Config
---
apiVersion: v1
kind: Namespace
metadata:
name: heptio-ark
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: ark
namespace: heptio-ark
labels:
component: ark
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: ark
labels:
component: ark
rules:
- apiGroups:
- "*"
verbs:
- list
- watch
- create
resources:
- "*"
- apiGroups:
- apiextensions.k8s.io
verbs:
- create
resources:
- customresourcedefinitions
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: ark
labels:
component: ark
subjects:
- kind: ServiceAccount
namespace: heptio-ark
name: ark
roleRef:
kind: ClusterRole
name: ark
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
namespace: heptio-ark
name: ark
labels:
component: ark
rules:
- apiGroups:
- ark.heptio.com
verbs:
- "*"
resources:
- "*"
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
namespace: heptio-ark
name: ark
labels:
component: ark
subjects:
- kind: ServiceAccount
namespace: heptio-ark
name: ark
roleRef:
kind: Role
name: ark
apiGroup: rbac.authorization.k8s.io


@ -0,0 +1,49 @@
# Copyright 2017 Heptio Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
namespace: heptio-ark
name: ark
spec:
replicas: 1
template:
metadata:
labels:
component: ark
spec:
restartPolicy: Always
serviceAccountName: ark
containers:
- name: ark
image: gcr.io/heptio-images/ark:latest
command:
- /ark
args:
- server
- --logtostderr
- --v
- "4"
volumeMounts:
- name: cloud-credentials
mountPath: /credentials
env:
- name: AWS_SHARED_CREDENTIALS_FILE
value: /credentials/cloud
volumes:
- name: cloud-credentials
secret:
secretName: cloud-credentials

examples/common/README.md Normal file

@ -0,0 +1,14 @@
# File Structure
## 00-prereqs.yaml
This file contains the prerequisites necessary to run the Ark server:
- `heptio-ark` namespace
- `ark` service account
- RBAC rules to grant permissions to the `ark` service account
- CRDs for the Ark-specific resources (Backup, Schedule, Restore, Config)
## 10-deployment.yaml
This deploys Ark and can be used for AWS, GCP, and Minio. *Note that it cannot be used for Azure.*


@ -0,0 +1,33 @@
# Copyright 2017 Heptio Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
---
apiVersion: ark.heptio.com/v1
kind: Config
metadata:
namespace: heptio-ark
name: default
persistentVolumeProvider:
gcp:
project: <YOUR_PROJECT>
zone: <YOUR_ZONE>
backupStorageProvider:
bucket: <YOUR_BUCKET>
gcp:
project: <YOUR_PROJECT>
zone: <YOUR_ZONE>
backupSyncPeriod: 30m
gcSyncPeriod: 30m
scheduleSyncPeriod: 1m
restoreOnlyMode: false


@ -0,0 +1,105 @@
# Copyright 2017 Heptio Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
namespace: heptio-ark
name: minio
labels:
component: minio
spec:
strategy:
type: Recreate
template:
metadata:
labels:
component: minio
spec:
volumes:
- name: storage
hostPath:
path: /tmp/minio
containers:
- name: minio
image: minio/minio:latest
imagePullPolicy: IfNotPresent
args:
- server
- /storage
env:
- name: MINIO_ACCESS_KEY
value: "minio"
- name: MINIO_SECRET_KEY
value: "minio123"
ports:
- containerPort: 9000
volumeMounts:
- name: storage
mountPath: "/storage"
---
apiVersion: v1
kind: Service
metadata:
namespace: heptio-ark
name: minio
labels:
component: minio
spec:
type: ClusterIP
ports:
- port: 9000
targetPort: 9000
protocol: TCP
selector:
component: minio
---
apiVersion: v1
kind: Secret
metadata:
namespace: heptio-ark
name: cloud-credentials
labels:
component: minio
stringData:
  cloud: |
[default]
aws_access_key_id = minio
aws_secret_access_key = minio123
---
apiVersion: batch/v1
kind: Job
metadata:
namespace: heptio-ark
name: minio-setup
labels:
component: minio
spec:
template:
metadata:
name: minio-setup
spec:
restartPolicy: OnFailure
containers:
- name: mc
image: minio/mc:latest
imagePullPolicy: IfNotPresent
command:
- /bin/sh
- -c
- "mc config host add ark http://minio:9000 minio minio123 && mc mb -p ark/ark"


@ -0,0 +1,37 @@
# Copyright 2017 Heptio Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
---
apiVersion: ark.heptio.com/v1
kind: Config
metadata:
namespace: heptio-ark
name: default
persistentVolumeProvider:
aws:
region: minio
availabilityZone: minio
s3ForcePathStyle: true
s3Url: http://minio:9000
backupStorageProvider:
bucket: ark
aws:
region: minio
availabilityZone: minio
s3ForcePathStyle: true
s3Url: http://minio:9000
backupSyncPeriod: 1m
gcSyncPeriod: 1m
scheduleSyncPeriod: 1m
restoreOnlyMode: false


@ -0,0 +1,15 @@
# Files
This directory contains manifests for two versions of a sample Nginx app under the `nginx-example` namespace.
## `base.yaml`
This is the most basic version of the Nginx app, which can be used to test Ark's backup and restore functionality.
*This can be deployed as is.*
## `with-pv.yaml`
This sets up an Nginx app that logs to a persistent volume, so that Ark's PV snapshotting functionality can also be tested.
*This requires you to first replace the placeholder value `<YOUR_STORAGE_CLASS_NAME>`.*


@ -0,0 +1,56 @@
# Copyright 2017 Heptio Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
---
apiVersion: v1
kind: Namespace
metadata:
name: nginx-example
labels:
app: nginx
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: nginx-deployment
namespace: nginx-example
spec:
replicas: 2
template:
metadata:
labels:
app: nginx
spec:
containers:
- image: nginx:1.7.9
name: nginx
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
labels:
app: nginx
name: my-nginx
namespace: nginx-example
spec:
ports:
- port: 80
targetPort: 80
selector:
app: nginx
type: LoadBalancer


@ -0,0 +1,82 @@
# Copyright 2017 Heptio Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
---
apiVersion: v1
kind: Namespace
metadata:
name: nginx-example
labels:
app: nginx
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: nginx-logs
namespace: nginx-example
labels:
app: nginx
spec:
storageClassName: <YOUR_STORAGE_CLASS_NAME>
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 50Mi
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: nginx-deployment
namespace: nginx-example
spec:
replicas: 2
template:
metadata:
labels:
app: nginx
spec:
volumes:
- name: nginx-logs
persistentVolumeClaim:
claimName: nginx-logs
containers:
- image: nginx:1.7.9
name: nginx
ports:
- containerPort: 80
volumeMounts:
- mountPath: "/var/log/nginx"
name: nginx-logs
readOnly: false
nodeSelector:
app: nginx
---
apiVersion: v1
kind: Service
metadata:
labels:
app: nginx
name: my-nginx
namespace: nginx-example
spec:
ports:
- port: 80
targetPort: 80
selector:
app: nginx
type: LoadBalancer

hack/godep-save.sh Executable file

@ -0,0 +1,26 @@
#!/bin/bash -e
#
# Copyright 2017 Heptio Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
SAVES=(
k8s.io/kubernetes/cmd/libs/go2idl/client-gen
k8s.io/kubernetes/cmd/libs/go2idl/lister-gen
k8s.io/kubernetes/cmd/libs/go2idl/informer-gen
)
godep save ./... "${SAVES[@]}"
# remove files we don't want
find vendor \( -name BUILD -o -name .travis.yml \) -exec rm {} \;


@ -0,0 +1,56 @@
#!/bin/bash -e
#
# Copyright 2017 Heptio Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
ARK_ROOT=$(realpath $(dirname ${BASH_SOURCE})/..)
BIN=${ARK_ROOT}/_output/bin
mkdir -p ${BIN}
go build -o ${BIN}/client-gen ./vendor/k8s.io/kubernetes/cmd/libs/go2idl/client-gen
OUTPUT_BASE=""
if [[ -z "${GOPATH}" ]]; then
OUTPUT_BASE="${HOME}/go/src"
else
OUTPUT_BASE="${GOPATH}/src"
fi
verify=""
for i in "$@"; do
if [[ $i == "--verify-only" ]]; then
verify=1
break
fi
done
if [[ -z ${verify} ]]; then
find ${ARK_ROOT}/pkg/generated/clientset \
\( \
-name '*.go' -and \
\( \
! -name '*_expansion.go' \
-or \
-name generated_expansion.go \
\) \
\) -exec rm {} \;
fi
${BIN}/client-gen \
--go-header-file /dev/null \
--output-base ${OUTPUT_BASE} \
--input-base github.com/heptio/ark/pkg/apis \
--clientset-path github.com/heptio/ark/pkg/generated \
--input ark/v1 \
--clientset-name clientset \
$@

hack/update-generated-docs.sh Executable file

@ -0,0 +1,32 @@
#!/bin/bash -e
#
# Copyright 2017 Heptio Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
ARK_ROOT=$(realpath $(dirname ${BASH_SOURCE})/..)
BIN=${ARK_ROOT}/_output/bin
mkdir -p ${BIN}
go build -o ${BIN}/docs-gen ./docs/generate/ark.go
if [[ $# -gt 1 ]]; then
echo "usage: ${BASH_SOURCE} [DIRECTORY]"
exit 1
fi
OUTPUT_DIR="$@"
if [[ -z "${OUTPUT_DIR}" ]]; then
OUTPUT_DIR=${ARK_ROOT}/docs/cli-reference
fi
${BIN}/docs-gen ark ${OUTPUT_DIR}


@ -0,0 +1,50 @@
#!/bin/bash -e
#
# Copyright 2017 Heptio Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
ARK_ROOT=$(realpath $(dirname ${BASH_SOURCE})/..)
BIN=${ARK_ROOT}/_output/bin
mkdir -p ${BIN}
go build -o ${BIN}/informer-gen ./vendor/k8s.io/kubernetes/cmd/libs/go2idl/informer-gen
OUTPUT_BASE=""
if [[ -z "${GOPATH}" ]]; then
OUTPUT_BASE="${HOME}/go/src"
else
OUTPUT_BASE="${GOPATH}/src"
fi
verify=""
for i in "$@"; do
if [[ $i == "--verify-only" ]]; then
verify=1
break
fi
done
if [[ -z ${verify} ]]; then
rm -rf ${ARK_ROOT}/pkg/generated/informers
fi
${BIN}/informer-gen \
--logtostderr \
--go-header-file /dev/null \
--output-base ${OUTPUT_BASE} \
--input-dirs github.com/heptio/ark/pkg/apis/ark/v1 \
--output-package github.com/heptio/ark/pkg/generated/informers \
--listers-package github.com/heptio/ark/pkg/generated/listers \
--internal-clientset-package github.com/heptio/ark/pkg/generated/clientset \
--versioned-clientset-package github.com/heptio/ark/pkg/generated/clientset \
$@


@ -0,0 +1,55 @@
#!/bin/bash -e
#
# Copyright 2017 Heptio Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
ARK_ROOT=$(realpath $(dirname ${BASH_SOURCE})/..)
BIN=${ARK_ROOT}/_output/bin
mkdir -p ${BIN}
go build -o ${BIN}/lister-gen ./vendor/k8s.io/kubernetes/cmd/libs/go2idl/lister-gen
OUTPUT_BASE=""
if [[ -z "${GOPATH}" ]]; then
OUTPUT_BASE="${HOME}/go/src"
else
OUTPUT_BASE="${GOPATH}/src"
fi
verify=""
for i in "$@"; do
if [[ $i == "--verify-only" ]]; then
verify=1
break
fi
done
if [[ -z ${verify} ]]; then
find ${ARK_ROOT}/pkg/generated/listers \
\( \
-name '*.go' -and \
\( \
! -name '*_expansion.go' \
-or \
-name generated_expansion.go \
\) \
\) -exec rm {} \;
fi
${BIN}/lister-gen \
--logtostderr \
--go-header-file /dev/null \
--output-base ${OUTPUT_BASE} \
--input-dirs github.com/heptio/ark/pkg/apis/ark/v1 \
--output-package github.com/heptio/ark/pkg/generated/listers \
$@


@ -0,0 +1,22 @@
#!/bin/bash -e
#
# Copyright 2017 Heptio Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
HACK_DIR=$(dirname "${BASH_SOURCE}")
if ! output=$(${HACK_DIR}/update-generated-clientsets.sh --verify-only 2>&1); then
echo "FAILURE: verification of clientsets failed:"
echo "${output}"
exit 1
fi

hack/verify-generated-docs.sh Executable file

@ -0,0 +1,37 @@
#!/bin/bash -e
#
# Copyright 2017 Heptio Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
ARK_ROOT=$(realpath $(dirname ${BASH_SOURCE})/..)
HACK_DIR=$(dirname "${BASH_SOURCE}")
DOCS_DIR=${ARK_ROOT}/docs/cli-reference
TMP_DIR="$(mktemp -d)"
trap cleanup INT TERM HUP EXIT
cleanup() {
rm -rf ${TMP_DIR}
}
${HACK_DIR}/update-generated-docs.sh ${TMP_DIR}
exclude_file="README.md"
output=$(echo "`diff -r ${DOCS_DIR} ${TMP_DIR}`" | sed "/${exclude_file}/d")
if [[ -n "${output}" ]] ; then
echo "FAILURE: verification of docs failed:"
echo "${output}"
exit 1
fi


@ -0,0 +1,22 @@
#!/bin/bash -e
#
# Copyright 2017 Heptio Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
HACK_DIR=$(dirname "${BASH_SOURCE}")
if ! output=$(${HACK_DIR}/update-generated-informers.sh --verify-only 2>&1); then
echo "FAILURE: verification of informers failed:"
echo "${output}"
exit 1
fi


@ -0,0 +1,22 @@
#!/bin/bash -e
#
# Copyright 2017 Heptio Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
HACK_DIR=$(dirname "${BASH_SOURCE}")
if ! output=$(${HACK_DIR}/update-generated-listers.sh --verify-only 2>&1); then
echo "FAILURE: verification of listers failed:"
echo "${output}"
exit 1
fi

pkg/apis/ark/v1/backup.go Normal file

@ -0,0 +1,134 @@
/*
Copyright 2017 Heptio Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1
import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
// BackupSpec defines the specification for an Ark backup.
type BackupSpec struct {
// IncludedNamespaces is a slice of namespace names to include objects
// from. If empty, all namespaces are included.
IncludedNamespaces []string `json:"includedNamespaces"`
// ExcludedNamespaces contains a list of namespaces that are not
// included in the backup.
ExcludedNamespaces []string `json:"excludedNamespaces"`
// IncludedResources is a slice of resource names to include
// in the backup. If empty, all resources are included.
IncludedResources []string `json:"includedResources"`
// ExcludedResources is a slice of resource names that are not
// included in the backup.
ExcludedResources []string `json:"excludedResources"`
// LabelSelector is a metav1.LabelSelector to filter with
// when adding individual objects to the backup. If empty
// or nil, all objects are included. Optional.
LabelSelector *metav1.LabelSelector `json:"labelSelector"`
// SnapshotVolumes is a bool which specifies whether to take
// cloud snapshots of any PV's referenced in the set of objects
// included in the Backup.
SnapshotVolumes bool `json:"snapshotVolumes"`
// TTL is a time.Duration-parseable string describing how long
// the Backup should be retained for.
TTL metav1.Duration `json:"ttl"`
}
// BackupPhase is a string representation of the lifecycle phase
// of an Ark backup.
type BackupPhase string
const (
// BackupPhaseNew means the backup has been created but not
// yet processed by the BackupController.
BackupPhaseNew BackupPhase = "New"
// BackupPhaseFailedValidation means the backup has failed
// the controller's validations and therefore will not run.
BackupPhaseFailedValidation BackupPhase = "FailedValidation"
// BackupPhaseInProgress means the backup is currently executing.
BackupPhaseInProgress BackupPhase = "InProgress"
// BackupPhaseCompleted means the backup has run successfully without
// errors.
BackupPhaseCompleted BackupPhase = "Completed"
// BackupPhaseFailed mean the backup ran but encountered an error that
// prevented it from completing successfully.
BackupPhaseFailed BackupPhase = "Failed"
)
// BackupStatus captures the current status of an Ark backup.
type BackupStatus struct {
// Version is the backup format version.
Version int `json:"version"`
// Expiration is when this Backup is eligible for garbage-collection.
Expiration metav1.Time `json:"expiration"`
// Phase is the current state of the Backup.
Phase BackupPhase `json:"phase"`
// VolumeBackups is a map of PersistentVolume names to
// information about the backed-up volume in the cloud
// provider API.
VolumeBackups map[string]*VolumeBackupInfo `json:"volumeBackups"`
// ValidationErrors is a slice of all validation errors (if
// applicable).
ValidationErrors []string `json:"validationErrors"`
}
// VolumeBackupInfo captures the required information about
// a PersistentVolume at backup time to be able to restore
// it later.
type VolumeBackupInfo struct {
// SnapshotID is the ID of the snapshot taken in the cloud
// provider API of this volume.
SnapshotID string `json:"snapshotID"`
// Type is the type of the disk/volume in the cloud provider
// API.
Type string `json:"type"`
// Iops is the optional value of provisioned IOPS for the
// disk/volume in the cloud provider API.
Iops *int `json:"iops"`
}
// +genclient=true
// Backup is an Ark resource that represents the capture of Kubernetes
// cluster state at a point in time (API objects and associated volume state).
type Backup struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata"`
Spec BackupSpec `json:"spec"`
Status BackupStatus `json:"status,omitempty"`
}
// BackupList is a list of Backups.
type BackupList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata"`
Items []Backup `json:"items"`
}

pkg/apis/ark/v1/config.go Normal file

@ -0,0 +1,113 @@
/*
Copyright 2017 Heptio Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1
import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
// ConfigList is a list of Configs.
type ConfigList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata"`
Items []Config `json:"items"`
}
// +genclient=true
// Config is an Ark resource that captures configuration information to be
// used for running the Ark server.
type Config struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata"`
// PersistentVolumeProvider is the configuration information for the cloud where
// the cluster is running and has PersistentVolumes to snapshot or restore.
PersistentVolumeProvider CloudProviderConfig `json:"persistentVolumeProvider"`
// BackupStorageProvider is the configuration information for the cloud where
// Ark backups are stored in object storage. This may be a different cloud than
// where the cluster is running.
BackupStorageProvider ObjectStorageProviderConfig `json:"backupStorageProvider"`
// BackupSyncPeriod is how often the BackupSyncController runs to ensure all
// Ark backups in object storage exist as Backup API objects in the cluster.
BackupSyncPeriod metav1.Duration `json:"backupSyncPeriod"`
// GCSyncPeriod is how often the GCController runs to delete expired backup
// API objects and corresponding backup files in object storage.
GCSyncPeriod metav1.Duration `json:"gcSyncPeriod"`
// ScheduleSyncPeriod is how often the ScheduleController runs to check for
// new backups that should be triggered based on schedules.
ScheduleSyncPeriod metav1.Duration `json:"scheduleSyncPeriod"`
// ResourcePriorities is an ordered slice of resources specifying the desired
// order of resource restores. Any resources not in the list will be restored
// alphabetically after the prioritized resources.
ResourcePriorities []string `json:"resourcePriorities"`
// RestoreOnlyMode is whether Ark should run in a mode where only restores
// are allowed; backups, schedules, and garbage-collection are all disabled.
RestoreOnlyMode bool `json:"restoreOnlyMode"`
}
// CloudProviderConfig is configuration information about how to connect
// to a particular cloud. Only one of the members (AWS, GCP, Azure) may
// be present.
type CloudProviderConfig struct {
// AWS is configuration information for connecting to AWS.
AWS *AWSConfig `json:"aws"`
// GCP is configuration information for connecting to GCP.
GCP *GCPConfig `json:"gcp"`
// Azure is configuration information for connecting to Azure.
Azure *AzureConfig `json:"azure"`
}
// ObjectStorageProviderConfig is configuration information for connecting to
// a particular bucket in object storage to access Ark backups.
type ObjectStorageProviderConfig struct {
// CloudProviderConfig is the configuration information for the cloud where
// Ark backups are stored in object storage.
CloudProviderConfig `json:",inline"`
// Bucket is the name of the bucket in object storage where Ark backups
// are stored.
Bucket string `json:"bucket"`
}
// AWSConfig is configuration information for connecting to AWS.
type AWSConfig struct {
Region string `json:"region"`
AvailabilityZone string `json:"availabilityZone"`
DisableSSL bool `json:"disableSSL"`
S3ForcePathStyle bool `json:"s3ForcePathStyle"`
S3Url string `json:"s3Url"`
}
// GCPConfig is configuration information for connecting to GCP.
type GCPConfig struct {
Project string `json:"project"`
Zone string `json:"zone"`
}
// AzureConfig is configuration information for connecting to Azure.
type AzureConfig struct {
Location string `json:"location"`
APITimeout metav1.Duration `json:"apiTimeout"`
}


@ -0,0 +1,36 @@
/*
Copyright 2017 Heptio Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1
const (
// DefaultNamespace is the Kubernetes namespace that is used by default for
// the Ark server and API objects.
DefaultNamespace = "heptio-ark"
// RestoreLabelKey is the label key that's applied to all resources that
// are created during a restore. This is applied for ease of identification
// of restored resources. The value will be the restore's name.
RestoreLabelKey = "ark-restore"
// ClusterScopedDir is the name of the directory containing cluster-scoped
// resources within an Ark backup.
ClusterScopedDir = "cluster"
// NamespaceScopedDir is the name of the directory containing namespace-scoped
// resources within an Ark backup.
NamespaceScopedDir = "namespaces"
)

19
pkg/apis/ark/v1/doc.go Normal file

@ -0,0 +1,19 @@
/*
Copyright 2017 Heptio Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package v1 is the v1 version of the API.
// +groupName=ark.heptio.com
package v1


@ -0,0 +1,57 @@
/*
Copyright 2017 Heptio Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
)
var (
// SchemeBuilder collects the scheme builder functions for the Ark API
SchemeBuilder = runtime.NewSchemeBuilder(addKnownTypes)
// AddToScheme applies the SchemeBuilder functions to a specified scheme
AddToScheme = SchemeBuilder.AddToScheme
)
// GroupName is the group name for the Ark API
const GroupName = "ark.heptio.com"
// SchemeGroupVersion is the GroupVersion for the Ark API
var SchemeGroupVersion = schema.GroupVersion{Group: GroupName, Version: "v1"}
// Resource gets an Ark GroupResource for a specified resource
func Resource(resource string) schema.GroupResource {
return SchemeGroupVersion.WithResource(resource).GroupResource()
}
func addKnownTypes(scheme *runtime.Scheme) error {
scheme.AddKnownTypes(SchemeGroupVersion,
&Backup{},
&BackupList{},
&Schedule{},
&ScheduleList{},
&Restore{},
&RestoreList{},
&Config{},
&ConfigList{},
)
metav1.AddToGroupVersion(scheme, SchemeGroupVersion)
return nil
}
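As a hypothetical usage sketch, a caller could register the Ark API types into a fresh scheme as follows (runtime is already imported above):
func newArkScheme() (*runtime.Scheme, error) {
	scheme := runtime.NewScheme()
	if err := AddToScheme(scheme); err != nil {
		return nil, err
	}
	// the scheme can now encode and decode Backup, Schedule, Restore, and Config
	return scheme, nil
}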

120
pkg/apis/ark/v1/restore.go Normal file

@ -0,0 +1,120 @@
/*
Copyright 2017 Heptio Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1
import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
// RestoreSpec defines the specification for an Ark restore.
type RestoreSpec struct {
// BackupName is the unique name of the Ark backup to restore
// from.
BackupName string `json:"backupName"`
// Namespaces is a slice of namespaces in the Ark backup to restore.
Namespaces []string `json:"namespaces"`
// NamespaceMapping is a map of source namespace names
// to target namespace names to restore into. Any source
// namespaces not included in the map will be restored into
// namespaces of the same name.
NamespaceMapping map[string]string `json:"namespaceMapping"`
// LabelSelector is a metav1.LabelSelector to filter with
// when restoring individual objects from the backup. If empty
// or nil, all objects are included. Optional.
LabelSelector *metav1.LabelSelector `json:"labelSelector"`
// RestorePVs is a bool defining whether to restore all included
// PVs from snapshot (via the cloudprovider). Default false.
RestorePVs bool `json:"restorePVs"`
}
// RestorePhase is a string representation of the lifecycle phase
// of an Ark restore
type RestorePhase string
const (
// RestorePhaseNew means the restore has been created but not
// yet processed by the RestoreController
RestorePhaseNew RestorePhase = "New"
// RestorePhaseFailedValidation means the restore has failed
// the controller's validations and therefore will not run.
RestorePhaseFailedValidation RestorePhase = "FailedValidation"
// RestorePhaseInProgress means the restore is currently executing.
RestorePhaseInProgress RestorePhase = "InProgress"
// RestorePhaseCompleted means the restore has finished executing.
// Any relevant warnings or errors will be captured in the Status.
RestorePhaseCompleted RestorePhase = "Completed"
)
// RestoreStatus captures the current status of an Ark restore
type RestoreStatus struct {
// Phase is the current state of the Restore
Phase RestorePhase `json:"phase"`
// ValidationErrors is a slice of all validation errors (if
// applicable)
ValidationErrors []string `json:"validationErrors"`
// Warnings is a collection of all warning messages that were
// generated during execution of the restore
Warnings RestoreResult `json:"warnings"`
// Errors is a collection of all error messages that were
// generated during execution of the restore
Errors RestoreResult `json:"errors"`
}
// RestoreResult is a collection of messages that were generated
// during execution of a restore. This will typically store either
// warning or error messages.
type RestoreResult struct {
// Ark is a slice of messages related to the operation of Ark
// itself (for example, messages related to connecting to the
// cloud, reading a backup file, etc.)
Ark []string `json:"ark"`
// Cluster is a slice of messages related to restoring cluster-
// scoped resources.
Cluster []string `json:"cluster"`
// Namespaces is a map of namespace name to slice of messages
// related to restoring namespace-scoped resources.
Namespaces map[string][]string `json:"namespaces"`
}
// +genclient=true
// Restore is an Ark resource that represents the application of
// resources from an Ark backup to a target Kubernetes cluster.
type Restore struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata"`
Spec RestoreSpec `json:"spec"`
Status RestoreStatus `json:"status,omitempty"`
}
// RestoreList is a list of Restores.
type RestoreList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata"`
Items []Restore `json:"items"`
}
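A hypothetical Restore literal, shown only to illustrate how the spec fields fit together; the backup and namespace names are placeholders:
var exampleRestore = Restore{
	ObjectMeta: metav1.ObjectMeta{Namespace: DefaultNamespace, Name: "restore-1"},
	Spec: RestoreSpec{
		BackupName: "nightly-backup", // placeholder backup name
		Namespaces: []string{"app"},
		// restore the backed-up "app" namespace into "app-copy"
		NamespaceMapping: map[string]string{"app": "app-copy"},
		RestorePVs:       true,
	},
}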


@ -0,0 +1,81 @@
/*
Copyright 2017 Heptio Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1
import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
// ScheduleSpec defines the specification for an Ark schedule
type ScheduleSpec struct {
// Template is the definition of the Backup to be run
// on the provided schedule
Template BackupSpec `json:"template"`
// Schedule is a Cron expression defining when to run
// the Backup.
Schedule string `json:"schedule"`
}
// SchedulePhase is a string representation of the lifecycle phase
// of an Ark schedule
type SchedulePhase string
const (
// SchedulePhaseNew means the schedule has been created but not
// yet processed by the ScheduleController
SchedulePhaseNew SchedulePhase = "New"
// SchedulePhaseEnabled means the schedule has been validated and
// will now be triggering backups according to the schedule spec.
SchedulePhaseEnabled SchedulePhase = "Enabled"
// SchedulePhaseFailedValidation means the schedule has failed
// the controller's validations and therefore will not trigger backups.
SchedulePhaseFailedValidation SchedulePhase = "FailedValidation"
)
// ScheduleStatus captures the current state of an Ark schedule
type ScheduleStatus struct {
// Phase is the current phase of the Schedule
Phase SchedulePhase `json:"phase"`
// LastBackup is the last time a Backup was run for this
// Schedule
LastBackup metav1.Time `json:"lastBackup"`
// ValidationErrors is a slice of all validation errors (if
// applicable)
ValidationErrors []string `json:"validationErrors"`
}
// +genclient=true
// Schedule is an Ark resource that represents a pre-scheduled or
// periodic Backup that should be run.
type Schedule struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata"`
Spec ScheduleSpec `json:"spec"`
Status ScheduleStatus `json:"status,omitempty"`
}
// ScheduleList is a list of Schedules.
type ScheduleList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata"`
Items []Schedule `json:"items"`
}
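A hypothetical Schedule literal illustrating the cron-based spec; the name and TTL are placeholders, and a time import is assumed:
var exampleSchedule = Schedule{
	ObjectMeta: metav1.ObjectMeta{Namespace: DefaultNamespace, Name: "daily"},
	Spec: ScheduleSpec{
		// Template is a full BackupSpec; only the TTL is set in this sketch
		Template: BackupSpec{TTL: metav1.Duration{Duration: 72 * time.Hour}},
		Schedule: "0 1 * * *", // standard cron: 01:00 every day
	},
}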

401
pkg/backup/backup.go Normal file

@ -0,0 +1,401 @@
/*
Copyright 2017 Heptio Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package backup
import (
"archive/tar"
"compress/gzip"
"encoding/json"
"fmt"
"io"
"strings"
"time"
"github.com/golang/glog"
"k8s.io/apimachinery/pkg/api/meta"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
kuberrs "k8s.io/apimachinery/pkg/util/errors"
api "github.com/heptio/ark/pkg/apis/ark/v1"
"github.com/heptio/ark/pkg/client"
"github.com/heptio/ark/pkg/discovery"
"github.com/heptio/ark/pkg/util/collections"
)
// Backupper performs backups.
type Backupper interface {
// Backup takes a backup using the specification in the api.Backup and writes backup data to the
// given writer.
Backup(backup *api.Backup, data io.Writer) error
}
// kubernetesBackupper implements Backupper.
type kubernetesBackupper struct {
dynamicFactory client.DynamicFactory
discoveryHelper discovery.Helper
actions map[schema.GroupResource]Action
itemBackupper itemBackupper
}
var _ Backupper = &kubernetesBackupper{}
// Action is an actor that performs an operation on an individual item being backed up.
type Action interface {
// Execute is invoked on an item being backed up. If an error is returned, the Backup is marked as
// failed.
Execute(item map[string]interface{}, backup *api.Backup) error
}
// NewKubernetesBackupper creates a new kubernetesBackupper.
func NewKubernetesBackupper(
discoveryHelper discovery.Helper,
dynamicFactory client.DynamicFactory,
actions map[string]Action,
) (Backupper, error) {
resolvedActions, err := resolveActions(discoveryHelper.Mapper(), actions)
if err != nil {
return nil, err
}
return &kubernetesBackupper{
discoveryHelper: discoveryHelper,
dynamicFactory: dynamicFactory,
actions: resolvedActions,
itemBackupper: &realItemBackupper{},
}, nil
}
// resolveActions resolves the string-based map of group-resources to actions and returns a map of
// schema.GroupResources to actions.
func resolveActions(mapper meta.RESTMapper, actions map[string]Action) (map[schema.GroupResource]Action, error) {
ret := make(map[schema.GroupResource]Action)
for resource, action := range actions {
gr, err := resolveGroupResource(mapper, resource)
if err != nil {
return nil, err
}
ret[gr] = action
}
return ret, nil
}
// getResourceIncludesExcludes takes the lists of resources to include and exclude from the
// backup, uses the RESTMapper to resolve them to fully-qualified group-resource names, and returns
// an IncludesExcludes list.
func getResourceIncludesExcludes(mapper meta.RESTMapper, backup *api.Backup) *collections.IncludesExcludes {
resources := collections.NewIncludesExcludes()
resolve := func(list []string, allowAll bool, f func(string)) {
for _, resource := range list {
if allowAll && resource == "*" {
f("*")
return
}
gr, err := resolveGroupResource(mapper, resource)
if err != nil {
glog.Errorf("unable to include resource %q in backup: %v", resource, err)
continue
}
f(gr.String())
}
}
resolve(backup.Spec.IncludedResources, true, func(s string) { resources.Includes(s) })
resolve(backup.Spec.ExcludedResources, false, func(s string) { resources.Excludes(s) })
return resources
}
// resolveGroupResource uses the RESTMapper to resolve resource to a fully-qualified
// schema.GroupResource. If the RESTMapper is unable to do so, an error is returned instead.
func resolveGroupResource(mapper meta.RESTMapper, resource string) (schema.GroupResource, error) {
gvr, err := mapper.ResourceFor(schema.ParseGroupResource(resource).WithVersion(""))
if err != nil {
return schema.GroupResource{}, err
}
return gvr.GroupResource(), nil
}
// getNamespaceIncludesExcludes returns an IncludesExcludes list containing which namespaces to
// include and exclude from the backup.
func getNamespaceIncludesExcludes(backup *api.Backup) *collections.IncludesExcludes {
return collections.NewIncludesExcludes().Includes(backup.Spec.IncludedNamespaces...).Excludes(backup.Spec.ExcludedNamespaces...)
}
type backupContext struct {
backup *api.Backup
w tarWriter
namespaceIncludesExcludes *collections.IncludesExcludes
resourceIncludesExcludes *collections.IncludesExcludes
// deploymentsBackedUp marks whether we've seen and are backing up the deployments resource, from
// either the apps or extensions api groups. We only want to back them up once, from whichever api
// group we see first.
deploymentsBackedUp bool
// networkPoliciesBackedUp marks whether we've seen and are backing up the networkpolicies
// resource, from either the networking.k8s.io or extensions api groups. We only want to back them
// up once, from whichever api group we see first.
networkPoliciesBackedUp bool
}
// Backup backs up the items specified in the Backup, placing them in a gzip-compressed tar file
// written to data.
func (kb *kubernetesBackupper) Backup(backup *api.Backup, data io.Writer) error {
gzw := gzip.NewWriter(data)
defer gzw.Close()
tw := tar.NewWriter(gzw)
defer tw.Close()
var errs []error
ctx := &backupContext{
backup: backup,
w: tw,
namespaceIncludesExcludes: getNamespaceIncludesExcludes(backup),
resourceIncludesExcludes: getResourceIncludesExcludes(kb.discoveryHelper.Mapper(), backup),
}
for _, group := range kb.discoveryHelper.Resources() {
glog.V(2).Infof("Backing up group %q\n", group.GroupVersion)
if err := kb.backupGroup(ctx, group); err != nil {
errs = append(errs, err)
}
}
return kuberrs.NewAggregate(errs)
}
type tarWriter interface {
io.Closer
Write([]byte) (int, error)
WriteHeader(*tar.Header) error
}
// backupGroup backs up a single API group.
func (kb *kubernetesBackupper) backupGroup(ctx *backupContext, group *metav1.APIResourceList) error {
var errs []error
for _, resource := range group.APIResources {
glog.V(2).Infof("Backing up resource %s/%s\n", group.GroupVersion, resource.Name)
if err := kb.backupResource(ctx, group, resource); err != nil {
errs = append(errs, err)
}
}
return kuberrs.NewAggregate(errs)
}
const (
appsDeploymentsResource = "deployments.apps"
extensionsDeploymentsResource = "deployments.extensions"
networkingNetworkPoliciesResource = "networkpolicies.networking.k8s.io"
extensionsNetworkPoliciesResource = "networkpolicies.extensions"
)
// backupResource backs up all the objects for a given group-version-resource.
func (kb *kubernetesBackupper) backupResource(
ctx *backupContext,
group *metav1.APIResourceList,
resource metav1.APIResource,
) error {
var errs []error
gv, err := schema.ParseGroupVersion(group.GroupVersion)
if err != nil {
return err
}
gvr := schema.GroupVersionResource{Group: gv.Group, Version: gv.Version}
gr := schema.GroupResource{Group: gv.Group, Resource: resource.Name}
grString := gr.String()
if !ctx.resourceIncludesExcludes.ShouldInclude(grString) {
glog.V(2).Infof("Not including resource %s\n", grString)
return nil
}
if grString == appsDeploymentsResource || grString == extensionsDeploymentsResource {
if ctx.deploymentsBackedUp {
var other string
if grString == appsDeploymentsResource {
other = extensionsDeploymentsResource
} else {
other = appsDeploymentsResource
}
glog.V(4).Infof("Skipping resource %q because it's a duplicate of %q", grString, other)
return nil
}
ctx.deploymentsBackedUp = true
}
if grString == networkingNetworkPoliciesResource || grString == extensionsNetworkPoliciesResource {
if ctx.networkPoliciesBackedUp {
var other string
if grString == networkingNetworkPoliciesResource {
other = extensionsNetworkPoliciesResource
} else {
other = networkingNetworkPoliciesResource
}
glog.V(4).Infof("Skipping resource %q because it's a duplicate of %q", grString, other)
return nil
}
ctx.networkPoliciesBackedUp = true
}
var namespacesToList []string
if resource.Namespaced {
namespacesToList = getNamespacesToList(ctx.namespaceIncludesExcludes)
} else {
namespacesToList = []string{""}
}
for _, namespace := range namespacesToList {
resourceClient, err := kb.dynamicFactory.ClientForGroupVersionResource(gvr, resource, namespace)
if err != nil {
return err
}
labelSelector := ""
if ctx.backup.Spec.LabelSelector != nil {
labelSelector = metav1.FormatLabelSelector(ctx.backup.Spec.LabelSelector)
}
unstructuredList, err := resourceClient.List(metav1.ListOptions{LabelSelector: labelSelector})
if err != nil {
return err
}
// extract the individual items from the list and back up each one
items, err := meta.ExtractList(unstructuredList)
if err != nil {
return err
}
action := kb.actions[gr]
for _, item := range items {
unstructured, ok := item.(runtime.Unstructured)
if !ok {
errs = append(errs, fmt.Errorf("unexpected type %T", item))
continue
}
obj := unstructured.UnstructuredContent()
if err := kb.itemBackupper.backupItem(ctx, obj, grString, action); err != nil {
errs = append(errs, err)
}
}
}
return kuberrs.NewAggregate(errs)
}
// getNamespacesToList examines ie and resolves the includes and excludes to a full list of
// namespaces to list. If ie is nil or it includes *, the result is just "" (list across all
// namespaces). Otherwise, the result is a list of every included namespace minus all excluded ones.
func getNamespacesToList(ie *collections.IncludesExcludes) []string {
if ie == nil {
return []string{""}
}
if ie.ShouldInclude("*") {
// "" means all namespaces
return []string{""}
}
var list []string
for _, i := range ie.GetIncludes() {
if ie.ShouldInclude(i) {
list = append(list, i)
}
}
return list
}
type itemBackupper interface {
backupItem(ctx *backupContext, item map[string]interface{}, groupResource string, action Action) error
}
type realItemBackupper struct{}
// backupItem backs up an individual item to tarWriter. The item may be excluded based on the
// namespaces IncludesExcludes list.
func (*realItemBackupper) backupItem(ctx *backupContext, item map[string]interface{}, groupResource string, action Action) error {
// Never save status
delete(item, "status")
metadata, err := collections.GetMap(item, "metadata")
if err != nil {
return err
}
name, err := collections.GetString(metadata, "name")
if err != nil {
return err
}
namespace, err := collections.GetString(metadata, "namespace")
if err == nil {
if !ctx.namespaceIncludesExcludes.ShouldInclude(namespace) {
glog.V(2).Infof("Excluding item %s because namespace %s is excluded\n", name, namespace)
return nil
}
}
if action != nil {
glog.V(4).Infof("Executing action on %s, ns=%s, name=%s", groupResource, namespace, name)
if err := action.Execute(item, ctx.backup); err != nil {
return err
}
}
glog.V(2).Infof("Backing up resource=%s, ns=%s, name=%s", groupResource, namespace, name)
var filePath string
if namespace != "" {
filePath = strings.Join([]string{api.NamespaceScopedDir, namespace, groupResource, name + ".json"}, "/")
} else {
filePath = strings.Join([]string{api.ClusterScopedDir, groupResource, name + ".json"}, "/")
}
itemBytes, err := json.Marshal(item)
if err != nil {
return err
}
hdr := &tar.Header{
Name: filePath,
Size: int64(len(itemBytes)),
Typeflag: tar.TypeReg,
Mode: 0755,
ModTime: time.Now(),
}
if err := ctx.w.WriteHeader(hdr); err != nil {
return err
}
if _, err := ctx.w.Write(itemBytes); err != nil {
return err
}
return nil
}
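As a usage sketch (not part of this commit), driving a backup end-to-end could look like the following; the discovery helper and dynamic factory are assumed to be constructed elsewhere:
func runBackup(helper discovery.Helper, factory client.DynamicFactory, backup *api.Backup, out io.Writer) error {
	// no per-resource actions are registered in this sketch; a nil map is accepted
	backupper, err := NewKubernetesBackupper(helper, factory, nil)
	if err != nil {
		return err
	}
	// writes a gzip-compressed tar stream of all matching objects to out
	return backupper.Backup(backup, out)
}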

1097
pkg/backup/backup_test.go Normal file

File diff suppressed because it is too large


@ -0,0 +1,128 @@
/*
Copyright 2017 Heptio Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package backup
import (
"fmt"
"regexp"
"github.com/golang/glog"
"k8s.io/apimachinery/pkg/util/clock"
api "github.com/heptio/ark/pkg/apis/ark/v1"
"github.com/heptio/ark/pkg/cloudprovider"
"github.com/heptio/ark/pkg/util/collections"
)
// volumeSnapshotAction is a struct that knows how to take snapshots of PersistentVolumes
// that are backed by compatible cloud volumes.
type volumeSnapshotAction struct {
snapshotService cloudprovider.SnapshotService
clock clock.Clock
}
var _ Action = &volumeSnapshotAction{}
func NewVolumeSnapshotAction(snapshotService cloudprovider.SnapshotService) Action {
return &volumeSnapshotAction{
snapshotService: snapshotService,
clock: clock.RealClock{},
}
}
// Execute triggers a snapshot for the volume/disk underlying a PersistentVolume if the provided
// backup has volume snapshots enabled and the PV is of a compatible type. Also records cloud
// disk type and IOPS (if applicable) to be able to restore to current state later.
func (a *volumeSnapshotAction) Execute(volume map[string]interface{}, backup *api.Backup) error {
backupName := fmt.Sprintf("%s/%s", backup.Namespace, backup.Name)
if !backup.Spec.SnapshotVolumes {
glog.V(2).Infof("Backup %q has volume snapshots disabled; skipping volume snapshot action.", backupName)
return nil
}
metadata := volume["metadata"].(map[string]interface{})
name := metadata["name"].(string)
volumeID := getVolumeID(volume)
if volumeID == "" {
return fmt.Errorf("unable to determine volume ID for backup %q, PersistentVolume %q", backupName, name)
}
expiration := a.clock.Now().Add(backup.Spec.TTL.Duration)
glog.Infof("Backup %q: snapshotting PersistenVolume %q, volume-id %q, expiration %v", backupName, name, volumeID, expiration)
snapshotID, err := a.snapshotService.CreateSnapshot(volumeID)
if err != nil {
glog.V(4).Infof("error creating snapshot for backup %q, volume %q, volume-id %q: %v", backupName, name, volumeID, err)
return err
}
volumeType, iops, err := a.snapshotService.GetVolumeInfo(volumeID)
if err != nil {
glog.V(4).Infof("error getting volume info for backup %q, volume %q, volume-id %q: %v", backupName, name, volumeID, err)
return err
}
if backup.Status.VolumeBackups == nil {
backup.Status.VolumeBackups = make(map[string]*api.VolumeBackupInfo)
}
backup.Status.VolumeBackups[name] = &api.VolumeBackupInfo{
SnapshotID: snapshotID,
Type: volumeType,
Iops: iops,
}
return nil
}
var ebsVolumeIDRegex = regexp.MustCompile("vol-.*")
func getVolumeID(pv map[string]interface{}) string {
spec, err := collections.GetMap(pv, "spec")
if err != nil {
return ""
}
if aws, err := collections.GetMap(spec, "awsElasticBlockStore"); err == nil {
volumeID, err := collections.GetString(aws, "volumeID")
if err != nil {
return ""
}
return ebsVolumeIDRegex.FindString(volumeID)
}
if gce, err := collections.GetMap(spec, "gcePersistentDisk"); err == nil {
volumeID, err := collections.GetString(gce, "pdName")
if err != nil {
return ""
}
return volumeID
}
if gce, err := collections.GetMap(spec, "azureDisk"); err == nil {
volumeID, err := collections.GetString(gce, "diskName")
if err != nil {
return ""
}
return volumeID
}
return ""
}
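To make the EBS case concrete, a hypothetical example: dynamically provisioned AWS volumes carry a zone-qualified volumeID, and the regex strips everything before the vol- identifier:
func exampleGetVolumeID() {
	pv := map[string]interface{}{
		"spec": map[string]interface{}{
			"awsElasticBlockStore": map[string]interface{}{
				"volumeID": "aws://us-west-2a/vol-abc123", // zone-qualified form
			},
		},
	}
	// ebsVolumeIDRegex strips the zone prefix, so this prints "vol-abc123"
	fmt.Println(getVolumeID(pv))
}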


@ -0,0 +1,205 @@
/*
Copyright 2017 Heptio Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package backup
import (
"reflect"
"testing"
"time"
"github.com/stretchr/testify/require"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/clock"
"github.com/heptio/ark/pkg/apis/ark/v1"
. "github.com/heptio/ark/pkg/util/test"
)
func TestVolumeSnapshotAction(t *testing.T) {
iops := 1000
tests := []struct {
name string
snapshotEnabled bool
pv string
ttl time.Duration
expectError bool
expectedVolumeID string
existingVolumeBackups map[string]*v1.VolumeBackupInfo
volumeInfo map[string]v1.VolumeBackupInfo
}{
{
name: "snapshot disabled",
pv: `{"apiVersion": "v1", "kind": "PersistentVolume", "metadata": {"name": "mypv"}}`,
snapshotEnabled: false,
},
{
name: "can't find volume id - missing spec",
snapshotEnabled: true,
pv: `{"apiVersion": "v1", "kind": "PersistentVolume", "metadata": {"name": "mypv"}}`,
expectError: true,
},
{
name: "can't find volume id - spec but no volume source defined",
snapshotEnabled: true,
pv: `{"apiVersion": "v1", "kind": "PersistentVolume", "metadata": {"name": "mypv"}, "spec": {}}`,
expectError: true,
},
{
name: "can't find volume id - aws but no volume id",
snapshotEnabled: true,
pv: `{"apiVersion": "v1", "kind": "PersistentVolume", "metadata": {"name": "mypv"}, "spec": {"awsElasticBlockStore": {}}}`,
expectError: true,
},
{
name: "can't find volume id - gce but no volume id",
snapshotEnabled: true,
pv: `{"apiVersion": "v1", "kind": "PersistentVolume", "metadata": {"name": "mypv"}, "spec": {"gcePersistentDisk": {}}}`,
expectError: true,
},
{
name: "aws - simple volume id",
snapshotEnabled: true,
pv: `{"apiVersion": "v1", "kind": "PersistentVolume", "metadata": {"name": "mypv"}, "spec": {"awsElasticBlockStore": {"volumeID": "vol-abc123"}}}`,
expectError: false,
expectedVolumeID: "vol-abc123",
ttl: 5 * time.Minute,
volumeInfo: map[string]v1.VolumeBackupInfo{
"vol-abc123": v1.VolumeBackupInfo{Type: "gp", SnapshotID: "snap-1"},
},
},
{
name: "aws - simple volume id with provisioned IOPS",
snapshotEnabled: true,
pv: `{"apiVersion": "v1", "kind": "PersistentVolume", "metadata": {"name": "mypv"}, "spec": {"awsElasticBlockStore": {"volumeID": "vol-abc123"}}}`,
expectError: false,
expectedVolumeID: "vol-abc123",
ttl: 5 * time.Minute,
volumeInfo: map[string]v1.VolumeBackupInfo{
"vol-abc123": v1.VolumeBackupInfo{Type: "io1", Iops: &iops, SnapshotID: "snap-1"},
},
},
{
name: "aws - dynamically provisioned volume id",
snapshotEnabled: true,
pv: `{"apiVersion": "v1", "kind": "PersistentVolume", "metadata": {"name": "mypv"}, "spec": {"awsElasticBlockStore": {"volumeID": "aws://us-west-2a/vol-abc123"}}}`,
expectError: false,
expectedVolumeID: "vol-abc123",
ttl: 5 * time.Minute,
volumeInfo: map[string]v1.VolumeBackupInfo{
"vol-abc123": v1.VolumeBackupInfo{Type: "gp", SnapshotID: "snap-1"},
},
},
{
name: "gce",
snapshotEnabled: true,
pv: `{"apiVersion": "v1", "kind": "PersistentVolume", "metadata": {"name": "mypv"}, "spec": {"gcePersistentDisk": {"pdName": "pd-abc123"}}}`,
expectError: false,
expectedVolumeID: "pd-abc123",
ttl: 5 * time.Minute,
volumeInfo: map[string]v1.VolumeBackupInfo{
"pd-abc123": v1.VolumeBackupInfo{Type: "gp", SnapshotID: "snap-1"},
},
},
{
name: "preexisting volume backup info in backup status",
snapshotEnabled: true,
pv: `{"apiVersion": "v1", "kind": "PersistentVolume", "metadata": {"name": "mypv"}, "spec": {"gcePersistentDisk": {"pdName": "pd-abc123"}}}`,
expectError: false,
expectedVolumeID: "pd-abc123",
ttl: 5 * time.Minute,
existingVolumeBackups: map[string]*v1.VolumeBackupInfo{
"anotherpv": &v1.VolumeBackupInfo{SnapshotID: "anothersnap"},
},
volumeInfo: map[string]v1.VolumeBackupInfo{
"pd-abc123": v1.VolumeBackupInfo{Type: "gp", SnapshotID: "snap-1"},
},
},
{
name: "create snapshot error",
snapshotEnabled: true,
pv: `{"apiVersion": "v1", "kind": "PersistentVolume", "metadata": {"name": "mypv"}, "spec": {"gcePersistentDisk": {"pdName": "pd-abc123"}}}`,
expectError: true,
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
backup := &v1.Backup{
ObjectMeta: metav1.ObjectMeta{
Namespace: v1.DefaultNamespace,
Name: "mybackup",
},
Spec: v1.BackupSpec{
SnapshotVolumes: test.snapshotEnabled,
TTL: metav1.Duration{Duration: test.ttl},
},
Status: v1.BackupStatus{
VolumeBackups: test.existingVolumeBackups,
},
}
snapshotService := &FakeSnapshotService{SnapshottableVolumes: test.volumeInfo}
action := NewVolumeSnapshotAction(snapshotService).(*volumeSnapshotAction)
fakeClock := clock.NewFakeClock(time.Now())
action.clock = fakeClock
pv, err := getAsMap(test.pv)
if err != nil {
t.Fatal(err)
}
err = action.Execute(pv, backup)
gotErr := err != nil
if e, a := test.expectError, gotErr; e != a {
t.Errorf("error: expected %v, got %v", e, a)
}
if test.expectError {
return
}
if !test.snapshotEnabled {
// don't need to check anything else if snapshots are disabled
return
}
expectedVolumeBackups := test.existingVolumeBackups
if expectedVolumeBackups == nil {
expectedVolumeBackups = make(map[string]*v1.VolumeBackupInfo)
}
// we should have taken exactly one snapshot
require.Equal(t, 1, snapshotService.SnapshotsTaken.Len())
// the snapshotID should be the one in the entry in snapshotService.SnapshottableVolumes
// for the volume we ran the test for
snapshotID, _ := snapshotService.SnapshotsTaken.PopAny()
expectedVolumeBackups["mypv"] = &v1.VolumeBackupInfo{
SnapshotID: snapshotID,
Type: test.volumeInfo[test.expectedVolumeID].Type,
Iops: test.volumeInfo[test.expectedVolumeID].Iops,
}
if e, a := expectedVolumeBackups, backup.Status.VolumeBackups; !reflect.DeepEqual(e, a) {
t.Errorf("backup.status.VolumeBackups: expected %v, got %v", e, a)
}
})
}
}

27
pkg/buildinfo/version.go Normal file

@ -0,0 +1,27 @@
/*
Copyright 2017 Heptio Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package buildinfo holds build-time information like the Ark version.
// This is a separate package so that other packages can import it without
// worrying about introducing circular dependencies.
package buildinfo
// Version is the current version of Ark, set by the go linker's -X flag at build time.
var Version string
// DockerImage is the full path to the docker image for this build, for example
// gcr.io/heptio-images/ark.
var DockerImage string

31
pkg/client/client.go Normal file

@ -0,0 +1,31 @@
/*
Copyright 2017 Heptio Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package client
import (
"k8s.io/client-go/rest"
"k8s.io/client-go/tools/clientcmd"
)
// Config returns a *rest.Config, using either the kubeconfig (if specified) or an in-cluster
// configuration.
func Config(kubeconfig string) (*rest.Config, error) {
if len(kubeconfig) > 0 {
return clientcmd.BuildConfigFromFlags("", kubeconfig)
}
return rest.InClusterConfig()
}
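A minimal usage sketch, assuming the generated clientset package that appears elsewhere in this commit is imported:
func newArkClient(kubeconfig string) (clientset.Interface, error) {
	// Config falls back to in-cluster configuration when kubeconfig is empty
	cfg, err := Config(kubeconfig)
	if err != nil {
		return nil, err
	}
	return clientset.NewForConfig(cfg)
}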

100
pkg/client/dynamic.go Normal file

@ -0,0 +1,100 @@
/*
Copyright 2017 Heptio Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package client
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/watch"
"k8s.io/client-go/dynamic"
)
// DynamicFactory contains methods for retrieving dynamic clients for GroupVersionResources and
// GroupVersionKinds.
type DynamicFactory interface {
// ClientForGroupVersionResource returns a Dynamic client for the given Group and Version
// (specified in gvr) and Resource (specified in resource) for the given namespace.
ClientForGroupVersionResource(gvr schema.GroupVersionResource, resource metav1.APIResource, namespace string) (Dynamic, error)
// ClientForGroupVersionKind returns a Dynamic client for the given Group and Version
// (specified in gvk) and Resource (specified in resource) for the given namespace.
ClientForGroupVersionKind(gvk schema.GroupVersionKind, resource metav1.APIResource, namespace string) (Dynamic, error)
}
// dynamicFactory implements DynamicFactory.
type dynamicFactory struct {
clientPool dynamic.ClientPool
}
var _ DynamicFactory = &dynamicFactory{}
// NewDynamicFactory returns a new ClientPool-based dynamic factory.
func NewDynamicFactory(clientPool dynamic.ClientPool) DynamicFactory {
return &dynamicFactory{clientPool: clientPool}
}
func (f *dynamicFactory) ClientForGroupVersionResource(gvr schema.GroupVersionResource, resource metav1.APIResource, namespace string) (Dynamic, error) {
dynamicClient, err := f.clientPool.ClientForGroupVersionResource(gvr)
if err != nil {
return nil, err
}
return &dynamicResourceClient{
resourceClient: dynamicClient.Resource(&resource, namespace),
}, nil
}
func (f *dynamicFactory) ClientForGroupVersionKind(gvk schema.GroupVersionKind, resource metav1.APIResource, namespace string) (Dynamic, error) {
dynamicClient, err := f.clientPool.ClientForGroupVersionKind(gvk)
if err != nil {
return nil, err
}
return &dynamicResourceClient{
resourceClient: dynamicClient.Resource(&resource, namespace),
}, nil
}
// Dynamic contains client methods that Ark needs for backing up and restoring resources.
type Dynamic interface {
// Create creates an object.
Create(obj *unstructured.Unstructured) (*unstructured.Unstructured, error)
// List lists all the objects of a given resource.
List(metav1.ListOptions) (runtime.Object, error)
// Watch watches for changes to objects of a given resource.
Watch(metav1.ListOptions) (watch.Interface, error)
}
// dynamicResourceClient implements Dynamic.
type dynamicResourceClient struct {
resourceClient *dynamic.ResourceClient
}
var _ Dynamic = &dynamicResourceClient{}
func (d *dynamicResourceClient) Create(obj *unstructured.Unstructured) (*unstructured.Unstructured, error) {
return d.resourceClient.Create(obj)
}
func (d *dynamicResourceClient) List(options metav1.ListOptions) (runtime.Object, error) {
return d.resourceClient.List(options)
}
func (d *dynamicResourceClient) Watch(options metav1.ListOptions) (watch.Interface, error) {
return d.resourceClient.Watch(options)
}
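A hypothetical caller obtaining a dynamic client for pods, shown only to illustrate how the GroupVersionResource and APIResource arguments relate:
func podsClient(factory DynamicFactory, namespace string) (Dynamic, error) {
	gvr := schema.GroupVersionResource{Version: "v1", Resource: "pods"}
	resource := metav1.APIResource{Name: "pods", Namespaced: true}
	return factory.ClientForGroupVersionResource(gvr, resource, namespace)
}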

72
pkg/client/factory.go Normal file

@ -0,0 +1,72 @@
/*
Copyright 2017 Heptio Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package client
import (
"os"
"github.com/spf13/pflag"
"github.com/heptio/ark/pkg/generated/clientset"
)
// Factory knows how to create an ArkClient.
type Factory interface {
// BindFlags binds common flags such as --kubeconfig to the passed-in FlagSet.
BindFlags(flags *pflag.FlagSet)
// Client returns an ArkClient. It uses the following priority to specify the cluster
// configuration: --kubeconfig flag, KUBECONFIG environment variable, in-cluster configuration.
Client() (clientset.Interface, error)
}
type factory struct {
flags *pflag.FlagSet
kubeconfig string
}
// NewFactory returns a Factory.
func NewFactory() Factory {
f := &factory{
flags: pflag.NewFlagSet("", pflag.ContinueOnError),
}
f.flags.StringVar(&f.kubeconfig, "kubeconfig", "", "Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration")
return f
}
func (f *factory) BindFlags(flags *pflag.FlagSet) {
flags.AddFlagSet(f.flags)
}
func (f *factory) Client() (clientset.Interface, error) {
kubeconfig := f.kubeconfig
if kubeconfig == "" {
// if the command line flag was not specified, try the environment variable
kubeconfig = os.Getenv("KUBECONFIG")
}
clientConfig, err := Config(kubeconfig)
if err != nil {
return nil, err
}
arkClient, err := clientset.NewForConfig(clientConfig)
if err != nil {
return nil, err
}
return arkClient, nil
}
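A hypothetical wiring of the factory into a process-wide flag set:
func exampleArkClient() (clientset.Interface, error) {
	f := NewFactory()
	f.BindFlags(pflag.CommandLine) // expose --kubeconfig on the standard flag set
	pflag.Parse()
	return f.Client()
}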


@ -0,0 +1,164 @@
/*
Copyright 2017 Heptio Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package aws
import (
"fmt"
"github.com/aws/aws-sdk-go/service/ec2"
"github.com/heptio/ark/pkg/cloudprovider"
)
var _ cloudprovider.BlockStorageAdapter = &blockStorageAdapter{}
type blockStorageAdapter struct {
ec2 *ec2.EC2
az string
}
func (op *blockStorageAdapter) CreateVolumeFromSnapshot(snapshotID, volumeType string, iops *int) (volumeID string, err error) {
req := &ec2.CreateVolumeInput{
SnapshotId: &snapshotID,
AvailabilityZone: &op.az,
VolumeType: &volumeType,
}
if iops != nil {
req.SetIops(int64(*iops))
}
res, err := op.ec2.CreateVolume(req)
if err != nil {
return "", err
}
return *res.VolumeId, nil
}
func (op *blockStorageAdapter) GetVolumeInfo(volumeID string) (string, *int, error) {
req := &ec2.DescribeVolumesInput{
VolumeIds: []*string{&volumeID},
}
res, err := op.ec2.DescribeVolumes(req)
if err != nil {
return "", nil, err
}
if len(res.Volumes) != 1 {
return "", nil, fmt.Errorf("Expected one volume from DescribeVolumes for volume ID %v, got %v", volumeID, len(res.Volumes))
}
vol := res.Volumes[0]
var (
volumeType string
iops int
)
if vol.VolumeType != nil {
volumeType = *vol.VolumeType
}
if vol.Iops != nil {
iops = int(*vol.Iops)
}
return volumeType, &iops, nil
}
func (op *blockStorageAdapter) IsVolumeReady(volumeID string) (ready bool, err error) {
req := &ec2.DescribeVolumesInput{
VolumeIds: []*string{&volumeID},
}
res, err := op.ec2.DescribeVolumes(req)
if err != nil {
return false, err
}
if len(res.Volumes) != 1 {
return false, fmt.Errorf("Expected one volume from DescribeVolumes for volume ID %v, got %v", volumeID, len(res.Volumes))
}
return *res.Volumes[0].State == ec2.VolumeStateAvailable, nil
}
func (op *blockStorageAdapter) ListSnapshots(tagFilters map[string]string) ([]string, error) {
req := &ec2.DescribeSnapshotsInput{}
for k, v := range tagFilters {
filter := &ec2.Filter{}
filter.SetName(k)
filter.SetValues([]*string{&v})
req.Filters = append(req.Filters, filter)
}
res, err := op.ec2.DescribeSnapshots(req)
if err != nil {
return nil, err
}
var ret []string
for _, snapshot := range res.Snapshots {
ret = append(ret, *snapshot.SnapshotId)
}
return ret, nil
}
func (op *blockStorageAdapter) CreateSnapshot(volumeID string, tags map[string]string) (string, error) {
req := &ec2.CreateSnapshotInput{
VolumeId: &volumeID,
}
res, err := op.ec2.CreateSnapshot(req)
if err != nil {
return "", err
}
tagsReq := &ec2.CreateTagsInput{}
tagsReq.SetResources([]*string{res.SnapshotId})
ec2Tags := make([]*ec2.Tag, 0, len(tags))
for k, v := range tags {
key := k
val := v
tag := &ec2.Tag{Key: &key, Value: &val}
ec2Tags = append(ec2Tags, tag)
}
tagsReq.SetTags(ec2Tags)
_, err = op.ec2.CreateTags(tagsReq)
return *res.SnapshotId, err
}
func (op *blockStorageAdapter) DeleteSnapshot(snapshotID string) error {
req := &ec2.DeleteSnapshotInput{
SnapshotId: &snapshotID,
}
_, err := op.ec2.DeleteSnapshot(req)
return err
}
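Note that filter keys are passed straight through to DescribeSnapshots, so tag-based filtering would use EC2's "tag:<key>" filter-name form. A hypothetical sketch (the tag key and value are placeholders):
func listArkSnapshots(op *blockStorageAdapter) ([]string, error) {
	// EC2 filter names for tags take the "tag:<key>" form
	return op.ListSnapshots(map[string]string{"tag:ark-backup": "nightly"})
}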


@ -0,0 +1,88 @@
/*
Copyright 2017 Heptio Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package aws
import (
"io"
"github.com/aws/aws-sdk-go/service/s3"
"github.com/heptio/ark/pkg/cloudprovider"
)
var _ cloudprovider.ObjectStorageAdapter = &objectStorageAdapter{}
type objectStorageAdapter struct {
s3 *s3.S3
}
func (op *objectStorageAdapter) PutObject(bucket string, key string, body io.ReadSeeker) error {
req := &s3.PutObjectInput{
Bucket: &bucket,
Key: &key,
Body: body,
}
_, err := op.s3.PutObject(req)
return err
}
func (op *objectStorageAdapter) GetObject(bucket string, key string) (io.ReadCloser, error) {
req := &s3.GetObjectInput{
Bucket: &bucket,
Key: &key,
}
res, err := op.s3.GetObject(req)
if err != nil {
return nil, err
}
return res.Body, nil
}
func (op *objectStorageAdapter) ListCommonPrefixes(bucket string, delimiter string) ([]string, error) {
req := &s3.ListObjectsV2Input{
Bucket: &bucket,
Delimiter: &delimiter,
}
res, err := op.s3.ListObjectsV2(req)
if err != nil {
return nil, err
}
ret := make([]string, 0, len(res.CommonPrefixes))
for _, prefix := range res.CommonPrefixes {
ret = append(ret, *prefix.Prefix)
}
return ret, nil
}
func (op *objectStorageAdapter) DeleteObject(bucket string, key string) error {
req := &s3.DeleteObjectInput{
Bucket: &bucket,
Key: &key,
}
_, err := op.s3.DeleteObject(req)
return err
}


@ -0,0 +1,62 @@
/*
Copyright 2017 Heptio Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package aws
import (
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/session"
"github.com/aws/aws-sdk-go/service/ec2"
"github.com/aws/aws-sdk-go/service/s3"
"github.com/heptio/ark/pkg/cloudprovider"
)
type storageAdapter struct {
blockStorage *blockStorageAdapter
objectStorage *objectStorageAdapter
}
var _ cloudprovider.StorageAdapter = &storageAdapter{}
func NewStorageAdapter(config *aws.Config, availabilityZone string) (cloudprovider.StorageAdapter, error) {
sess, err := session.NewSession(config)
if err != nil {
return nil, err
}
if _, err := sess.Config.Credentials.Get(); err != nil {
return nil, err
}
return &storageAdapter{
blockStorage: &blockStorageAdapter{
ec2: ec2.New(sess),
az: availabilityZone,
},
objectStorage: &objectStorageAdapter{
s3: s3.New(sess),
},
}, nil
}
func (op *storageAdapter) ObjectStorage() cloudprovider.ObjectStorageAdapter {
return op.objectStorage
}
func (op *storageAdapter) BlockStorage() cloudprovider.BlockStorageAdapter {
return op.blockStorage
}
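A minimal usage sketch, with placeholder region and availability zone:
func newAWSStorage() (cloudprovider.StorageAdapter, error) {
	cfg := aws.NewConfig().WithRegion("us-east-1") // placeholder region
	return NewStorageAdapter(cfg, "us-east-1a")    // placeholder availability zone
}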


@ -0,0 +1,187 @@
/*
Copyright 2017 Heptio Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package azure
import (
"context"
"errors"
"fmt"
"time"
azure "github.com/Azure/azure-sdk-for-go/arm/disk"
"github.com/satori/uuid"
"github.com/heptio/ark/pkg/cloudprovider"
)
type blockStorageAdapter struct {
disks *azure.DisksClient
snaps *azure.SnapshotsClient
subscription string
resourceGroup string
location string
apiTimeout time.Duration
}
var _ cloudprovider.BlockStorageAdapter = &blockStorageAdapter{}
func (op *blockStorageAdapter) CreateVolumeFromSnapshot(snapshotID, volumeType string, iops *int) (string, error) {
fullSnapshotName := getFullSnapshotName(op.subscription, op.resourceGroup, snapshotID)
diskName := "restore-" + uuid.NewV4().String()
disk := azure.Model{
Name: &diskName,
Location: &op.location,
Properties: &azure.Properties{
CreationData: &azure.CreationData{
CreateOption: azure.Copy,
SourceResourceID: &fullSnapshotName,
},
AccountType: azure.StorageAccountTypes(volumeType),
},
}
ctx, cancel := context.WithTimeout(context.Background(), op.apiTimeout)
defer cancel()
_, errChan := op.disks.CreateOrUpdate(op.resourceGroup, *disk.Name, disk, ctx.Done())
err := <-errChan
if err != nil {
return "", err
}
return diskName, nil
}
func (op *blockStorageAdapter) GetVolumeInfo(volumeID string) (string, *int, error) {
res, err := op.disks.Get(op.resourceGroup, volumeID)
if err != nil {
return "", nil, err
}
return string(res.AccountType), nil, nil
}
func (op *blockStorageAdapter) IsVolumeReady(volumeID string) (ready bool, err error) {
res, err := op.disks.Get(op.resourceGroup, volumeID)
if err != nil {
return false, err
}
if res.ProvisioningState == nil {
return false, errors.New("nil ProvisioningState returned from Get call")
}
return *res.ProvisioningState == "Succeeded", nil
}
func (op *blockStorageAdapter) ListSnapshots(tagFilters map[string]string) ([]string, error) {
res, err := op.snaps.ListByResourceGroup(op.resourceGroup)
if err != nil {
return nil, err
}
if res.Value == nil {
return nil, errors.New("nil Value returned from ListByResourceGroup call")
}
ret := make([]string, 0, len(*res.Value))
Snapshot:
for _, snap := range *res.Value {
if snap.Tags == nil && len(tagFilters) > 0 {
continue
}
if snap.ID == nil {
continue
}
// Azure doesn't offer tag-filtering through the API, so we have to manually
// filter results. Require all filter keys to be present, with matching vals.
for filterKey, filterVal := range tagFilters {
if val, ok := (*snap.Tags)[filterKey]; !ok || val == nil || *val != filterVal {
continue Snapshot
}
}
ret = append(ret, *snap.Name)
}
return ret, nil
}
func (op *blockStorageAdapter) CreateSnapshot(volumeID string, tags map[string]string) (string, error) {
fullDiskName := getFullDiskName(op.subscription, op.resourceGroup, volumeID)
// snapshot names must be <= 80 characters long
var snapshotName string
suffix := "-" + uuid.NewV4().String()
if len(volumeID) <= (80 - len(suffix)) {
snapshotName = volumeID + suffix
} else {
snapshotName = volumeID[0:80-len(suffix)] + suffix
}
snap := azure.Snapshot{
Name: &snapshotName,
Properties: &azure.Properties{
CreationData: &azure.CreationData{
CreateOption: azure.Copy,
SourceResourceID: &fullDiskName,
},
},
Tags: &map[string]*string{},
Location: &op.location,
}
for k, v := range tags {
val := v
(*snap.Tags)[k] = &val
}
ctx, cancel := context.WithTimeout(context.Background(), op.apiTimeout)
defer cancel()
_, errChan := op.snaps.CreateOrUpdate(op.resourceGroup, *snap.Name, snap, ctx.Done())
err := <-errChan
if err != nil {
return "", err
}
return snapshotName, nil
}
func (op *blockStorageAdapter) DeleteSnapshot(snapshotID string) error {
ctx, cancel := context.WithTimeout(context.Background(), op.apiTimeout)
defer cancel()
_, errChan := op.snaps.Delete(op.resourceGroup, snapshotID, ctx.Done())
err := <-errChan
return err
}
func getFullDiskName(subscription string, resourceGroup string, diskName string) string {
return fmt.Sprintf("/subscriptions/%v/resourceGroups/%v/providers/Microsoft.Compute/disks/%v", subscription, resourceGroup, diskName)
}
func getFullSnapshotName(subscription string, resourceGroup string, snapshotName string) string {
return fmt.Sprintf("/subscriptions/%v/resourceGroups/%v/providers/Microsoft.Compute/snapshots/%v", subscription, resourceGroup, snapshotName)
}


@ -0,0 +1,138 @@
/*
Copyright 2017 Heptio Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package azure
import (
"fmt"
"io"
"strings"
"github.com/Azure/azure-sdk-for-go/storage"
"github.com/heptio/ark/pkg/cloudprovider"
)
// ref. https://github.com/Azure-Samples/storage-blob-go-getting-started/blob/master/storageExample.go
type objectStorageAdapter struct {
blobClient *storage.BlobStorageClient
}
var _ cloudprovider.ObjectStorageAdapter = &objectStorageAdapter{}
func (op *objectStorageAdapter) PutObject(bucket string, key string, body io.ReadSeeker) error {
container, err := getContainerReference(op.blobClient, bucket)
if err != nil {
return err
}
blob, err := getBlobReference(container, key)
if err != nil {
return err
}
// TODO: having to seek to the end and back to the beginning to get the
// length here is ugly; refactor to make this better.
len, err := body.Seek(0, io.SeekEnd)
if err != nil {
return err
}
blob.Properties.ContentLength = len
if _, err := body.Seek(0, 0); err != nil {
return err
}
return blob.CreateBlockBlobFromReader(body, nil)
}
func (op *objectStorageAdapter) GetObject(bucket string, key string) (io.ReadCloser, error) {
container, err := getContainerReference(op.blobClient, bucket)
if err != nil {
return nil, err
}
blob, err := getBlobReference(container, key)
if err != nil {
return nil, err
}
res, err := blob.Get(nil)
if err != nil {
return nil, err
}
return res, nil
}
func (op *objectStorageAdapter) ListCommonPrefixes(bucket string, delimiter string) ([]string, error) {
container, err := getContainerReference(op.blobClient, bucket)
if err != nil {
return nil, err
}
params := storage.ListBlobsParameters{
Delimiter: delimiter,
}
res, err := container.ListBlobs(params)
if err != nil {
return nil, err
}
// Azure returns prefixes inclusive of the last delimiter. We need to strip
// it.
ret := make([]string, 0, len(res.BlobPrefixes))
for _, prefix := range res.BlobPrefixes {
ret = append(ret, prefix[0:strings.LastIndex(prefix, delimiter)])
}
return ret, nil
}
func (op *objectStorageAdapter) DeleteObject(bucket string, key string) error {
container, err := getContainerReference(op.blobClient, bucket)
if err != nil {
return err
}
blob, err := getBlobReference(container, key)
if err != nil {
return err
}
return blob.Delete(nil)
}
func getContainerReference(blobClient *storage.BlobStorageClient, bucket string) (*storage.Container, error) {
container := blobClient.GetContainerReference(bucket)
if container == nil {
return nil, fmt.Errorf("unable to get container reference for bucket %v", bucket)
}
return container, nil
}
func getBlobReference(container *storage.Container, key string) (*storage.Blob, error) {
blob := container.GetBlobReference(key)
if blob == nil {
return nil, fmt.Errorf("unable to get blob reference for key %v", key)
}
return blob, nil
}


@ -0,0 +1,103 @@
/*
Copyright 2017 Heptio Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package azure
import (
"fmt"
"os"
"time"
"github.com/Azure/azure-sdk-for-go/arm/disk"
"github.com/Azure/azure-sdk-for-go/arm/examples/helpers"
"github.com/Azure/azure-sdk-for-go/storage"
"github.com/Azure/go-autorest/autorest/azure"
"github.com/heptio/ark/pkg/cloudprovider"
)
const (
azureClientIDKey string = "AZURE_CLIENT_ID"
azureClientSecretKey string = "AZURE_CLIENT_SECRET"
azureSubscriptionIDKey string = "AZURE_SUBSCRIPTION_ID"
azureTenantIDKey string = "AZURE_TENANT_ID"
azureStorageAccountIDKey string = "AZURE_STORAGE_ACCOUNT_ID"
azureStorageKeyKey string = "AZURE_STORAGE_KEY"
azureResourceGroupKey string = "AZURE_RESOURCE_GROUP"
)
type storageAdapter struct {
objectStorage *objectStorageAdapter
blockStorage *blockStorageAdapter
}
var _ cloudprovider.StorageAdapter = &storageAdapter{}
func NewStorageAdapter(location string, apiTimeout time.Duration) (cloudprovider.StorageAdapter, error) {
cfg := map[string]string{
azureClientIDKey: "",
azureClientSecretKey: "",
azureSubscriptionIDKey: "",
azureTenantIDKey: "",
azureStorageAccountIDKey: "",
azureStorageKeyKey: "",
azureResourceGroupKey: "",
}
for key := range cfg {
cfg[key] = os.Getenv(key)
}
spt, err := helpers.NewServicePrincipalTokenFromCredentials(cfg, azure.PublicCloud.ResourceManagerEndpoint)
if err != nil {
return nil, fmt.Errorf("error creating new service principal: %v", err)
}
disksClient := disk.NewDisksClient(cfg[azureSubscriptionIDKey])
snapsClient := disk.NewSnapshotsClient(cfg[azureSubscriptionIDKey])
disksClient.Authorizer = spt
snapsClient.Authorizer = spt
storageClient, _ := storage.NewBasicClient(cfg[azureStorageAccountIDKey], cfg[azureStorageKeyKey])
blobClient := storageClient.GetBlobService()
if apiTimeout == 0 {
apiTimeout = time.Minute
}
return &storageAdapter{
objectStorage: &objectStorageAdapter{
blobClient: &blobClient,
},
blockStorage: &blockStorageAdapter{
disks: &disksClient,
snaps: &snapsClient,
subscription: cfg[azureSubscriptionIDKey],
resourceGroup: cfg[azureResourceGroupKey],
location: location,
apiTimeout: apiTimeout,
},
}, nil
}
func (op *storageAdapter) ObjectStorage() cloudprovider.ObjectStorageAdapter {
return op.objectStorage
}
func (op *storageAdapter) BlockStorage() cloudprovider.BlockStorageAdapter {
return op.blockStorage
}


@ -0,0 +1,92 @@
/*
Copyright 2017 Heptio Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package cloudprovider
import (
"context"
"sync"
"time"
"github.com/golang/glog"
"k8s.io/apimachinery/pkg/util/wait"
"github.com/heptio/ark/pkg/apis/ark/v1"
)
// backupCacheBucket holds the backups and error from a GetAllBackups call.
type backupCacheBucket struct {
backups []*v1.Backup
error error
}
// backupCache caches GetAllBackups calls, refreshing them periodically.
type backupCache struct {
delegate BackupGetter
lock sync.RWMutex
// This doesn't really need to be a map right now, but if we ever move to supporting multiple
// buckets, this will be ready for it.
buckets map[string]*backupCacheBucket
}
var _ BackupGetter = &backupCache{}
// NewBackupCache returns a new backup cache that refreshes from delegate every resyncPeriod.
func NewBackupCache(ctx context.Context, delegate BackupGetter, resyncPeriod time.Duration) BackupGetter {
c := &backupCache{
delegate: delegate,
buckets: make(map[string]*backupCacheBucket),
}
// Start a goroutine that refreshes all buckets every resyncPeriod. It stops when ctx.Done()
// is closed.
go wait.Until(c.refresh, resyncPeriod, ctx.Done())
return c
}
// refresh refreshes all the buckets currently in the cache by doing a live lookup via c.delegate.
func (c *backupCache) refresh() {
c.lock.Lock()
defer c.lock.Unlock()
glog.V(4).Infof("refreshing all cached backup lists from object storage")
for bucketName, bucket := range c.buckets {
glog.V(4).Infof("refreshing bucket %q", bucketName)
bucket.backups, bucket.error = c.delegate.GetAllBackups(bucketName)
}
}
func (c *backupCache) GetAllBackups(bucketName string) ([]*v1.Backup, error) {
c.lock.RLock()
bucket, found := c.buckets[bucketName]
c.lock.RUnlock()
if found {
glog.V(4).Infof("returning cached backup list for bucket %q", bucketName)
return bucket.backups, bucket.error
}
glog.V(4).Infof("bucket %q is not in cache - doing a live lookup", bucketName)
backups, err := c.delegate.GetAllBackups(bucketName)
c.lock.Lock()
c.buckets[bucketName] = &backupCacheBucket{backups: backups, error: err}
c.lock.Unlock()
return backups, err
}
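// Illustrative wiring, assuming liveGetter implements BackupGetter and ctx is
// cancelled on shutdown: repeated lookups for a bucket are served from memory,
// and every bucket already in the cache is refreshed from liveGetter once per
// minute.
//
//    cached := NewBackupCache(ctx, liveGetter, time.Minute)
//    backups, err := cached.GetAllBackups("ark-bucket")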


@ -0,0 +1,160 @@
/*
Copyright 2017 Heptio Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package cloudprovider
import (
"context"
"errors"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/heptio/ark/pkg/apis/ark/v1"
"github.com/heptio/ark/pkg/util/test"
)
func TestNewBackupCache(t *testing.T) {
delegate := &test.FakeBackupService{}
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
c := NewBackupCache(ctx, delegate, 100*time.Millisecond)
// nothing in cache, live lookup
bucket1 := []*v1.Backup{
test.NewTestBackup().WithName("backup1").Backup,
test.NewTestBackup().WithName("backup2").Backup,
}
delegate.On("GetAllBackups", "bucket1").Return(bucket1, nil).Once()
// should be updated via refresh
updatedBucket1 := []*v1.Backup{
test.NewTestBackup().WithName("backup2").Backup,
}
delegate.On("GetAllBackups", "bucket1").Return(updatedBucket1, nil)
// nothing in cache, live lookup
bucket2 := []*v1.Backup{
test.NewTestBackup().WithName("backup5").Backup,
test.NewTestBackup().WithName("backup6").Backup,
}
delegate.On("GetAllBackups", "bucket2").Return(bucket2, nil).Once()
// should be updated via refresh
updatedBucket2 := []*v1.Backup{
test.NewTestBackup().WithName("backup7").Backup,
}
delegate.On("GetAllBackups", "bucket2").Return(updatedBucket2, nil)
backups, err := c.GetAllBackups("bucket1")
assert.Equal(t, bucket1, backups)
assert.NoError(t, err)
backups, err = c.GetAllBackups("bucket2")
assert.Equal(t, bucket2, backups)
assert.NoError(t, err)
var done1, done2 bool
for {
select {
case <-ctx.Done():
t.Fatal("timed out")
default:
if done1 && done2 {
return
}
}
backups, err = c.GetAllBackups("bucket1")
if len(backups) == 1 {
if assert.Equal(t, updatedBucket1[0], backups[0]) {
done1 = true
}
}
backups, err = c.GetAllBackups("bucket2")
if len(backups) == 1 {
if assert.Equal(t, updatedBucket2[0], backups[0]) {
done2 = true
}
}
time.Sleep(100 * time.Millisecond)
}
}
func TestBackupCacheRefresh(t *testing.T) {
delegate := &test.FakeBackupService{}
c := &backupCache{
delegate: delegate,
buckets: map[string]*backupCacheBucket{
"bucket1": {},
"bucket2": {},
},
}
bucket1 := []*v1.Backup{
test.NewTestBackup().WithName("backup1").Backup,
test.NewTestBackup().WithName("backup2").Backup,
}
delegate.On("GetAllBackups", "bucket1").Return(bucket1, nil)
delegate.On("GetAllBackups", "bucket2").Return(nil, errors.New("bad"))
c.refresh()
assert.Equal(t, bucket1, c.buckets["bucket1"].backups)
assert.NoError(t, c.buckets["bucket1"].error)
assert.Empty(t, c.buckets["bucket2"].backups)
assert.EqualError(t, c.buckets["bucket2"].error, "bad")
}
func TestBackupCacheGetAllBackupsUsesCacheIfPresent(t *testing.T) {
delegate := &test.FakeBackupService{}
bucket1 := []*v1.Backup{
test.NewTestBackup().WithName("backup1").Backup,
test.NewTestBackup().WithName("backup2").Backup,
}
c := &backupCache{
delegate: delegate,
buckets: map[string]*backupCacheBucket{
"bucket1": {
backups: bucket1,
},
},
}
bucket2 := []*v1.Backup{
test.NewTestBackup().WithName("backup3").Backup,
test.NewTestBackup().WithName("backup4").Backup,
}
delegate.On("GetAllBackups", "bucket2").Return(bucket2, nil)
backups, err := c.GetAllBackups("bucket1")
assert.Equal(t, bucket1, backups)
assert.NoError(t, err)
backups, err = c.GetAllBackups("bucket2")
assert.Equal(t, bucket2, backups)
assert.NoError(t, err)
}


@ -0,0 +1,184 @@
/*
Copyright 2017 Heptio Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package cloudprovider
import (
"context"
"fmt"
"io"
"io/ioutil"
"time"
"github.com/golang/glog"
"k8s.io/apimachinery/pkg/util/errors"
api "github.com/heptio/ark/pkg/apis/ark/v1"
"github.com/heptio/ark/pkg/generated/clientset/scheme"
)
// BackupService contains methods for working with backups in object storage.
type BackupService interface {
BackupGetter
// UploadBackup uploads the specified Ark backup of a set of Kubernetes API objects, whose manifests are
// stored in the specified file, into object storage in an Ark bucket, tagged with Ark metadata. Returns
// an error if a problem is encountered accessing the file or performing the upload via the cloud API.
UploadBackup(bucket, name string, metadata, backup io.ReadSeeker) error
// DownloadBackup downloads an Ark backup with the specified object key from object storage via the cloud API.
// It returns a ReadCloser for the backup tarball's contents, or an error if a problem is encountered
// downloading or reading the file from the cloud API.
DownloadBackup(bucket, name string) (io.ReadCloser, error)
// DeleteBackup deletes all backup content (metadata file and tarball) for the named backup from the given bucket.
DeleteBackup(bucket, backupName string) error
}
// BackupGetter knows how to list backups in object storage.
type BackupGetter interface {
// GetAllBackups lists all the api.Backups in object storage for the given bucket.
GetAllBackups(bucket string) ([]*api.Backup, error)
}
const (
metadataFileFormatString string = "%s/ark-backup.json"
backupFileFormatString string = "%s/%s.tar.gz"
)
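// For a backup named "nginx-backup", these formats yield the object keys
// "nginx-backup/ark-backup.json" (metadata) and
// "nginx-backup/nginx-backup.tar.gz" (tarball) within the bucket.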
type backupService struct {
objectStorage ObjectStorageAdapter
}
var _ BackupService = &backupService{}
var _ BackupGetter = &backupService{}
// NewBackupService creates a backup service using the provided object storage adapter
func NewBackupService(objectStorage ObjectStorageAdapter) BackupService {
return &backupService{
objectStorage: objectStorage,
}
}
func (br *backupService) UploadBackup(bucket, backupName string, metadata, backup io.ReadSeeker) error {
// upload metadata file
metadataKey := fmt.Sprintf(metadataFileFormatString, backupName)
if err := br.objectStorage.PutObject(bucket, metadataKey, metadata); err != nil {
return err
}
// upload tar file
if err := br.objectStorage.PutObject(bucket, fmt.Sprintf(backupFileFormatString, backupName, backupName), backup); err != nil {
// try to delete the metadata file since the data upload failed
deleteErr := br.objectStorage.DeleteObject(bucket, metadataKey)
return errors.NewAggregate([]error{err, deleteErr})
}
return nil
}
func (br *backupService) DownloadBackup(bucket, backupName string) (io.ReadCloser, error) {
return br.objectStorage.GetObject(bucket, fmt.Sprintf(backupFileFormatString, backupName, backupName))
}
func (br *backupService) GetAllBackups(bucket string) ([]*api.Backup, error) {
prefixes, err := br.objectStorage.ListCommonPrefixes(bucket, "/")
if err != nil {
return nil, err
}
if len(prefixes) == 0 {
return []*api.Backup{}, nil
}
output := make([]*api.Backup, 0, len(prefixes))
decoder := scheme.Codecs.UniversalDecoder(api.SchemeGroupVersion)
for _, backupDir := range prefixes {
err := func() error {
key := fmt.Sprintf(metadataFileFormatString, backupDir)
res, err := br.objectStorage.GetObject(bucket, key)
if err != nil {
return err
}
defer res.Close()
data, err := ioutil.ReadAll(res)
if err != nil {
return err
}
obj, _, err := decoder.Decode(data, nil, nil)
if err != nil {
return err
}
backup, ok := obj.(*api.Backup)
if !ok {
return fmt.Errorf("unexpected type for %s/%s: %T", bucket, key, obj)
}
output = append(output, backup)
return nil
}()
if err != nil {
return nil, err
}
}
return output, nil
}
func (br *backupService) DeleteBackup(bucket, backupName string) error {
var errs []error
key := fmt.Sprintf(backupFileFormatString, backupName, backupName)
glog.V(4).Infof("Trying to delete bucket=%s, key=%s", bucket, key)
if err := br.objectStorage.DeleteObject(bucket, key); err != nil {
errs = append(errs, err)
}
key = fmt.Sprintf(metadataFileFormatString, backupName)
glog.V(4).Infof("Trying to delete bucket=%s, key=%s", bucket, key)
if err := br.objectStorage.DeleteObject(bucket, key); err != nil {
errs = append(errs, err)
}
return errors.NewAggregate(errs)
}
// cachedBackupService wraps a real backup service with a cache for getting cloud backups.
type cachedBackupService struct {
BackupService
cache BackupGetter
}
// NewBackupServiceWithCachedBackupGetter returns a BackupService that uses a cache for
// GetAllBackups().
func NewBackupServiceWithCachedBackupGetter(ctx context.Context, delegate BackupService, resyncPeriod time.Duration) BackupService {
return &cachedBackupService{
BackupService: delegate,
cache: NewBackupCache(ctx, delegate, resyncPeriod),
}
}
func (c *cachedBackupService) GetAllBackups(bucketName string) ([]*api.Backup, error) {
return c.cache.GetAllBackups(bucketName)
}
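// Illustrative assembly, assuming objectStorage is an ObjectStorageAdapter and
// ctx is cancelled on shutdown: GetAllBackups is served from a cache resynced
// every minute, while uploads, downloads, and deletes go straight to object
// storage.
//
//    svc := NewBackupServiceWithCachedBackupGetter(ctx, NewBackupService(objectStorage), time.Minute)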


@ -0,0 +1,407 @@
/*
Copyright 2017 Heptio Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package cloudprovider
import (
"bytes"
"encoding/json"
"errors"
"io"
"io/ioutil"
"strings"
"testing"
"github.com/stretchr/testify/assert"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/util/sets"
api "github.com/heptio/ark/pkg/apis/ark/v1"
"github.com/heptio/ark/pkg/util/encode"
)
func TestUploadBackup(t *testing.T) {
tests := []struct {
name string
bucket string
bucketExists bool
backupName string
metadata io.ReadSeeker
backup io.ReadSeeker
objectStoreErrs map[string]map[string]interface{}
expectedErr bool
expectedRes map[string][]byte
}{
{
name: "normal case",
bucket: "test-bucket",
bucketExists: true,
backupName: "test-backup",
metadata: newStringReadSeeker("foo"),
backup: newStringReadSeeker("bar"),
expectedErr: false,
expectedRes: map[string][]byte{
"test-backup/ark-backup.json": []byte("foo"),
"test-backup/test-backup.tar.gz": []byte("bar"),
},
},
{
name: "no such bucket causes error",
bucket: "test-bucket",
bucketExists: false,
backupName: "test-backup",
expectedErr: true,
},
{
name: "error on metadata upload does not upload data",
bucket: "test-bucket",
bucketExists: true,
backupName: "test-backup",
metadata: newStringReadSeeker("foo"),
backup: newStringReadSeeker("bar"),
objectStoreErrs: map[string]map[string]interface{}{
"putobject": map[string]interface{}{
"test-bucket||test-backup/ark-backup.json": true,
},
},
expectedErr: true,
expectedRes: make(map[string][]byte),
},
{
name: "error on data upload deletes metadata",
bucket: "test-bucket",
bucketExists: true,
backupName: "test-backup",
metadata: newStringReadSeeker("foo"),
backup: newStringReadSeeker("bar"),
objectStoreErrs: map[string]map[string]interface{}{
"putobject": map[string]interface{}{
"test-bucket||test-backup/test-backup.tar.gz": true,
},
},
expectedErr: true,
expectedRes: make(map[string][]byte),
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
objStore := &fakeObjectStorage{
returnErrors: test.objectStoreErrs,
storage: make(map[string]map[string][]byte),
}
if test.bucketExists {
objStore.storage[test.bucket] = make(map[string][]byte)
}
backupService := NewBackupService(objStore)
err := backupService.UploadBackup(test.bucket, test.backupName, test.metadata, test.backup)
assert.Equal(t, test.expectedErr, err != nil, "got error %v", err)
assert.Equal(t, test.expectedRes, objStore.storage[test.bucket])
})
}
}
func TestDownloadBackup(t *testing.T) {
tests := []struct {
name string
bucket string
backupName string
storage map[string]map[string][]byte
expectedErr bool
expectedRes []byte
}{
{
name: "normal case",
bucket: "test-bucket",
backupName: "test-backup",
storage: map[string]map[string][]byte{
"test-bucket": map[string][]byte{
"test-backup/test-backup.tar.gz": []byte("foo"),
},
},
expectedErr: false,
expectedRes: []byte("foo"),
},
{
name: "no such bucket causes error",
bucket: "test-bucket",
backupName: "test-backup",
storage: map[string]map[string][]byte{},
expectedErr: true,
},
{
name: "no such key causes error",
bucket: "test-bucket",
backupName: "test-backup",
storage: map[string]map[string][]byte{
"test-bucket": map[string][]byte{},
},
expectedErr: true,
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
objStore := &fakeObjectStorage{storage: test.storage}
backupService := NewBackupService(objStore)
rdr, err := backupService.DownloadBackup(test.bucket, test.backupName)
assert.Equal(t, test.expectedErr, err != nil, "got error %v", err)
if err == nil {
res, err := ioutil.ReadAll(rdr)
assert.Nil(t, err)
assert.Equal(t, test.expectedRes, res)
}
})
}
}
func TestDeleteBackup(t *testing.T) {
tests := []struct {
name string
bucket string
backupName string
storage map[string]map[string][]byte
expectedErr bool
expectedRes map[string][]byte
}{
{
name: "normal case",
bucket: "test-bucket",
backupName: "bak",
storage: map[string]map[string][]byte{
"test-bucket": map[string][]byte{
"bak/bak.tar.gz": nil,
"bak/ark-backup.json": nil,
},
},
expectedErr: false,
expectedRes: make(map[string][]byte),
},
{
name: "failed delete of backup doesn't prevent metadata delete but returns error",
bucket: "test-bucket",
backupName: "bak",
storage: map[string]map[string][]byte{
"test-bucket": map[string][]byte{
"bak/ark-backup.json": nil,
},
},
expectedErr: true,
expectedRes: make(map[string][]byte),
},
{
name: "failed delete of metadata returns error",
bucket: "test-bucket",
backupName: "bak",
storage: map[string]map[string][]byte{
"test-bucket": map[string][]byte{
"bak/bak.tar.gz": nil,
},
},
expectedErr: true,
expectedRes: make(map[string][]byte),
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
objStore := &fakeObjectStorage{storage: test.storage}
backupService := NewBackupService(objStore)
res := backupService.DeleteBackup(test.bucket, test.backupName)
assert.Equal(t, test.expectedErr, res != nil, "got error %v", res)
assert.Equal(t, test.expectedRes, objStore.storage[test.bucket])
})
}
}
func TestGetAllBackups(t *testing.T) {
tests := []struct {
name string
bucket string
storage map[string]map[string][]byte
expectedRes []*api.Backup
expectedErr bool
}{
{
name: "normal case",
bucket: "test-bucket",
storage: map[string]map[string][]byte{
"test-bucket": map[string][]byte{
"backup-1/ark-backup.json": encodeToBytes(&api.Backup{ObjectMeta: metav1.ObjectMeta{Name: "backup-1"}}),
"backup-2/ark-backup.json": encodeToBytes(&api.Backup{ObjectMeta: metav1.ObjectMeta{Name: "backup-2"}}),
},
},
expectedErr: false,
expectedRes: []*api.Backup{
&api.Backup{
TypeMeta: metav1.TypeMeta{Kind: "Backup", APIVersion: "ark.heptio.com/v1"},
ObjectMeta: metav1.ObjectMeta{Name: "backup-1"},
},
&api.Backup{
TypeMeta: metav1.TypeMeta{Kind: "Backup", APIVersion: "ark.heptio.com/v1"},
ObjectMeta: metav1.ObjectMeta{Name: "backup-2"},
},
},
},
{
name: "decode error returns nil/error",
bucket: "test-bucket",
storage: map[string]map[string][]byte{
"test-bucket": map[string][]byte{
"backup-1/ark-backup.json": encodeToBytes(&api.Backup{ObjectMeta: metav1.ObjectMeta{Name: "backup-1"}}),
"backup-2/ark-backup.json": []byte("this is not valid backup JSON"),
},
},
expectedErr: true,
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
objStore := &fakeObjectStorage{storage: test.storage}
backupService := NewBackupService(objStore)
res, err := backupService.GetAllBackups(test.bucket)
assert.Equal(t, test.expectedErr, err != nil, "got error %v", err)
assert.Equal(t, test.expectedRes, res)
})
}
}
func jsonMarshal(obj interface{}) []byte {
res, err := json.Marshal(obj)
if err != nil {
panic(err)
}
return res
}
func encodeToBytes(obj runtime.Object) []byte {
res, err := encode.Encode(obj, "json")
if err != nil {
panic(err)
}
return res
}
type stringReadSeeker struct {
*strings.Reader
}
func newStringReadSeeker(s string) *stringReadSeeker {
return &stringReadSeeker{
Reader: strings.NewReader(s),
}
}
func (srs *stringReadSeeker) Seek(offset int64, whence int) (int64, error) {
panic("not implemented")
}
type fakeObjectStorage struct {
storage map[string]map[string][]byte
returnErrors map[string]map[string]interface{}
}
func (os *fakeObjectStorage) PutObject(bucket string, key string, body io.ReadSeeker) error {
if os.returnErrors["putobject"] != nil && os.returnErrors["putobject"][bucket+"||"+key] != nil {
return errors.New("error")
}
if os.storage[bucket] == nil {
return errors.New("bucket not found")
}
data, err := ioutil.ReadAll(body)
if err != nil {
return err
}
os.storage[bucket][key] = data
return nil
}
func (os *fakeObjectStorage) GetObject(bucket string, key string) (io.ReadCloser, error) {
if os.storage == nil {
return nil, errors.New("storage not initialized")
}
if os.storage[bucket] == nil {
return nil, errors.New("bucket not found")
}
if os.storage[bucket][key] == nil {
return nil, errors.New("key not found")
}
return ioutil.NopCloser(bytes.NewReader(os.storage[bucket][key])), nil
}
func (os *fakeObjectStorage) ListCommonPrefixes(bucket string, delimiter string) ([]string, error) {
if os.storage == nil {
return nil, errors.New("storage not initialized")
}
if os.storage[bucket] == nil {
return nil, errors.New("bucket not found")
}
prefixes := sets.NewString()
for key := range os.storage[bucket] {
delimIdx := strings.LastIndex(key, delimiter)
if delimIdx == -1 {
prefixes.Insert(key)
continue
}
prefixes.Insert(key[0:delimIdx])
}
return prefixes.List(), nil
}
func (os *fakeObjectStorage) DeleteObject(bucket string, key string) error {
if os.storage == nil {
return errors.New("storage not initialized")
}
if os.storage[bucket] == nil {
return errors.New("bucket not found")
}
if _, exists := os.storage[bucket][key]; !exists {
return errors.New("key not found")
}
delete(os.storage[bucket], key)
return nil
}


@ -0,0 +1,154 @@
/*
Copyright 2017 Heptio Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package gcp
import (
"strings"
"time"
uuid "github.com/satori/go.uuid"
"google.golang.org/api/compute/v0.beta"
"k8s.io/apimachinery/pkg/util/wait"
"github.com/heptio/ark/pkg/cloudprovider"
)
type blockStorageAdapter struct {
gce *compute.Service
project string
zone string
}
var _ cloudprovider.BlockStorageAdapter = &blockStorageAdapter{}
func (op *blockStorageAdapter) CreateVolumeFromSnapshot(snapshotID string, volumeType string, iops *int) (volumeID string, err error) {
res, err := op.gce.Snapshots.Get(op.project, snapshotID).Do()
if err != nil {
return "", err
}
disk := &compute.Disk{
Name: "restore-" + uuid.NewV4().String(),
SourceSnapshot: res.SelfLink,
Type: volumeType,
}
if _, err = op.gce.Disks.Insert(op.project, op.zone, disk).Do(); err != nil {
return "", err
}
return disk.Name, nil
}
func (op *blockStorageAdapter) GetVolumeInfo(volumeID string) (string, *int, error) {
res, err := op.gce.Disks.Get(op.project, op.zone, volumeID).Do()
if err != nil {
return "", nil, err
}
return res.Type, nil, nil
}
func (op *blockStorageAdapter) IsVolumeReady(volumeID string) (ready bool, err error) {
disk, err := op.gce.Disks.Get(op.project, op.zone, volumeID).Do()
if err != nil {
return false, err
}
// TODO can we consider a disk ready while it's in the RESTORING state?
return disk.Status == "READY", nil
}
func (op *blockStorageAdapter) ListSnapshots(tagFilters map[string]string) ([]string, error) {
useParentheses := len(tagFilters) > 1
subFilters := make([]string, 0, len(tagFilters))
for k, v := range tagFilters {
fs := k + " eq " + v
if useParentheses {
fs = "(" + fs + ")"
}
subFilters = append(subFilters, fs)
}
filter := strings.Join(subFilters, " ")
res, err := op.gce.Snapshots.List(op.project).Filter(filter).Do()
if err != nil {
return nil, err
}
ret := make([]string, 0, len(res.Items))
for _, snap := range res.Items {
ret = append(ret, snap.Name)
}
return ret, nil
}
func (op *blockStorageAdapter) CreateSnapshot(volumeID string, tags map[string]string) (string, error) {
// snapshot names must adhere to RFC1035 and be 1-63 characters
// long
var snapshotName string
suffix := "-" + uuid.NewV4().String()
if len(volumeID) <= (63 - len(suffix)) {
snapshotName = volumeID + suffix
} else {
snapshotName = volumeID[0:63-len(suffix)] + suffix
}
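// Example: a UUIDv4 suffix is 37 characters ("-" plus 36), so any volume ID
// longer than 26 characters is truncated to keep the snapshot name within the
// 63-character limit.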
gceSnap := compute.Snapshot{
Name: snapshotName,
}
_, err := op.gce.Disks.CreateSnapshot(op.project, op.zone, volumeID, &gceSnap).Do()
if err != nil {
return "", err
}
// The snapshot is not immediately available for labeling right after creation,
// so poll until the GET succeeds (or the 30-second window elapses).
if pollErr := wait.Poll(1*time.Second, 30*time.Second, func() (bool, error) {
if res, err := op.gce.Snapshots.Get(op.project, gceSnap.Name).Do(); err == nil {
gceSnap = *res
return true, nil
}
return false, nil
}); pollErr != nil {
return "", err
}
labels := &compute.GlobalSetLabelsRequest{
Labels: tags,
LabelFingerprint: gceSnap.LabelFingerprint,
}
_, err = op.gce.Snapshots.SetLabels(op.project, gceSnap.Name, labels).Do()
if err != nil {
return "", err
}
return gceSnap.Name, nil
}
func (op *blockStorageAdapter) DeleteSnapshot(snapshotID string) error {
_, err := op.gce.Snapshots.Delete(op.project, snapshotID).Do()
return err
}


@ -0,0 +1,73 @@
/*
Copyright 2017 Heptio Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package gcp
import (
"io"
"strings"
storage "google.golang.org/api/storage/v1"
"github.com/heptio/ark/pkg/cloudprovider"
)
type objectStorageAdapter struct {
project string
zone string
gcs *storage.Service
}
var _ cloudprovider.ObjectStorageAdapter = &objectStorageAdapter{}
func (op *objectStorageAdapter) PutObject(bucket string, key string, body io.ReadSeeker) error {
obj := &storage.Object{
Name: key,
}
_, err := op.gcs.Objects.Insert(bucket, obj).Media(body).Do()
return err
}
func (op *objectStorageAdapter) GetObject(bucket string, key string) (io.ReadCloser, error) {
res, err := op.gcs.Objects.Get(bucket, key).Download()
if err != nil {
return nil, err
}
return res.Body, nil
}
func (op *objectStorageAdapter) ListCommonPrefixes(bucket string, delimiter string) ([]string, error) {
res, err := op.gcs.Objects.List(bucket).Delimiter(delimiter).Do()
if err != nil {
return nil, err
}
// GCP returns prefixes inclusive of the last delimiter. We need to strip
// it.
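// For example, with delimiter "/" the GCS prefix "backup-1/" is returned as
// "backup-1".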
ret := make([]string, 0, len(res.Prefixes))
for _, prefix := range res.Prefixes {
ret = append(ret, prefix[0:strings.LastIndex(prefix, delimiter)])
}
return ret, nil
}
func (op *objectStorageAdapter) DeleteObject(bucket string, key string) error {
return op.gcs.Objects.Delete(bucket, key).Do()
}


@ -0,0 +1,72 @@
/*
Copyright 2017 Heptio Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package gcp
import (
"golang.org/x/oauth2"
"golang.org/x/oauth2/google"
"google.golang.org/api/compute/v0.beta"
"google.golang.org/api/storage/v1"
"github.com/heptio/ark/pkg/cloudprovider"
)
type storageAdapter struct {
blockStorage *blockStorageAdapter
objectStorage *objectStorageAdapter
}
var _ cloudprovider.StorageAdapter = &storageAdapter{}
func NewStorageAdapter(project string, zone string) (cloudprovider.StorageAdapter, error) {
client, err := google.DefaultClient(oauth2.NoContext, compute.ComputeScope, storage.DevstorageReadWriteScope)
if err != nil {
return nil, err
}
gce, err := compute.New(client)
if err != nil {
return nil, err
}
gcs, err := storage.New(client)
if err != nil {
return nil, err
}
return &storageAdapter{
objectStorage: &objectStorageAdapter{
gcs: gcs,
project: project,
zone: zone,
},
blockStorage: &blockStorageAdapter{
gce: gce,
project: project,
zone: zone,
},
}, nil
}
func (op *storageAdapter) ObjectStorage() cloudprovider.ObjectStorageAdapter {
return op.objectStorage
}
func (op *storageAdapter) BlockStorage() cloudprovider.BlockStorageAdapter {
return op.blockStorage
}


@ -0,0 +1,121 @@
/*
Copyright 2017 Heptio Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package cloudprovider
import (
"fmt"
"time"
)
// SnapshotService exposes Ark-specific operations for snapshotting and restoring block
// volumes.
type SnapshotService interface {
// GetAllSnapshots returns a slice of all snapshots found in the cloud API that
// are tagged with Ark metadata. Returns an error if a problem is encountered accessing
// the cloud API.
GetAllSnapshots() ([]string, error)
// CreateSnapshot triggers a snapshot for the specified cloud volume and tags it with metadata.
// it returns the cloud snapshot ID, or an error if a problem is encountered triggering the snapshot via
// the cloud API.
CreateSnapshot(volumeID string) (string, error)
// CreateVolumeFromSnapshot triggers a restore operation to create a new cloud volume from the specified
// snapshot and volume characteristics. Returns the cloud volume ID, or an error if a problem is
// encountered triggering the restore via the cloud API.
CreateVolumeFromSnapshot(snapshotID, volumeType string, iops *int) (string, error)
// DeleteSnapshot triggers a deletion of the specified Ark snapshot via the cloud API. It returns an
// error if a problem is encountered triggering the deletion via the cloud API.
DeleteSnapshot(snapshotID string) error
// GetVolumeInfo gets the type and IOPS (if applicable) from the cloud API.
GetVolumeInfo(volumeID string) (string, *int, error)
}
const (
volumeCreateWaitTimeout = 30 * time.Second
volumeCreatePollInterval = 1 * time.Second
snapshotTagKey = "tag-key"
snapshotTagVal = "ark-snapshot"
)
type snapshotService struct {
blockStorage BlockStorageAdapter
}
var _ SnapshotService = &snapshotService{}
// NewSnapshotService creates a snapshot service using the provided block storage adapter
func NewSnapshotService(blockStorage BlockStorageAdapter) SnapshotService {
return &snapshotService{
blockStorage: blockStorage,
}
}
func (sr *snapshotService) CreateVolumeFromSnapshot(snapshotID string, volumeType string, iops *int) (string, error) {
volumeID, err := sr.blockStorage.CreateVolumeFromSnapshot(snapshotID, volumeType, iops)
if err != nil {
return "", err
}
// wait for volume to be ready (up to a maximum time limit)
ticker := time.NewTicker(volumeCreatePollInterval)
defer ticker.Stop()
timeout := time.NewTimer(volumeCreateWaitTimeout)
for {
select {
case <-timeout.C:
return "", fmt.Errorf("timeout reached waiting for volume %v to be ready", volumeID)
case <-ticker.C:
if ready, err := sr.blockStorage.IsVolumeReady(volumeID); err == nil && ready {
return volumeID, nil
}
}
}
}
func (sr *snapshotService) GetAllSnapshots() ([]string, error) {
tags := map[string]string{
snapshotTagKey: snapshotTagVal,
}
res, err := sr.blockStorage.ListSnapshots(tags)
if err != nil {
return nil, err
}
return res, nil
}
func (sr *snapshotService) CreateSnapshot(volumeID string) (string, error) {
tags := map[string]string{
snapshotTagKey: snapshotTagVal,
}
return sr.blockStorage.CreateSnapshot(volumeID, tags)
}
func (sr *snapshotService) DeleteSnapshot(snapshotID string) error {
return sr.blockStorage.DeleteSnapshot(snapshotID)
}
func (sr *snapshotService) GetVolumeInfo(volumeID string) (string, *int, error) {
return sr.blockStorage.GetVolumeInfo(volumeID)
}
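// Minimal usage sketch, assuming blockStorage is a provider's
// BlockStorageAdapter: every snapshot created through the service is tagged
// with tag-key=ark-snapshot, which is the same filter GetAllSnapshots uses.
//
//    svc := NewSnapshotService(blockStorage)
//    snapshotID, err := svc.CreateSnapshot("my-volume-id")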


@ -0,0 +1,72 @@
/*
Copyright 2017 Heptio Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package cloudprovider
import "io"
// ObjectStorageAdapter exposes basic object-storage operations required
// by Ark.
type ObjectStorageAdapter interface {
// PutObject creates a new object using the data in body within the specified
// object storage bucket with the given key.
PutObject(bucket string, key string, body io.ReadSeeker) error
// GetObject retrieves the object with the given key from the specified
// bucket in object storage.
GetObject(bucket string, key string) (io.ReadCloser, error)
// ListCommonPrefixes gets a list of all object key prefixes that come
// before the provided delimiter (this is often used to simulate a directory
// hierarchy in object storage).
ListCommonPrefixes(bucket string, delimiter string) ([]string, error)
// DeleteObject removes object with the specified key from the given
// bucket.
DeleteObject(bucket string, key string) error
}
// BlockStorageAdapter exposes basic block-storage operations required
// by Ark.
type BlockStorageAdapter interface {
// CreateVolumeFromSnapshot creates a new block volume, initialized from the provided snapshot,
// and with the specified type and IOPS (if using provisioned IOPS).
CreateVolumeFromSnapshot(snapshotID, volumeType string, iops *int) (volumeID string, err error)
// GetVolumeInfo returns the type and IOPS (if using provisioned IOPS) for a specified block
// volume.
GetVolumeInfo(volumeID string) (string, *int, error)
// IsVolumeReady returns whether the specified volume is ready to be used.
IsVolumeReady(volumeID string) (ready bool, err error)
// ListSnapshots returns a list of all snapshots matching the specified set of tag key/values.
ListSnapshots(tagFilters map[string]string) ([]string, error)
// CreateSnapshot creates a snapshot of the specified block volume, and applies the provided
// set of tags to the snapshot.
CreateSnapshot(volumeID string, tags map[string]string) (snapshotID string, err error)
// DeleteSnapshot deletes the specified volume snapshot.
DeleteSnapshot(snapshotID string) error
}
// StorageAdapter exposes object- and block-storage interfaces and associated methods
// for a given storage provider.
type StorageAdapter interface {
ObjectStorage() ObjectStorageAdapter
BlockStorage() BlockStorageAdapter
}
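// Hedged sketch of how these interfaces compose, assuming sa is a provider's
// StorageAdapter (e.g. the GCP or Azure implementation elsewhere in this
// commit):
//
//    backupSvc := NewBackupService(sa.ObjectStorage())
//    snapshotSvc := NewSnapshotService(sa.BlockStorage())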

57
pkg/cmd/ark/ark.go Normal file

@ -0,0 +1,57 @@
/*
Copyright 2017 Heptio Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package ark
import (
"flag"
"github.com/spf13/cobra"
"github.com/heptio/ark/pkg/client"
"github.com/heptio/ark/pkg/cmd/cli/backup"
"github.com/heptio/ark/pkg/cmd/cli/restore"
"github.com/heptio/ark/pkg/cmd/cli/schedule"
"github.com/heptio/ark/pkg/cmd/server"
"github.com/heptio/ark/pkg/cmd/version"
)
func NewCommand(name string) *cobra.Command {
c := &cobra.Command{
Use: name,
Short: "Back up and restore Kubernetes cluster resources.",
Long: `Heptio Ark is a tool for managing disaster recovery, specifically for
Kubernetes cluster resources. It provides a simple, configurable,
and operationally robust way to back up your application state and
associated data.`,
}
f := client.NewFactory()
f.BindFlags(c.PersistentFlags())
c.AddCommand(
backup.NewCommand(f),
schedule.NewCommand(f),
restore.NewCommand(f),
server.NewCommand(),
version.NewCommand(),
)
// add the glog flags
c.PersistentFlags().AddGoFlagSet(flag.CommandLine)
return c
}
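// Illustrative invocations of the resulting CLI (subcommand and flag names
// are taken from the commands registered above; values are examples):
//
//    ark backup create nginx-backup --include-namespaces nginx --snapshot-volumes
//    ark backup get
//    ark restore create nginx-backup --restore-volumes
//    ark version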


@ -0,0 +1,45 @@
/*
Copyright 2017 Heptio Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package backup
import (
"github.com/spf13/cobra"
"github.com/heptio/ark/pkg/client"
)
func NewCommand(f client.Factory) *cobra.Command {
c := &cobra.Command{
Use: "backup",
Short: "Work with backups",
Long: "Work with backups",
}
c.AddCommand(
NewCreateCommand(f),
NewGetCommand(f),
// Will implement describe later
// NewDescribeCommand(f),
// If you delete a backup and it still exists in object storage, the backup sync controller will
// recreate it. Until we have a good UX around this, we're disabling the delete command.
// NewDeleteCommand(f),
)
return c
}


@ -0,0 +1,138 @@
/*
Copyright 2017 Heptio Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package backup
import (
"errors"
"fmt"
"time"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
api "github.com/heptio/ark/pkg/apis/ark/v1"
"github.com/heptio/ark/pkg/client"
"github.com/heptio/ark/pkg/cmd"
"github.com/heptio/ark/pkg/cmd/util/flag"
"github.com/heptio/ark/pkg/cmd/util/output"
)
func NewCreateCommand(f client.Factory) *cobra.Command {
o := NewCreateOptions()
c := &cobra.Command{
Use: "create NAME",
Short: "Create a backup",
Run: func(c *cobra.Command, args []string) {
cmd.CheckError(o.Validate(c, args))
cmd.CheckError(o.Complete(args))
cmd.CheckError(o.Run(c, f))
},
}
o.BindFlags(c.Flags())
output.BindFlags(c.Flags())
output.ClearOutputFlagDefault(c)
return c
}
type CreateOptions struct {
Name string
TTL time.Duration
SnapshotVolumes bool
IncludeNamespaces flag.StringArray
ExcludeNamespaces flag.StringArray
IncludeResources flag.StringArray
ExcludeResources flag.StringArray
Labels flag.Map
Selector flag.LabelSelector
}
func NewCreateOptions() *CreateOptions {
return &CreateOptions{
TTL: 24 * time.Hour,
IncludeNamespaces: flag.NewStringArray("*"),
Labels: flag.NewMap(),
}
}
func (o *CreateOptions) BindFlags(flags *pflag.FlagSet) {
flags.DurationVar(&o.TTL, "ttl", o.TTL, "how long before the backup can be garbage collected")
flags.BoolVar(&o.SnapshotVolumes, "snapshot-volumes", o.SnapshotVolumes, "take snapshots of PersistentVolumes as part of the backup")
flags.Var(&o.IncludeNamespaces, "include-namespaces", "namespaces to include in the backup (use '*' for all namespaces)")
flags.Var(&o.ExcludeNamespaces, "exclude-namespaces", "namespaces to exclude from the backup")
flags.Var(&o.IncludeResources, "include-resources", "resources to include in the backup, formatted as resource.group, such as storageclasses.storage.k8s.io (use '*' for all resources)")
flags.Var(&o.ExcludeResources, "exclude-resources", "resources to exclude from the backup, formatted as resource.group, such as storageclasses.storage.k8s.io")
flags.Var(&o.Labels, "labels", "labels to apply to the backup")
flags.VarP(&o.Selector, "selector", "l", "only back up resources matching this label selector")
}
func (o *CreateOptions) Validate(c *cobra.Command, args []string) error {
if len(args) != 1 {
return errors.New("you must specify only one argument, the backup's name")
}
if err := output.ValidateFlags(c); err != nil {
return err
}
return nil
}
func (o *CreateOptions) Complete(args []string) error {
o.Name = args[0]
return nil
}
func (o *CreateOptions) Run(c *cobra.Command, f client.Factory) error {
arkClient, err := f.Client()
if err != nil {
return err
}
backup := &api.Backup{
ObjectMeta: metav1.ObjectMeta{
Namespace: api.DefaultNamespace,
Name: o.Name,
Labels: o.Labels.Data(),
},
Spec: api.BackupSpec{
IncludedNamespaces: o.IncludeNamespaces,
ExcludedNamespaces: o.ExcludeNamespaces,
IncludedResources: o.IncludeResources,
ExcludedResources: o.ExcludeResources,
LabelSelector: o.Selector.LabelSelector,
SnapshotVolumes: o.SnapshotVolumes,
TTL: metav1.Duration{Duration: o.TTL},
},
}
if printed, err := output.PrintWithFormat(c, backup); printed || err != nil {
return err
}
_, err = arkClient.ArkV1().Backups(backup.Namespace).Create(backup)
if err != nil {
return err
}
fmt.Printf("Backup %q created successfully.\n", backup.Name)
return nil
}
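// Example (illustrative): back up the nginx namespace with volume snapshots
// and a 48-hour TTL.
//
//    ark backup create nginx-backup --include-namespaces nginx --snapshot-volumes --ttl 48h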


@ -0,0 +1,53 @@
/*
Copyright 2017 Heptio Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package backup
import (
"fmt"
"os"
"github.com/spf13/cobra"
api "github.com/heptio/ark/pkg/apis/ark/v1"
"github.com/heptio/ark/pkg/client"
"github.com/heptio/ark/pkg/cmd"
)
func NewDeleteCommand(f client.Factory) *cobra.Command {
c := &cobra.Command{
Use: "delete NAME",
Short: "Delete a backup",
Run: func(c *cobra.Command, args []string) {
if len(args) != 1 {
c.Usage()
os.Exit(1)
}
arkClient, err := f.Client()
cmd.CheckError(err)
backupName := args[0]
err = arkClient.ArkV1().Backups(api.DefaultNamespace).Delete(backupName, nil)
cmd.CheckError(err)
fmt.Printf("Backup %q deleted\n", backupName)
},
}
return c
}


@ -0,0 +1,34 @@
/*
Copyright 2017 Heptio Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package backup
import (
"github.com/spf13/cobra"
"github.com/heptio/ark/pkg/client"
)
func NewDescribeCommand(f client.Factory) *cobra.Command {
c := &cobra.Command{
Use: "describe",
Short: "Describe a backup",
Run: func(c *cobra.Command, args []string) {
},
}
return c
}

66
pkg/cmd/cli/backup/get.go Normal file

@ -0,0 +1,66 @@
/*
Copyright 2017 Heptio Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package backup
import (
"github.com/spf13/cobra"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
api "github.com/heptio/ark/pkg/apis/ark/v1"
"github.com/heptio/ark/pkg/client"
"github.com/heptio/ark/pkg/cmd"
"github.com/heptio/ark/pkg/cmd/util/output"
)
func NewGetCommand(f client.Factory) *cobra.Command {
var listOptions metav1.ListOptions
c := &cobra.Command{
Use: "get",
Short: "Get backups",
Run: func(c *cobra.Command, args []string) {
err := output.ValidateFlags(c)
cmd.CheckError(err)
arkClient, err := f.Client()
cmd.CheckError(err)
var backups *api.BackupList
if len(args) > 0 {
backups = new(api.BackupList)
for _, name := range args {
backup, err := arkClient.Ark().Backups(api.DefaultNamespace).Get(name, metav1.GetOptions{})
cmd.CheckError(err)
backups.Items = append(backups.Items, *backup)
}
} else {
backups, err = arkClient.ArkV1().Backups(api.DefaultNamespace).List(metav1.ListOptions{})
cmd.CheckError(err)
}
_, err = output.PrintWithFormat(c, backups)
cmd.CheckError(err)
},
}
c.Flags().StringVarP(&listOptions.LabelSelector, "selector", "l", listOptions.LabelSelector, "only show items matching this label selector")
output.BindFlags(c.Flags())
return c
}


@ -0,0 +1,38 @@
/*
Copyright 2017 Heptio Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package config
import (
"github.com/spf13/cobra"
"github.com/heptio/ark/pkg/client"
)
func NewCommand(f client.Factory) *cobra.Command {
c := &cobra.Command{
Use: "config",
Short: "Work with config",
Long: "Work with config",
}
c.AddCommand(
NewGetCommand(f),
NewSetCommand(f),
)
return c
}

29
pkg/cmd/cli/config/get.go Normal file

@ -0,0 +1,29 @@
/*
Copyright 2017 Heptio Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package config
import (
"github.com/spf13/cobra"
"github.com/heptio/ark/pkg/client"
)
func NewGetCommand(f client.Factory) *cobra.Command {
c := &cobra.Command{}
return c
}

29
pkg/cmd/cli/config/set.go Normal file

@ -0,0 +1,29 @@
/*
Copyright 2017 Heptio Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package config
import (
"github.com/spf13/cobra"
"github.com/heptio/ark/pkg/client"
)
func NewSetCommand(f client.Factory) *cobra.Command {
c := &cobra.Command{}
return c
}


@ -0,0 +1,129 @@
/*
Copyright 2017 Heptio Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package restore
import (
"errors"
"fmt"
"time"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
api "github.com/heptio/ark/pkg/apis/ark/v1"
"github.com/heptio/ark/pkg/client"
"github.com/heptio/ark/pkg/cmd"
"github.com/heptio/ark/pkg/cmd/util/flag"
"github.com/heptio/ark/pkg/cmd/util/output"
)
func NewCreateCommand(f client.Factory) *cobra.Command {
o := NewCreateOptions()
c := &cobra.Command{
Use: "create BACKUP",
Short: "Create a restore",
Run: func(c *cobra.Command, args []string) {
cmd.CheckError(o.Validate(c, args))
cmd.CheckError(o.Complete(args))
cmd.CheckError(o.Run(c, f))
},
}
o.BindFlags(c.Flags())
output.BindFlags(c.Flags())
output.ClearOutputFlagDefault(c)
return c
}
type CreateOptions struct {
BackupName string
RestoreVolumes bool
Labels flag.Map
Namespaces flag.StringArray
NamespaceMappings flag.Map
Selector flag.LabelSelector
}
func NewCreateOptions() *CreateOptions {
return &CreateOptions{
Labels: flag.NewMap(),
NamespaceMappings: flag.NewMap().WithEntryDelimiter(",").WithKeyValueDelimiter(":"),
}
}
func (o *CreateOptions) BindFlags(flags *pflag.FlagSet) {
flags.BoolVar(&o.RestoreVolumes, "restore-volumes", o.RestoreVolumes, "whether to restore volumes from snapshots")
flags.Var(&o.Labels, "labels", "labels to apply to the restore")
flags.Var(&o.Namespaces, "namespaces", "comma-separated list of namespaces to restore")
flags.Var(&o.NamespaceMappings, "namespace-mappings", "namespace mappings from name in the backup to desired restored name in the form src1:dst1,src2:dst2,...")
flags.VarP(&o.Selector, "selector", "l", "only restore resources matching this label selector")
}
func (o *CreateOptions) Validate(c *cobra.Command, args []string) error {
if len(args) != 1 {
return errors.New("you must specify only one argument, the backup's name")
}
if err := output.ValidateFlags(c); err != nil {
return err
}
return nil
}
func (o *CreateOptions) Complete(args []string) error {
o.BackupName = args[0]
return nil
}
func (o *CreateOptions) Run(c *cobra.Command, f client.Factory) error {
arkClient, err := f.Client()
if err != nil {
return err
}
restore := &api.Restore{
ObjectMeta: metav1.ObjectMeta{
Namespace: api.DefaultNamespace,
Name: fmt.Sprintf("%s-%s", o.BackupName, time.Now().Format("20060102150405")),
Labels: o.Labels.Data(),
},
Spec: api.RestoreSpec{
BackupName: o.BackupName,
Namespaces: o.Namespaces,
NamespaceMapping: o.NamespaceMappings.Data(),
LabelSelector: o.Selector.LabelSelector,
RestorePVs: o.RestoreVolumes,
},
}
if printed, err := output.PrintWithFormat(c, restore); printed || err != nil {
return err
}
restore, err = arkClient.ArkV1().Restores(restore.Namespace).Create(restore)
if err != nil {
return err
}
fmt.Printf("Restore %q created successfully.\n", restore.Name)
return nil
}
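// Example (illustrative): restore from the backup "nginx-backup", remapping
// the nginx namespace to nginx-restored and restoring volume snapshots. The
// created restore gets a timestamped name such as
// "nginx-backup-20170802132717".
//
//    ark restore create nginx-backup --namespace-mappings nginx:nginx-restored --restore-volumes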


@ -0,0 +1,53 @@
/*
Copyright 2017 Heptio Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package restore
import (
"fmt"
"os"
"github.com/spf13/cobra"
api "github.com/heptio/ark/pkg/apis/ark/v1"
"github.com/heptio/ark/pkg/client"
"github.com/heptio/ark/pkg/cmd"
)
func NewDeleteCommand(f client.Factory) *cobra.Command {
c := &cobra.Command{
Use: "delete NAME",
Short: "Delete a restore",
Run: func(c *cobra.Command, args []string) {
if len(args) != 1 {
c.Usage()
os.Exit(1)
}
arkClient, err := f.Client()
cmd.CheckError(err)
name := args[0]
err = arkClient.ArkV1().Restores(api.DefaultNamespace).Delete(name, nil)
cmd.CheckError(err)
fmt.Printf("Restore %q deleted\n", name)
},
}
return c
}
