Add 2.6 docs (#3786)

qwerty287 2024-06-13 19:31:54 +02:00 committed by GitHub
parent 760a903a30
commit 4f7df39edd
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
94 changed files with 1571 additions and 2001 deletions

View file

@ -23,7 +23,7 @@ To configure cron jobs you need at least push access to the repository.
![cron settings](./cron-settings.png)
The supported schedule syntax can be found at <https://pkg.go.dev/github.com/robfig/cron?utm_source=godoc#hdr-CRON_Expression_Format>. If you need general understanding of the cron syntax <https://crontab.guru/> is a good place to start and experiment.
The supported schedule syntax can be found at <https://pkg.go.dev/github.com/robfig/cron?utm_source=godoc#hdr-CRON_Expression_Format>. If you need general understanding of the cron syntax <https://it-tools.tech/crontab-generator> is a good place to start and experiment.
Examples: `@every 5m`, `@daily`, `0 30 * * * *` ...

View file

@ -243,25 +243,25 @@ const config: Config = {
sidebarPath: require.resolve('./sidebars.js'),
editUrl: 'https://github.com/woodpecker-ci/woodpecker/edit/main/docs/',
includeCurrentVersion: true,
lastVersion: '2.5',
lastVersion: '2.6',
onlyIncludeVersions:
process.env.NODE_ENV === 'development' ? ['current', '2.5'] : ['current', '2.5', '2.4', '2.3', '1.0'],
process.env.NODE_ENV === 'development' ? ['current', '2.6'] : ['current', '2.6', '2.5', '2.4', '1.0'],
versions: {
current: {
label: 'Next 🚧',
banner: 'unreleased',
},
'2.6': {
label: '2.6.x',
},
'2.5': {
label: '2.5.x',
label: '2.5.x 💀',
banner: 'unmaintained',
},
'2.4': {
label: '2.4.x 💀',
banner: 'unmaintained',
},
'2.3': {
label: '2.3.x 💀',
banner: 'unmaintained',
},
'1.0': {
label: '1.0.x 💀',
banner: 'unmaintained',

View file

@ -33,6 +33,7 @@ Here you can find documentation for previous versions of Woodpecker.
| Version | Released   | Documentation                                                                           |
| ------- | ---------- | ------------------------------------------------------------------------------------- |
| 2.5.0 | 2024-06-01 | [Documentation](https://github.com/woodpecker-ci/woodpecker/tree/v2.5.0/docs/docs/) |
| 2.4.1 | 2024-03-20 | [Documentation](https://github.com/woodpecker-ci/woodpecker/tree/v2.4.1/docs/docs/) |
| 2.4.0 | 2024-03-19 | [Documentation](https://github.com/woodpecker-ci/woodpecker/tree/v2.4.0/docs/docs/) |
| 2.3.0 | 2024-01-31 | [Documentation](https://github.com/woodpecker-ci/woodpecker/tree/v2.3.0/docs/docs/) |

View file

@ -1,13 +0,0 @@
# Forges
## Supported features
| Feature | [GitHub](github/) | [Gitea / Forgejo](gitea/) | [Gitlab](gitlab/) | [Bitbucket](bitbucket/) |
| ------------------------------------------------------------- | :----------------: | :-----------------------: | :----------------: | :---------------------: |
| Event: Push | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| Event: Tag | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| Event: Pull-Request | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| Event: Release | :white_check_mark: | :white_check_mark: | :white_check_mark: | :x: |
| Event: Deploy | :white_check_mark: | :x: | :x: | :x: |
| [Multiple workflows](../../20-usage/25-workflows.md) | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| [when.path filter](../../20-usage/20-workflow-syntax.md#path) | :white_check_mark: | :white_check_mark: | :white_check_mark: | :x: |

View file

@ -1,217 +0,0 @@
---
toc_max_heading_level: 2
---
# Kubernetes backend
The Kubernetes backend executes steps inside standalone pods. A temporary PVC is created for the lifetime of the pipeline to transfer files between steps.
## Job specific configuration
### Resources
The Kubernetes backend also allows specifying requests and limits on a per-step basis, most commonly for CPU and memory.
We recommend adding a `resources` definition to all steps to ensure efficient scheduling.
Here is an example definition with an arbitrary `resources` definition below the `backend_options` section:
```yaml
steps:
- name: 'My kubernetes step'
image: alpine
commands:
- echo "Hello world"
backend_options:
kubernetes:
resources:
requests:
memory: 200Mi
cpu: 100m
limits:
memory: 400Mi
cpu: 1000m
```
### `serviceAccountName`
Specify the name of the ServiceAccount that the build pod will use. This ServiceAccount must be created externally.
See the [kubernetes documentation](https://kubernetes.io/docs/concepts/security/service-accounts/) for more information on using serviceAccounts.
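As an illustrative sketch only (assuming a ServiceAccount named `build-bot` already exists in the agent's namespace):

```yaml
steps:
  - name: 'step with service account'
    image: alpine
    commands:
      - echo "running with a custom ServiceAccount"
    backend_options:
      kubernetes:
        # The ServiceAccount must exist beforehand; Woodpecker does not create it.
        serviceAccountName: build-bot
```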
### `nodeSelector`
Specifies the label which is used to select the node on which the job will be executed.
Labels defined here will be appended to a list which already contains `"kubernetes.io/arch"`.
By default `"kubernetes.io/arch"` is inferred from the agents' platform. One can override it by setting that label in the `nodeSelector` section of the `backend_options`.
Without a manual overwrite, builds will be randomly assigned to the runners and inherit their respective architectures.
To overwrite this, one needs to set the label in the `nodeSelector` section of the `backend_options`.
A practical example for this is when running a matrix-build and delegating specific elements of the matrix to run on a specific architecture.
In this case, one must define an arbitrary key in the matrix section of the respective matrix element:
```yaml
matrix:
include:
- NAME: runner1
ARCH: arm64
```
And then overwrite the `nodeSelector` in the `backend_options` section of the step(s) using the name of the respective env var:
```yaml
[...]
backend_options:
kubernetes:
nodeSelector:
kubernetes.io/arch: "${ARCH}"
```
### `tolerations`
When you use nodeSelector and the node pool is configured with Taints, you need to specify the Tolerations. Tolerations allow the scheduler to schedule pods with matching taints.
See the [kubernetes documentation](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) for more information on using tolerations.
Example pipeline configuration:
```yaml
steps:
- name: build
image: golang
commands:
- go get
- go build
- go test
backend_options:
kubernetes:
serviceAccountName: 'my-service-account'
resources:
requests:
memory: 128Mi
cpu: 1000m
limits:
memory: 256Mi
nodeSelector:
beta.kubernetes.io/instance-type: p3.8xlarge
tolerations:
- key: 'key1'
operator: 'Equal'
value: 'value1'
effect: 'NoSchedule'
tolerationSeconds: 3600
```
### Volumes
To mount volumes, a persistent volume (PV) and persistent volume claim (PVC) are needed on the cluster; they can be referenced in steps via the `volumes:` option.
Assuming a PVC named "woodpecker-cache" exists, it can be referenced as follows in a step:
```yaml
steps:
- name: "Restore Cache"
image: meltwater/drone-cache
volumes:
- woodpecker-cache:/woodpecker/src/cache
settings:
mount:
- "woodpecker-cache"
[...]
```
### `securityContext`
Use the following configuration to set the `securityContext` for the pod/container running a given pipeline step:
```yaml
steps:
- name: test
image: alpine
commands:
- echo Hello world
backend_options:
kubernetes:
securityContext:
runAsUser: 999
runAsGroup: 999
privileged: true
[...]
```
Note that the `backend_options.kubernetes.securityContext` object allows you to set both pod and container level security context options in one object.
By default, the properties will be set at the pod level. Properties that are only supported on the container level will be set there instead. So, the
configuration shown above will result in something like the following pod spec:
```yaml
kind: Pod
spec:
securityContext:
runAsUser: 999
runAsGroup: 999
containers:
- name: wp-01hcd83q7be5ymh89k5accn3k6-0-step-0
image: alpine
securityContext:
privileged: true
[...]
```
See the [kubernetes documentation](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) for more information on using `securityContext`.
## Tips and tricks
### CRI-O
CRI-O users currently need to configure the workspace for all workflows in order for them to run correctly. Add the following at the beginning of your configuration:
```yaml
workspace:
base: '/woodpecker'
path: '/'
```
See [this issue](https://github.com/woodpecker-ci/woodpecker/issues/2510) for more details.
## Configuration
These env vars can be set in the `env:` sections of the agent.
### `WOODPECKER_BACKEND_K8S_NAMESPACE`
> Default: `woodpecker`
The namespace to create worker pods in.
### `WOODPECKER_BACKEND_K8S_VOLUME_SIZE`
> Default: `10G`
The volume size of the pipeline volume.
### `WOODPECKER_BACKEND_K8S_STORAGE_CLASS`
> Default: empty
The storage class to use for the pipeline volume.
### `WOODPECKER_BACKEND_K8S_STORAGE_RWX`
> Default: `true`
Determines if `RWX` should be used for the pipeline volume's [access mode](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes). If false, `RWO` is used instead.
### `WOODPECKER_BACKEND_K8S_POD_LABELS`
> Default: empty
Additional labels to apply to worker pods. Must be a YAML object, e.g. `{"example.com/test-label":"test-value"}`.
### `WOODPECKER_BACKEND_K8S_POD_ANNOTATIONS`
> Default: empty
Additional annotations to apply to worker pods. Must be a YAML object, e.g. `{"example.com/test-annotation":"test-value"}`.
### `WOODPECKER_BACKEND_K8S_SECCTX_NONROOT`
> Default: `false`
Determines whether containers are required to run as non-root users.
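For reference, a hedged sketch of how a few of these variables might be set on an agent running as a Kubernetes Deployment (names and values are illustrative only):

```yaml
# Excerpt from a hypothetical agent Deployment manifest.
env:
  - name: WOODPECKER_BACKEND
    value: kubernetes
  - name: WOODPECKER_BACKEND_K8S_NAMESPACE
    value: woodpecker
  - name: WOODPECKER_BACKEND_K8S_VOLUME_SIZE
    value: 10G
  - name: WOODPECKER_BACKEND_K8S_STORAGE_RWX
    value: 'false' # fall back to the RWO access mode
  - name: WOODPECKER_BACKEND_K8S_POD_LABELS
    value: '{"example.com/team":"ci"}'
```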

View file

@ -1,83 +0,0 @@
# Secrets encryption
:::danger
Secrets encryption is currently broken and therefore disabled by default. It will be fixed in an upcoming release.
Check:
- <https://github.com/woodpecker-ci/woodpecker/issues/1541> and
- <https://github.com/woodpecker-ci/woodpecker/pull/2300>
:::
By default, Woodpecker does not encrypt secrets in its database. You can enable encryption
using a simple AES key or the more advanced [Google TINK](https://developers.google.com/tink) encryption.
## Common
### Enabling secrets encryption
To enable secrets encryption and encrypt all existing secrets in the database, set the
`WOODPECKER_ENCRYPTION_KEY`, `WOODPECKER_ENCRYPTION_KEY_FILE` or `WOODPECKER_ENCRYPTION_TINK_KEYSET_FILE` environment
variable, depending on the encryption method of your choice.
After encryption is enabled you will be unable to start the Woodpecker server without providing a valid encryption key!
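As an illustrative sketch only (the key below is a placeholder, not a usable key), enabling AES encryption might look like:

```ini
# Hypothetical server environment
WOODPECKER_ENCRYPTION_KEY=replace-with-your-own-randomly-generated-key
# or, to keep the key out of the environment:
# WOODPECKER_ENCRYPTION_KEY_FILE=/etc/woodpecker/encryption-key
```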
### Disabling encryption and decrypting all secrets
To disable secrets encryption and decrypt the database, you need to start the server with the valid
`WOODPECKER_ENCRYPTION_KEY` or `WOODPECKER_ENCRYPTION_TINK_KEYSET_FILE` environment variable set, depending on the
enabled encryption method, and `WOODPECKER_ENCRYPTION_DISABLE` set to true.
After the secrets have been decrypted, the server will continue working in unencrypted mode. You will no longer need the "disable encryption"
variable or the encryption keys to start the server.
## AES
Simple AES encryption.
### Configuration
You can manage encryption on the server using these environment variables:
- `WOODPECKER_ENCRYPTION_KEY` - encryption key
- `WOODPECKER_ENCRYPTION_KEY_FILE` - file to read encryption key from
- `WOODPECKER_ENCRYPTION_DISABLE` - disable encryption flag used to decrypt all data on server
## TINK
TINK uses AEAD encryption instead of simple AES and supports key rotation.
### Configuration
You can manage encryption on the server using these two environment variables:
- `WOODPECKER_ENCRYPTION_TINK_KEYSET_FILE` - keyset filepath
- `WOODPECKER_ENCRYPTION_DISABLE` - disable encryption flag used to decrypt all data on server
### Encryption keys
You will need a plaintext AEAD-compatible Google TINK keyset to encrypt your data.
To generate it and later rotate keys if needed, install `tinkey` ([installation guide](https://developers.google.com/tink/install-tinkey)).
A keyset contains one or more keys, used to encrypt or decrypt your data, and a primary key ID, used to determine which key
to use while encrypting new data.
Keyset generation example:
```bash
tinkey create-keyset --key-template AES256_GCM --out-format json --out keyset.json
```
### Key rotation
Use `tinkey` to rotate encryption keys in your existing keyset:
```bash
tinkey rotate-keyset --in keyset_v1.json --out keyset_v2.json --key-template AES256_GCM
```
Then you just need to replace the server keyset file with the new one. The moment the server detects the new encryption
keyset, it will re-encrypt all existing secrets with the new key, so you will no longer be able to start the server with the previous
keyset.

View file

@ -1,45 +0,0 @@
# Addons
:::warning
Addons are still experimental. Their implementation can change and break at any time.
:::
:::danger
You need to trust the author of the addons you use. Depending on their type, addons can access forge authentication codes, your secrets or other sensitive information.
:::
To adapt Woodpecker to your needs beyond the [configuration](../10-server-config.md), Woodpecker has its own **addon** system, built on top of [Go's internal plugin system](https://go.dev/pkg/plugin).
Addons can be used for:
- Forges
- Agent backends
- Config services
- Secret services
- Environment services
- Registry services
## Restrictions
Addons are restricted by how Go plugins work. This includes the following restrictions:
- only supported on Linux, FreeBSD, and macOS
- addons must have been built for the correct Woodpecker version. If an addon is not provided specifically for this version, you likely won't be able to use it.
## Usage
To use an addon, download the addon version built for your Woodpecker version. Then, you can add the following to your configuration:
```ini
WOODPECKER_ADDONS=/path/to/your/addon/file.so
```
In case you run Woodpecker as a container, you probably want to mount the addon binaries to `/opt/addons/`.
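A hedged docker-compose sketch (the addon file name and host path are assumptions):

```yaml
services:
  woodpecker-server:
    image: woodpeckerci/woodpecker-server
    volumes:
      # Mount the locally built addon into the container.
      - ./my-addon.so:/opt/addons/my-addon.so:ro
    environment:
      - WOODPECKER_ADDONS=/opt/addons/my-addon.so
```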
You can list multiple addons; Woodpecker will automatically determine their type. If you specify multiple addons with the same type, only the first one will be used.
Using an addon always overrides Woodpecker's internal setup. This means that a forge addon will be used if specified, no matter what's configured for the forges natively supported by Woodpecker.
### Bug reports
If you experience bugs, please check which component has the issue. If it's the addon, **do not raise an issue in the main repository**, but rather use the separate addon repositories. To check which component is responsible for the bug, look at the logs. Logs from addons are marked with a special field `addon` containing their addon file name.

View file

@ -1,102 +0,0 @@
# Creating addons
Addons are written in Go.
## Writing your code
An addon consists of two variables/functions in Go.
1. The `Type` variable. Specifies the type of the addon and must be directly accessed from `shared/addons/types/types.go`.
2. The `Addon` function which is the main point of your addon.
This function takes the `zerolog` logger you should use to log errors, warnings, etc. as an argument.
It returns two values:
1. The actual addon. For type reference see [table below](#return-types).
2. An error. If this error is not `nil`, Woodpecker exits.
Directly import Woodpecker's Go package (`go.woodpecker-ci.org/woodpecker/v2`) and use the interfaces and types defined there.
### Return types
| Addon type | Return type |
| -------------------- | -------------------------------------------------------------------------------- |
| `Forge`               | `"go.woodpecker-ci.org/woodpecker/v2/server/forge".Forge`                         |
| `Backend`             | `"go.woodpecker-ci.org/woodpecker/v2/pipeline/backend/types".Backend`             |
| `ConfigService` | `"go.woodpecker-ci.org/woodpecker/v2/server/plugins/config".Extension` |
| `SecretService` | `"go.woodpecker-ci.org/woodpecker/v2/server/model".SecretService` |
| `EnvironmentService` | `"go.woodpecker-ci.org/woodpecker/v2/server/model".EnvironmentService` |
| `RegistryService` | `"go.woodpecker-ci.org/woodpecker/v2/server/model".RegistryService` |
### Using configurations
If you write a plugin for the server (`Forge` and the services), you can access the server config.
To do so, use the `"go.woodpecker-ci.org/woodpecker/v2/server".Config` variable.
:::warning
The config is not available when your addon is initialized, i.e., the `Addon` function is called.
Only use the config in the interface methods.
:::
## Compiling
After you write your addon code, compile your addon:
```sh
go build -buildmode plugin
```
The output file is your addon that is now ready to be used.
## Restrictions
Addons must directly depend on Woodpecker's core (`go.woodpecker-ci.org/woodpecker/v2`).
The addon must have been built with **exactly the same code** as the Woodpecker instance you'd like to use it on. This means: If you build your addon with a specific commit from Woodpecker `next`, you can likely only use it with the Woodpecker version compiled from this commit.
Also, if you change something inside Woodpecker without committing, it might fail because you need to recompile your addon with this code first.
In addition to this, addons are only supported on Linux, FreeBSD, and macOS.
:::info
It is recommended to at least support the latest version of Woodpecker.
:::
### Compile for different versions
As long as there are no changes to Woodpecker's interfaces,
or they are backwards-compatible, you can compile the addon for multiple versions
by changing the version of `go.woodpecker-ci.org/woodpecker/v2` using `go get` before compiling.
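For instance, a sketch of targeting a specific release before compiling (the version tag is only an example):

```sh
go get go.woodpecker-ci.org/woodpecker/v2@v2.6.0
go build -buildmode plugin -o my-addon.so
```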
## Logging
The entrypoint receives a `zerolog.Logger` as input. **Do not use any other logging solution.** This logger follows the configuration of the Woodpecker instance and adds a special field `addon` to the log entries which allows users to find out which component is writing the log messages.
## Example structure
```go
package main
import (
"context"
"net/http"
"github.com/rs/zerolog"
"go.woodpecker-ci.org/woodpecker/v2/server/forge"
forge_types "go.woodpecker-ci.org/woodpecker/v2/server/forge/types"
"go.woodpecker-ci.org/woodpecker/v2/server/model"
addon_types "go.woodpecker-ci.org/woodpecker/v2/shared/addon/types"
)
var Type = addon_types.TypeForge
func Addon(logger zerolog.Logger) (forge.Forge, error) {
logger.Info().Msg("hello world from addon")
return &config{l: logger}, nil
}
type config struct {
l zerolog.Logger
}
// In this case, `config` must implement `forge.Forge`. You must directly use Woodpecker's packages - see imports above.
```

View file

@ -1,6 +0,0 @@
label: 'Addons'
collapsible: true
collapsed: true
link:
type: 'doc'
id: 'overview'

File diff suppressed because it is too large.

View file

@ -75,13 +75,13 @@ kubectl apply -f $PLUGIN_TEMPLATE
```yaml title=".woodpecker.yaml"
steps:
deploy-to-k8s:
- name: deploy-to-k8s
image: laszlocloud/my-k8s-plugin
settings:
template: config/k8s/service.yaml
```
See [plugin docs](./20-usage/51-plugins/10-overview.md).
See [plugin docs](./20-usage/51-plugins/51-overview.md).
## Continue reading

View file

@ -0,0 +1,37 @@
# Troubleshooting
## How to debug clone issues
(And what to do with an error message like `fatal: could not read Username for 'https://<url>': No such device or address`)
This error can have multiple causes. If you use internal repositories you might have to enable `WOODPECKER_AUTHENTICATE_PUBLIC_REPOS`:
```ini
WOODPECKER_AUTHENTICATE_PUBLIC_REPOS=true
```
If that does not work, try to make sure the container can reach your git server. To do that, disable the git checkout and make the container "hang":
```yaml
skip_clone: true
steps:
build:
image: debian:stable-backports
commands:
- apt update
- apt install -y inetutils-ping wget
- ping -c 4 git.example.com
- wget git.example.com
- sleep 9999999
```
Get the container ID using `docker ps` and copy the ID from the first column. Enter the container with `docker exec -it 1234asdf bash` (replace `1234asdf` with your container ID). Then try to clone the git repository with the commands from the failing pipeline:
```bash
git init
git remote add origin https://git.example.com/username/repo.git
git fetch --no-tags origin +refs/heads/branch:
```
(replace the URL and the branch with the correct values; use your username and password as login credentials)

View file

@ -31,6 +31,7 @@
- **YAML File**: A file format used to define and configure [workflows][Workflow].
- **Dependency**: [Workflows][Workflow] can depend on each other, and if possible, they are executed in parallel.
- **Status**: Status refers to the outcome of a step or [workflow][Workflow] after it has been executed, determined by the internal command exit code. At the end of a [workflow][Workflow], its status is sent to the [forge][Forge].
- **Service extension**: Some parts of Woodpecker's internal services, like secrets storage or the config fetcher, can be replaced through service extensions.
## Pipeline events
@ -49,13 +50,14 @@ Sometimes there are multiple terms that can be used to describe something. This
- Environment variables `*_LINK` should be called `*_URL`. In the code use `URL()` instead of `Link()`
- Use the term **pipelines** instead of the previous **builds**
- Use the term **steps** instead of the previous **jobs**
- Use the prefix `WOODPECKER_EXPERT_` for advanced environment variables that are normally not required to be set by users
<!-- References -->
[Pipeline]: ../20-workflow-syntax.md
[Workflow]: ../25-workflows.md
[Forge]: ../../30-administration/11-forges/10-overview.md
[Plugin]: ../51-plugins/10-overview.md
[Forge]: ../../30-administration/11-forges/11-overview.md
[Plugin]: ../51-plugins/51-overview.md
[Workspace]: ../20-workflow-syntax.md#workspace
[Matrix]: ../30-matrix-workflows.md
[Docker]: ../../30-administration/22-backends/10-docker.md

View file

@ -1,6 +1,10 @@
# Workflow syntax
The workflow section defines a list of steps to build, test and deploy your code. Steps are executed serially, in the order in which they are defined. If a step returns a non-zero exit code, the workflow and therefore all other workflows and the pipeline immediately aborts and returns a failure status.
The Workflow section defines a list of steps to build, test and deploy your code. The steps are executed serially in the order in which they are defined. If a step returns a non-zero exit code, the workflow and therefore the entire pipeline terminates immediately and returns an error status.
:::note
An exception to this rule are steps with a [`status: [failure]`](#status) condition, which ensures that they are executed in the case of a failed run.
:::
Example steps:
@ -50,7 +54,8 @@ git commit -m "updated README [CI SKIP]"
## Steps
Every step of your workflow executes commands inside a specified container. The defined commands are executed serially.
Every step of your workflow executes commands inside a specified container.<br>
The defined steps are executed in sequence by default; if they should run in parallel, you can use [`depends_on`](./20-workflow-syntax.md#depends_on).<br>
The associated commit is checked out with git to a workspace which is mounted to every step of the workflow as the working directory.
```diff
@ -160,17 +165,20 @@ Only build steps can define commands. You cannot use commands with plugins or se
Allows you to specify the entrypoint for containers. Note that this must be a list of the command and its arguments (e.g. `["/bin/sh", "-c"]`).
If you define [`commands`](#commands), the default entrypoint will be `["/bin/sh", "-c", "echo $CI_SCRIPT | base64 -d | /bin/sh -e"]`.
You can also use a custom shell with `CI_SCRIPT` (Base64-encoded) if you set `commands`.
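As a hedged sketch, a custom entrypoint that still runs the generated `CI_SCRIPT` through a shell of your choice could look like this (the image and the `-ex` flags are just examples):

```yaml
steps:
  - name: custom-shell
    image: alpine
    # Mirrors the documented default entrypoint, but echoes each command via `sh -ex`.
    entrypoint: ['/bin/sh', '-c', 'echo $CI_SCRIPT | base64 -d | /bin/sh -ex']
    commands:
      - echo "hello from a custom entrypoint"
```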
### `environment`
Woodpecker provides the ability to pass environment variables to individual steps.
For more details check the [environment docs](./50-environment.md).
For more details, check the [environment docs](./50-environment.md).
### `secrets`
Woodpecker provides the ability to store named parameters external to the YAML configuration file, in a central secret store. These secrets can be passed to individual steps of the workflow at runtime.
For more details check the [secrets docs](./40-secrets.md).
For more details, check the [secrets docs](./40-secrets.md).
### `failure`
@ -188,7 +196,8 @@ Some of the steps may be allowed to fail without causing the whole workflow and
### `when` - Conditional Execution
Woodpecker supports defining a list of conditions for a step by using a `when` block. If at least one of the conditions in the `when` block evaluate to true the step is executed, otherwise it is skipped. A condition can be a check like:
Woodpecker supports defining a list of conditions for a step by using a `when` block. If at least one of the conditions in the `when` block evaluates to true, the step is executed; otherwise it is skipped. A condition evaluates to true if _all_ of its subconditions are true.
A condition can be a check like:
```diff
steps:
@ -203,6 +212,11 @@ Woodpecker supports defining a list of conditions for a step by using a `when` b
+ branch: main
```
The `slack` step is executed if one of these conditions is met:
1. The pipeline is executed from a pull request in the repo `test/test`
2. The pipeline is executed from a push to `main`
#### `repo`
Example conditional execution by repository:
@ -352,16 +366,6 @@ when:
- platform: [linux/*, windows/amd64]
```
#### `environment`
Execute a step for deployment events matching the target deployment environment:
```yaml
when:
- environment: production
- event: deployment
```
#### `matrix`
Execute a step for a single matrix permutation:
@ -398,16 +402,19 @@ when:
You can use [glob patterns](https://github.com/bmatcuk/doublestar#patterns) to match the changed files and specify whether the step should run if a file matching that pattern has been changed (`include`) or if certain files have **not** been changed (`exclude`).
For pipelines without file changes (empty commits or on events without file changes like `tag`), you can use `on_empty` to set whether this condition should be **true** _(default)_ or **false** in these cases.
```yaml
when:
- path:
include: ['.woodpecker/*.yaml', '*.ini']
exclude: ['*.md', 'docs/**']
ignore_message: '[ALL]'
on_empty: true
```
:::info
Passing a defined ignore-message like `[ALL]` inside the commit message will ignore all path conditions.
Passing a defined ignore-message like `[ALL]` inside the commit message will ignore all path conditions and the `on_empty` setting.
:::
#### `evaluate`
@ -474,6 +481,19 @@ Normally steps of a workflow are executed serially in the order in which they ar
- go test
```
:::note
You can define a step to start immediately without dependencies by adding an empty `depends_on: []`. By setting `depends_on` on a single step, all other steps will be executed immediately as well if no further dependencies are specified.
```yaml
steps:
- name: check code format
image: mstruebing/editorconfig-checker
depends_on: [] # enable parallel steps
...
```
:::
### `volumes`
Woodpecker gives the ability to define Docker volumes in the YAML. You can use this parameter to mount files or folders on the host machine into your containers.
@ -556,8 +576,12 @@ git clone https://github.com/octocat/hello-world \
/go/src/github.com/octocat/hello-world
```
<!-- markdownlint-disable no-duplicate-heading -->
## `matrix`
<!-- markdownlint-enable no-duplicate-heading -->
Woodpecker has integrated support for matrix builds. Woodpecker executes a separate build task for each combination in the matrix, allowing you to build and test a single commit against multiple configurations.
For more details check the [matrix build docs](./30-matrix-workflows.md).
@ -566,10 +590,10 @@ For more details check the [matrix build docs](./30-matrix-workflows.md).
You can set labels for your workflow to select an agent to execute the workflow on. An agent will pick up and run a workflow when **every** label assigned to it matches the agent's labels.
To set additional agent labels check the [agent configuration options](../30-administration/15-agent-config.md#woodpecker_filter_labels). Agents will have at least four default labels: `platform=agent-os/agent-arch`, `hostname=my-agent`, `backend=docker` (type of the agent backend) and `repo=*`. Agents can use a `*` as a wildcard for a label. For example `repo=*` will match every repo.
To set additional agent labels, check the [agent configuration options](../30-administration/15-agent-config.md#woodpecker_filter_labels). Agents will have at least four default labels: `platform=agent-os/agent-arch`, `hostname=my-agent`, `backend=docker` (type of the agent backend) and `repo=*`. Agents can use a `*` as a wildcard for a label. For example `repo=*` will match every repo.
Workflow labels with an empty value will be ignored.
By default each workflow has at least the `repo=your-user/your-repo-name` label. If you have set the [platform attribute](#platform) for your workflow it will have a label like `platform=your-os/your-arch` as well.
By default, each workflow has at least the `repo=your-user/your-repo-name` label. If you have set the [platform attribute](#platform) for your workflow it will have a label like `platform=your-os/your-arch` as well.
You can add additional labels as a key value map:
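A minimal sketch (the label keys and values below are arbitrary examples):

```yaml
labels:
  location: europe # only agents with `location=europe` or `location=*` match
  weather: sun
  hostname: '' # empty values are ignored
```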
@ -644,7 +668,7 @@ Example configuration to use a custom clone plugin:
```diff
clone:
git:
- name: git
+ image: octocat/custom-git-plugin
```
@ -694,28 +718,9 @@ skip_clone: true
## `when` - Global workflow conditions
Woodpecker gives the ability to skip whole workflows (not just steps #when---conditional-execution-1) based on certain conditions by a `when` block. If all conditions in the `when` block evaluate to true the workflow is executed, otherwise it is skipped, but treated as successful and other workflows depending on it will still continue.
Woodpecker gives the ability to skip whole workflows ([not just steps](#when---conditional-execution)) based on certain conditions by a `when` block. If all conditions in the `when` block evaluate to true the workflow is executed, otherwise it is skipped, but treated as successful and other workflows depending on it will still continue.
### `repo`
Example conditional execution by repository:
```diff
+when:
+ repo: test/test
+
steps:
- name: slack
image: plugins/slack
settings:
channel: dev
```
### `branch`
:::note
Branch conditions are not applied to tags.
:::
For more information about the specific filters, take a look at the [step-specific `when` filters](#when---conditional-execution).
Example conditional execution by branch:
@ -730,126 +735,14 @@ Example conditional execution by branch:
channel: dev
```
The step now triggers on `main`, but also if the target branch of a pull request is `main`. Add an event condition to limit it further to pushes on main only.
The workflow now triggers on `main`, but also if the target branch of a pull request is `main`.
Execute a step if the branch is `main` or `develop`:
```yaml
when:
branch: [main, develop]
```
Execute a step if the branch starts with `prefix/*`:
```yaml
when:
branch: prefix/*
```
Execute a step using custom include and exclude logic:
```yaml
when:
branch:
include: [main, release/*]
exclude: [release/1.0.0, release/1.1.*]
```
### `event`
:::warning
Some events, like the release event, will be triggered for multiple actions: releases, pre-releases and drafts. If you want to apply further filters, check out the [evaluate](#evaluate) filter and the available [environment variables](./50-environment.md#built-in-environment-variables).
:::
Execute a step if the build event is a `tag`:
```yaml
when:
event: tag
```
Execute a step if the pipeline event is a `push` to a specified branch:
```diff
when:
event: push
+ branch: main
```
Execute a step for all non-pull request events:
```yaml
when:
event: [push, tag, deployment]
```
Execute a step for all build events:
```yaml
when:
event: [push, pull_request, pull_request_closed, tag, deployment, release]
```
### `ref`
The `ref` filter compares the git reference against which the pipeline is executed.
This allows you to filter, for example, tags that must start with **v**:
```yaml
when:
event: tag
ref: refs/tags/v*
```
### `environment`
Execute a step for deployment events matching the target deployment environment:
```yaml
when:
environment: production
event: deployment
```
### `instance`
Execute a step only on a certain Woodpecker instance matching the specified hostname:
```yaml
when:
instance: stage.woodpecker.company.com
```
### `path`
:::info
Path conditions are applied only to **push** and **pull_request** events.
They are currently **only available** for GitHub, GitLab and Gitea (version 1.18.0 and newer)
:::
Execute a step only on a pipeline with certain files being changed:
```yaml
when:
path: 'src/*'
```
You can use [glob patterns](https://github.com/bmatcuk/doublestar#patterns) to match the changed files and specify if the step should run if a file matching that pattern has been changed `include` or if some files have **not** been changed `exclude`.
```yaml
when:
path:
include: ['.woodpecker/*.yaml', '*.ini']
exclude: ['*.md', 'docs/**']
ignore_message: '[ALL]'
```
:::info
Passing a defined ignore-message like `[ALL]` inside the commit message will ignore all path conditions.
:::
<!-- markdownlint-disable no-duplicate-heading -->
## `depends_on`
<!-- markdownlint-enable no-duplicate-heading -->
Woodpecker supports defining multiple workflows for a repository. Those workflows run independently of each other. To make them depend on each other, you can use the [`depends_on`](./25-workflows.md#flow-control) keyword.
## `runs_on`
@ -861,7 +754,7 @@ Workflows that should run even on failure should set the `runs_on` tag. See [her
Woodpecker gives the ability to configure privileged mode in the YAML. You can use this parameter to launch containers with escalated capabilities.
:::info
Privileged mode is only available to trusted repositories and for security reasons should only be used in private environments. See [project settings](./71-project-settings.md#trusted) to enable trusted mode.
Privileged mode is only available to trusted repositories and for security reasons should only be used in private environments. See [project settings](./75-project-settings.md#trusted) to enable trusted mode.
:::
```diff

View file

@ -6,7 +6,7 @@ In case there is a single configuration in `.woodpecker.yaml` Woodpecker will cr
By placing the configurations in a folder, which is by default named `.woodpecker/`, Woodpecker will create a pipeline with multiple workflows, each named after the file it is defined in. Only `.yml` and `.yaml` files will be used; files in any subfolders like `.woodpecker/sub-folder/test.yaml` will be ignored.
You can also set some custom path like `.my-ci/pipelines/` instead of `.woodpecker/` in the [project settings](./71-project-settings.md).
You can also set some custom path like `.my-ci/pipelines/` instead of `.woodpecker/` in the [project settings](./75-project-settings.md).
## Benefits of using workflows
@ -18,7 +18,7 @@ You can also set some custom path like `.my-ci/pipelines/` instead of `.woodpeck
:::warning
Please note that files are only shared between steps of the same workflow (see [File changes are incremental](./20-workflow-syntax.md#file-changes-are-incremental)). That means you cannot access artifacts e.g. from the `build` workflow in the `deploy` workflow.
If you still need to pass artifacts between the workflows you need use some storage [plugin](./51-plugins/10-overview.md) (e.g. one which stores files in an Amazon S3 bucket).
If you still need to pass artifacts between the workflows, you need to use some storage [plugin](./51-plugins/51-overview.md) (e.g. one which stores files in an Amazon S3 bucket).
:::
```bash

View file

@ -139,5 +139,5 @@ steps:
```
:::note
If you want to control the architecture of a pipeline on a Kubernetes runner, see [the nodeSelector documentation of the Kubernetes backend](../30-administration/22-backends/40-kubernetes.md#nodeselector).
If you want to control the architecture of a pipeline on a Kubernetes runner, see [the nodeSelector documentation of the Kubernetes backend](../30-administration/22-backends/40-kubernetes.md#node-selector).
:::

View file

@ -21,23 +21,27 @@ once their usage is declared in the `secrets` section:
- name: docker
image: docker
commands:
+ - echo $DOCKER_USERNAME
+ - echo $docker_username
+ - echo $DOCKER_PASSWORD
+ secrets: [ docker_username, docker_password ]
+ secrets: [ docker_username, DOCKER_PASSWORD ]
```
### Use secrets in settings
The case of the environment variables is not changed, but secret matching is done case-insensitively. In the example above, `DOCKER_PASSWORD` would also match if the secret is called `docker_password`.
Alternatively, you can get a `setting` from secrets using the `from_secret` syntax.
In this example, the secret named `secret_token` would be passed to the setting named `token`, which will be available in the plugin as environment variable named `PLUGIN_TOKEN`. See [Plugins](./51-plugins/20-creating-plugins.md#settings) for details.
### Use secrets in settings and environment
**NOTE:** the `from_secret` syntax only works with the newer `settings` block.
You can set a setting or environment value from secrets using the `from_secret` syntax.
In this example, the secret named `secret_token` would be passed to the setting named `token`, which will be available in the plugin as the environment variable `PLUGIN_TOKEN` (see [plugins](./51-plugins/20-creating-plugins.md#settings) for details), and to the environment variable `TOKEN_ENV`.
```diff
steps:
- name: docker
image: my-plugin
settings:
+ environment:
+ TOKEN_ENV:
+ from_secret: secret_token
+ settings:
+ token:
+ from_secret: secret_token
```
@ -51,33 +55,20 @@ Please note parameter expressions are subject to pre-processing. When using secr
- name: docker
image: docker
commands:
- - echo ${DOCKER_USERNAME}
- - echo ${docker_username}
- - echo ${DOCKER_PASSWORD}
+ - echo $${DOCKER_USERNAME}
+ - echo $${docker_username}
+ - echo $${DOCKER_PASSWORD}
secrets: [ docker_username, docker_password ]
```
### Alternate Names
There may be scenarios where you are required to store secrets using alternate names. You can map the alternate secret name to the expected name using the below syntax:
```diff
steps:
- name: docker
image: plugins/docker
repo: octocat/hello-world
tags: latest
+ secrets:
+ - source: docker_prod_password
+ target: docker_password
secrets: [ docker_username, DOCKER_PASSWORD ]
```
### Use in Pull Requests events
Secrets are not exposed to pull requests by default. You can override this behavior by creating the secret and enabling the `pull_request` event type, either in the UI or via the CLI; see below.
**NOTE:** Please be careful when exposing secrets to pull requests. If your repository is open source and accepts pull requests your secrets are not safe. A bad actor can submit a malicious pull request that exposes your secrets.
:::note
Please be careful when exposing secrets to pull requests. If your repository is open source and accepts pull requests, your secrets are not safe. A bad actor can submit a malicious pull request that exposes your secrets.
:::
## Image filter

View file

@ -35,6 +35,10 @@ Example registry hostname matching logic:
- Hostname `docker.io` matches `bradyrydzewski/golang`
- Hostname `docker.io` matches `bradyrydzewski/golang:latest`
:::note
The flow above doesn't work in Kubernetes. There is a [workaround](../30-administration/22-backends/40-kubernetes.md#images-from-private-registries).
:::
## Global registry support
To make a private registry globally available, check the [server configuration docs](../30-administration/10-server-config.md#global-registry-setting).

View file

@ -19,11 +19,11 @@ To configure cron jobs you need at least push access to the repository.
+ cron: "name of the cron job" # if you only want to execute this step by a specific cron job
```
1. Create a new cron job in the repository settings:
2. Create a new cron job in the repository settings:
![cron settings](./cron-settings.png)
The supported schedule syntax can be found at <https://pkg.go.dev/github.com/robfig/cron?utm_source=godoc#hdr-CRON_Expression_Format>. If you need general understanding of the cron syntax <https://crontab.guru/> is a good place to start and experiment.
The supported schedule syntax can be found at <https://pkg.go.dev/github.com/robfig/cron?utm_source=godoc#hdr-CRON_Expression_Format>. If you need general understanding of the cron syntax <https://it-tools.tech/crontab-generator> is a good place to start and experiment.
Examples: `@every 5m`, `@daily`, `0 30 * * * *` ...

View file

@ -7,9 +7,9 @@ Woodpecker provides the ability to pass environment variables to individual pipe
- name: build
image: golang
+ environment:
+ - CGO=0
+ - GOOS=linux
+ - GOARCH=amd64
+ CGO: 0
+ GOOS: linux
+ GOARCH: amd64
commands:
- go build
- go test
@ -81,10 +81,11 @@ This is the reference list of all environment variables available to your pipeli
| | **Current pipeline** |
| `CI_PIPELINE_NUMBER` | pipeline number |
| `CI_PIPELINE_PARENT` | number of parent pipeline |
| `CI_PIPELINE_EVENT` | pipeline event (see [pipeline events](../20-usage/15-terminiology/index.md#pipeline-events)) |
| `CI_PIPELINE_EVENT` | pipeline event (see [pipeline events](../20-usage/15-terminology/index.md#pipeline-events)) |
| `CI_PIPELINE_URL` | link to the web UI for the pipeline |
| `CI_PIPELINE_FORGE_URL` | link to the forge's web UI for the commit(s) or tag that triggered the pipeline |
| `CI_PIPELINE_DEPLOY_TARGET` | pipeline deploy target for `deployment` events (e.g. production) |
| `CI_PIPELINE_DEPLOY_TASK` | pipeline deploy task for `deployment` events (e.g. migration) |
| `CI_PIPELINE_STATUS` | pipeline status (success, failure) |
| `CI_PIPELINE_CREATED` | pipeline created UNIX timestamp |
| `CI_PIPELINE_STARTED` | pipeline started UNIX timestamp |
@ -114,10 +115,11 @@ This is the reference list of all environment variables available to your pipeli
| | **Previous pipeline** |
| `CI_PREV_PIPELINE_NUMBER` | previous pipeline number |
| `CI_PREV_PIPELINE_PARENT` | previous pipeline number of parent pipeline |
| `CI_PREV_PIPELINE_EVENT` | previous pipeline event (see [pipeline events](../20-usage/15-terminiology/index.md#pipeline-events)) |
| `CI_PREV_PIPELINE_EVENT` | previous pipeline event (see [pipeline events](../20-usage/15-terminology/index.md#pipeline-events)) |
| `CI_PREV_PIPELINE_URL` | previous pipeline link in CI |
| `CI_PREV_PIPELINE_FORGE_URL` | previous pipeline link to event in forge |
| `CI_PREV_PIPELINE_DEPLOY_TARGET` | previous pipeline deploy target for `deployment` events (e.g. production) |
| `CI_PREV_PIPELINE_DEPLOY_TASK` | previous pipeline deploy task for `deployment` events (e.g. migration) |
| `CI_PREV_PIPELINE_STATUS` | previous pipeline status (success, failure) |
| `CI_PREV_PIPELINE_CREATED` | previous pipeline created UNIX timestamp |
| `CI_PREV_PIPELINE_STARTED` | previous pipeline started UNIX timestamp |

View file

@ -42,12 +42,29 @@ Values like this are converted to JSON and then passed to your plugin. In the ex
### Secrets
Secrets should be passed as settings too. Therefore, users should use [`from_secret`](../40-secrets.md#use-secrets-in-settings).
Secrets should be passed as settings too. Therefore, users should use [`from_secret`](../40-secrets.md#use-secrets-in-settings-and-environment).
## Plugin library
For Go, we provide a plugin library you can use to get easy access to internal env vars and your settings. See <https://codeberg.org/woodpecker-plugins/go-plugin>.
## Metadata
In your documentation, you can use a Markdown header to define metadata for your plugin. This data is used by [our plugin index](/plugins).
Supported metadata:
- `name`: The plugin's full name
- `icon`: URL to your plugin's icon
- `description`: A short description of what it does
- `author`: Your name
- `tags`: List of keywords (e.g. `[git, clone]` for the clone plugin)
- `containerImage`: name of the container image
- `containerImageUrl`: link to the container image
- `url`: homepage or repository of your plugin
If you want your plugin to be listed in the index, you should add as many fields as possible, but only `name` is required.
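A hedged sketch of such a metadata header at the top of a plugin's `docs.md`, assuming the YAML front-matter form used by existing plugins (all values are examples):

```yaml
---
name: Webhook Notifier
icon: https://example.com/icon.png
description: Sends an HTTP request during the pipeline
author: Jane Doe
tags: [webhook, notification]
containerImage: example/webhook-notifier
containerImageUrl: https://hub.docker.com/r/example/webhook-notifier
url: https://github.com/example/webhook-notifier
---
```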
## Example plugin
This provides a brief tutorial for creating a Woodpecker webhook plugin, using simple shell scripting, to make HTTP requests during the build pipeline.
@ -118,5 +135,5 @@ docker run --rm \
These should also be built for different OS/architectures.
- Use [built-in env vars](../50-environment.md#built-in-environment-variables) where possible.
- Do not use any configuration except settings (and internal env vars). This means: Don't require using [`environment`](../50-environment.md) and don't require specific secret names.
- Add a `docs.md` file, listing all your settings and plugin metadata ([example](https://codeberg.org/woodpecker-plugins/plugin-docker-buildx/src/branch/main/docs.md)).
- Add your plugin to the [plugin index](/plugins) using your `docs.md` ([the example above in the index](https://woodpecker-ci.org/plugins/Docker%20Buildx)).
- Add a `docs.md` file, listing all your settings and plugin metadata ([example](https://github.com/woodpecker-ci/plugin-git/blob/main/docs.md)).
- Add your plugin to the [plugin index](/plugins) using your `docs.md` ([the example above in the index](https://woodpecker-ci.org/plugins/Git%20Clone)).

View file

@ -3,7 +3,7 @@
Woodpecker gives the ability to define Docker volumes in the YAML. You can use this parameter to mount files or folders on the host machine into your containers.
:::note
Volumes are only available to trusted repositories and for security reasons should only be used in private environments. See [project settings](./71-project-settings.md#trusted) to enable trusted mode.
Volumes are only available to trusted repositories and for security reasons should only be used in private environments. See [project settings](./75-project-settings.md#trusted) to enable trusted mode.
:::
```diff

View file

@ -0,0 +1,62 @@
# Linter
Woodpecker automatically lints your workflow files for errors, deprecations and bad habits. Errors and warnings are shown in the UI for all pipelines.
![errors and warnings in UI](./linter-warnings-errors.png)
## Running the linter from CLI
You can run the linter also manually from the CLI:
```shell
woodpecker-cli lint <workflow files>
```
## Bad habit warnings
Woodpecker warns you if your configuration contains some bad habits.
### Event filter for all steps
All items in `when` blocks should have an `event` filter so that no step runs on all events. This is recommended because if new events are added, your steps probably shouldn't run on those either.
Examples of an **incorrect** config for this rule:
```yaml
when:
- branch: main
- event: tag
```
This will trigger the warning because the first item (`branch: main`) does not filter with an event.
```yaml
steps:
- name: test
when:
branch: main
- name: deploy
when:
event: tag
```
Examples of a **correct** config for this rule:
```yaml
when:
- branch: main
event: push
- event: tag
```
```yaml
steps:
- name: test
when:
event: [tag, push]
- name: deploy
when:
- event: tag
```

View file

@ -12,18 +12,25 @@ The path to the pipeline config file or folder. By default it is left empty whic
Your Version-Control-System will notify Woodpecker about events via webhooks. If you want your pipeline to only run on specific webhooks, you can check them with this setting.
## Project settings
### Allow pull requests
## Allow pull requests
Enables handling the webhook's pull request events. If disabled, pipelines won't run for pull requests.
### Protected
## Allow deployments
Enables a pipeline to be started with the `deploy` event from a successful pipeline.
:::danger
Only activate this option if you trust all users who have push access to your repository.
Otherwise, these users will be able to steal secrets that are only available for `deploy` events.
:::
## Protected
Every pipeline initiated by a webhook event needs to be approved by a project member with push permissions before being executed.
The protected option can be used as an additional review process before running potentially harmful pipelines, especially if pipelines can be executed by third parties through pull requests.
### Trusted
## Trusted
If you set your project to trusted, a pipeline step, and thereby the underlying containers, gets access to escalated capabilities like mounting volumes.
@ -33,7 +40,7 @@ Only server admins can set this option. If you are not a server admin this optio
:::
### Only inject netrc credentials into trusted containers
## Only inject netrc credentials into trusted containers
The clone pipeline step may need git credentials. They are injected via netrc. By default, they're only injected if this option is enabled, the repo is trusted ([see above](#trusted)), or the image is a trusted clone image. If you uncheck the option, git credentials will be injected into any container in the clone step.

View file

(Binary image files changed, not shown: one 40 KiB image before/after, one new 113 KiB image.)

View file

(Binary image file changed, not shown: 165 KiB before/after.)

View file

(Binary image file changed, not shown: 21 KiB before/after.)

View file

@ -16,23 +16,7 @@ You can add more agents to increase the number of parallel workflows or set the
Woodpecker has two different kinds of releases: **stable** and **next**.
### Stable releases
We release a new version every four weeks and will release the current state of the `main` branch.
If there are security fixes or critical bug fixes, we'll release them directly.
There are no backports or similar.
#### Versioning
We use [Semantic Versioning](https://semver.org/) to communicate
when admins have to perform manual migration steps and when they can simply bump versions.
#### Breaking changes
Per semver guidelines, breaking changes are released as a major version. We hold back
breaking changes so as not to release many major versions, each containing just a few breaking changes.
Prior to the release of a major version, a release candidate (RC) will be published to allow easy testing;
the actual release will follow about a week later.
Find more information about the different versions [here](/versions).
## Hardware Requirements
@ -59,7 +43,7 @@ You can install Woodpecker on multiple ways:
Authentication is done using OAuth and is delegated to your forge which is configured using environment variables.
See the complete reference for all supported forges [here](../11-forges/10-overview.md).
See the complete reference for all supported forges [here](../11-forges/11-overview.md).
## Database
@ -85,6 +69,8 @@ In the case you need to use Woodpecker with a URL path prefix (like: <https://ex
These installation methods are not officially supported. If you experience issues with them, please open issues in the specific repositories.
:::
- Using [NixOS](./30-nixos.md) via the [NixOS module](https://search.nixos.org/options?channel=unstable&size=200&sort=relevance&query=woodpecker)
- [Using NixOS](./30-nixos.md) via the [NixOS module](https://search.nixos.org/options?channel=unstable&size=200&sort=relevance&query=woodpecker)
- [On Alpine Edge](https://pkgs.alpinelinux.org/packages?name=woodpecker&branch=edge&repo=&arch=&maintainer=)
- [On Arch Linux](https://archlinux.org/packages/?q=woodpecker)
- [Using YunoHost](https://apps.yunohost.org/app/woodpecker)
- [On Cloudron](https://www.cloudron.io/store/org.woodpecker_ci.cloudronapp.html)

View file

@ -67,7 +67,7 @@ They can be configured with `*_ADDR` variables:
+ - WOODPECKER_SERVER_ADDR=${WOODPECKER_HTTP_ADDR}
```
Reverse proxying can also be [configured for gRPC](../proxy#caddy). If the agents are connecting over the internet, it should also be SSL encrypted. The agent then needs to be configured to be secure:
Reverse proxying can also be [configured for gRPC](../70-proxy.md#caddy). If the agents are connecting over the internet, it should also be SSL encrypted. The agent then needs to be configured to be secure:
```diff title="docker-compose.yaml"
version: '3'

View file

@ -1,7 +1,7 @@
# NixOS
:::info
Note that this module is not maintained by the woodpecker-developers.
Note that this module is not maintained by the Woodpecker developers.
If you experience issues please open a bug report in the [nixpkgs repo](https://github.com/NixOS/nixpkgs/issues/new/choose) where the module is maintained.
:::
@ -85,4 +85,4 @@ All configuration options can be found via [NixOS Search](https://search.nixos.o
## Tips and tricks
There are some resources on how to utilize Woodpecker more effectively with NixOS on the [Awesome Woodpecker](../../92-awesome.md) page, like using the runners nix-store in the pipeline
There are some resources on how to utilize Woodpecker more effectively with NixOS on the [Awesome Woodpecker](../../92-awesome.md) page, like using the runner's nix-store in the pipeline.

View file

@ -6,7 +6,7 @@ toc_max_heading_level: 2
## User registration
Woodpecker does not have its own user registry; users are provided from your [forge](./11-forges/10-overview.md) (using OAuth2).
Woodpecker does not have its own user registry; users are provided from your [forge](./11-forges/11-overview.md) (using OAuth2).
Registration is closed by default (`WOODPECKER_OPEN=false`). If registration is open (`WOODPECKER_OPEN=true`) then every user with an account at the configured forge can login to Woodpecker.
@ -69,7 +69,7 @@ To handle sensitive data in docker-compose or docker-swarm configurations there
For docker-compose you can use a `.env` file next to your compose configuration to store the secrets outside of the compose file. While this separates configuration from secrets, it is still not very secure.
Alternatively use docker-secrets. As it may be difficult to use docker secrets for environment variables woodpecker allows to read sensible data from files by providing a `*_FILE` option of all sensible configuration variables. Woodpecker will try to read the value directly from this file. Keep in mind that when the original environment variable gets specified at the same time it will override the value read from the file.
Alternatively, use Docker secrets. As it may be difficult to use Docker secrets for environment variables, Woodpecker allows reading sensitive data from files by providing a `*_FILE` option for all sensitive configuration variables. Woodpecker will try to read the value directly from this file. Keep in mind that when the original environment variable is specified at the same time, it will override the value read from the file.
```diff title="docker-compose.yaml"
version: '3'
@ -419,7 +419,7 @@ The database driver name. Possible values are `sqlite3`, `mysql` or `postgres`.
### `WOODPECKER_DATABASE_DATASOURCE`
> Default: `woodpecker.sqlite`
> Default: `woodpecker.sqlite` if not running inside a container, `/var/lib/woodpecker/woodpecker.sqlite` if running inside a container
The database connection string. The default value is the path of the embedded SQLite database file.
@ -441,30 +441,6 @@ WOODPECKER_DATABASE_DATASOURCE=postgres://root:password@1.2.3.4:5432/woodpecker?
Read the value for `WOODPECKER_DATABASE_DATASOURCE` from the specified filepath
### `WOODPECKER_ENCRYPTION_KEY`
> Default: empty
Encryption key used to encrypt secrets in DB. See [secrets encryption](./40-encryption.md)
### `WOODPECKER_ENCRYPTION_KEY_FILE`
> Default: empty
Read the value for `WOODPECKER_ENCRYPTION_KEY` from the specified filepath
### `WOODPECKER_ENCRYPTION_TINK_KEYSET_FILE`
> Default: empty
Filepath to encryption keyset used to encrypt secrets in DB. See [secrets encryption](./40-encryption.md)
### `WOODPECKER_ENCRYPTION_DISABLE`
> Default: empty
Boolean flag to decrypt secrets in DB and disable server encryption. See [secrets encryption](./40-encryption.md)
### `WOODPECKER_PROMETHEUS_AUTH_TOKEN`
> Default: empty
@ -497,12 +473,6 @@ Supported variables:
- `owner`: the repo's owner
- `repo`: the repo's name
### `WOODPECKER_ADDONS`
> Default: empty
List of addon files. See [addons](./75-addons/00-overview.md).
---
### `WOODPECKER_LIMIT_MEM_SWAP`
@ -555,6 +525,12 @@ Specify a configuration service endpoint, see [Configuration Extension](./100-ex
Specify timeout when fetching the Woodpecker configuration from forge. See <https://pkg.go.dev/time#ParseDuration> for syntax reference.
### `WOODPECKER_FORGE_RETRY`
> Default: 3
Specify how many times fetching the Woodpecker configuration from a forge is retried before failing.
### `WOODPECKER_ENABLE_SWAGGER`
> Default: true
@ -567,20 +543,36 @@ Enable the Swagger UI for API documentation.
Disable version check in admin web UI.
### `WOODPECKER_LOG_STORE`
> Default: `database`
Where to store logs. Possible values: `database` or `file`.
### `WOODPECKER_LOG_STORE_FILE_PATH`
> Default: empty
Directory to store logs in if [`WOODPECKER_LOG_STORE`](#woodpecker_log_store) is `file`.
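For example (the directory is an illustrative choice), switching log storage to files:

```ini
WOODPECKER_LOG_STORE=file
WOODPECKER_LOG_STORE_FILE_PATH=/var/lib/woodpecker/logs
```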
---
### `WOODPECKER_GITHUB_...`
See [GitHub configuration](forges/github/#configuration)
See [GitHub configuration](./11-forges/20-github.md#configuration)
### `WOODPECKER_GITEA_...`
See [Gitea configuration](forges/gitea/#configuration)
See [Gitea configuration](./11-forges/30-gitea.md#configuration)
### `WOODPECKER_BITBUCKET_...`
See [Bitbucket configuration](forges/bitbucket/#configuration)
See [Bitbucket configuration](./11-forges/50-bitbucket.md#configuration)
### `WOODPECKER_GITLAB_...`
See [Gitlab configuration](forges/gitlab/#configuration)
See [GitLab configuration](./11-forges/40-gitlab.md#configuration)
### `WOODPECKER_ADDON_FORGE`
See [addon forges](./11-forges/100-addon.md).

View file

@ -0,0 +1,68 @@
# Addon forges
If the forge you're using does not comply with [Woodpecker's requirements](../../92-development/02-core-ideas.md#forges) or your setup is too specific to be added to Woodpecker's core, you can write your own forge as an addon forge.
:::warning
Addon forges are still experimental. Their implementation can change and break at any time.
:::
:::danger
You need to trust the author of the addon forge you use. It can access authentication codes and other possibly sensitive information.
:::
## Usage
To use an addon forge, download the correct addon version. Then, you can add the following to your configuration:
```ini
WOODPECKER_ADDON_FORGE=/path/to/your/addon/forge/file
```
If you run Woodpecker as a container, you probably want to mount the addon binary to `/opt/addons/`.
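As a sketch, a docker-compose setup could mount the downloaded addon binary and point `WOODPECKER_ADDON_FORGE` at it (the file name `my-forge-addon` is a placeholder):

```yaml title="docker-compose.yaml"
services:
  woodpecker-server:
    [...]
    environment:
      - WOODPECKER_ADDON_FORGE=/opt/addons/my-forge-addon
    volumes:
      # hypothetical host path to the downloaded addon binary
      - ./my-forge-addon:/opt/addons/my-forge-addon:ro
```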
### Bug reports
If you experience bugs, please check which component has the issue. If it's the addon, **do not raise an issue in the main repository**, but rather use the separate addon repositories. To check which component is responsible for the bug, look at the logs. Logs from addons are marked with a special field `addon` containing their addon file name.
## List of addon forges
If you wrote or found an addon forge, please add it here so others can find it!
_Be the first one to add your addon forge!_
## Creating addon forges
Addons use RPC to communicate with the server and are implemented using the [`go-plugin` library](https://github.com/hashicorp/go-plugin).
### Writing your code
This example will use the Go language.
Directly import Woodpecker's Go packages (`go.woodpecker-ci.org/woodpecker/v2`) and use the interfaces and types defined there.
In the `main` function, just call `"go.woodpecker-ci.org/woodpecker/v2/server/forge/addon".Serve` with a `"go.woodpecker-ci.org/woodpecker/v2/server/forge".Forge` as argument.
This will take care of connecting the addon forge to the server.
### Example structure
```go
package main
import (
"context"
"net/http"
"go.woodpecker-ci.org/woodpecker/v2/server/forge/addon"
forgeTypes "go.woodpecker-ci.org/woodpecker/v2/server/forge/types"
"go.woodpecker-ci.org/woodpecker/v2/server/model"
)
func main() {
addon.Serve(config{})
}
type config struct {
}
// `config` must implement `"go.woodpecker-ci.org/woodpecker/v2/server/forge".Forge`. You must directly use Woodpecker's packages - see imports above.
```

View file

@ -0,0 +1,13 @@
# Forges
## Supported features
| Feature | [GitHub](20-github.md) | [Gitea](30-gitea.md) | [Forgejo](35-forgejo.md) | [Gitlab](40-gitlab.md) | [Bitbucket](50-bitbucket.md) | [Bitbucket Datacenter](60-bitbucket_datacenter.md) |
| ------------------------------------------------------------- | :--------------------: | :------------------: | :----------------------: | :--------------------: | :--------------------------: | :------------------------------------------------: |
| Event: Push | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| Event: Tag | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| Event: Pull-Request | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| Event: Release | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :x: | :x: |
| Event: Deploy | :white_check_mark: | :x: | :x: | :x: | :x: | :x: |
| [Multiple workflows](../../20-usage/25-workflows.md) | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| [when.path filter](../../20-usage/20-workflow-syntax.md#path) | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :x: | :x: |

View file

@ -81,3 +81,9 @@ Read the value for `WOODPECKER_GITHUB_SECRET` from the specified filepath.
> Default: `false`
Configure if SSL verification should be skipped.
### `WOODPECKER_GITHUB_PUBLIC_ONLY`
> Default: `false`
Configures the GitHub OAuth client to only obtain a token that can manage public repositories.

View file

@ -2,9 +2,9 @@
toc_max_heading_level: 2
---
# Gitea / Forgejo
# Gitea
Woodpecker comes with built-in support for Gitea and the "soft" fork Forgejo. To enable Gitea you should configure the Woodpecker container using the following environment variables:
Woodpecker comes with built-in support for Gitea. To enable Gitea you should configure the Woodpecker container using the following environment variables:
```ini
WOODPECKER_GITEA=true
@ -16,7 +16,7 @@ WOODPECKER_GITEA_SECRET=YOUR_GITEA_CLIENT_SECRET
## Gitea on the same host with containers
If you have Gitea also running on the same host within a container, make sure the agent does have access to it.
The agent tries to clone using the URL which Gitea reports through its API. For simplified connectivity, you should add the woodpecker agent to the same docker network as Gitea is in.
The agent tries to clone using the URL which Gitea reports through its API. For simplified connectivity, you should add the Woodpecker agent to the same docker network as Gitea is in.
Otherwise, the communication should go via the `docker0` gateway (usually 172.17.0.1).
To configure the Docker network if the network's name is `gitea`, configure it like this:
@ -93,3 +93,11 @@ Read the value for `WOODPECKER_GITEA_SECRET` from the specified filepath
> Default: `false`
Configure if SSL verification should be skipped.
## Advanced options
### `WOODPECKER_DEV_GITEA_OAUTH_URL`
> Default: value of `WOODPECKER_GITEA_URL`
Configures the user-facing Gitea server address. Should be used if `WOODPECKER_GITEA_URL` points to an internal URL used for API requests.
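As a sketch, assuming the server reaches Gitea through an internal address while users log in through a public one (both URLs below are placeholders):

```yaml title="docker-compose.yaml"
services:
  woodpecker-server:
    [...]
    environment:
      # internal address used for API requests
      - WOODPECKER_GITEA_URL=http://gitea:3000
      # public address users are redirected to during the OAuth login
      - WOODPECKER_DEV_GITEA_OAUTH_URL=https://gitea.example.com
```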

View file

@ -0,0 +1,97 @@
---
toc_max_heading_level: 2
---
# Forgejo
:::warning
Forgejo support is experimental.
:::
Woodpecker comes with built-in support for Forgejo. To enable Forgejo you should configure the Woodpecker container using the following environment variables:
```ini
WOODPECKER_FORGEJO=true
WOODPECKER_FORGEJO_URL=YOUR_FORGEJO_URL
WOODPECKER_FORGEJO_CLIENT=YOUR_FORGEJO_CLIENT
WOODPECKER_FORGEJO_SECRET=YOUR_FORGEJO_CLIENT_SECRET
```
## Forgejo on the same host with containers
If you have Forgejo also running on the same host within a container, make sure the agent does have access to it.
The agent tries to clone using the URL which Forgejo reports through its API. For simplified connectivity, you should add the Woodpecker agent to the same docker network as Forgejo is in.
Otherwise, the communication should go via the `docker0` gateway (usually 172.17.0.1).
To configure the Docker network if the network's name is `forgejo`, configure it like this:
```diff title="docker-compose.yaml"
services:
[...]
woodpecker-agent:
[...]
environment:
- [...]
+ - WOODPECKER_BACKEND_DOCKER_NETWORK=forgejo
```
## Registration
Register your application with Forgejo to create your client id and secret. You can find the OAuth application settings of Forgejo at `https://forgejo.<host>/user/settings/`. It is very important that the authorization callback URL matches your http(s) scheme and hostname exactly, with `https://<host>/authorize` as the path.
If you run the Woodpecker CI server on the same host as the Forgejo instance, you might also need to allow local connections in Forgejo. Otherwise webhooks will fail. Add the following lines to your Forgejo configuration (usually at `/etc/forgejo/conf/app.ini`).
```ini
[webhook]
ALLOWED_HOST_LIST=external,loopback
```
For reference see [Configuration Cheat Sheet](https://forgejo.org/docs/latest/admin/config-cheat-sheet/#webhook-webhook).
![forgejo oauth setup](gitea_oauth.gif)
## Configuration
This is a full list of configuration options. Please note that many of these options use default configuration values that should work for the majority of installations.
### `WOODPECKER_FORGEJO`
> Default: `false`
Enables the Forgejo driver.
### `WOODPECKER_FORGEJO_URL`
> Default: `https://next.forgejo.org`
Configures the Forgejo server address.
### `WOODPECKER_FORGEJO_CLIENT`
> Default: empty
Configures the Forgejo OAuth client id. This is used to authorize access.
### `WOODPECKER_FORGEJO_CLIENT_FILE`
> Default: empty
Read the value for `WOODPECKER_FORGEJO_CLIENT` from the specified filepath
### `WOODPECKER_FORGEJO_SECRET`
> Default: empty
Configures the Forgejo OAuth client secret. This is used to authorize access.
### `WOODPECKER_FORGEJO_SECRET_FILE`
> Default: empty
Read the value for `WOODPECKER_FORGEJO_SECRET` from the specified filepath
### `WOODPECKER_FORGEJO_SKIP_VERIFY`
> Default: `false`
Configure if SSL verification should be skipped.

View file

@ -14,7 +14,7 @@ WOODPECKER_BITBUCKET_SECRET=...
## Registration
You must register an OAuth application at Bitbucket in order to get a key and secret combination for woodpecker. Navigate to your workspace settings and choose `OAuth consumers` from the menu, and finally click `Add Consumer` (the url should be like: `https://bitbucket.org/[your-project-name]/workspace/settings/api`).
You must register an OAuth application at Bitbucket in order to get a key and secret combination for Woodpecker. Navigate to your workspace settings and choose `OAuth consumers` from the menu, and finally click `Add Consumer` (the URL should look like `https://bitbucket.org/[your-project-name]/workspace/settings/api`).
Please set a name and set the `Callback URL` like this:

View file

@ -0,0 +1,98 @@
---
toc_max_heading_level: 2
---
# Bitbucket Datacenter / Server
:::warning
Woodpecker comes with experimental support for Bitbucket Datacenter / Server, formerly known as Atlassian Stash.
:::
To enable Bitbucket Server you should configure the Woodpecker container using the following environment variables:
```diff title="docker-compose.yaml"
version: '3'
services:
woodpecker-server:
[...]
environment:
- [...]
+ - WOODPECKER_BITBUCKET_DC=true
+ - WOODPECKER_BITBUCKET_DC_GIT_USERNAME=foo
+ - WOODPECKER_BITBUCKET_DC_GIT_PASSWORD=bar
+ - WOODPECKER_BITBUCKET_DC_CLIENT_ID=xxx
+ - WOODPECKER_BITBUCKET_DC_CLIENT_SECRET=yyy
+ - WOODPECKER_BITBUCKET_DC_URL=http://stash.mycompany.com
woodpecker-agent:
[...]
```
## Service Account
Woodpecker uses `git+https` to clone repositories; however, Bitbucket Server does not currently support cloning repositories with an OAuth token. To work around this limitation, you must create a service account and provide its username and password to Woodpecker. This service account will be used to authenticate and clone private repositories.
## Registration
Woodpecker must be registered with Bitbucket Datacenter / Server. In the administration section of Bitbucket choose "Application Links" and then "Create link". Woodpecker should be listed as "External Application" and the direction should be set to "Incoming". Note the client id and client secret of the registration; they are needed for the configuration of Woodpecker.
See also [Configure an incoming link](https://confluence.atlassian.com/bitbucketserver/configure-an-incoming-link-1108483657.html).
## Configuration
This is a full list of configuration options. Please note that many of these options use default configuration values that should work for the majority of installations.
### `WOODPECKER_BITBUCKET_DC`
> Default: `false`
Enables the Bitbucket Server driver.
### `WOODPECKER_BITBUCKET_DC_URL`
> Default: empty
Configures the Bitbucket Server address.
### `WOODPECKER_BITBUCKET_DC_CLIENT_ID`
> Default: empty
Configures your Bitbucket Server OAuth 2.0 client id.
### `WOODPECKER_BITBUCKET_DC_CLIENT_SECRET`
> Default: empty
Configures your Bitbucket Server OAuth 2.0 client secret.
### `WOODPECKER_BITBUCKET_DC_GIT_USERNAME`
> Default: empty
This username is used to authenticate and clone all private repositories.
### `WOODPECKER_BITBUCKET_DC_GIT_USERNAME_FILE`
> Default: empty
Read the value for `WOODPECKER_BITBUCKET_DC_GIT_USERNAME` from the specified filepath
### `WOODPECKER_BITBUCKET_DC_GIT_PASSWORD`
> Default: empty
The password is used to authenticate and clone all private repositories.
### `WOODPECKER_BITBUCKET_DC_GIT_PASSWORD_FILE`
> Default: empty
Read the value for `WOODPECKER_BITBUCKET_DC_GIT_PASSWORD` from the specified filepath
### `WOODPECKER_BITBUCKET_DC_SKIP_VERIFY`
> Default: `false`
Configure if SSL verification should be skipped.

View file

@ -168,12 +168,6 @@ Configures if the gRPC server certificate should be verified, only valid when `W
Configures the backend engine to run pipelines on. Possible values are `auto-detect`, `docker`, `local` or `kubernetes`.
### `WOODPECKER_ADDONS`
> Default: empty
List of addon files. See [addons](./75-addons/00-overview.md).
### `WOODPECKER_BACKEND_DOCKER_*`
See [Docker backend configuration](./22-backends/10-docker.md#configuration)

View file

@ -5,33 +5,31 @@ toc_max_heading_level: 3
# Local backend
:::danger
The local backend will execute the pipelines on the local system without any isolation of any kind.
The local backend executes pipelines on the local system without any isolation.
:::
:::note
Currently we do not support services for this backend.
Currently we do not support [services](../../20-usage/60-services.md) for this backend.
[Read more here](https://github.com/woodpecker-ci/woodpecker/issues/3095).
:::
Since the code runs directly in the same context as the agent (same user, same
Since the commands run directly in the same context as the agent (same user, same
filesystem), a malicious pipeline could be used to access the agent
configuration, especially the `WOODPECKER_AGENT_SECRET` variable.
It is recommended to use this backend only for a private setup where the code and
pipeline can be trusted. You shouldn't use it for a public facing CI where
anyone can submit code or add new repositories. You shouldn't execute the agent
as a privileged user (root).
pipeline can be trusted. It should not be used in a public instance where
anyone can submit code or add new repositories. The agent should not run as a privileged user (root).
The local backend will use a random directory in $TMPDIR to store the cloned
The local backend will use a random directory in `$TMPDIR` to store the cloned
code and execute commands.
In order to use this backend, you need to download (or build) the
[binary](https://github.com/woodpecker-ci/woodpecker/releases/latest) of the
agent, configure it and run it on the host machine.
[agent](https://github.com/woodpecker-ci/woodpecker/releases/latest), configure it and run it on the host machine.
## Usage
To enable the local backend, add this to your configuration:
To enable the local backend, set the following:
```ini
WOODPECKER_BACKEND=local
@ -39,7 +37,7 @@ WOODPECKER_BACKEND=local
### Shell
The `image` entry is used to specify the shell, such as Bash or Fish, that is
The `image` entrypoint is used to specify the shell, such as `bash` or `fish`, that is
used to run the commands.
```yaml title=".woodpecker.yaml"
@ -51,15 +49,13 @@ steps:
### Plugins
Plugins are just executable binaries:
```yaml
steps:
- name: build
image: /usr/bin/tree
```
If no commands are provided, we treat them as plugins in the usual manner.
If no commands are provided, plugins are treated in the usual manner.
In the context of the local backend, plugins are simply executable binaries, which can be located using their name if they are listed in `$PATH`, or through an absolute path.
### Options

View file

@ -0,0 +1,305 @@
---
toc_max_heading_level: 2
---
# Kubernetes backend
The Kubernetes backend executes steps inside standalone Pods. A temporary PVC is created for the lifetime of the pipeline to transfer files between steps.
## Images from private registries
In order to pull private container images defined in your pipeline YAML, you must provide [registry credentials in a Kubernetes Secret](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/).
As the Secret is agent-wide, it has to be placed in the namespace defined by `WOODPECKER_BACKEND_K8S_NAMESPACE`.
In addition, you need to provide the Secret name to the agent via `WOODPECKER_BACKEND_K8S_PULL_SECRET_NAMES`.
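As a sketch, assuming the default `woodpecker` namespace and a pull secret named `regcred` (set `WOODPECKER_BACKEND_K8S_PULL_SECRET_NAMES=regcred` on the agent), the Secret could look like this; the `.dockerconfigjson` value is a placeholder for your base64-encoded registry credentials:

```yaml
apiVersion: v1
kind: Secret
metadata:
  # hypothetical name, listed in WOODPECKER_BACKEND_K8S_PULL_SECRET_NAMES
  name: regcred
  # must match WOODPECKER_BACKEND_K8S_NAMESPACE
  namespace: woodpecker
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-encoded ~/.docker/config.json>
```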
## Job specific configuration
### Resources
The Kubernetes backend also allows specifying requests and limits on a per-step basis, most commonly for CPU and memory.
We recommend adding a `resources` definition to all steps to ensure efficient scheduling.
Here is an example definition with an arbitrary `resources` definition below the `backend_options` section:
```yaml
steps:
- name: 'My kubernetes step'
image: alpine
commands:
- echo "Hello world"
backend_options:
kubernetes:
resources:
requests:
memory: 200Mi
cpu: 100m
limits:
memory: 400Mi
cpu: 1000m
```
You can use [Limit Ranges](https://kubernetes.io/docs/concepts/policy/limit-range/) if you want to set the limits on a per-namespace basis.
### Runtime class
`runtimeClassName` specifies the name of the RuntimeClass which will be used to run this Pod. If no `runtimeClassName` is specified, the default RuntimeHandler will be used.
See the [Kubernetes documentation](https://kubernetes.io/docs/concepts/containers/runtime-class/) for more information on specifying runtime classes.
### Service account
`serviceAccountName` specifies the name of the ServiceAccount which the Pod will mount. This service account must be created externally.
See the [Kubernetes documentation](https://kubernetes.io/docs/concepts/security/service-accounts/) for more information on using service accounts.
### Node selector
`nodeSelector` specifies the labels which are used to select the node on which the job will be executed.
Labels defined here will be appended to a list which already contains `"kubernetes.io/arch"`.
By default, `"kubernetes.io/arch"` is inferred from the agent's platform, so without a manual override, builds will be randomly assigned to the runners and inherit their respective architectures.
To override this, set that label in the `nodeSelector` section of the `backend_options`.
A practical example for this is when running a matrix-build and delegating specific elements of the matrix to run on a specific architecture.
In this case, one must define an arbitrary key in the matrix section of the respective matrix element:
```yaml
matrix:
include:
- NAME: runner1
ARCH: arm64
```
And then overwrite the `nodeSelector` in the `backend_options` section of the step(s) using the name of the respective env var:
```yaml
[...]
backend_options:
kubernetes:
nodeSelector:
kubernetes.io/arch: "${ARCH}"
```
You can use [WOODPECKER_BACKEND_K8S_POD_NODE_SELECTOR](#woodpecker_backend_k8s_pod_node_selector) if you want to set the node selector per agent,
or the [PodNodeSelector](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#podnodeselector) admission controller if you want to set the node selector on a per-namespace basis.
### Tolerations
When you use `nodeSelector` and the node pool is configured with Taints, you need to specify the Tolerations. Tolerations allow the scheduler to schedule Pods with matching taints.
See the [Kubernetes documentation](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) for more information on using tolerations.
Example pipeline configuration:
```yaml
steps:
- name: build
image: golang
commands:
- go get
- go build
- go test
backend_options:
kubernetes:
serviceAccountName: 'my-service-account'
resources:
requests:
memory: 128Mi
cpu: 1000m
limits:
memory: 256Mi
nodeSelector:
beta.kubernetes.io/instance-type: p3.8xlarge
tolerations:
- key: 'key1'
operator: 'Equal'
value: 'value1'
effect: 'NoSchedule'
tolerationSeconds: 3600
```
### Volumes
To mount volumes, a PersistentVolume (PV) and PersistentVolumeClaim (PVC) are needed on the cluster; they can then be referenced in steps via the `volumes` option.
Assuming a PVC named `woodpecker-cache` exists, it can be referenced as follows in a step:
```yaml
steps:
- name: "Restore Cache"
image: meltwater/drone-cache
volumes:
- woodpecker-cache:/woodpecker/src/cache
settings:
mount:
- "woodpecker-cache"
[...]
```
### Security context
Use the following configuration to set the [Security Context](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) for the Pod/container running a given pipeline step:
```yaml
steps:
- name: test
image: alpine
commands:
- echo Hello world
backend_options:
kubernetes:
securityContext:
runAsUser: 999
runAsGroup: 999
privileged: true
[...]
```
Note that the `backend_options.kubernetes.securityContext` object allows you to set both Pod and container level security context options in one object.
By default, the properties will be set at the Pod level. Properties that are only supported on the container level will be set there instead. So, the
configuration shown above will result in something like the following Pod spec:
```yaml
kind: Pod
spec:
securityContext:
runAsUser: 999
runAsGroup: 999
containers:
- name: wp-01hcd83q7be5ymh89k5accn3k6-0-step-0
image: alpine
securityContext:
privileged: true
[...]
```
You can also restrict a container's syscalls with a [seccomp](https://kubernetes.io/docs/tutorials/security/seccomp/) profile
```yaml
backend_options:
kubernetes:
securityContext:
seccompProfile:
type: Localhost
localhostProfile: profiles/audit.json
```
or restrict a container's access to resources by specifying an [AppArmor](https://kubernetes.io/docs/tutorials/security/apparmor/) profile
```yaml
backend_options:
kubernetes:
securityContext:
apparmorProfile:
type: Localhost
localhostProfile: k8s-apparmor-example-deny-write
```
:::note
AppArmor syntax follows [KEP-24](https://github.com/kubernetes/enhancements/blob/fddcbb9cbf3df39ded03bad71228265ac6e5215f/keps/sig-node/24-apparmor/README.md).
:::
### Annotations and labels
You can specify arbitrary [annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/) and [labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) to be set on the Pod definition for a given workflow step using the following configuration:
```yaml
backend_options:
kubernetes:
annotations:
workflow-group: alpha
io.kubernetes.cri-o.Devices: /dev/fuse
labels:
environment: ci
app.kubernetes.io/name: builder
```
In order to enable this configuration, you need to set the appropriate environment variables to `true` on the Woodpecker agent:
[WOODPECKER_BACKEND_K8S_POD_ANNOTATIONS_ALLOW_FROM_STEP](#woodpecker_backend_k8s_pod_annotations_allow_from_step) and/or [WOODPECKER_BACKEND_K8S_POD_LABELS_ALLOW_FROM_STEP](#woodpecker_backend_k8s_pod_labels_allow_from_step).
## Tips and tricks
### CRI-O
CRI-O users currently need to configure the workspace for all workflows in order for them to run correctly. Add the following at the beginning of your configuration:
```yaml
workspace:
base: '/woodpecker'
path: '/'
```
See [this issue](https://github.com/woodpecker-ci/woodpecker/issues/2510) for more details.
### `KUBERNETES_SERVICE_HOST` environment variable
Like the env vars below used for configuration, this can be set in the environment configuration of the agent. It configures the address of the Kubernetes API server to connect to.
If running the agent within Kubernetes, this will already be set and you don't have to add it manually.
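As a sketch, when the agent runs outside the cluster, the address could be provided alongside the other agent environment variables (the host below is a placeholder):

```yaml
# agent environment (sketch, placeholder value)
environment:
  - KUBERNETES_SERVICE_HOST=k8s-api.example.com
```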
## Configuration
These env vars can be set in the `env:` sections of the agent.
### `WOODPECKER_BACKEND_K8S_NAMESPACE`
> Default: `woodpecker`
The namespace to create worker Pods in.
### `WOODPECKER_BACKEND_K8S_VOLUME_SIZE`
> Default: `10G`
The volume size of the pipeline volume.
### `WOODPECKER_BACKEND_K8S_STORAGE_CLASS`
> Default: empty
The storage class to use for the pipeline volume.
### `WOODPECKER_BACKEND_K8S_STORAGE_RWX`
> Default: `true`
Determines if `RWX` should be used for the pipeline volume's [access mode](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes). If false, `RWO` is used instead.
### `WOODPECKER_BACKEND_K8S_POD_LABELS`
> Default: empty
Additional labels to apply to worker Pods. Must be a YAML object, e.g. `{"example.com/test-label":"test-value"}`.
### `WOODPECKER_BACKEND_K8S_POD_LABELS_ALLOW_FROM_STEP`
> Default: `false`
Determines if additional Pod labels can be defined from a step's backend options.
### `WOODPECKER_BACKEND_K8S_POD_ANNOTATIONS`
> Default: empty
Additional annotations to apply to worker Pods. Must be a YAML object, e.g. `{"example.com/test-annotation":"test-value"}`.
### `WOODPECKER_BACKEND_K8S_POD_ANNOTATIONS_ALLOW_FROM_STEP`
> Default: `false`
Determines if Pod annotations can be defined from a step's backend options.
### `WOODPECKER_BACKEND_K8S_POD_NODE_SELECTOR`
> Default: empty
Additional node selector to apply to worker pods. Must be a YAML object, e.g. `{"topology.kubernetes.io/region":"eu-central-1"}`.
### `WOODPECKER_BACKEND_K8S_SECCTX_NONROOT`
> Default: `false`
Determines if containers must be required to run as non-root users.
### `WOODPECKER_BACKEND_K8S_PULL_SECRET_NAMES`
> Default: empty
Secret names to pull images from private repositories. See how to [Pull an Image from a Private Registry](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/).

View file

@ -0,0 +1,23 @@
# Custom backends
If none of our backends fits your use case, you can write your own.
To do so, implement the interface `"go.woodpecker-ci.org/woodpecker/v2/pipeline/backend/types".Backend` and
build a custom agent using your backend with this `main.go`:
```go
package main
import (
"go.woodpecker-ci.org/woodpecker/v2/cmd/agent/core"
backendTypes "go.woodpecker-ci.org/woodpecker/v2/pipeline/backend/types"
)
func main() {
core.RunAgent([]backendTypes.Backend{
yourBackend,
})
}
```
It is also possible to register multiple backends; you can select between them with [`WOODPECKER_BACKEND`](../15-agent-config.md#woodpecker_backend).
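For example, assuming your backend's `Name()` method returns `mybackend`, the agent could be told to use it like this (a sketch):

```yaml
# agent environment (sketch)
environment:
  - WOODPECKER_BACKEND=mybackend
```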

View file

@ -78,7 +78,7 @@ Update your configuration to mount your certificate and key:
Update your configuration to provide the paths of your certificate and key:
```yaml title="docker-compose.yaml"
```diff title="docker-compose.yaml"
version: '3'
services:

View file

@ -31,7 +31,7 @@ You must configure Apache to set `X-Forwarded-Proto` when using https.
## Nginx
This guide provides a basic overview for installing Woodpecker server behind the Nginx web-server. For more advanced configuration options please consult the official Nginx [documentation](https://www.nginx.com/resources/admin-guide/).
This guide provides a basic overview for installing Woodpecker server behind the Nginx web-server. For more advanced configuration options please consult the official Nginx [documentation](https://docs.nginx.com/nginx/admin-guide).
Example configuration:

View file

@ -0,0 +1,594 @@
# CLI
# NAME
woodpecker-cli - A new cli application
# SYNOPSIS
woodpecker-cli
```
[--config|-c]=[value]
[--disable-update-check]
[--log-file]=[value]
[--log-level]=[value]
[--nocolor]
[--pretty]
[--server|-s]=[value]
[--token|-t]=[value]
```
# DESCRIPTION
Woodpecker command line utility
**Usage**:
```
woodpecker-cli [GLOBAL OPTIONS] command [COMMAND OPTIONS] [ARGUMENTS...]
```
# GLOBAL OPTIONS
**--config, -c**="": path to config file
**--disable-update-check**: disable update check
**--log-file**="": Output destination for logs. 'stdout' and 'stderr' can be used as special keywords. (default: "stderr")
**--log-level**="": set logging level (default: "info")
**--nocolor**: disable colored debug output, only has effect if pretty output is set too
**--pretty**: enable pretty-printed debug output
**--server, -s**="": server address
**--token, -t**="": server auth token
# COMMANDS
## pipeline
manage pipelines
### ls
show pipeline history
**--branch**="": branch filter
**--event**="": event filter
**--format**="": format output (default: "\x1b[33mPipeline #{{ .Number }} \x1b[0m\nStatus: {{ .Status }}\nEvent: {{ .Event }}\nCommit: {{ .Commit }}\nBranch: {{ .Branch }}\nRef: {{ .Ref }}\nAuthor: {{ .Author }} {{ if .Email }}<{{.Email}}>{{ end }}\nMessage: {{ .Message }}\n")
**--limit**="": limit the list size (default: 25)
**--status**="": status filter
### last
show latest pipeline details
**--branch**="": branch name (default: "main")
**--format**="": format output (default: "Number: {{ .Number }}\nStatus: {{ .Status }}\nEvent: {{ .Event }}\nCommit: {{ .Commit }}\nBranch: {{ .Branch }}\nRef: {{ .Ref }}\nMessage: {{ .Message }}\nAuthor: {{ .Author }}\n")
### logs
show pipeline logs
### info
show pipeline details
**--format**="": format output (default: "Number: {{ .Number }}\nStatus: {{ .Status }}\nEvent: {{ .Event }}\nCommit: {{ .Commit }}\nBranch: {{ .Branch }}\nRef: {{ .Ref }}\nMessage: {{ .Message }}\nAuthor: {{ .Author }}\n")
### stop
stop a pipeline
### start
start a pipeline
**--param, -p**="": custom parameters to be injected into the step environment. Format: KEY=value
### approve
approve a pipeline
### decline
decline a pipeline
### queue
show pipeline queue
**--format**="": format output (default: "\x1b[33m{{ .FullName }} #{{ .Number }} \x1b[0m\nStatus: {{ .Status }}\nEvent: {{ .Event }}\nCommit: {{ .Commit }}\nBranch: {{ .Branch }}\nRef: {{ .Ref }}\nAuthor: {{ .Author }} {{ if .Email }}<{{.Email}}>{{ end }}\nMessage: {{ .Message }}\n")
### ps
show pipeline steps
**--format**="": format output (default: "\x1b[33mStep #{{ .PID }} \x1b[0m\nStep: {{ .Name }}\nState: {{ .State }}\n")
### create
create new pipeline
**--branch**="": branch to create pipeline from
**--format**="": format output (default: "\x1b[33mPipeline #{{ .Number }} \x1b[0m\nStatus: {{ .Status }}\nEvent: {{ .Event }}\nCommit: {{ .Commit }}\nBranch: {{ .Branch }}\nRef: {{ .Ref }}\nAuthor: {{ .Author }} {{ if .Email }}<{{.Email}}>{{ end }}\nMessage: {{ .Message }}\n")
**--var**="": key=value
## log
manage logs
### purge
purge a log
## deploy
deploy code
**--branch**="": branch filter (default: "main")
**--event**="": event filter (default: "push")
**--format**="": format output (default: "Number: {{ .Number }}\nStatus: {{ .Status }}\nCommit: {{ .Commit }}\nBranch: {{ .Branch }}\nRef: {{ .Ref }}\nMessage: {{ .Message }}\nAuthor: {{ .Author }}\nTarget: {{ .Deploy }}\n")
**--param, -p**="": custom parameters to be injected into the step environment. Format: KEY=value
**--status**="": status filter (default: "success")
## exec
execute a local pipeline
**--backend-docker-api-version**="": the version of the API to reach, leave empty for latest.
**--backend-docker-cert**="": path to load the TLS certificates for connecting to docker server
**--backend-docker-host**="": path to docker socket or url to the docker server
**--backend-docker-ipv6**: backend docker enable IPV6
**--backend-docker-network**="": backend docker network
**--backend-docker-tls-verify**: enable or disable TLS verification for connecting to docker server
**--backend-docker-volumes**="": backend docker volumes (comma separated)
**--backend-engine**="": backend engine to run pipelines on (default: "auto-detect")
**--backend-http-proxy**="": if set, pass the environment variable down as "HTTP_PROXY" to steps
**--backend-https-proxy**="": if set, pass the environment variable down as "HTTPS_PROXY" to steps
**--backend-k8s-namespace**="": backend k8s namespace (default: "woodpecker")
**--backend-k8s-pod-annotations**="": backend k8s additional worker pod annotations
**--backend-k8s-pod-image-pull-secret-names**="": backend k8s pull secret names for private registries (default: "regcred")
**--backend-k8s-pod-labels**="": backend k8s additional worker pod labels
**--backend-k8s-secctx-nonroot**: `run as non root` Kubernetes security context option
**--backend-k8s-storage-class**="": backend k8s storage class
**--backend-k8s-storage-rwx**: backend k8s storage access mode, should ReadWriteMany (RWX) instead of ReadWriteOnce (RWO) be used? (default: true)
**--backend-k8s-volume-size**="": backend k8s volume size (default 10G) (default: "10G")
**--backend-local-temp-dir**="": set a different temp dir to clone workflows into (default: "/tmp")
**--backend-no-proxy**="": if set, pass the environment variable down as "NO_PROXY" to steps
**--commit-author-avatar**="":
**--commit-author-email**="":
**--commit-author-name**="":
**--commit-branch**="":
**--commit-message**="":
**--commit-ref**="":
**--commit-refspec**="":
**--commit-sha**="":
**--env**="":
**--forge-type**="":
**--forge-url**="":
**--local**: run from local directory
**--netrc-machine**="":
**--netrc-password**="":
**--netrc-username**="":
**--network**="": external networks
**--pipeline-created**="": (default: 0)
**--pipeline-event**="": (default: "manual")
**--pipeline-finished**="": (default: 0)
**--pipeline-number**="": (default: 0)
**--pipeline-parent**="": (default: 0)
**--pipeline-started**="": (default: 0)
**--pipeline-status**="":
**--pipeline-target**="":
**--pipeline-url**="":
**--prev-commit-author-avatar**="":
**--prev-commit-author-email**="":
**--prev-commit-author-name**="":
**--prev-commit-branch**="":
**--prev-commit-message**="":
**--prev-commit-ref**="":
**--prev-commit-refspec**="":
**--prev-commit-sha**="":
**--prev-pipeline-created**="": (default: 0)
**--prev-pipeline-event**="":
**--prev-pipeline-finished**="": (default: 0)
**--prev-pipeline-number**="": (default: 0)
**--prev-pipeline-started**="": (default: 0)
**--prev-pipeline-status**="":
**--prev-pipeline-url**="":
**--privileged**="": privileged plugins (default: "plugins/docker", "plugins/gcr", "plugins/ecr", "woodpeckerci/plugin-docker-buildx", "codeberg.org/woodpecker-plugins/docker-buildx")
**--repo**="": full repo name
**--repo-clone-ssh-url**="":
**--repo-clone-url**="":
**--repo-path**="": path to local repository
**--repo-private**="":
**--repo-remote-id**="":
**--repo-trusted**:
**--repo-url**="":
**--step-name**="": (default: 0)
**--system-name**="": (default: "woodpecker")
**--system-platform**="":
**--system-url**="": (default: "https://github.com/woodpecker-ci/woodpecker")
**--timeout**="": pipeline timeout (default: 1h0m0s)
**--volumes**="": pipeline volumes
**--workflow-name**="": (default: 0)
**--workflow-number**="": (default: 0)
**--workspace-base**="": (default: "/woodpecker")
**--workspace-path**="": (default: "src")
## info
show information about the current user
## registry
manage registries
### add
adds a registry
**--hostname**="": registry hostname (default: "docker.io")
**--password**="": registry password
**--repository, --repo**="": repository id or full-name (e.g. 134 or octocat/hello-world)
**--username**="": registry username
### rm
remove a registry
**--hostname**="": registry hostname (default: "docker.io")
**--repository, --repo**="": repository id or full-name (e.g. 134 or octocat/hello-world)
### update
update a registry
**--hostname**="": registry hostname (default: "docker.io")
**--password**="": registry password
**--repository, --repo**="": repository id or full-name (e.g. 134 or octocat/hello-world)
**--username**="": registry username
### info
display registry info
**--hostname**="": registry hostname (default: "docker.io")
**--repository, --repo**="": repository id or full-name (e.g. 134 or octocat/hello-world)
### ls
list registries
**--repository, --repo**="": repository id or full-name (e.g. 134 or octocat/hello-world)
## secret
manage secrets
### add
adds a secret
**--event**="": secret limited to these events
**--global**: global secret
**--image**="": secret limited to these images
**--name**="": secret name
**--organization, --org**="": organization id or full-name (e.g. 123 or octocat)
**--repository, --repo**="": repository id or full-name (e.g. 134 or octocat/hello-world)
**--value**="": secret value
### rm
remove a secret
**--global**: global secret
**--name**="": secret name
**--organization, --org**="": organization id or full-name (e.g. 123 or octocat)
**--repository, --repo**="": repository id or full-name (e.g. 134 or octocat/hello-world)
### update
update a secret
**--event**="": secret limited to these events
**--global**: global secret
**--image**="": secret limited to these images
**--name**="": secret name
**--organization, --org**="": organization id or full-name (e.g. 123 or octocat)
**--repository, --repo**="": repository id or full-name (e.g. 134 or octocat/hello-world)
**--value**="": secret value
### info
display secret info
**--global**: global secret
**--name**="": secret name
**--organization, --org**="": organization id or full-name (e.g. 123 or octocat)
**--repository, --repo**="": repository id or full-name (e.g. 134 or octocat/hello-world)
### ls
list secrets
**--global**: global secret
**--organization, --org**="": organization id or full-name (e.g. 123 or octocat)
**--repository, --repo**="": repository id or full-name (e.g. 134 or octocat/hello-world)
## repo
manage repositories
### ls
list all repos
**--format**="": format output (default: "\x1b[33m{{ .FullName }}\x1b[0m (id: {{ .ID }}, forgeRemoteID: {{ .ForgeRemoteID }})")
**--org**="": filter by organization
### info
show repository details
**--format**="": format output (default: "Owner: {{ .Owner }}\nRepo: {{ .Name }}\nURL: {{ .ForgeURL }}\nConfig path: {{ .Config }}\nVisibility: {{ .Visibility }}\nPrivate: {{ .IsSCMPrivate }}\nTrusted: {{ .IsTrusted }}\nGated: {{ .IsGated }}\nClone url: {{ .Clone }}\nAllow pull-requests: {{ .AllowPullRequests }}\n")
### add
add a repository
### update
update a repository
**--config**="": repository configuration path (e.g. .woodpecker.yml)
**--gated**: repository is gated
**--pipeline-counter**="": repository starting pipeline number (default: 0)
**--timeout**="": repository timeout (default: 0s)
**--trusted**: repository is trusted
**--unsafe**: validate updating the pipeline-counter is unsafe
**--visibility**="": repository visibility
### rm
remove a repository
### repair
repair repository webhooks
### chown
assume ownership of a repository
### sync
synchronize the repository list
**--format**="": format output (default: "\x1b[33m{{ .FullName }}\x1b[0m (id: {{ .ID }}, forgeRemoteID: {{ .ForgeRemoteID }})")
## user
manage users
### ls
list all users
**--format**="": format output (default: "{{ .Login }}")
### info
show user details
**--format**="": format output (default: "User: {{ .Login }}\nEmail: {{ .Email }}")
### add
adds a user
### rm
remove a user
## lint
lint a pipeline configuration file
## log-level
get the logging level of the server, or set it with [level]
## cron
manage cron jobs
### add
add a cron job
**--branch**="": cron branch
**--name**="": cron name
**--repository, --repo**="": repository id or full-name (e.g. 134 or octocat/hello-world)
**--schedule**="": cron schedule
### rm
remove a cron job
**--id**="": cron id
**--repository, --repo**="": repository id or full-name (e.g. 134 or octocat/hello-world)
### update
update a cron job
**--branch**="": cron branch
**--id**="": cron id
**--name**="": cron name
**--repository, --repo**="": repository id or full-name (e.g. 134 or octocat/hello-world)
**--schedule**="": cron schedule
### info
display info about a cron job
**--id**="": cron id
**--repository, --repo**="": repository id or full-name (e.g. 134 or octocat/hello-world)
### ls
list cron jobs
**--repository, --repo**="": repository id or full-name (e.g. 134 or octocat/hello-world)
## setup
setup the woodpecker-cli for the first time
**--server-url**="": The URL of the woodpecker server
**--token**="": The token to authenticate with the woodpecker server
## update
update the woodpecker-cli to the latest version
**--force**: force update even if the latest version is already installed

View file

@ -0,0 +1,18 @@
# About
Woodpecker was originally forked from Drone 0.8, as the Drone CI license was changed after the 0.8 release from Apache 2.0 to a proprietary license. Woodpecker is based on this last freely available version.
## History
Woodpecker was originally forked by [@laszlocph](https://github.com/laszlocph) in 2019.
A few important time points:
- [`2fbaa56`](https://github.com/woodpecker-ci/woodpecker/commit/2fbaa56eee0f4be7a3ca4be03dbd00c1bf5d1274) is the first commit of the fork, made on Apr 3, 2019.
- The first release [v0.8.91](https://github.com/woodpecker-ci/woodpecker/releases/tag/v0.8.91) was published on Apr 6, 2019.
- On Aug 27, 2019, the project was renamed to "Woodpecker" ([`630c383`](https://github.com/woodpecker-ci/woodpecker/commit/630c383181b10c4ec375e500c812c4b76b3c52b8)).
- The first release under the name "Woodpecker" was published on Sep 9, 2019 ([v0.8.104](https://github.com/woodpecker-ci/woodpecker/releases/tag/v0.8.104)).
## Differences to Drone
Woodpecker is community-focused software that will stay free and open source forever, while Drone is managed by [Harness](https://harness.io/) and published under the [Polyform Small Business](https://polyformproject.org/licenses/small-business/1.0.0/) license.

View file

@ -2,11 +2,25 @@
Some versions need some changes to the server configuration or the pipeline configuration files.
<!--
## 3.0.0
- Update all webhooks by pressing the "Repair all" button in the admin settings as the webhook token claims have changed
-->
## `next`
- Deprecated `steps.[name].group` in favor of `steps.[name].depends_on` (see [workflow syntax](./20-usage/20-workflow-syntax.md#depends_on) to learn how to set dependencies; a minimal example follows after this list)
- Removed `WOODPECKER_ROOT_PATH` and `WOODPECKER_ROOT_URL` config variables. Use `WOODPECKER_HOST` with a path instead
- Pipelines without a config file will now be skipped instead of failing
- Deprecated `includes` and `excludes` support from **event** filter
- Deprecated uppercasing all secret env vars, instead, the value of the `secrets` property is used. [Read more](./20-usage/40-secrets.md#use-secrets-in-commands)
- Deprecated alternative names for secrets, use `environment` with `from_secret`
- Deprecated slice definition for env vars
- Deprecated `environment` filter, use `when.evaluate`
- Use `WOODPECKER_EXPERT_FORGE_OAUTH_HOST` instead of `WOODPECKER_DEV_GITEA_OAUTH_URL` or `WOODPECKER_DEV_OAUTH_HOST`
- Deprecated `WOODPECKER_WEBHOOK_HOST` in favor of `WOODPECKER_EXPERT_WEBHOOK_HOST`
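For the `depends_on` migration, a minimal sketch (step and image names are placeholders) could look like this:

```yaml title=".woodpecker.yaml"
steps:
  - name: lint
    image: golang
    commands:
      - go vet ./...
  - name: build
    image: golang
    # replaces the former `group` mechanism: run after "lint" has finished
    depends_on:
      - lint
    commands:
      - go build ./...
```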
## 2.0.0
@ -62,7 +76,7 @@ Some versions need some changes to the server configuration or the pipeline conf
Only projects created after updating will have an empty value by default. Existing projects will stick to the current pipeline path which is `.drone.yml` in most cases.
Read more about it at the [Project Settings](./20-usage/71-project-settings.md#pipeline-path)
Read more about it at the [Project Settings](./20-usage/75-project-settings.md#pipeline-path)
- From version `0.15.0` ongoing there will be three types of docker images: `latest`, `next` and `x.x.x` with an alpine variant for each type like `latest-alpine`.
If you used `latest` before to try pre-release features you should switch to `next` after this release.

View file

@ -1,6 +1,6 @@
# Awesome Woodpecker
A curated list of awesome things related to Woodpecker-CI.
A curated list of awesome things related to Woodpecker CI.
If you have some missing resources, please feel free to [open a pull-request](https://github.com/woodpecker-ci/woodpecker/edit/main/docs/docs/92-awesome.md) and add them.
@ -14,7 +14,7 @@ If you have some missing resources, please feel free to [open a pull-request](ht
## Projects using Woodpecker
- [Woodpecker-CI](https://github.com/woodpecker-ci/woodpecker/tree/main/.woodpecker) itself
- [Woodpecker CI](https://github.com/woodpecker-ci/woodpecker/tree/main/.woodpecker) itself
- [All official plugins](https://github.com/woodpecker-ci?q=plugin&type=all)
- [dessalines/thumb-key](https://github.com/dessalines/thumb-key/blob/main/.woodpecker.yml) - Android Jetpack compose linting and building
- [Vieter](https://git.rustybever.be/vieter-v/vieter) - Archlinux/Pacman repository server & automated package build system
@ -24,12 +24,12 @@ If you have some missing resources, please feel free to [open a pull-request](ht
## Tools
- [Convert Drone CI pipelines to Woodpecker CI](https://codeberg.org/lafriks/woodpecker-pipeline-transform)
- [Ansible NAS](https://github.com/davestephens/ansible-nas/) - a homelab Ansible playbook that can set up Woodpecker-CI and Gitea
- [Ansible NAS](https://github.com/davestephens/ansible-nas/) - a homelab Ansible playbook that can set up Woodpecker CI and Gitea
- [picus](https://github.com/windsource/picus) - Picus connects to a Woodpecker CI server and creates an agent in the cloud when there are pending workflows.
- [Hetzner cloud](https://www.hetzner.com/cloud) based [Woodpecker compatible autoscaler](https://git.ljoonal.xyz/ljoonal/hetzner-ci-autoscaler) - Creates and destroys VPS instances based on the count of pending & running jobs.
- [woodpecker-lint](https://git.schmidl.dev/schtobia/woodpecker-lint) - A repository for linting a woodpecker config file via pre-commit hook
- [Grafana Dashboard](https://github.com/Janik-Haag/woodpecker-grafana-dashboard) - A dashboard visualizing information exposed by the woodpecker prometheus endpoint.
- [woodpecker-autoscaler](https://github.com/Lerentis/woodpecker-autoscaler) - Yet another woodpecker autoscaler currently targeting [Hetzner cloud](https://www.hetzner.com/cloud) that works in parallel to other autoscaler implementations.
- [woodpecker-lint](https://git.schmidl.dev/schtobia/woodpecker-lint) - A repository for linting a Woodpecker config file via pre-commit hook
- [Grafana Dashboard](https://github.com/Janik-Haag/woodpecker-grafana-dashboard) - A dashboard visualizing information exposed by the Woodpecker prometheus endpoint.
- [woodpecker-autoscaler](https://github.com/Lerentis/woodpecker-autoscaler) - Yet another Woodpecker autoscaler currently targeting [Hetzner cloud](https://www.hetzner.com/cloud) that works in parallel to other autoscaler implementations.
## Configuration Services
@ -50,6 +50,11 @@ If you have some missing resources, please feel free to [open a pull-request](ht
- [Locally Cached Nix CI with Woodpecker](https://blog.kotatsu.dev/posts/2023-04-21-woodpecker-nix-caching/)
- [How to run Cypress auto-tests on Woodpecker CI and report results to Slack](https://devforth.io/blog/how-to-run-cypress-auto-tests-on-woodpecker-ci-and-report-results-to-slack/)
- [Quest For CICD - WoodpeckerCI](https://omaramin.me/posts/woodpecker/)
- [Getting started with Woodpecker CI](https://systeemkabouter.eu/getting-started-with-woodpecker-ci.html)
- [Installing gitea and woodpecker using binary packages](https://neelex.com/2023/03/26/Installing-gitea-using-binary-packages/)
- [Deploying mdbook to codeberg pages using woodpecker CI](https://www.markpitblado.me/blog/deploying-mdbook-to-codeberg-pages-using-woodpecker-ci/)
- [Deploy a Fly app with Woodpecker CI](https://joeroe.io/2024/01/09/deploy-fly-woodpecker-ci.html)
- [Ansible - using Woodpecker as an alternative to Semaphore](https://pat-s.me/ansible-using-woodpecker-as-an-alternative-to-semaphore/)
## Videos

View file

@ -1,12 +1,5 @@
# Getting started
## Core ideas
- A (e.g. pipeline) configuration should never be [turing complete](https://en.wikipedia.org/wiki/Turing_completeness) (We have agents to exec things 🙂).
- If possible follow the [KISS principle](https://en.wikipedia.org/wiki/KISS_principle).
- What is used most should be default.
- Keep different topics separated, so you can write plugins, port new ideas ... more easily, see [Architecture](./05-architecture.md).
You can develop on your local computer by following the [steps below](#preparation-for-local-development) or you can start with a fully prepared online setup using [Gitpod](https://github.com/gitpod-io/gitpod) and [Gitea](https://github.com/go-gitea/gitea).
## Gitpod
@ -89,7 +82,7 @@ WOODPECKER_HEALTHCHECK=false
### Setup OAuth
Create an OAuth app for your forge as described in the [forges documentation](../30-administration/11-forges/10-overview.md). If you set `WOODPECKER_DEV_OAUTH_HOST=http://localhost:8000` you can use that address with the path as explained for the specific forge to login without the need for a public address. For example for GitHub you would use `http://localhost:8000/authorize` as authorization callback URL.
Create an OAuth app for your forge as described in the [forges documentation](../30-administration/11-forges/11-overview.md). If you set `WOODPECKER_DEV_OAUTH_HOST=http://localhost:8000` you can use that address with the path as explained for the specific forge to login without the need for a public address. For example for GitHub you would use `http://localhost:8000/authorize` as authorization callback URL.
## Developing with VS Code

View file

@ -0,0 +1,26 @@
# Core ideas
- A configuration (e.g. of a pipeline) should never be [Turing complete](https://en.wikipedia.org/wiki/Turing_completeness) (we have agents to exec things 🙂).
- If possible, follow the [KISS principle](https://en.wikipedia.org/wiki/KISS_principle).
- What is used most often should be default.
- Keep different topics separated, so you can write plugins, port new ideas ... more easily, see [Architecture](./05-architecture.md).
## Addons and extensions
If you are wondering whether your contribution will be accepted to be merged in the Woodpecker core, or whether it's better to write an
[addon forge](../30-administration/11-forges/100-addon.md), [extension](../30-administration/100-external-configuration-api.md) or an
[external custom backend](../30-administration/22-backends/50-custom-backends.md), please check these points:
- Is your change very specific to your setup and unlikely to be used by anyone else?
- Does your change violate the [guidelines](#guidelines)?
Both should be false when you open a pull request to get your change into the core repository.
### Guidelines
#### Forges
A new forge must support these features:
- OAuth2
- Webhooks

View file

@ -34,8 +34,8 @@
| `server/forge/**` | forge lib for server to connect and handle forge specific stuff | `shared`, `server/model` |
| `server/router/**` | handle requests to REST API (and all middleware) and serve UI and WebUI config | `shared`, `../api`, `../model`, `../forge`, `../store`, `../web` |
| `server/store/**` | handle database | `server/model` |
| `server/shared/**` | TODO: move and split [#974](https://github.com/woodpecker-ci/woodpecker/issues/974) |
| `server/web/**` | server SPA |
| `server/shared/**` | TODO: move and split [#974](https://github.com/woodpecker-ci/woodpecker/issues/974) | |
| `server/web/**` | server SPA | |
- `../` = `server/`

View file

@ -3,7 +3,6 @@
## ORM
Woodpecker uses [Xorm](https://xorm.io/) as ORM for the database connection.
You can find its documentation at [gobook.io/read/gitea.com/xorm](https://gobook.io/read/gitea.com/xorm/manual-en-US/).
## Add a new migration

View file

@ -46,7 +46,7 @@ These guidelines aim to have consistent wording in the swagger doc:
- `@Param Authorization` is almost always present; there are just a few unprotected endpoints
There are many examples in the `server/api` package, which you can use as a blueprint.
More enhanced information you can find here <https://github.com/swaggo/swag/blob/main/README.md#declarative-comments-format>
You can find more detailed information here: <https://github.com/swaggo/swag/blob/master/README.md#declarative-comments-format>
### Manual code generation

View file

Before

Width:  |  Height:  |  Size: 7.5 KiB

After

Width:  |  Height:  |  Size: 7.5 KiB

View file

Before

Width:  |  Height:  |  Size: 17 KiB

After

Width:  |  Height:  |  Size: 17 KiB

View file

Before

Width:  |  Height:  |  Size: 11 KiB

After

Width:  |  Height:  |  Size: 11 KiB

View file

Before

Width:  |  Height:  |  Size: 70 KiB

After

Width:  |  Height:  |  Size: 70 KiB

View file

@ -1 +1 @@
["2.5", "2.4", "2.3", "1.0"]
["2.6", "2.5", "2.4", "1.0"]