When triggering a deployment event on an existing pipeline, there is no
way to get the tag used to trigger the parent pipeline, even if that
parent pipeline was itself triggered by a tag event.
In our company's current CI/CD setup with Drone, we use the tag event to
trigger a Kaniko image build step (using the git tag as the image tag),
and the deployment/promotion step to deploy that image in our cluster
using the tag reference. This is the only point blocking us from
completely switching to Woodpecker and getting rid of Drone...
What's done:
- changed the metadata environ() method to populate the CI_COMMIT_TAG env
var if the commit ref starts with `refs/tags/` (like it's done in Drone),
independently of the EventTag event type (rough sketch below).
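A minimal sketch of the idea in Go; this is not the actual environ() code, and the function name is illustrative:
```go
package metadata

import "strings"

// sketch only: populate CI_COMMIT_TAG whenever the ref points at a tag,
// regardless of whether the triggering event is EventTag
func commitTagEnv(ref string, env map[string]string) {
	if tag, ok := strings.CutPrefix(ref, "refs/tags/"); ok {
		env["CI_COMMIT_TAG"] = tag
	}
}
```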
Please let me know if I'm wrong; I will happily contribute to this nice
project.
---------
Co-authored-by: Christian Gapany <christian.gapany@netplus.pro>
Co-authored-by: Lauris BH <lauris@nix.lv>
I've tried setting resources for a service and saw the linter warning
that this is not supported, though the pipeline was successful and the
resources were actually set on the pod. So I assume it shouldn't be a
linter issue.
I'm also not sure if my change is correct, I only hope it is.
## Some Context
A pipeline example (I've removed steps that are not directly related):
```yaml
---
steps:
  test:
    name: Test charts
    image: quay.io/helmpack/chart-testing
    environment:
      - DOCKER_HOST=tcp://docker:2375
    commands:
      - export PATH=$PWD/.bin:$PATH
      - apk update && apk add docker
      - kind create cluster --config kind.yaml
      - sed -i -E -e 's/localhost|0\.0\.0\.0/docker/g' ~/.kube/config
      - git fetch origin
      - |
        if [ -e .changed ]; then
          ct install --target-branch main --chart-dirs .
          ct install --target-branch main --chart-dirs . --upgrade
        fi

services:
  docker:
    image: docker:dind
    commands: dockerd -H tcp://0.0.0.0:2375 --tls=false
    privileged: true
    ports:
      - 2375
    backend_options:
      kubernetes:
        resources:
          requests:
            memory: 400Mi
            cpu: 100m
          limits:
            memory: 400Mi
            cpu: 100m
```
Pod description:
```
Containers:
  wp-01hhczdknafj81jv80gzjbgt93-0-services-0:
    Limits:
      cpu:     100m
      memory:  400Mi
    Requests:
      cpu:     100m
      memory:  400Mi
```
Warning in the Woodpecker UI:
```
[linter] woodpecker: services.docker: Additional property backend_options is not allowed
```
https://go.dev/doc/modules/release-workflow#breaking
Fixes https://github.com/woodpecker-ci/woodpecker/issues/2913, fixes #2654
```
runephilosof@fedora:~/code/platform-woodpecker/woodpecker-repo-configurator (master)$ go get go.woodpecker-ci.org/woodpecker@v2.0.0
go: go.woodpecker-ci.org/woodpecker@v2.0.0: invalid version: module contains a go.mod file, so module path must match major version ("go.woodpecker-ci.org/woodpecker/v2")
```
---------
Co-authored-by: qwerty287 <80460567+qwerty287@users.noreply.github.com>
Add additional string matching to determine when a container is not found
or not running when invoked via the Podman compatibility socket.
Co-authored-by: qwerty287 <80460567+qwerty287@users.noreply.github.com>
Revert #2180 and #2480 so we can release v2.0 without breaking changes
in pipeline config.
After merging #2476 we should apply these changes to a new v2.
After merging this, we should be ready for the 2.0 release.
---------
Co-authored-by: Anbraten <anton@ju60.de>
Implement this fix but with an additional field on workflows so the
workflow name does not change.
closes #1840, closes #713
---------
Co-authored-by: 6543 <6543@obermui.de>
If you run woodpecker-agent on Windows and connect it to a Docker
daemon, two different platforms are possible, as you can switch the
daemon between Linux and Windows mode and vice versa.
---
*Sponsored by Kithara Software GmbH*
cmd currently shows the full prompt and drops the exact error level.
This change hides the prompt and lets cmd exit with the error level
reported by the command.
---
*Sponsored by Kithara Software GmbH*
As these secrets have too low entropy, they cannot be valid secrets and
e.g. only make the logs unreadable.
Just add a secret with value `a` to a repo and run a pipeline ...
---
*Sponsored by Kithara Software GmbH*
For normal POSIX shells we have to add the `-e` option ... but as there
are more shells out there, we have to handle these edge cases on a
case-by-case basis.
A switch statement in Woodpecker should be fine, as there is only a
finite number of shells used in production (see the sketch below).
also closes #2612
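A minimal sketch of what such a switch could look like; the shell names here are illustrative, not the actual list used by Woodpecker:
```go
package shell

// sketch: pick the error-exit flag per shell, case by case
func errexitFlags(shell string) []string {
	switch shell {
	case "sh", "bash", "ash", "zsh":
		// POSIX-style shells: abort on the first failing command
		return []string{"-e"}
	case "fish":
		// fish has no -e option; failures must be handled in the script itself
		return nil
	default:
		return nil
	}
}
```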
---
*Sponsored by Kithara Software GmbH*
The SSH backend is, similar to the Gogs and Coding forges, completely
unmaintained and seems unused (it is likely broken, but we didn't get any
reports).
Instead, you should run the agent directly on the SSH machine with the
`local` backend.
- refactor pipeline parsing
- do not parse the pipeline multiple times to perform filter checks, do
this once and perform checks on the result directly
- code deduplication
- refactor forge token refreshing
- move refreshing to a helper func to reduce code
---------
Co-authored-by: Anbraten <anton@ju60.de>
When in local mode, `getWorkflowIDFromStep` can handle normal steps with
a name like `wp_01h2a6qggwz68zekrkbwqq9rny_0_step_0`.
However, it will fail on the clone step (unless `skip_clone: true`) with an
`invalid step name` error.
```
invalid step name wp_01h2a2ebppp43bwjdfdsyj1m6m_0_clone
```
This patch handles either `_stage_` or `_clone` as the separator that
the local backend can use to extract the workflowID.
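Roughly, the extraction now works like this (a sketch only, not the exact implementation; the function name is illustrative):
```go
package local

import (
	"fmt"
	"strings"
)

// sketch: accept either separator and take the prefix as the workflow ID
func getWorkflowIDFromStepName(name string) (string, error) {
	for _, sep := range []string{"_stage_", "_clone"} {
		if before, _, found := strings.Cut(name, sep); found {
			return before, nil
		}
	}
	return "", fmt.Errorf("invalid step name %s", name)
}
```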
closes #1801, closes #1815, closes #1144, closes #983, closes #557, closes #1827
regression of #1791
# TODO
- [x] adjust log model
- [x] add migration for logs
- [x] send log line via grpc using step-id
- [x] save log-line to db
- [x] stream log-lines to UI
- [x] use less structs for log-data
- [x] make web UI work
- [x] display logs loaded from db
- [x] display streaming logs
- [ ] ~~make migration work~~ -> dedicated pull (#1828)
# TESTED
- [x] new logs are stored in database
- [x] log retrieval via cli (of new logs) works
- [x] log streaming works (tested via curl & webui)
- [x] log retrieval via web (of new logs) works
---------
Co-authored-by: 6543 <6543@obermui.de>
This adds a simple implementation of requests/limits for individual
steps. There is no validation of what the resource actually is beyond
checking that it can successfully be converted to a Quantity, so it can
be used for things other than just memory/CPU.
close #1809
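A sketch of the parsing idea, assuming the usual k8s.io/apimachinery types; this is not the actual Woodpecker code:
```go
package kubernetes

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// sketch: the only validation is that each value parses as a Quantity;
// the resource name itself is passed through untouched
func toResourceList(in map[string]string) (v1.ResourceList, error) {
	out := v1.ResourceList{}
	for name, value := range in {
		q, err := resource.ParseQuantity(value)
		if err != nil {
			return nil, err
		}
		out[v1.ResourceName(name)] = q
	}
	return out, nil
}
```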
Do not sync repos with the forge if the repo does not need to be in the DB.
In the DB, only repos that were active once or repos that are currently
active are stored. When trying to enable new repos, the repos list is
fetched from the forge instead and displayed directly. In addition to
this, the forge func `Perm` was removed and is now merged with `Repo`.
Solves a TODO on RepoBatch.
---------
Co-authored-by: Anbraten <anton@ju60.de>
- Kubernetes v1.26 on VKE causes an error when creating the persistent volume
claim because of uppercase characters in the name field
This patch is trivial, just to get it working - happy to implement it
differently.
The error in question:
```
The PersistentVolumeClaim "wp-01G1131R63FWBSPMA4ZAZTKLE-0-clone-0" is invalid: metadata.name: Invalid value: "wp-01G1131R63FWBSPMA4ZAZTKLE-0-clone-0": a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
```
close #1114
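The idea boils down to something like this (a sketch, not the actual patch):
```go
package kubernetes

import "strings"

// sketch: Kubernetes object names must be lowercase RFC 1123 subdomains,
// so lowercase the generated PVC name before creating the claim
func volumeName(name string) string {
	return strings.ToLower(name)
}
```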
As long as the `VersionResponse` type is not changed the check will
fail/pass gracefully
example output:
```
{"level":"error","error":"GRPC version mismatch","time":"2023-03-19T19:49:09+01:00","message":"Server version next-6923e7ab does report grpc version 2 but we only understand 1"}
GRPC version mismatch
```
Closes https://github.com/woodpecker-ci/woodpecker/issues/1617
`woodpecker exec` auto-detects the backend by iterating over a map
of backends. However, since Go 1 the runtime randomizes map iteration
order, so a random backend might be chosen on each execution.
This PR changes the auto-detection to iterate over a backend list with a
predefined priority: `docker`, `local`, `ssh`, `kubernetes`.
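A hedged sketch of the change, assuming a minimal backend interface (the real types differ):
```go
package backend

// Backend is a simplified stand-in for the real backend interface.
type Backend interface {
	Name() string
	IsAvailable() bool
}

// sketch: iterating over a fixed slice gives a deterministic detection
// order, unlike ranging over a map
func detectBackend(byName map[string]Backend) Backend {
	for _, name := range []string{"docker", "local", "ssh", "kubernetes"} {
		if b, ok := byName[name]; ok && b.IsAvailable() {
			return b
		}
	}
	return nil
}
```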
---------
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
Closes https://github.com/woodpecker-ci/woodpecker/issues/1615
The error described in
https://github.com/woodpecker-ci/woodpecker/issues/1615 is happening
because the `Tail` method of the docker backend closes the instance of
`io.ReadCloser` it returns in a `defer` function. As a result, anything
that tries to read the data returned by the `Tail` method will eventually
attempt to read from a closed reader and get an error:
2171212c5a/pipeline/backend/docker/docker.go (L229)
The fix is to simply not close the returned reader and let the consumer of
the `Tail` method do it. The good thing is that `Tail` is used in only one
place, and the reader is correctly closed there:
2171212c5a/pipeline/pipeline.go (L231-L237)
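For illustration, the consumer side then looks roughly like this (a sketch with a minimal hypothetical interface, not the real pipeline code):
```go
package pipeline

import (
	"context"
	"io"
)

// logTailer is a hypothetical stand-in for the backend's Tail method.
type logTailer interface {
	Tail(ctx context.Context) (io.ReadCloser, error)
}

// sketch: since Tail no longer closes the reader, the caller owns it and
// closes it after the logs have been copied
func copyLogs(ctx context.Context, t logTailer, dst io.Writer) error {
	rc, err := t.Tail(ctx)
	if err != nil {
		return err
	}
	defer rc.Close()
	_, err = io.Copy(dst, rc)
	return err
}
```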
Example of `woodpecker exec` output using pipeline from
https://github.com/woodpecker-ci/woodpecker/issues/1615 with the fix:
```
woodpecker exec .woodpecker.yaml
[step1:L0:0s] + echo step1
[step1:L1:0s] step1
[step2:L0:0s] + echo step2
[step2:L1:0s] step2
```
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
Co-authored-by: Lauris BH <lauris@nix.lv>
Try to fix #1495.
It's very hard to reproduce, and the only way to recover when it gets into
this state is a woodpecker agent restart.
This also fixes the problem of step mounts and
`WOODPECKER_BACKEND_DOCKER_VOLUMES` conflicting.
Since "success" and "failure" are the only two possible values, and
"success" is considered to be included by default, the existing code can
also be simplified a little.
This has the side effect of ignoring the "exclude" part of the
constraint completely. I put it in the tests just to make sure the
workaround in
https://github.com/woodpecker-ci/woodpecker/issues/1181#issuecomment-1347253585
continues to work as expected, but couldn't think of any legitimate use
cases for it.
Fixes #1181
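For reference, the simplified check can be thought of along these lines (a sketch only, not the actual constraint code):
```go
package constraint

// sketch: "success" and "failure" are the only valid values, and "success"
// is treated as included when nothing is configured
func statusMatch(include []string, prevFailed bool) bool {
	if len(include) == 0 {
		return !prevFailed
	}
	for _, s := range include {
		if (s == "failure" && prevFailed) || (s == "success" && !prevFailed) {
			return true
		}
	}
	return false
}
```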
Provide up-to-date Drone compatibility environment variables to each step execution.
closes #1416
Before a step is executed, some environment variables are updated.
This ensures that the updated environment variables are copied to their corresponding `DRONE_` environment variables.
A side effect is that the `DRONE_` environment variables are no longer available in the metadata, which should not harm as they are not used inside Woodpecker.
closes #1181, closes #834
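Conceptually, the copying looks like this; the mapping below is illustrative and not exhaustive:
```go
package metadata

// droneCompat maps a few DRONE_ variables to their CI_ sources
// (illustrative subset only).
var droneCompat = map[string]string{
	"DRONE_COMMIT": "CI_COMMIT_SHA",
	"DRONE_BRANCH": "CI_COMMIT_BRANCH",
	"DRONE_TAG":    "CI_COMMIT_TAG",
}

// sketch: right before a step runs, mirror the current CI_ values into
// their DRONE_ counterparts so the compatibility variables stay up to date
func addDroneCompat(env map[string]string) {
	for droneKey, ciKey := range droneCompat {
		if v, ok := env[ciKey]; ok {
			env[droneKey] = v
		}
	}
}
```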
Adds `ignore_failure` to pipeline steps. When it's set to true,
if the step fails the following steps continue to execute as if no failure had occurred.
---
failure enums idea:
* fail (default) = if other steps run in parallel, wait for them and
then let workflow fail
* cancel = if other steps run in parallel, kill them
* ignore = we mark the step as failed but it won't have any impact
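A sketch of how such an enum could look in Go; the names are illustrative, this only captures the idea above:
```go
package pipeline

type onFailure string

const (
	failureFail   onFailure = "fail"   // default: wait for parallel steps, then fail the workflow
	failureCancel onFailure = "cancel" // kill steps still running in parallel
	failureIgnore onFailure = "ignore" // mark the step as failed without any further impact
)
```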
At the moment we compile a script that we can pipe in as a single command;
this is because of the constraints the Docker backend gives us.
So we move it into the Docker backend and eventually get rid of it altogether.
breakout from #934
With this, when new events are added, you don't have to worry that the pipeline will behave differently than it does now.
Co-authored-by: Anbraten <anton@ju60.de>
use yaml aliases (https://yaml.org/spec/1.2.2/#3222-anchors-and-aliases) to have pipeline `variables`
Co-authored-by: qwerty287 <80460567+qwerty287@users.noreply.github.com>
Co-authored-by: Anbraten <anton@ju60.de>
/bin/env was used to resolve a command name against PATH and pass
additional environment variables.
All of this can also be achieved using functionality already provided by
go's exec lib, which will then internally pass the appropriate arguments
to e.g. execve.
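A minimal sketch of what that looks like with os/exec (not the actual backend code):
```go
package local

import (
	"context"
	"os"
	"os/exec"
)

// sketch: exec.Command already resolves the command against PATH, and
// cmd.Env passes the extra variables, so no /bin/env wrapper is needed
func runStep(ctx context.Context, command string, args, extraEnv []string) error {
	cmd := exec.CommandContext(ctx, command, args...)
	cmd.Env = append(os.Environ(), extraEnv...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}
```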
A non-zero exit code signifies a pipeline failure, but is not a fatal error in the agent.
Since exec reports this as exec.ExitError, this has to be handled explicitly.
This also fixes logs not being shown on build errors.
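Handled explicitly, that looks roughly like this (a sketch, not the exact agent code):
```go
package local

import (
	"errors"
	"os/exec"
)

// sketch: a non-zero exit code is a pipeline failure, not an agent error
func stepExitCode(err error) (code int, fatal error) {
	if err == nil {
		return 0, nil
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return exitErr.ExitCode(), nil // the step failed; the agent keeps running
	}
	return 0, err // anything else really is an agent-side error
}
```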
- Support for pipeline/containers as list
- Support for container name in logs (step.Name)
Co-authored-by: Zav Shotan <zshotan@bloomberg.net>
Co-authored-by: 6543 <6543@obermui.de>
Officially support labels for pipelines and agents to improve pipeline picking.
* add pipeline labels
* update, improve docs and add migration
* update proto file
---
closes #304 & #860
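The matching idea itself is simple; a hedged sketch (not the actual scheduler code):
```go
package queue

// sketch: an agent only picks up tasks whose required labels it satisfies
func labelsMatch(required, agent map[string]string) bool {
	for k, v := range required {
		if agent[k] != v {
			return false
		}
	}
	return true
}
```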
When executing a backend step, in case of failure of the specific step, the run is marked as errored but the step error is missing.
Added:
1. Log for the backend error (without trace)
2. Mark the step as errored with exit code 126 (Could not execute).
Co-authored-by: Zav Shotan <zshotan@bloomberg.net>
Co-authored-by: Anton Bracke <anton@ju60.de>
Since /tmp is writable by everybody, a user could precreate
/tmp/woodpecker with 777 permissions, allowing them to modify the
pipeline while it is being run, or to prevent the pipeline from running.
And since the os.MkdirAll error code wasn't checked, the same attacker
could have precreated the directory where the pipeline is executed to
mess with the run, allowing code execution under the UID of the
agent (which has access to the token used to communicate with the server,
which means an attacker could inject a fake agent, steal credentials, etc.).
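One way to harden this, sketched with os.MkdirTemp as an illustration (not necessarily what the patch does):
```go
package local

import (
	"fmt"
	"os"
)

// sketch: create a fresh, agent-owned directory (mode 0700) for the run and
// treat any error as fatal instead of ignoring it
func createWorkspace(base string) (string, error) {
	dir, err := os.MkdirTemp(base, "woodpecker-")
	if err != nil {
		return "", fmt.Errorf("could not create workspace in %s: %w", base, err)
	}
	return dir, nil
}
```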
* Do not filter on linux/amd64 per default & add tests
Tasks with no platform set would otherwise not run on agents with different OS/ARCH combos
Signed-off-by: Lukas Bachschwell <lukas@lbsfilm.at>
Co-authored-by: 6543 <6543@obermui.de>
hotfix #717
This comes from the agent being inactive (not sending or requesting any data) if there are no pipelines waiting for it to execute. gRPC seems to only allow 2 pings without calling an actual endpoint before closing the connection. I think this will be indirectly solved the moment we implement something like #536.
https://github.com/grpc/grpc/blob/master/doc/keepalive.md
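For reference, client-side keepalive pings would look roughly like this; the values are illustrative, and the server also has to permit them via its keepalive enforcement policy:
```go
package agent

import (
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/keepalive"
)

// sketch: keep an otherwise idle connection open with periodic pings
func dialOptions() []grpc.DialOption {
	return []grpc.DialOption{
		grpc.WithKeepaliveParams(keepalive.ClientParameters{
			Time:                10 * time.Second, // ping after 10s of inactivity
			Timeout:             5 * time.Second,  // wait 5s for the ping ack
			PermitWithoutStream: true,             // ping even with no active RPCs
		}),
	}
}
```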
Co-authored-by: Anbraten <anton@ju60.de>
Refactor
- use constants for strings
- more tests
- move constraint code into own package
Enhance
- all constraints use doublestar (glob pattern matching) now
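For illustration, a constraint match via doublestar could look like this (a sketch, assuming the github.com/bmatcuk/doublestar/v4 package):
```go
package constraint

import "github.com/bmatcuk/doublestar/v4"

// sketch: constraint values are treated as glob patterns, so e.g. "**"
// matches across "/" in branch or path constraints
func constraintMatch(pattern, value string) bool {
	ok, err := doublestar.Match(pattern, value)
	return err == nil && ok
}
```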
Co-authored-by: Anbraten <anton@ju60.de>