In local mode, `getWorkflowIDFromStep` handles normal steps with
names like `wp_01h2a6qggwz68zekrkbwqq9rny_0_step_0`.
However, it fails on the clone step (unless `skip_clone: true` is set) with an
`invalid step name` error:
```
invalid step name wp_01h2a2ebppp43bwjdfdsyj1m6m_0_clone
```
This patch accepts either `_step_` or `_clone` as the separator that
the local backend uses to extract the workflow ID.
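For illustration, a minimal sketch of that parsing (the real method's signature differs; names here are simplified):
```go
package main

import (
	"fmt"
	"strings"
)

// getWorkflowIDFromStep extracts the workflow ID prefix from a generated
// step name, accepting both regular steps ("..._step_<n>") and the clone
// step ("..._clone"). A sketch only, not the actual implementation.
func getWorkflowIDFromStep(name string) (string, error) {
	sep := "_step_"
	if strings.Contains(name, "_clone") {
		sep = "_clone"
	}
	parts := strings.Split(name, sep)
	if len(parts) != 2 {
		return "", fmt.Errorf("invalid step name %s", name)
	}
	return parts[0], nil
}

func main() {
	for _, name := range []string{
		"wp_01h2a6qggwz68zekrkbwqq9rny_0_step_0",
		"wp_01h2a2ebppp43bwjdfdsyj1m6m_0_clone",
	} {
		fmt.Println(getWorkflowIDFromStep(name))
	}
}
```
With this, clone step names like the one in the error above resolve to the same workflow ID as regular steps.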
closes #1801
closes #1815
closes #1144
closes #983
closes #557
closes #1827
regression of #1791
# TODO
- [x] adjust log model (see the sketch after this list)
- [x] add migration for logs
- [x] send log line via grpc using step-id
- [x] save log-line to db
- [x] stream log-lines to UI
- [x] use less structs for log-data
- [x] make web UI work
- [x] display logs loaded from db
- [x] display streaming logs
- [ ] ~~make migration work~~ -> moved to a dedicated pull request (#1828)
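For orientation, a hedged sketch of what a step-keyed log entry could look like (field names are assumptions, not the merged schema):
```go
package model

// LogEntry is a sketch of a per-step log line as it could be stored in
// the database and streamed over gRPC; field names are assumptions.
type LogEntry struct {
	ID     int64  // primary key
	StepID int64  // log lines are keyed by step ID, not by the whole pipeline
	Time   int64  // seconds since the step started
	Line   int    // line number within the step's output
	Data   []byte // raw log line content
}
```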
# TESTED
- [x] new logs are stored in database
- [x] log retrieval via cli (of new logs) works
- [x] log streaming works (tested via curl & webui)
- [x] log retrieval via web (of new logs) works
---------
Co-authored-by: 6543 <6543@obermui.de>
This adds a simple implementation of requests/limits for individual
steps. There is no validation of what the resource actually is beyond
checking that it can be successfully converted to a Quantity, so it can
be used for resources other than just memory/CPU.
closes #1809
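For illustration, a hedged sketch of that validation step, using `resource.ParseQuantity` from k8s.io/apimachinery (the function and its surroundings are assumptions, not the merged code):
```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// toResourceList converts user-supplied key/value pairs into a Kubernetes
// ResourceList. The only validation is that each value parses as a
// Quantity, so arbitrary resource names (e.g. extended resources) work too.
func toResourceList(in map[string]string) (v1.ResourceList, error) {
	out := v1.ResourceList{}
	for name, value := range in {
		q, err := resource.ParseQuantity(value)
		if err != nil {
			return nil, fmt.Errorf("resource %s: invalid quantity %q: %w", name, value, err)
		}
		out[v1.ResourceName(name)] = q
	}
	return out, nil
}

func main() {
	rl, err := toResourceList(map[string]string{"memory": "128Mi", "cpu": "250m"})
	fmt.Println(rl, err)
}
```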
- Kubernetes v1.26 on VKE fails when creating the persistent volume
claim because of uppercase characters in the name field.

This patch is deliberately trivial, just enough to get it working; happy to
implement it differently.
The error in question:
```
The PersistentVolumeClaim "wp-01G1131R63FWBSPMA4ZAZTKLE-0-clone-0" is invalid: metadata.name: Invalid value: "wp-01G1131R63FWBSPMA4ZAZTKLE-0-clone-0": a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
```
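A hedged sketch of the fix direction, lowercasing the generated name before it is used for the PVC (`pvcName` is a made-up helper):
```go
package main

import (
	"fmt"
	"strings"
)

// pvcName lowercases the generated volume name so it satisfies the
// RFC 1123 subdomain rules Kubernetes enforces for metadata.name.
func pvcName(generated string) string {
	return strings.ToLower(generated)
}

func main() {
	// "wp-01G1131R63FWBSPMA4ZAZTKLE-0-clone-0" -> "wp-01g1131r63fwbspma4zaztkle-0-clone-0"
	fmt.Println(pvcName("wp-01G1131R63FWBSPMA4ZAZTKLE-0-clone-0"))
}
```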
Closes https://github.com/woodpecker-ci/woodpecker/issues/1617
`woodpecker exec` auto-detects the backend by iterating over a map
of backends. However, since Go 1 the runtime has randomized map iteration
order, so a random backend might be chosen on each execution.
This PR changes the auto-detection to iterate over a list of backends with
a predefined priority: `docker`, `local`, `ssh`, `kubernetes`.
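A minimal sketch of the idea (the interface and names are illustrative, not the actual code):
```go
package main

import "fmt"

// Backend is a stand-in for the backend interface; IsAvailable reports
// whether the backend can run on the current machine.
type Backend interface {
	Name() string
	IsAvailable() bool
}

// findBackend picks the first usable backend from a slice with a fixed
// priority order, instead of ranging over a map whose iteration order
// the Go runtime randomizes.
func findBackend(backends []Backend) (Backend, error) {
	for _, b := range backends { // ordered: docker, local, ssh, kubernetes
		if b.IsAvailable() {
			return b, nil
		}
	}
	return nil, fmt.Errorf("no available backend found")
}

type stub struct {
	name      string
	available bool
}

func (s stub) Name() string      { return s.name }
func (s stub) IsAvailable() bool { return s.available }

func main() {
	b, err := findBackend([]Backend{
		stub{"docker", false}, stub{"local", true}, stub{"ssh", false}, stub{"kubernetes", false},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("selected:", b.Name()) // always "local" here, never random
}
```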
---------
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
Closes https://github.com/woodpecker-ci/woodpecker/issues/1615
The error described in
https://github.com/woodpecker-ci/woodpecker/issues/1615 happens
because the `Tail` method of the docker backend closes the instance of
`io.ReadCloser` it returns in a `defer` function. As a result, anything
that tries to read the data returned by `Tail` will eventually attempt
to read from a closed reader and get an error:
2171212c5a/pipeline/backend/docker/docker.go (L229)
The fix is simply to not close the returned reader and let the consumer
of `Tail` close it. The good thing is that `Tail` is used in only one
place, and the reader is correctly closed there:
2171212c5a/pipeline/pipeline.go (L231-L237)
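A hedged sketch of the ownership rule (illustrative names, not the actual backend code):
```go
package main

import (
	"fmt"
	"io"
	"strings"
)

// tail returns the step's log stream. It must NOT close the reader in a
// defer; ownership passes to the caller.
func tail() (io.ReadCloser, error) {
	return io.NopCloser(strings.NewReader("+ echo step1\nstep1\n")), nil
}

func main() {
	rc, err := tail()
	if err != nil {
		panic(err)
	}
	defer rc.Close() // the consumer closes the stream when done reading

	out, _ := io.ReadAll(rc)
	fmt.Print(string(out))
}
```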
Example of `woodpecker exec` output using pipeline from
https://github.com/woodpecker-ci/woodpecker/issues/1615 with the fix:
```
woodpecker exec .woodpecker.yaml
[step1:L0:0s] + echo step1
[step1:L1:0s] step1
[step2:L0:0s] + echo step2
[step2:L1:0s] step2
```
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
Co-authored-by: Lauris BH <lauris@nix.lv>
Tries to fix #1495.
It is very hard to reproduce, and the only way to recover once an agent
gets into this state is to restart it.
In any case, this fixes the problem of step mounts conflicting with
`WOODPECKER_BACKEND_DOCKER_VOLUMES`.
closes #1181
closes #834
Adds `ignore_failure` to pipeline steps. When it is set to true and the
step fails, the following steps continue to execute as if no failure had
occurred.
---
failure enum ideas (a rough sketch follows the list):
* fail (default) = if other steps run in parallel, wait for them and
then let the workflow fail
* cancel = if other steps run in parallel, kill them
* ignore = we mark the step as failed, but it won't have any further impact
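As a rough illustration only, since these are just ideas, such an enum could look like this in Go (all names hypothetical):
```go
package pipeline

// StepFailureMode sketches the proposed enum; names are illustrative,
// not an implemented API.
type StepFailureMode string

const (
	// FailureFail: if other steps run in parallel, wait for them, then
	// let the workflow fail (the default).
	FailureFail StepFailureMode = "fail"
	// FailureCancel: if other steps run in parallel, kill them.
	FailureCancel StepFailureMode = "cancel"
	// FailureIgnore: mark the step as failed without any further impact.
	FailureIgnore StepFailureMode = "ignore"
)
```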
At the moment we compile a script that we pipe in as a single command;
this is because of the constraints the docker backend gives us.
So we move it into the docker backend and will eventually get rid of it
altogether.
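For context, a minimal hedged sketch of that script compilation (not the actual generator; `compileScript` and `shellEscape` are made-up helpers):
```go
package main

import (
	"fmt"
	"strings"
)

// compileScript joins a step's commands into one shell script so the
// whole step can be passed to the container as a single command.
func compileScript(commands []string) string {
	var b strings.Builder
	b.WriteString("set -e\n") // abort on the first failing command
	for _, c := range commands {
		b.WriteString("echo + " + shellEscape(c) + "\n") // echo the command, as the log output does
		b.WriteString(c + "\n")
	}
	return b.String()
}

// shellEscape wraps a command in POSIX single quotes; shown for brevity.
func shellEscape(s string) string {
	return "'" + strings.ReplaceAll(s, "'", `'\''`) + "'"
}

func main() {
	fmt.Print(compileScript([]string{"go build", "go test ./..."}))
}
```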