Commit 01699eaaab
Proposal to fix https://github.com/woodpecker-ci/woodpecker/issues/2253

We have observed several possibly related issues on a Kubernetes backend:

1. Agents behave erratically when dealing with certain log payloads. A common symptom is that steps producing a large volume of logs cause other steps to be stuck "pending" forever.
2. Agents use far more CPU than expected; we often see 200-300 millicores of CPU per workflow per agent (as reported on #2253).
3. We commonly see agents emitting thousands of error lines about parsing logs, often with very close timestamps, which may explain issues 1 and 2 (as reported on #2253; the first sketch below shows why these marshaling failures are permanent):

```
{"level":"error","error":"rpc error: code = Internal desc = grpc: error while marshaling: string field contains invalid UTF-8","time":"2024-04-05T21:32:25Z","caller":"/src/agent/rpc/client_grpc.go:335","message":"grpc error: log(): code: Internal"}
{"level":"error","error":"rpc error: code = Internal desc = grpc: error while marshaling: string field contains invalid UTF-8","time":"2024-04-05T21:32:25Z","caller":"/src/agent/rpc/client_grpc.go:335","message":"grpc error: log(): code: Internal"}
{"level":"error","error":"rpc error: code = Internal desc = grpc: error while marshaling: string field contains invalid UTF-8","time":"2024-04-05T21:32:25Z","caller":"/src/agent/rpc/client_grpc.go:335","message":"grpc error: log(): code: Internal"}
```

4. We have also observed that agents sometimes drop out of the worker queue, also as reported on #2253.

Since the logs point to `client_grpc.go:335`, this pull request fixes the issue by:

1. Removing `codes.Internal` from the retryable gRPC statuses, so agent gRPC calls that fail with `codes.Internal` are no longer retried. There is no universal agreement on which gRPC codes should be retried, but `Internal` is rarely, if ever, among them.
2. Adding a 30-second cap to any retries. Currently, the exponential retries have a maximum elapsed time of _15 minutes_. I assume such long retries are needed so agents resume operation when the webserver restarts, but this is likely the cause of the large CPU increase: agents can be stuck issuing thousands of requests over a long window of time. The first change alone should be enough to solve this issue, but capping the retry window seems like a good safeguard against similar problems in the future. (The second sketch below illustrates both changes.)
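The marshaling errors above are deterministic rather than transient: proto3 requires `string` fields to be valid UTF-8, so a step that emits binary or mis-encoded output will fail to marshal on every attempt, and retrying the call can never succeed. A minimal sketch of one way to guard against this; `sanitizeLine` is a hypothetical helper, not part of this PR:

```go
package logsketch

import (
	"strings"
	"unicode/utf8"
)

// sanitizeLine shows one way to make a raw log chunk safe for a proto3
// string field. Without something like this, binary step output makes
// marshaling fail on every attempt, no matter how often the call is retried.
func sanitizeLine(raw []byte) string {
	if utf8.Valid(raw) {
		return string(raw)
	}
	// Replace invalid byte sequences with the Unicode replacement character.
	return strings.ToValidUTF8(string(raw), "\uFFFD")
}
```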
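And a minimal sketch of the two fixes together, assuming the agent retries gRPC calls with `cenkalti/backoff/v4` (whose default `MaxElapsedTime` of 15 minutes matches the window described above). The set of retryable codes here is illustrative, not the project's actual list:

```go
package rpcsketch

import (
	"time"

	"github.com/cenkalti/backoff/v4"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// isRetryable reports whether a failed gRPC call is worth retrying.
// codes.Internal is deliberately absent: errors such as "string field
// contains invalid UTF-8" are permanent, and retrying them only burns CPU.
func isRetryable(err error) bool {
	switch status.Code(err) {
	case codes.Unavailable, codes.DeadlineExceeded, codes.ResourceExhausted:
		return true
	default:
		return false
	}
}

// retryCall runs call with exponential backoff, giving up after 30 seconds
// instead of the backoff library's 15-minute default.
func retryCall(call func() error) error {
	b := backoff.NewExponentialBackOff()
	b.MaxElapsedTime = 30 * time.Second // library default: 15 * time.Minute

	return backoff.Retry(func() error {
		if err := call(); err != nil {
			if !isRetryable(err) {
				// Permanent errors stop the backoff loop immediately.
				return backoff.Permanent(err)
			}
			return err
		}
		return nil
	}, b)
}
```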
Related files in `agent/rpc/`:

- auth_client_grpc.go
- auth_interceptor.go
- client_grpc.go