# cgroups

[![Build Status](https://github.com/containerd/cgroups/workflows/CI/badge.svg)](https://github.com/containerd/cgroups/actions?query=workflow%3ACI)
[![codecov](https://codecov.io/gh/containerd/cgroups/branch/main/graph/badge.svg)](https://codecov.io/gh/containerd/cgroups)
[![GoDoc](https://godoc.org/github.com/containerd/cgroups?status.svg)](https://godoc.org/github.com/containerd/cgroups)
[![Go Report Card](https://goreportcard.com/badge/github.com/containerd/cgroups)](https://goreportcard.com/report/github.com/containerd/cgroups)

Go package for creating, managing, inspecting, and destroying cgroups.

The resources format for settings on the cgroup uses the OCI runtime-spec found
[here](https://github.com/opencontainers/runtime-spec).

## Examples
### Create a new cgroup

This creates a new cgroup using a static path for all subsystems under `/test`.

* /sys/fs/cgroup/cpu/test
* /sys/fs/cgroup/memory/test
* etc....

It uses a single hierarchy, specifies CPU shares as a resource constraint, and
uses the v1 implementation of cgroups.

```go
shares := uint64(100)
control, err := cgroups.New(cgroups.V1, cgroups.StaticPath("/test"), &specs.LinuxResources{
    CPU: &specs.LinuxCPU{
        Shares: &shares,
    },
})
defer control.Delete()
```
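The mapping from a static path to the per-subsystem v1 mount points can be sketched with a stdlib-only helper (`subsystemPath` is a hypothetical illustration, not part of the cgroups API):

```go
package main

import (
	"fmt"
	"path/filepath"
)

// subsystemPath shows how a static cgroup path such as "/test" maps
// onto each per-subsystem v1 hierarchy under the cgroup mount root.
func subsystemPath(root, subsystem, staticPath string) string {
	return filepath.Join(root, subsystem, staticPath)
}

func main() {
	for _, s := range []string{"cpu", "memory"} {
		fmt.Println(subsystemPath("/sys/fs/cgroup", s, "/test"))
	}
}
```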
### Create with systemd slice support

```go
shares := uint64(100)
control, err := cgroups.New(cgroups.Systemd, cgroups.Slice("system.slice", "runc-test"), &specs.LinuxResources{
    CPU: &specs.LinuxCPU{
        Shares: &shares,
    },
})
```
### Load an existing cgroup

```go
control, err = cgroups.Load(cgroups.V1, cgroups.StaticPath("/test"))
```
### Add a process to the cgroup

```go
if err := control.Add(cgroups.Process{Pid: 1234}); err != nil {
}
```
### Update the cgroup

To update the resources applied in the cgroup:

```go
shares = uint64(200)
if err := control.Update(&specs.LinuxResources{
    CPU: &specs.LinuxCPU{
        Shares: &shares,
    },
}); err != nil {
}
```
### Freeze and Thaw the cgroup

```go
if err := control.Freeze(); err != nil {
}
if err := control.Thaw(); err != nil {
}
```
### List all processes in the cgroup or recursively

```go
processes, err := control.Processes(cgroups.Devices, recursive)
```
### Get Stats on the cgroup

```go
stats, err := control.Stat()
```

By adding `cgroups.IgnoreNotExist`, all non-existent files are ignored, e.g. swap memory stats when swap is not enabled:

```go
stats, err := control.Stat(cgroups.IgnoreNotExist)
```
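The effect of `IgnoreNotExist` can be sketched with a stdlib-only helper (`readStat` is hypothetical, not the library's implementation): a missing control file is read as zero rather than failing the whole stats call.

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"strconv"
	"strings"
)

// readStat reads a single-value cgroup stat file. When ignoreNotExist
// is set, a missing file (e.g. memory.memsw.* with swap disabled) is
// reported as zero instead of an error.
func readStat(path string, ignoreNotExist bool) (uint64, error) {
	b, err := os.ReadFile(path)
	if err != nil {
		if ignoreNotExist && errors.Is(err, os.ErrNotExist) {
			return 0, nil
		}
		return 0, err
	}
	return strconv.ParseUint(strings.TrimSpace(string(b)), 10, 64)
}

func main() {
	// A path that does not exist: tolerated when ignoreNotExist is true.
	v, err := readStat("/nonexistent/memory.memsw.usage_in_bytes", true)
	fmt.Println(v, err)
}
```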
### Move process across cgroups

This allows you to take processes from one cgroup and move them to another.

```go
err := control.MoveTo(destination)
```
### Create subcgroup

```go
subCgroup, err := control.New("child", resources)
```
### Registering for memory events

This allows you to get notified via an eventfd for v1 memory cgroup events.

```go
event := cgroups.MemoryThresholdEvent(50 * 1024 * 1024, false)
efd, err := control.RegisterMemoryEvent(event)
```
```go
event := cgroups.MemoryPressureEvent(cgroups.MediumPressure, cgroups.DefaultMode)
efd, err := control.RegisterMemoryEvent(event)
```
```go
efd, err := control.OOMEventFD()
// or by using RegisterMemoryEvent
event := cgroups.OOMEvent()
efd, err := control.RegisterMemoryEvent(event)
```
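Under the hood, v1 memory event registration writes a line of the form `<eventfd> <fd of watched control file> <args>` to the cgroup's `cgroup.event_control` file; for a usage-threshold event the args field is the threshold in bytes. A minimal sketch of building that line (`eventControlLine` is a hypothetical helper, not part of the library):

```go
package main

import "fmt"

// eventControlLine builds the registration line written to
// cgroup.event_control: "<eventfd> <watched fd> <args>". For a memory
// threshold event, the watched fd is an open memory.usage_in_bytes and
// the args field is the threshold in bytes.
func eventControlLine(eventFd, controlFd uintptr, threshold uint64) string {
	return fmt.Sprintf("%d %d %d", eventFd, controlFd, threshold)
}

func main() {
	// e.g. a 50 MiB threshold, with illustrative fd numbers.
	fmt.Println(eventControlLine(4, 5, 50*1024*1024))
}
```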
### Attention

Static paths should not include the `/sys/fs/cgroup/` prefix; they should start with your own cgroup name.

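A defensive sketch of that rule, assuming a hypothetical `normalizeStaticPath` helper (not part of the library) that strips an accidental mount prefix before the path is handed to `cgroups.StaticPath`:

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeStaticPath strips an accidental /sys/fs/cgroup prefix and
// ensures the remaining static path is rooted with a leading slash.
func normalizeStaticPath(p string) string {
	p = strings.TrimPrefix(p, "/sys/fs/cgroup")
	if !strings.HasPrefix(p, "/") {
		p = "/" + p
	}
	return p
}

func main() {
	fmt.Println(normalizeStaticPath("/sys/fs/cgroup/test")) // prefix stripped
	fmt.Println(normalizeStaticPath("/test"))               // already correct
}
```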
## Project details
Cgroups is a containerd sub-project, licensed under the [Apache 2.0 license](./LICENSE).

As a containerd sub-project, you will find the:

* [Project governance](https://github.com/containerd/project/blob/main/GOVERNANCE.md),
* [Maintainers](https://github.com/containerd/project/blob/main/MAINTAINERS),
* and [Contributing guidelines](https://github.com/containerd/project/blob/main/CONTRIBUTING.md)

information in our [`containerd/project`](https://github.com/containerd/project) repository.