This is the initial step towards migrating our docker images setup
to something closer to, and eventually based on, freedesktop/ci-templates [1].
The idea is that jobs always run from the registry in your fork. If the
image sha/id matches the one from the upstream registry, it is copied
over; otherwise a new one is built, pushed and tested.
Only the fedora job is changed for now, while testing.
[1]: https://gitlab.freedesktop.org/freedesktop/ci-templates
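For illustration, the reuse-or-rebuild flow amounts to roughly the following. This is only a hypothetical sketch, not the actual CI script; the registry paths, the tag scheme and the reliance on skopeo/docker are assumptions:

```
#!/usr/bin/env python3
# Hypothetical sketch of the "reuse or rebuild" logic described above.
# Image paths and the tagging scheme are assumptions, not the real layout.
import hashlib
import pathlib
import subprocess

UPSTREAM = "registry.freedesktop.org/gstreamer/gst-ci/amd64/fedora"   # assumed path
FORK = "registry.freedesktop.org/someuser/gst-ci/amd64/fedora"        # assumed path

# Tag the image with a checksum of its inputs, so identical Dockerfiles
# map to identical tags in every fork.
tag = hashlib.sha256(
    pathlib.Path("docker/fedora/Dockerfile").read_bytes()
).hexdigest()[:12]


def exists(image):
    """True if the image manifest can be fetched from the registry."""
    return subprocess.run(
        ["skopeo", "inspect", f"docker://{image}"],
        capture_output=True,
    ).returncode == 0


if exists(f"{UPSTREAM}:{tag}"):
    # Same inputs as upstream: just copy the image into the fork's registry.
    subprocess.run(
        ["skopeo", "copy", f"docker://{UPSTREAM}:{tag}", f"docker://{FORK}:{tag}"],
        check=True,
    )
else:
    # Inputs changed: build a new image, push it and let the jobs test it.
    subprocess.run(["docker", "build", "-t", f"{FORK}:{tag}", "docker/fedora"], check=True)
    subprocess.run(["docker", "push", f"{FORK}:{tag}"], check=True)
```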
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-ci/-/merge_requests/308>
In all our other builds, we use the clone_manifest_ref script
to fetch the revision of gst-build that we discover in the manifest.
For the windows job this was apparently missed, but it didn't cause
any issues until now because it only affected gst-build MRs.
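Roughly, what the script does for a single module is something like the sketch below; the manifest attribute names are made up for illustration and the real clone_manifest_ref script handles more cases:

```
#!/usr/bin/env python3
# Hypothetical sketch of the clone_manifest_ref idea: read the pinned
# revision of a project from the manifest and check out exactly that commit.
import subprocess
import sys
import xml.etree.ElementTree as ET

manifest_path, project, dest = sys.argv[1:4]   # e.g. manifest.xml gst-build gst-build

# The manifest pins every module, e.g. (illustrative layout, not the real schema):
#   <project name="gst-build" fetch-uri="https://..." revision="<sha>"/>
node = next(p for p in ET.parse(manifest_path).iter("project")
            if p.get("name") == project)

subprocess.run(["git", "clone", node.get("fetch-uri"), dest], check=True)
subprocess.run(["git", "-C", dest, "checkout", node.get("revision")], check=True)
```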
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-ci/-/merge_requests/296>
Previously we were optimizing for cpu time, so we were building
gst-build once and then exporting the result to be used by the test jobs.
However, this meant that we were uploading 200mb (previously 600mb)
of zipped artifacts and then re-downloading them for each test job.
This caused significant cloud egress costs, since the runners
aren't hosted on the same cloud as the storage/artifacts instance.
Instead, from now on we rebuild gst-build in each test job. The
rebuild doesn't take more time than the network i/o of downloading
the artifacts would, so the impact of rebuilding shouldn't be
noticeable.
We are also using the pinned git refs from the manifest for the
modules we rebuild, so the binaries should be reproducible for the
most part (minus things like .pyc files).
Closes #68
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-ci/-/merge_requests/280>
Previously we were accidentally exporting the whole repo of
gst-integration-testsuites, which includes 350mb of raw media
files. This made the artifacts storage blow up, along with the
CI bills fd.o had to pay for uploading and re-downloading the
artifacts.
To deal with this, we remove all the media files from the builddir,
and when they are needed we copy them over from the cache in the
docker image and then git fetch the repo.
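A minimal sketch of that flow, with assumed paths (the real builddir and cache locations differ):

```
#!/usr/bin/env python3
# Illustrative sketch only; both paths below are assumptions.
# Build job: drop the raw media files before uploading artifacts.
# Test job: seed them back from the copy cached in the docker image,
# then "git fetch" to pick up anything newer than the cache.
import shutil
import subprocess
import sys

MEDIA_IN_BUILDDIR = "gst-build/build/subprojects/gst-integration-testsuites/medias"  # assumed
MEDIA_CACHE = "/gst-integration-testsuites/medias"                                   # assumed

if sys.argv[1] == "strip":
    # Build job: keep the exported artifacts small.
    shutil.rmtree(MEDIA_IN_BUILDDIR, ignore_errors=True)
else:
    # Test job: restore from the image cache, then update the checkout.
    shutil.copytree(MEDIA_CACHE, MEDIA_IN_BUILDDIR, dirs_exist_ok=True)
    subprocess.run(["git", "-C", MEDIA_IN_BUILDDIR, "fetch"], check=True)
```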
Closes #69
We don't run the libnice testsuite, and when its test binaries are built
they consume ~45mb of space. This increases the size of the artifacts
we export from the gst-build job for the testsuite and drives up
the storage and bandwidth costs when re-downloading the artifacts.
Similarly, disable the test targets of a couple of other subprojects as well.
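Concretely, this amounts to passing per-subproject options to meson along these lines; the option names are assumptions, check each subproject's meson_options.txt for the real ones:

```
#!/usr/bin/env python3
# Illustrative only: disable the test targets of subprojects whose
# testsuites we never run in CI.
import subprocess

subprocess.run(
    [
        "meson", "build",
        "-Dlibnice:tests=disabled",  # drops the ~45mb of libnice test binaries
        # ... similar -D<subproject>:tests=disabled switches for the others
    ],
    check=True,
    cwd="gst-build",
)
```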
We have noticed that a lot of CI activity is caused by users pushing to their
branch after having created an MR. To reduce our CI footprint, the CI will
now only be triggered automatically when a reviewer assigns the MR to the merge
bot. It will still be possible to run the CI manually, but the result of that
CI won't be used by Marge.
It seems to be timing out with high frequency only on Windows runners.
```
Version: 12.8.0
Git revision: 1b659122
Git branch: 12-8-stable
GO version: go1.13.7
Built: 2020-02-22T03:03:07+0000
OS/Arch: windows/amd64
Uploading artifacts...
gst-build/build/meson-logs/: found 2 matching files
WARNING: Failed to load system CertPool: crypto/x509: system root pool is not available on Windows
ERROR: Job failed (system failure): aborted: <nil>
```
See: https://gitlab.freedesktop.org/gstreamer/gst-ci/-/merge_requests/261
This might be related to the same issue described in the previous
commit: until we can update the container image to the Feb 11 security
update, x86 executables, and the container image in general, will
misbehave because of:
https://support.microsoft.com/en-us/help/4542617/you-might-encounter-issues-when-using-windows-server-containers-with-t
vs2017 x86 has been failing with a runner system failure while
uploading artifacts / submitting job status:
```
Uploading artifacts...
gst-build/build/meson-logs/: found 2 matching files
WARNING: Failed to load system CertPool: crypto/x509: system root pool is not available on Windows
ERROR: Job failed (system failure): aborted: <nil>
```
https://gitlab.freedesktop.org/slomo/gst-plugins-good/-/jobs/2084184
Disable it for now.
This will reduce the excessive load on the runners, which are having issues
with this job in particular. We will revisit this once we better understand the
runner issues.
Passing a regex as a variable does not really work; we ended up matching the
regex as a literal string instead. Replace all the REGEX variables with rules:
overrides. It is longer but more reliable.
Related to !247
Fixes #63