Building Container Images¶
Building images using fedpkg¶
The following command submits a build to Koji:
fedpkg container-build --target=<target>
For the detailed Fedora workflow, please visit the Fedora Layered Image Build System guide.
Building images using koji¶
When using the koji client CLI directly, you have to specify the git repo URL and branch:
koji container-build <target> <repourl>#<branch/ref> --git-branch <branch>
The koji-containerbuild plugin provides the container-build sub-command in the koji CLI. Please install the plugin in order to access this sub-command:
sudo yum install python3-koji-containerbuild-cli
You will now have the container-build sub-command available on your workstation. For a full list of options:
koji container-build --help
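For example, a complete invocation could look like the following sketch; the build target and repository URL are placeholders, so substitute the values for your environment:

```shell
koji container-build f36-container-candidate \
    git+https://src.example.com/container/rsyslog#origin/f36 \
    --git-branch f36
```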
Streamed build logs¶
When atomic-reactor in the orchestrator build runs its orchestrate_build plugin and watches the worker builds, it streams in the logs from those builds and emits them as its own logs, with the platform name as one of the fields. The extra fields for these worker logs are platform and level.
Note that there will be a single Koji task with a single log output, which will contain logs from multiple builds. When watching this using koji watch-logs <task id>, the log output from each worker build will be interleaved. To watch logs from a particular worker build, image owners can use koji watch-logs <task id> | grep -w x86_64.
Koji Build Results¶
Koji Web¶
This is the easiest way to access information about OSBS builds.
List Builds¶
Navigate to the “Builds” tab in koji and set the “Type” filter to image.
Get Build¶
If you have the build ID, go to <KOJI_WEB_URL>/buildinfo?buildID=<build-ID>
If you want to search for a build by its name or part of its name, use the search box at the top of the page. For example, enter redis-* and select “Builds”.
In a koji build you can find a lot of information about the build; some notable items are:
- pull specifications for the build in the “Extra” section under image.index.pull (for digests, see image.index.digests)
- the list of image archives for each specific architecture for which the build was executed (for more detailed information about a specific archive, click ‘info’)
Also in the “Extra” section, docker.config shows (parts of) the docker image JSON description, and it also indicates the container image API version.
See atomic-reactor documentation for a full description of the Koji container image metadata.
Get Task¶
All OSBS builds triggered via koji have a task linked to them. On the Build info page, look at the “Extra” field for the container_koji_task_id value.
When you locate this task ID integer, go to
<KOJI_WEB_URL>/taskinfo?taskID=<task-ID> to find the task responsible
for the build.
Build Logs¶
The logs can be found in the task’s “Output” section (for older builds this section will be empty, as the logs have been garbage collected), or in the build’s “Logs” section (these persist after garbage collection).
Koji CLI¶
List Builds¶
List all image (OSBS) builds:
koji call listBuilds type=image
Apply filter for more specific search:
koji call listBuilds type=image createdAfter='2016-02-01 00:00:00' prefix=redis
Search for builds of specific users:
koji call listUsers prefix=<user> # get user-ID
koji call listBuilds type=image userId=<user-ID>
Get Build¶
Retrieve build information from either the build ID or the build NVR:
koji buildinfo <build-ID or build-NVR>
Get Task¶
The “Extra” field in the build result is useful for tracking the task that originated the build. Use “container_koji_task_id”, or “filesystem_koji_task_id”, to get more info about the task:
koji taskinfo <task-ID>
Cancel Task¶
You can cancel a buildContainer koji task just as you would any other type of task, and this will cancel the OSBS build:
koji cancel <task-ID>
Build Notifications¶
Package owners and the build submitter will be notified via email about the build.
Building images using osbs-client¶
osbs-client provides the osbs CLI command for interacting with OSBS builds and allows creating new builds directly, without koji-client. Please note that primarily the koji and fedpkg commands should be used for building container images instead of direct osbs-client calls.
To execute a build via the osbs-client CLI, use:
osbs build -g <git_repo_url> -b <branch> -u <username> --git-commit <commit> [--platforms=x86_64] [-i <instance>]
To see the full list of options, execute:
osbs build --help
To see all osbs-client subcommands, execute:
osbs --help
Please note that osbs-client must be configured properly using the config file /etc/osbs.conf. Please refer to the osbs-client configuration section for configuration examples.
Accessing built images¶
Information about the registry and image name is included in the koji build. Use one of the names listed in extra.image.index.pull to pull the built image from a registry.
If you are building multiple architectures of your components (see Image configuration), it is possible to run/test containers for architectures that do not match your local system using both podman and docker.
Docker¶
Overrides are available using the --platform argument:
$ uname -m
x86_64
$ docker run --platform=linux/s390x --rm -it registry.access.redhat.com/ubi8/ubi:latest uname -m
Unable to find image 'registry.access.redhat.com/ubi8/ubi:latest' locally
latest: Pulling from ubi8/ubi
93db6d6cdd93: Pull complete
985161ee72a9: Pull complete
Digest: sha256:82e0fbbf1f3e223550aefbc28f44dc6b04967fe25788520eac910ac8281cec9e
Status: Downloaded newer image for registry.access.redhat.com/ubi8/ubi:latest
s390x
The necessary QEMU packages should be installed and available if you are running Docker Desktop.
Podman¶
Overrides are available using the --override-os and --override-arch arguments:
$ uname -m
x86_64
$ podman run --rm -it --pull always --override-os=linux --override-arch=arm64 registry.access.redhat.com/ubi8 uname -m
Trying to pull registry.access.redhat.com/ubi8:latest...
Getting image source signatures
Copying blob cbe902a0a8c4 skipped: already exists
Copying blob e753ad39f085 [--------------------------------------] 0.0b / 0.0b
Copying config 70dab2c4ec done
Writing manifest to image destination
Storing signatures
aarch64
If running on Fedora, you will need to install qemu-user-static before running the different architectures. Additionally, there is a known issue with podman where a new architecture is not pulled if another architecture has already been pulled. Adding --pull always makes it behave as expected (as above).
Writing a Dockerfile¶
OSBS builds a container image from a Dockerfile
. Developers must place
their Dockerfile
at the root of a Git repository. OSBS will only process a
single Dockerfile
per Git repository branch.
Developers must set the following mandatory labels in each Dockerfile
:
- com.redhat.component
OSBS uses this value as the “name” when importing a build into Koji. We recommend that you use a string ending in -container here, so that you can easily distinguish these container builds from other non-container builds in Koji. Example: LABEL com.redhat.component=rsyslog-container.
- name
OSBS pushes each built image to a repository in a container registry, and this label determines the name of the repository. For example, if you use LABEL name=fedora/rsyslog, you will be able to pull your image with podman pull my-container-registry.example.com/fedora/rsyslog. Limit this to lowercase alphanumeric values and dashes. A single / is also allowed. . is not allowed in the first section. For instance, fed/rsys.log and rsyslog are allowed, but fe.d/rsyslog and rsys.log aren’t.
- version
OSBS uses this for the “version” portion of the Koji build Name-Version-Release, as well as the version-release tag in the container repository. Example: 32. (You may define this via ENV from the parent image if you want to use the same version as the parent.)
A combined example, in a Dockerfile:
LABEL com.redhat.component=rsyslog-container \
      name=fedora/rsyslog \
      version=32 \
      release=1
When OSBS builds the above Dockerfile, it will import the build into Koji as rsyslog-container-32-1. You can pull the image from OSBS’s container registry with:
podman pull my-container-registry.example.com/fedora/rsyslog:32-1
The release label is optional. OSBS uses this for the “release” portion of the Koji build Name-Version-Release, as well as the version-release tag in the container repository. (You may define release with ENV from the parent image if you want to use the same release as the parent.)
If you omit a release label, OSBS will automatically determine a release number for your build by querying Koji’s getNextRelease API method.
OSBS will automatically set other labels for your image if you do not set them in your Dockerfile. Here are the default labels OSBS will set automatically:
- build-date
Date/time the image was built, as an RFC 3339 date-time.
- architecture
Architecture of the image.
- vcs-ref
A reference within the version control repository; e.g. a git commit.
- vcs-type
The type of version control used by the container source. Currently, only git is supported.
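To check which labels ended up on a built image, you can inspect it locally; for example (the image name reuses the hypothetical registry from the examples above):

```shell
podman image inspect --format '{{ json .Labels }}' \
    my-container-registry.example.com/fedora/rsyslog:32-1
```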
Finally, OSBS administrators may also set additional labels through the reactor configuration, by setting the label key values in image_labels.
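As a sketch of what such an administrator-level configuration could look like (the exact keys and placement are defined by the reactor configuration schema, so treat this fragment as an assumption to check against your instance):

```yaml
# Hypothetical reactor configuration fragment:
# every image built on this instance gets these labels.
image_labels:
  vendor: "Example, Inc."
  distribution-scope: public
```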
Image configuration¶
Some aspects of the container image build process are controlled by a file in the git repository named container.yaml. This file need not be present, but if it is, it must adhere to the container.yaml schema.
An example:
---
platforms:
  # all these keys are optional
  only:
  - x86_64 # can be a list (as here) or a string (as below)
  - ppc64le
  - armhfp
  not: armhfp
remote_sources:
- name: npm-example
  remote_source:
    repo: https://git-forge.example.com/namespace/repo.git
    ref: AddFortyCharactersGitCommitHashRightHere
    pkg_managers:
    - npm
    packages:
      npm:
      - path: client
      - path: proxy
compose:
  # used for requesting ODCS compose of type "tag"
  packages:
  - nss_wrapper # package name, not an NVR.
  - httpd
  - httpd-devel
  # used for requesting ODCS compose of type "pulp"
  pulp_repos: true
  # used for requesting ODCS compose of type "module"
  modules:
  - "module_name1:stream1"
  - "module_name2:stream1"
  # Possible values, and default, are configured in OSBS environment.
  signing_intent: release
  # used for inheritance of yum repos and ODCS composes from baseimage build
  inherit: true
image_build_method: docker_api
platforms¶
Keys in this map relate to multi-platform builds. The full set of platforms for which builds may be required will come initially from the Koji build tag associated with the build target, or from the platforms parameter provided to the create_orchestrator_build API method when Koji is not used.
- only
list of platform names (or a single platform name as a string); this restricts the platforms to build for using set intersection
- not
list of platform names (or a single platform name as a string); this restricts the platforms to build for using set difference
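As a worked example, suppose the Koji build tag provides the platforms x86_64, ppc64le, armhfp, and s390x, and container.yaml contains the keys from the sample file above:

```yaml
platforms:
  only:       # intersection with the build tag's platforms
  - x86_64    # -> {x86_64, ppc64le, armhfp}
  - ppc64le
  - armhfp
  not: armhfp # difference -> final set {x86_64, ppc64le}
```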
go¶
Warning
Using this key is deprecated in favor of Cachito integration. To switch to Cachito, set the remote_sources key instead. OSBS does not permit users to specify a go key together with a remote_sources key.
Keys in this map relate to source code in the Go language which the user intends to be built into the container image. Users are responsible for building the source code into an executable themselves. The keys here are only for identifying the source code that was used to create the files in the container image.
- modules
sequence of mappings containing information for the Go modules (packages) built and shipped in the container image. The accepted mappings are listed below.
- module
top-level Go module (package) name to be built in the image. If modules is specified, this entry is required.
- archive
possibly-compressed archive containing the full source code, including vendored dependencies.
- path
path to the directory containing the source code (or its parent), possibly within the archive.
buildtime_limit¶
This optional parameter sets a build time limit in seconds; after the specified number of seconds, the build will time out. If it is not specified, the default build time is used. This value cannot exceed the max build time. The max build time and the default build time are set by the maintainers.
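For example, to have the build time out after two hours (the value is only illustrative, and must stay under the max build time set by the maintainers):

```yaml
# container.yaml
buildtime_limit: 7200  # seconds
```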
compose¶
Use this section to request Yum repositories at build time. OSBS will request a compose from ODCS and insert the .repo file into your container build environment. When you run a yum install command in your Dockerfile, Yum will consider this repo for RPMs.
- packages
list of package names to be included in the ODCS compose. “Package” in this case refers to the “name” portion of the NVR (name-version-release) of an RPM, not the Koji package name. Packages will be selected based on the Koji build tag of the Koji build target used. The following command is useful for determining which packages are available in a given Koji build tag:
koji list-tagged --inherit --latest TAG
If the “packages” key is declared but is empty (packages: [] in YAML), the compose will include all packages from the Koji build tag of the Koji build target. ODCS will work more quickly if you only specify the minimum set of packages you need here, but if you want to avoid hard-coding a complete package list in container.yaml, you can use the empty list to just make everything available.
- pulp_repos
boolean to control whether or not an ODCS compose of type “pulp” should be requested. If set to true, content_sets.yml must also be provided. A compose will be requested for each architecture in content_sets.yml. See Content Sets. Additionally, build_only_content_sets will also be used if provided.
- modules
list of modules for requesting an ODCS compose of type “module”. ODCS will cherry-pick each module into the compose.
Use this modules option to make module builds available that are not yet available from the other options like Pulp. This is useful if you want to test a newly-built module before it is available in Pulp, or if you want to pin to a specific module that MBS has built.
This list can be of the format name:stream, name:stream:version, or name:stream:version:context.
If you specify a name:stream without specifying a version:context, ODCS will query MBS to find the very latest version:context build. For example, if you specify go-toolset:rhel8, ODCS will query MBS for the latest go-toolset module build for the rhel8 stream, whereas if you specify go-toolset:rhel8:8020020200128163444:0ab52eed, ODCS will compose that exact module instead.
This can be modified by specifying module_resolve_tags (not to be confused with modular_koji_tags). When this is present, then instead of querying MBS for the latest built version, ODCS will look up the most recent build of name:stream in any of the given tags in Koji. (E.g. ["<release>-pending"] might be specified to only find builds that have been attached to an errata for <release>.)
Note that if you simply specify a name:stream for a module, ODCS will compose the very latest module that a module developer has built for that stream (the one with the greatest version number), and this module might not be tested by QE or GPG signed, or even intended to be released. It’s typically best to specify module_resolve_tags. Alternatively, if your desired module is already QE’d, signed, and available in Pulp, skip using the modules option entirely, and instead use the pulp_repos: true option. This will ensure that your container build environment only uses tested and signed modules.
- signing_intent
used for verifying that packages in yum repositories are signed with the expected signing keys. The possible values for signing intent are defined in the OSBS environment. See the odcs section for environment configuration details, and a full explanation of Signing intent.
- inherit
boolean to control whether or not to inherit yum repositories and ODCS composes from the baseimage build; default false. Scratch and isolated builds do not support inheritance, so false is always assumed.
- include_unpublished_pulp_repos
If you set include_unpublished_pulp_repos: true under the compose section in container.yaml, the ODCS composes can pull from unpublished pulp repositories. The default is false. Use this setting to make pre-release RPMs available to your container images. Use caution with this setting, because you could end up publicly shipping container images with RPMs that you have not exposed publicly otherwise.
- ignore_absent_pulp_repos
If you set ignore_absent_pulp_repos: true under the compose section in container.yaml, ODCS will ignore missing content sets. Use this setting if you want to pre-configure your container’s content_sets.yml in dist-git before a Pulp administrator creates all the repositories you expect to use in the future. Alternatively, do not enable this setting if you want to enforce strict error-checking on all the content set names in content_sets.yml.
- multilib_method
List of methods used to determine if a package should be considered multilib. Available methods are iso, runtime, devel, and all.
- multilib_arches
List of platforms for which multilib should be enabled. For each entry in the list, ODCS will also include packages from other compatible architectures in the compose. For example, when “x86_64” is included, ODCS will also include “i686” packages in the compose.
- modular_koji_tags
List of Koji tags that have modules tagged into them. The latest version of each module name:stream in these tags will be included in the compose. When true is specified instead of a list, the Koji build tag of the Koji build target will be used instead.
- module_resolve_tags
List of Koji tags to use when resolving the modules in modules. When true is specified instead of a list, the Koji build tag of the Koji build target will be used instead.
- build_only_content_sets
Content sets used only for building content, not for distribution. These will be used only if pulp_repos is set to true. These content sets won’t be included in ICM (Image Content Manifests). A compose will additionally be requested for each architecture, alongside those from content_sets.yml. The definition is the same as for content_sets.yml; see Content Sets.
If there is a “modules” key, it must have a non-empty list of modules. The “packages” key, and only the “packages” key, can have an empty list.
The “packages”, “modules”, “modular_koji_tags” and “pulp_repos” keys can be used together.
flatpak¶
This section holds the information needed to build a Flatpak. For more information on Flatpak builds, see flatpak-docs. This is a map with the following keys:
- id
The ID of the application or runtime. Required.
- name
the name label in the generated Dockerfile. Used for the repository when pushing to a registry. Defaults to the module name.
- component
the com.redhat.component label in the generated Dockerfile. Used to name the build when uploading to Koji. Defaults to the module name.
- base_image
The image that is used when installing packages to create the filesystem. It is also recorded as the parent image of the output image. This defaults to the flatpak: base_image setting in the reactor-config-map.
- branch
The branch of the application or runtime. In many cases, this will match the stream name of the module. Required.
- cleanup-commands
A shell script that is run after installing all packages. Only applicable to runtimes.
- command
The name of the executable to run to start the application. If not specified, defaults to the first executable found in /usr/bin. Only applicable to applications.
- tags
Tags to add to the Flatpak metadata for searching. Only applicable to applications.
- finish-args
Arguments to flatpak build-finish (see the flatpak-build-finish man page). This is a string split on whitespace with shell-style quoting. Only applicable to applications.
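As an illustration, a flatpak section for an application could look like the following sketch (all values are hypothetical):

```yaml
flatpak:
  id: org.example.TextEditor   # application ID (required)
  branch: stable               # often matches the module stream (required)
  command: text-editor         # executable that starts the application
  tags:
  - editor
  finish-args: >-
    --socket=wayland
    --share=ipc
    --filesystem=home
```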
version¶
This key is no longer used by OSBS and is only kept in the schema for backwards compatibility.
set_release_env¶
Optional string. If set, osbs-client will modify each stage of the image’s Dockerfile, adding an ENV statement immediately following the FROM statement. The ENV statement will assign an environment variable with the same name as the value of set_release_env and the value of the current build’s release number. Users can use this environment variable to get the release value when running tools inside the container.
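For example, with the following in container.yaml (the variable name is arbitrary):

```yaml
set_release_env: IMAGE_RELEASE
```

osbs-client would then add a statement equivalent to ENV IMAGE_RELEASE=1 after each FROM statement of a build whose release is 1, and tools running inside the container could read $IMAGE_RELEASE.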
image_build_method¶
This string indicates which build-step plugin to use in order to perform the layered image build, on a per-image basis. The docker_api plugin uses the docker-py module to run the build via the Docker API, while the imagebuilder plugin uses the imagebuilder utility to do the same. Both have similar capabilities, but the imagebuilder plugin brings two advantages:
It performs all changes made in the build in a single layer, which is a little more efficient and removes the need to squash layers afterward.
It can perform multistage builds without requiring Docker 17+ (which Red Hat and Fedora do not support).
In order to use the imagebuilder plugin, the imagebuilder binary must be available and in the PATH for the builder image, or an error will result.
Fetching source code from an external source using cachito¶
As described in Cachito integration, it is possible to use cachito to download a tarball with an upstream project and its dependencies and make it available for usage during an OSBS build.
remote_sources¶
A list of remote_source maps, each with an additional name parameter. For each remote_source, OSBS will request a source archive bundle from cachito. The keys accepted here are described below.
Note
In order for these entries to be used, both OSBS cachito integration and usage of remote_sources need to be allowed in the OSBS Instance configuration. See Configuring your cachito instance and Allowing multiple remote sources.
- name
Serves as a unique identifier for the remote source. It is a non-empty unique string containing only alphanumeric characters, underscore or dash.
- remote_source
- repo
String with a URL to the upstream project SCM repository, such as https://git.example.com/team/repo.git. Required.
- ref
String with a 40-character SCM reference of the project described in repo to be fetched. This should be a complete git commit hash. Required.
- pkg_managers
A list of package managers to be used for resolving the upstream project dependencies. If not provided, Cachito will assume gomod for backward compatibility reasons; however, this default could be configured differently on different Cachito deployments (make sure to check with your Cachito instance admins). Finally, if this is set to an empty array ([]), Cachito will provide the sources with no package manager magic. In other words, no environment variables, dependencies, or extra configuration will be provided with the sources. Full information about supported package managers is in the upstream Cachito package manager documentation.
- flags
List of flags to pass to the cachito request. See the cachito documentation for further reference.
- packages
A map of package managers where each value is an array of maps describing custom behavior for the packages of that package manager. For example, if you have two npm packages in the same source repository, you can specify the subdirectories with the path key: {"npm": [{"path": "client"}, {"path": "proxy"}]}.
container.yaml example with multiple remote sources:
remote_sources:
- name: cachito-pip-with-deps
  remote_source:
    repo: https://github.com/cachito-testing/cachito-pip-with-deps
    ref: 56efa5f7eb4ff1b7ea1409dbad76f5bb378291e6
    pkg_managers: ["pip"]
- name: cachito-gomod-with-deps
  remote_source:
    repo: https://github.com/cachito-testing/cachito-gomod-with-deps
    ref: 21e42c6a62a23002408438d07169e2d7c76649c5
    pkg_managers: ["gomod"]
Once the list of remote_sources described above is set in container.yaml, you can copy the upstream sources and bundled dependencies for all remote references into your build image by adding:
COPY $REMOTE_SOURCES $REMOTE_SOURCES_DIR
to your Dockerfile. This $REMOTE_SOURCES_DIR directory contains a subdirectory for each remote source.
You can access the source of an individual remote source at $REMOTE_SOURCES_DIR/{name}/app, where {name} refers to the name of the given remote source as defined in the container.yaml file. The dependencies can correspondingly be found at $REMOTE_SOURCES_DIR/{name}/deps.
OSBS also creates a $REMOTE_SOURCES_DIR/{name}/cachito.env bash script with exported environment variables received from each cachito request (such as GOPATH and GOCACHE for the gomod package manager, and PIP_CERT and PIP_INDEX_URL for pip).
Users should use the following command in the Dockerfile to set all required variables:
RUN source $REMOTE_SOURCES_DIR/{name}/cachito.env
Note that $REMOTE_SOURCES_DIR is a build arg, available only at build time. Hence, to clean up the image after using the sources, add the following line to the Dockerfile after the build is complete:
RUN rm -rf $REMOTE_SOURCES_DIR
$REMOTE_SOURCES is another build arg, which points to the directory that contains the extracted tar archives provided by cachito in the buildroot workdir.
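Putting the pieces above together, a Dockerfile stage that consumes a remote source named npm-example (the name used in the earlier container.yaml example) might look like this sketch; the base image is a placeholder:

```dockerfile
FROM registry.example.com/ubi8/nodejs-16
# Copy sources and bundled dependencies for all remote sources into the image.
COPY $REMOTE_SOURCES $REMOTE_SOURCES_DIR
WORKDIR $REMOTE_SOURCES_DIR/npm-example/app
# Export the environment variables cachito provided, then build offline.
RUN source $REMOTE_SOURCES_DIR/npm-example/cachito.env && npm install
# $REMOTE_SOURCES_DIR is only available at build time; clean it up afterwards.
RUN rm -rf $REMOTE_SOURCES_DIR
```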
Note
To make full use of the cachito-provided dependencies, a Golang version with complete gomod support is required. In other words, you should use Golang >= 1.13.
Replacing project dependencies with cachito¶
Cachito also provides a feature to allow users to replace a project’s dependencies with another version of that same dependency or with a completely different dependency (this is useful when you want to use a patched fork for a dependency).
OSBS allows users to use this feature for test purposes. In other words, you can use cachito dependency replacements for scratch builds, and only for scratch builds.
You can use this feature via the --replace-dependency option, which is available for the fedpkg, koji, and osbs commands. This option expects a string with the following information, separated by the : character: pkg_manager:name:version[:new_name], where pkg_manager is the package manager used by cachito to handle the dependency; name is the name of the dependency to be replaced; version is the new version of the dependency to be injected by cachito; and new_name is an optional entry, to inform cachito that the dependency known as name by the package manager should be replaced with a new dependency, known as new_name by the package manager:
fedpkg container-build --scratch --replace-dependency gomod:pagure.org/cool-go-project:v1.2 gomod:gopkg.in/foo:2:github.com/bar/foo
or:
koji container-build [...] --scratch --replace-dependency gomod:pagure.org/cool-go-project:v1.2 --replace-dependency gomod:gopkg.in/foo:2:github.com/bar/foo
In the examples above, two dependencies would be replaced. cool-go-project would be used in version v1.2, no matter what version is specified by the project requesting it, whereas gopkg.in/foo will be replaced by github.com/bar/foo version 2.
Note that while in fedpkg the replace-dependency option receives multiple parameters, the same option must be specified multiple times in the koji or osbs CLI. This was done to keep consistency with the similar option for specifying yum repository URLs in each particular CLI.
Content Sets¶
The file content_sets.yml is used to define the content sets relevant to the container image. This is relevant if the RPM packages in the container image come from pulp repositories. See pulp_repos in compose for how this file is used during build time. If this file is present, it must adhere to the content_sets.yml schema. You can specify Pulp repositories by content set name, repository ID, or both.
This example uses RHEL 7 and RHEL 7 Extras Pulp content set names:
---
x86_64:
- rhel-7-server-rpms
- rhel-7-server-extras-rpms
ppc64le:
- rhel-7-for-power-le-rpms
- rhel-7-for-power-le-extras-rpms
This example uses RHEL 8’s Pulp content set names:
---
x86_64:
- rhel-8-for-x86_64-baseos-rpms
- rhel-8-for-x86_64-appstream-rpms
ppc64le:
- rhel-8-for-ppc64le-baseos-rpms
- rhel-8-for-ppc64le-appstream-rpms
s390x:
- rhel-8-for-s390x-baseos-rpms
- rhel-8-for-s390x-appstream-rpms
This example uses RHEL 8.4 EUS’s Pulp repository IDs:
---
x86_64:
- rhel-8-for-x86_64-baseos-eus-rpms__8_DOT_4
- rhel-8-for-x86_64-appstream-eus-rpms__8_DOT_4
ppc64le:
- rhel-8-for-ppc64le-baseos-eus-rpms__8_DOT_4
- rhel-8-for-ppc64le-appstream-eus-rpms__8_DOT_4
s390x:
- rhel-8-for-s390x-baseos-eus-rpms__8_DOT_4
- rhel-8-for-s390x-appstream-eus-rpms__8_DOT_4
aarch64:
- rhel-8-for-aarch64-baseos-eus-rpms__8_DOT_4
- rhel-8-for-aarch64-appstream-eus-rpms__8_DOT_4
Using Artifacts from Koji or Project Newcastle (aka PNC)¶
During a container build, it might be desirable to fetch some artifacts from an existing Koji build or a PNC build. For instance, when building a Java-based container, JAR archives from a Koji build or PNC build are required to be added to the resulting container image.
The atomic-reactor pre-build plugin, fetch_maven_artifacts, can be used for including non-RPM content in a container image during build time. This plugin will look for the existence of three files in the git repository, in the same directory as the Dockerfile: fetch-artifacts-koji.yaml, fetch-artifacts-pnc.yaml and fetch-artifacts-url.yaml. (See fetch-artifacts-nvr.json, fetch-artifacts-pnc.json and fetch-artifacts-url.json for their YAML schemas.)
fetch-artifacts-koji.yaml is meant to fetch artifacts from an existing Koji build. fetch-artifacts-pnc.yaml is meant to fetch artifacts from an existing PNC build. fetch-artifacts-url.yaml allows specific URLs to be used for fetching artifacts.
All these configurations can be used together in any combination, but none of them is mandatory.
fetch-artifacts-koji.yaml¶
- nvr: foobar # All archives will be downloaded
- nvr: com.sun.xml.bind.mvn-jaxb-parent-2.2.11.redhat_4-1
  archives:
  # pull a specific archive
  - filename: jaxb-core-2.2.11.redhat-4.jar
    group_id: org.glassfish.jaxb
  # group_id omitted - multiple archives may be downloaded
  - filename: jaxb-jxc-2.2.11.redhat-4.jar
  # glob support
  - filename: txw2-2.2.11.redhat-4-*.jar
  # pull all archives for a specific group
  - group_id: org.glassfish.jaxb
  # glob support with group_id restriction
  - filename: txw2-2.2.11.redhat-4-*.jar
    group_id: org.glassfish.jaxb
  # causes build failure due to unmatched archive
  - filename: archive-filename-with-a-typo.jar
Each archive will be downloaded to artifacts/<mavenfile_path> at the root of the git repository. It can be used from the Dockerfile via an ADD/COPY instruction:
COPY \
artifacts/org/glassfish/jaxb/jaxb-core/2.2.11.redhat-4/jaxb-core-2.2.11.redhat-4.jar /jars
The directory structure under the artifacts directory is determined by the koji.PathInfo.mavenfile method. It is essentially the end of the URL after /maven/ when downloading the archive from the Koji Web UI.
Upon downloading each file, the plugin will verify the file checksum by leveraging the checksum value in the archive info stored in Koji. If the checksum verification fails, the container build fails immediately. The checksum algorithm used is dictated by Koji via the checksum_type value in the archive info.
If the build specified in the nvr attribute does not exist, the container build will fail.
If any of the archives does not produce a match, the container build will fail. In other words, every item in the archives list is expected to match at least one archive from the specified Koji build. However, the build will not fail if an item matches multiple archives.
Note that only archives of maven type are supported. If an archive item in the supplied nvr references a non-maven artifact, the container build will fail because no archives match the request.
fetch-artifacts-pnc.yaml¶
metadata:
  # this object allows additional parameters, you can put any metadata here
  author: shadowman
builds:
# all artifacts are grouped by builds to keep track of their sources
- build_id: '1234' # build id must be string
  artifacts:
  # list of artifacts to fetch, artifacts are fetched from PNC using their IDs
  - id: '12345' # artifact id must be string
    # the target can just be a filename or path+filename
    target: test/rhba-common-7.10.0.redhat-00004.pom
  - id: '12346'
    target: prod/rhba-common-7.10.0.redhat-00004-dist.zip
- build_id: '1235'
  artifacts:
  - id: '12354'
    target: test/client-patcher-7.10.0.redhat-00004.jar
  - id: '12355'
    target: prod/rhdm-7.10.0.redhat-00004-update.zip
Each artifact will be downloaded to artifacts/<target_path> at the root of the git repository. It can be used from the Dockerfile via an ADD/COPY instruction.
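For example, with the targets above, the downloaded files could be referenced like this (destination paths are illustrative):

```dockerfile
COPY artifacts/test/rhba-common-7.10.0.redhat-00004.pom /deps/
COPY artifacts/prod/rhba-common-7.10.0.redhat-00004-dist.zip /deps/
```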
Upon downloading each file, the plugin verifies the file checksums against the checksum values provided by the PNC REST API. If a checksum verification fails, the container build fails immediately. All checksum types provided are verified.
If the specified build or artifact does not exist, the container build will fail.
fetch-artifacts-url.yaml¶
- url: http://download.example.com/JBossDV/6.3.0/jboss-dv-6.3.0-teiid-jdbc.jar
md5: e85807e42460b3bc22276e6808839013
- url: http://download.example.com/JBossDV/6.3.0/jboss-dv-6.3.0-teiid-javadoc.jar
# Use different hashing algorithm
sha256: 3ba8a145a3b1381d668203cd73ed62d53ba8a145a3b1381d668203cd73ed62d5
# Optionally, overwrite target name
target: custom-dir/custom-name.jar
- url: http://download.example.com/JBossDV/6.3.0/jboss-dv-6.3.0-teiid-jdbc.jar
md5: e85807e42460b3bc22276e6808839013
# Optionally, provide source of the artifact
source-url: http://download.example.com/JBossDV/6.3.0/jboss-dv-6.3.0-teiid-jdbc-sources.tar.gz
# When source-url is specified, checksum must be provided
source-md5: af8ee0374e8160dc19b2598da2b22162
Each archive will be downloaded to artifacts/<target_path> at the root of the git repository. It can be used from the Dockerfile via an ADD/COPY instruction:
COPY artifacts/jboss-dv-6.3.0-teiid-jdbc.jar /jars/
COPY artifacts/custom-dir/custom-name.jar /jars/
By default, target_path is set to the filename from the provided url. It can be customized by providing a target. The target value can be either a plain filename (archive.jar) or include a path (my/path/archive.jar) for easier archive management.
The md5, sha1, and sha256 attributes specify the corresponding hash to be used when verifying that the artifact was downloaded properly. At least one of them is required. If more than one is defined, all of the given hashes are computed and verified.
If source-url is specified, the source-md5, source-sha1, or source-sha256 attributes specify the corresponding hash to be used when verifying the sources. At least one of these three checksums must be provided.
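The verification amounts to computing every listed hash and comparing, roughly like this (a simplified sketch, not the actual plugin code):

```python
import hashlib

def verify_checksums(path, expected):
    """Check a downloaded file against every provided checksum.

    'expected' maps an algorithm name (md5, sha1, sha256) to the
    expected hex digest; all provided algorithms are verified.
    """
    with open(path, "rb") as f:
        data = f.read()
    for algo, want in expected.items():
        got = hashlib.new(algo, data).hexdigest()
        if got != want:
            raise ValueError(f"{algo} mismatch for {path}: {got} != {want}")
```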
Koji Build Metadata Integration¶
When OSBS fetches artifacts, it stores references to each artifact in Koji’s content generator metadata.
For artifacts from fetch-artifacts-koji.yaml, OSBS will list each artifact component as "type": "kojifile" in the components list of each docker-image build.
For artifacts from fetch-artifacts-pnc.yaml, OSBS will add all the PNC build IDs to the build.extra.image.pnc metadata.
For artifacts from fetch-artifacts-url.yaml with source-url, OSBS will attach all source archives to the Koji build as a remote-sources archive type. You can download these to your computer with koji download-build --type=remote-sources.
Override Parent Image¶
OSBS uses the FROM instruction in the Dockerfile to find a parent image for a layered image build. Users can override this behavior by specifying a koji parent build via the koji_parent_build API parameter. When a user specifies a koji_parent_build parameter, OSBS will look up the image reference for that koji build and override the FROM instruction with that image instead. The same source registry restrictions apply. For multi-stage builds, the koji_parent_build parameter will only override the final FROM instruction.
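With the koji CLI, the override might look like the following (the --koji-parent-build option name is taken from the koji-containerbuild plugin; check koji container-build --help for your client version):

```shell
koji container-build <target> <repourl>#<branch/ref> --git-branch <branch> \
    --koji-parent-build <parent-build-nvr>
```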
If the FROM instruction in the last stage of the Dockerfile is set to scratch, the build will fail if you specify the koji_parent_build parameter.
If the FROM instruction in the last stage of the Dockerfile is set to koji/image-build, the koji_parent_build parameter will be ignored.
This behavior requires koji integration to be enabled in the OSBS environment.
Koji NVR¶
When koji integration is enabled, every container image build requires a unique Name-Version-Release (NVR). The Name and Version are extracted from the name and version labels in the Dockerfile. Users can also use the release label to hard-code the release value, although this requires a git commit for every build to change the value. A better alternative is to leave off the release label, which causes OSBS to query koji for what the next release value should be. This is done via koji's getNextRelease API method. In either case, the release value can also be overridden by using the release API parameter.
During the build process, OSBS will query koji for the builds of all parent
images using their NVRs. If any of the parent image builds is not found in
koji, or if NVR information cannot be extracted from the parent image, OSBS
assumes that the parent image was not built by OSBS and halts the current
build. In other words, an image cannot be built using a parent image which has
not been built by OSBS. It is possible to disable this feature through reactor configuration, with the skip_koji_check_for_base_image option in config.json, when there are no NVR labels set on the base image; if the NVR labels are set on the base image, the check is performed regardless.
OSBS skips this Koji NVR check for scratch builds. This means that when a user
builds a layered image on a scratch build, that layered image must also be a
scratch build. For example, if OSBS tags one scratch build as rsync-containers-candidate-93619-20191017205627, users can build another layered scratch build on top of that with FROM rsync-containers-candidate-93619-20191017205627 in the Dockerfile.
Digests verification¶
Once OSBS has the koji build information for a parent image, it compares the digest of the parent image manifest available in koji metadata (stored when that parent build completed) with the actual parent image manifest digest (calculated by OSBS during the build). If the manifests do not match, the build will fail, and the parent image must be rebuilt in OSBS before it is used in another build.
If the manifest in question is a manifest list and the digest comparison fails, the V2 manifest digests in the manifest list will be compared with the koji build archive metadata digests. In this case, OSBS will only halt the build with an error, advising a rebuild of the parent image, if the V2 manifest digests in the manifest list do not match the analogous koji information. This behavior can be deactivated through the deep_manifest_list_inspection option. See config.json for further reference.
Manifest lists can be manually pushed to the registry to make sure a specific tag (e.g., latest) is available for all platforms. In such cases, these manifest lists may include images from different koji builds. OSBS will only perform digest checks for the images requested in the current build. Moreover, build requests for platforms that were not built in the same koji build as the one found for the given image reference (manifest list) will fail.
It is also possible to have OSBS only warn about any digest mismatches (instead of halting the build with an error). This is done by setting the fail_on_digest_mismatch option to false in the config.json file.
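Conceptually, the comparison is over registry-style manifest digests, i.e., a sha256 over the raw manifest bytes. A simplified sketch of the check, including the fail_on_digest_mismatch behavior:

```python
import hashlib

def manifest_digest(manifest_bytes):
    """Registry-style digest: sha256 over the raw manifest bytes."""
    return "sha256:" + hashlib.sha256(manifest_bytes).hexdigest()

def check_parent_digest(manifest_bytes, stored_digest, fail_on_mismatch=True):
    """Compare the pulled parent manifest with the digest stored in koji.

    When fail_on_mismatch is False, only warn (mirroring the
    fail_on_digest_mismatch=false configuration).
    """
    actual = manifest_digest(manifest_bytes)
    if actual != stored_digest:
        message = f"digest mismatch: {actual} != {stored_digest}"
        if fail_on_mismatch:
            raise RuntimeError(message)
        print("WARNING:", message)
    return actual
```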
Isolated Builds¶
In some cases, you may not want to update the floating tags for certain builds.
Consider the case of a container image that includes packages that have new
security vulnerabilities. To address this issue, you must build a new
container image. You only want to apply changes related to the security fixes,
and you want to ignore any new unrelated development work. It is not correct to update the latest floating tag reference for this build. You can use OSBS's isolated builds feature to achieve this.
As an example, let's use the image rsyslog again. At some point the container image 7.4-2 is released (version 7.4, release 2). Soon after, minor bug fixes are addressed in 7.4-3, a new feature is added in 7.4-4, and so on. A security vulnerability is then discovered in the released image 7.4-2. To minimize disruption to users, you may want to build a patched version of 7.4-2, say 7.4-2.1. The packages installed in this new container image will differ from the former only when needed to address the security vulnerability. It will not include the minor bug fixes from 7.4-3, nor the new features added in 7.4-4. For this reason, updating the latest tag is considered incorrect.
7.4 version
 |
 |____ 1 release
 |
 |____ 2 release ____ 2.1 release
 |
 |____ 3 release
 |
 |____ 4 release
To start an isolated build, use the isolated boolean parameter. Due to the nature of isolated builds, you must explicitly specify your build's release parameter, which must match the format ^\d+\.\d+(\..+)?$.
Here is an example of an isolated build using fedpkg:
fedpkg container-build --isolated --build-release=2.1
Isolated builds will only create the {version}-{release} primary tag and the unique tag in the container registry. OSBS does not update any floating tags for an isolated build.
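The required release format can be checked up front with the regular expression given above:

```python
import re

# Isolated-build release format from the docs: e.g. "2.1" or "2.1.4"
RELEASE_FORMAT = re.compile(r"^\d+\.\d+(\..+)?$")

def is_valid_isolated_release(release):
    """True if the release matches the isolated-build format."""
    return bool(RELEASE_FORMAT.match(release))
```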
Operator bundle isolated builds¶
In some cases you may want to rebuild an operator bundle image with a customized Cluster Service Version (CSV) file, for example to use CVE-patched related images or to update the metadata used by the operator upgrade procedure.
Modifications to the CSV file are possible only for isolated builds:
koji container-build \
--operator-csv-modifications-url=https://example.com/path/to/file.json \
--isolated \
--release=2.1
The --operator-csv-modifications-url option must point to a remote JSON file in the format shown in the example below:
{
"pullspec_replacements": [
{
"original": "registry.example.com/namespace/app:v2.2.0",
"new": "registry.example.com/namespace/app@sha256:a0ae15b2c8b2c7ba115d37625e750848658b76bed7fa9f7e7f6a5e8ab3c71bac",
"pinned": true
}
],
"append": {
"spec": {
"skips": ["1.0.0"]
}
},
"update": {
"metadata": {
"name": "app.v1.0.1-01610399900-patched",
"substitutes-for": "1.0.0"
},
"spec": {
"version": "1.0.0-01610399900-patched"
}
}
}
The pullspec_replacements attribute must contain a list of replacements for all image pullspecs used in the operator CSV file.
The append attribute is optional. It contains a nested structure of attributes to be updated recursively by appending (attributes will be created if they don't exist). Each terminal property must contain a list with values to append.
The update attribute is optional. It contains a nested structure of attributes to be updated recursively (attributes will be created if they don't exist).
When modifications to the operator CSV file are enabled, OSBS will not perform digest pinning for images; the user is responsible for the content.
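The append/update semantics described above can be sketched as follows (an illustration of the documented recursive behavior, not the actual implementation):

```python
def recursive_update(target, changes):
    """Set values recursively, creating missing attributes."""
    for key, value in changes.items():
        if isinstance(value, dict):
            recursive_update(target.setdefault(key, {}), value)
        else:
            target[key] = value

def recursive_append(target, changes):
    """Append list values recursively at terminal properties,
    creating missing attributes along the way."""
    for key, value in changes.items():
        if isinstance(value, dict):
            recursive_append(target.setdefault(key, {}), value)
        else:
            target.setdefault(key, []).extend(value)
```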
For more details about operator bundles, please see the Operator manifest bundle builds section.
This feature may require additional site configuration changes; please see the Enabling operator CSV modifications section.
Yum repositories¶
In most cases, part of the process of building container images is to install RPM packages. These packages must come from yum repositories. There are various methods for making a yum repository available for your container build.
ODCS compose¶
The preferred method for injecting yum repositories in container builds is by enabling ODCS integration via the “compose” key in container.yaml. See Image configuration and Signing intent for details.
RHEL subscription¶
If the underlying host is Red Hat Enterprise Linux (RHEL), its subscriptions will be made available during container builds. Note that changes in the underlying host to enable/disable yum repositories are not reflected in container builds. The Dockerfile must explicitly enable/disable yum repositories as needed. Although this is desirable in most cases, in an OSBS deployment it can cause unexpected behavior. It is recommended to disable subscriptions for RHEL hosts when they are being used by OSBS.
Yum repository URL¶
As part of a build request, you may provide the repo-url parameter with the URL to a yum repository file. This file is injected into the container build. Current OSBS versions support combining ODCS composes with repository files. This is a change from OSBS's former behavior, where the ODCS compose would be disabled if a repository file URL was given.
Koji tag¶
When Koji integration is enabled, a Koji build target parameter is provided. The yum repository for the build tag of the target is automatically injected into the container build. This behavior is disabled if either “ODCS compose” or “Yum repository URL” is used.
Inherited yum repository and ODCS compose¶
If you want to inherit yum repositories and ODCS composes from the base image build, you can enable it via the “inherit” key under “compose” in container.yaml. This is not supported for scratch or isolated builds. See Image configuration.
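Assuming the container.yaml schema mentioned above, enabling inheritance would look roughly like this (a sketch; see the Image configuration schema for the authoritative key names):

```yaml
compose:
  inherit: true
```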
Signing intent¶
When the “compose” section in container.yaml is defined, ODCS composes will be requested at build time. ODCS is aware of RPM package signatures and can be used to ensure that only signed packages are added to the generated yum repositories. Ultimately, this can be used to ensure a container image only contains packages signed by known signing keys.
Signing intents are an abstraction for signing keys. They allow the OSBS environment administrator to define which signing keys are valid for different types of releases. See the odcs section for details.
For instance, an environment may provide the following signing intents: release, beta, and unsigned. Each one of those intents is then mapped to a list of signing keys. These signing keys are then used during ODCS compose creation. The packages to be included must have been signed by any of the signing keys listed. In the example above, the intents could be mapped to the following keys:
# Only include packages that have been signed by "my-release-key"
release -> my-release-key
# Include packages that have been signed by either "my-beta-key" or
# "my-release-key"
beta -> my-beta-key, my-release-key
# Do not check signature of packages - may include unsigned packages
unsigned -> <empty>
The signing intents are also defined by their restrictive order, which will be enforced when building layered images. For instance, consider the case of two images, X and Y. Y uses X as its parent image (FROM X). If image X was built with “beta” intent, image Y’s intent can only be “beta” or “unsigned”. If the dist-git repo for image Y has it configured to use “release” intent, this value will be downgraded to “beta” at build time.
Automatically downgrading the signing intent, instead of failing the build, is important for allowing a hierarchy of layered images to be built automatically by ImageChangeTriggers. For instance, with Continuous Integration in mind, a user may want to perform daily builds without necessarily requiring signed packages, while periodically also producing builds with signed content. In this case, the signing_intent in container.yaml can be set to release for all the images in the hierarchy. Whether or not the layered images in the hierarchy use signed packages can be controlled by simply overriding the signing intent of the topmost ancestor image. The signing intent of the layered images would then be automatically adjusted as needed.
In the case where multiple composes are used, the least restrictive intent is used. Continuing with our previous signing intent example, let’s say a container image build request uses two composes. Compose 1 was generated with no signing keys provided, and compose 2 was generated with “my-release-key”. In this case, the intent is “unsigned”.
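The “least restrictive wins” rule can be sketched as follows (with intents ordered from most to least restrictive, matching the example above):

```python
# Ordered from most restrictive to least restrictive,
# as in the release/beta/unsigned example above.
INTENT_ORDER = ["release", "beta", "unsigned"]

def effective_intent(compose_intents):
    """Return the least restrictive intent among the composes used."""
    return max(compose_intents, key=INTENT_ORDER.index)
```

For the two composes in the example, effective_intent(["unsigned", "release"]) yields "unsigned".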
Compose IDs can be passed to OSBS in a build request. If one or more compose IDs are provided, OSBS will classify the intent of the existing compose. This is done by inspecting the signing keys used for generating the compose and performing a reverse mapping to determine the signing intent. If a match cannot be determined, the build will fail. Note that if a given compose is expired or soon to expire, OSBS will automatically renew it.
The signing_intent specified in container.yaml can be overridden with the build parameter of the same name; the value provided via the build parameter takes precedence in that case. Note that the signing intent used by the compose of the parent image is still taken into account, which may lead to downgrading the signing intent for the layered image.
The Koji build metadata will contain a new key, build.extra.image.odcs.signing_intent_overridden, to indicate whether or not the signing_intent was overridden (CLI parameter, automatically downgraded, etc.). This value will only be true if build.extra.image.odcs.signing_intent does not match the signing_intent in container.yaml.
Base image builds¶
OSBS is able to create base images. It does so by creating a Koji image-build task, importing its output as a new container image, and then continuing to build using a Dockerfile that inherits from that imported image.
Each dist-git branch should have the following files:
Dockerfile
image-build.conf
kickstart.ks (or any .ks name, but must match what image-build.conf references)
The Dockerfile should start with “FROM koji/image-build” and continue with LABEL, CMD, and other instructions as needed.
The image-build.conf file should start with “[image-build]” and set the target (for the image-build task), distro, and ksversion. For example:
[image-build]
target = f30
distro = Fedora-30
ksversion = Fedora
The image-build task will need to know where to find the kickstart configuration; it finds this from the ‘ksurl’ and ‘kickstart’ parameters in image-build.conf. If these are absent from the file in dist-git, atomic-reactor will provide defaults:
kickstart: ‘kickstart.ks’
ksurl: the dist-git URL and commit hash used for the OSBS build
In this way, the kickstart configuration can be placed in the dist-git repository as ‘kickstart.ks’ alongside the Dockerfile and image-build.conf files, and the correct git URL and commit hash will be recorded in Koji when the image is built. This is the recommended way of providing a kickstart configuration for base images.
Alternatively, the kickstart configuration can be stored elsewhere (perhaps in another git repository), in which case a URL is needed. However, when doing this, please make sure to use a git commit hash in the ‘ksurl’ parameter instead of a symbolic name (e.g., a branch name); failing to do so means there will be no reliable way to discover the kickstart configuration used for the built image.
To execute a base image build, run:
fedpkg container-build --target=<target> --repo-url=<repo-url>
The --repo-url parameter specifies the URL to a repofile. The first section of this file is inspected and its ‘baseurl’ is examined to discover the compose URL. You can also use the --compose-id parameter to specify ODCS composes from which additional yum repos will be used.
Multistage builds¶
Often users may wish to build an image directly from project sources (rather than intermediate build artifacts), but not include the sources or toolchain necessary for compiling the project in the final image. Multistage builds are a simple solution.
Multistage refers to container image builds with at least two stages in the Dockerfile; initial stage(s) provide a build environment and produce some kind of artifact(s) which in the final stage are copied into a clean base image. The most obvious signature of a multistage build is that the Dockerfile has more than one “FROM” statement. For example:
FROM toolchain:latest AS builder1
ADD . .
RUN make artifact
FROM base:release
COPY --from=builder1 artifact /dest/
In most respects, multistage builds operate very similarly to multiple single-stage builds; the results from initial stage(s) are simply not tagged or used except by later COPY --from statements. Refer to the Docker multistage docs for complete details.
In OSBS, multistage builds require using the imagebuilder plugin, which can be configured as the system default or per-image in container.yaml.
In a multistage build, yum repositories are made available in all stages. The build may have multiple parent builds, as each stage may specify a different image. The parent images from initial stages are pulled and rewritten in the same way as the parent in the final stage (known as the “base image”). Note that ENV and LABEL entries from earlier stages do not affect later stages.
Note that the COPY --from=<image> form (with a full image specification as opposed to a stage alias) should not be used in OSBS builds. It works, but the image used is not treated as other parents are (rewritten, etc.). To achieve the same effect, specify such images with another stage, for example:
FROM registry.example.com/image:tag AS source1
FROM base
COPY --from=source1 src/ dest/
Operator manifests¶
OSBS is able to extract operator manifests from an operator image. This image should contain a /manifests directory, whose content can be extracted to koji for later distribution.
To activate the operator manifests extraction from the image, you must set a specific label in your Dockerfile to identify your build as either an operator bundle build or an appregistry build:
LABEL com.redhat.delivery.appregistry=true
LABEL com.redhat.delivery.operator.bundle=true
Only one of these labels (the appropriate one for your build) may be present, otherwise the build will fail.
When present (and set to true), this label triggers the atomic-reactor export_operator_manifests plugin. This plugin extracts the content from the /manifests directory in the built image and uploads it to koji. If the /manifests directory is either empty or not present in the image, the build will fail.
Since the operator manifests are not tied to any specific architecture, OSBS will decide from which worker build the manifests will be extracted (and make sure only a single platform uploads the archive to koji). If, for some reason, you need to select which platform will extract and upload the manifests archive, you can set the operator_manifests_extract_platform build param to the desired platform.
[Backward compatibility] If the build succeeds, the build.extra.operator_manifests_archive koji metadata will be set to the name of the archive containing the operator manifests (currently, operator_manifests.zip).
The operator manifests archive is uploaded to koji as a separate type: operator-manifests (currently with filename operator_manifests.zip).
Operator manifest bundle builds¶
This type of build is for the newer style of operator manifests targeting OpenShift 4.4 or higher. It is identified by the com.redhat.delivery.operator.bundle label.
To make OSBS cooperate on building your operator manifest bundle, you will need to set up the following:
Dockerfile
# Base needs to be scratch, and multi-stage builds are not allowed
FROM scratch
# Make this an operator bundle build
LABEL com.redhat.delivery.operator.bundle=true
# Does not matter where you keep your manifests in the repo, but in the
# final image, they need to be in /manifests
COPY my-manifests-dir/ /manifests
container.yaml (see operator_manifests in the container.yaml schema)
operator_manifests:
# Relative path to your manifests dir from root of repo
manifests_dir: my-manifests-dir
Operator manifest appregistry builds¶
This type of build is for the older style of operator manifests targeting OpenShift 4.3 or lower. It is identified by the com.redhat.delivery.appregistry label.
Details on how the operator manifest can be accessed from the application registry are stored in the koji build, in the build.extra.operator_manifests.appregistry section.
Manifests will not be pushed to the application registry for scratch builds, isolated builds, or re-builds, to prevent unwanted changes.
Inspecting built image components¶
It is possible to inspect the contents of an OSBS-built image from within the image container.
In addition to being able to do so with the package manager available in the image, if any (e.g., RPM through rpm -qa to list all the packages installed in the image), OSBS also makes sure the following artifacts are shipped within the image:
Dockerfiles¶
The Dockerfiles used to build the current container image and its parents are located in the /root/buildinfo directory.
Note that this is not necessarily the same Dockerfile provided by the user in the dist-git repository. OSBS makes changes to the Dockerfile and some of these changes may appear in these files whenever relevant. For instance, the FROM instruction may show the parent image digest instead of the repository and tag information.
Image Content Manifests¶
Image Content Manifests are JSON files shipped in OSBS built images with additional information on the contents shipped in the image.
The Image Content Manifest file is layer specific and is located under the /root/buildinfo/content_manifests directory. It is named after the image NVR, and it is validated against the JSON Schema that defines the Image Content Manifest.
Among the data available in the Image Content Manifest file (check the JSON Schema for further information), the most important are the image_layer_index, which points to the layer that introduced the components listed in that file (i.e., the most recent layer for that image), and:
Content Sets¶
The content_sets field lists the content sets listed in the git repository for the platform supported by the image. This attribute may differ for each platform the image was built for.
See Content Sets for further reference.
Extra contents¶
The image_contents field lists the non-RPM contents fetched from Cachito (see Fetching source code from external source using cachito) and the middleware contents fetched using fetch-artifacts-pnc.yaml that were used during the image build and made available in the image.
For additional information on how to navigate through these contents, refer to the Image Content Manifest JSON Schema.
Building Source Container Images¶
OSBS is able to build a source container image from a particular koji build previously created by OSBS. To create a source container build, you have to specify either the koji N-V-R or the build ID of the image build you want to create a source container image for.
When a koji build uses the lookaside cache, it may include all sorts of content about which no source information can be obtained; in that case, the source container build will fail.
Under the hood, the BSI project is used to generate source images from sources identified and collected by OSBS. Please note that the BSI script must be available in the OSBS buildroot as a bsi executable in $PATH.
Current limitations:
only Source RPMs and sources fetched through Cachito integration are added into source container image
only koji internal RPMs are supported
Support for other types of sources and external builds will be added in the future.
Signing intent resolution¶
Resolution of the signing intent is done in the following order:
signing intent specified via CLI params (koji, osbs-client),
otherwise the signing intent is taken from the original image build,
if undefined, the default signing intent from the odcs configuration section in reactor-config-map is used.
If ODCS integration is disabled, unsigned packages are allowed by default.
Koji integration¶
Koji integration must be enabled for building source container images. A source container build requires metadata stored in koji builds and the koji database of RPM builds, which the source container build uses to look up sources.
Source container builds use a different task type: buildSourceContainer.
Koji Build Metadata Integration¶
Source container build uses metadata from specified image build in the following manner:
name: the suffix -source is appended to the original name (ubi8-container is transformed to ubi8-container-source)
version: the value is the same as the original image build
release: a suffix .X is appended to the original release value, where X is a sequential integer starting from 1, increased by OSBS for each source image rebuild.
For example, from N-V-R ubi8-container-8.1-20 OSBS creates source container build ubi8-container-source-8.1-20.1.
The original image N-V-R is stored in the extra.image.sources_for_nvr attribute in the koji source container build metadata.
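The N-V-R transformation above can be sketched as:

```python
def source_build_nvr(name, version, release, rebuild=1):
    """Derive the source container build NVR from the original image
    NVR, per the rules above: append -source to the name and a
    sequential .X suffix to the release."""
    return f"{name}-source-{version}-{release}.{rebuild}"
```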
Building source container images using koji¶
Using the koji client CLI directly, you have to specify either the koji build NVR or the build ID:
koji source-container-build <target> --koji-build-nvr=NVR --koji-build-id=ID
For a full list of options:
koji source-container-build --help
Building source container images using osbs-client¶
Please note that mainly the koji and fedpkg commands should be used for building container images instead of direct osbs-client calls.
To execute build via osbs-client CLI use:
osbs build-source-container -c <component> -u <username> --sources-for-koji-build-nvr=N-V-R --sources-for-koji-build-id=ID
To see full list of options execute:
osbs build-source-container --help
To see all osbs-client subcommands execute:
osbs --help
Please note that osbs-client must be configured properly using the config file /etc/osbs.conf. Please refer to the osbs-client configuration section for configuration examples.