--- /dev/null
+ceph-build
+==========
+A repository of build scripts and job definitions for Ceph (and Ceph-related
+projects) so that they can be automatically configured in Jenkins.
+
+The repo is in *transition* from standalone scripts to a properly structured
+layout with a directory for each project.
+
+The structure is strict and provides a convention to set the order of execution
+of build scripts.
+
+Job configuration is done with the CLI app `Jenkins Job Builder
+<http://ci.openstack.org/jenkins-job-builder/>`_, which has `its own job
+<http://jenkins.ceph.com/job/jenkins-job-builder/>`_ in this repo, so its
+definition and build process are themselves automated.
+
+The JJB configuration defines the rules needed to generate and create/update
+all other Jenkins jobs in this repo, as long as each job provides a valid
+YAML file in its ``config/definitions`` directory. That YAML file should
+contain all the rules and requirements needed to generate the Jenkins
+configuration for the job.
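For instance, a minimal definition that JJB can consume from
``config/definitions`` might look like the following sketch (the ``foo``
project and its single build script are hypothetical; real jobs set many
more fields):

```yaml
# foo/config/definitions/foo.yml -- hypothetical minimal definition
- job:
    name: foo
    display-name: 'foo'
    builders:
      # Pull the build script in from this repo's directory structure
      - shell: !include-raw ../../build/build
```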
+
+Deprecation
+-----------
+All scripts at the top level of this repo have been removed and temporarily
+placed in the ``deprecated`` branch. If a job requires any of them, it should
+be ported to follow the structure of the Jenkins Job Builder project, like
+all the current jobs in this repository.
+
+The ``deprecated`` branch will be removed by the end of 2018.
+
+Any jobs removed from this repo will be automatically deleted by JJB.
+
+Enforcement
+-----------
+The rules and structure for the builds are *strictly* enforced. If the
+convention is not followed, the builds will not work.
+
+Editing jobs directly in the Jenkins UI is **strongly** discouraged. Changes
+made in the UI are not guaranteed to persist and will probably be overwritten
+the next time JJB runs.
+
+This is what the directory tree looks like for a project called ``foo`` that
+uses every available option::
+
+ foo
+ ├── config
+ │   ├── config
+ │   └── definitions
+ │       └── foo.yml
+ ├── setup
+ │   ├── setup
+ │   ├── post
+ │   └── pre
+ └── build
+     ├── build
+     ├── post
+     └── pre
+
+This structure has two directories of scripts (``setup`` and ``build``) and
+one for configuration (``config``). The scripts should be included in the
+``foo.yml`` file in whatever order the job requires.
+
+For example, this is how it could look in the ``builders`` section for its
+configuration::
+
+ builders:
+ # Setup scripts
+ - shell: !include-raw ../../setup/pre
+ - shell: !include-raw ../../setup/setup
+ - shell: !include-raw ../../setup/post
+ # Build scripts
+ - shell: !include-raw ../../build/pre
+ - shell: !include-raw ../../build/build
+ - shell: !include-raw ../../build/post
+
+These scripts will be added to the Jenkins server so that they can be executed
+as part of a job.
+
+Job Naming Conventions
+----------------------
+Each Jenkins job has two names:
+
+1. The main name for a job. This is the ``name:`` parameter in YAML.
+
+2. The human-friendly "display name" for a job. This is the ``display-name:``
+ parameter in YAML.
+
+For regular jobs, we name the Jenkins job after the git repository name. For
+example, the "ceph-deploy" package is at https://github.com/ceph/ceph-deploy,
+so the job name is "ceph-deploy".
+
+For Pull Request jobs, we use a similar convention for both the internal job
+name and the human readable "display name". For example, if the git repository
+is "ceph-deploy", then we name the Jenkins job ``ceph-deploy-pull-requests``.
+The ``display-name`` is set to ``ceph-deploy: Pull Requests``. That is, to
+determine the ``display-name`` for a job that handles pull requests, simply
+append ``: Pull Requests`` to the repository name.
+
+Putting it together, for building pull requests to ceph-deploy, the Jenkins
+job YAML will have the following settings:
+
+* Git repo: https://github.com/ceph/ceph-deploy
+
+* Jenkins job ``name``: ``ceph-deploy-pull-requests``
+
+* Jenkins job ``display-name``: ``ceph-deploy: Pull Requests``
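As a sketch, those naming settings appear in the job's YAML like this (other
required fields omitted):

```yaml
- job:
    name: ceph-deploy-pull-requests
    display-name: 'ceph-deploy: Pull Requests'
```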
+
+
+Scripts
+-------
+Scripts that may hang should use the ``timeout`` command::
+
+ timeout 600 ./bad-script.sh
+
+The above command terminates the script after ten minutes (the argument is
+in seconds).
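When GNU ``timeout`` kills a command, it exits with status 124, which a job
script can check. A minimal sketch, with ``sleep`` standing in for a hanging
script:

```shell
# 'sleep 5' stands in for a script that hangs; cap it at one second.
# GNU timeout exits with status 124 when the time limit is hit.
status=0
timeout 1 sleep 5 || status=$?
if [ "$status" -eq 124 ]; then
    echo "timed out"
else
    echo "finished with status $status"
fi
```

This prints ``timed out``, so the job can fail fast with a clear message
instead of hanging until Jenkins' own job timeout fires.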
+
+Pull Request Jobs
+-----------------
+When configuring a new job that will build pull requests, you must also
+configure GitHub's repository to notify Jenkins of new pull requests.
+
+#. In GitHub's web interface, click the "Settings" button for your repository.
+
+#. Click the "Webhooks & Services" link in the "Options" menu on the left.
+
+#. Under the "Webhooks" section, set the "Payload URL" to
+ ``http://jenkins.ceph.com/ghprbhook/``.
+
+#. Click the "Content type" dropdown and select
+ ``application/x-www-form-urlencoded``.
+
+#. For the question "Which events would you like to trigger this webhook?",
+ select the ``Let me select individual events.`` radio, and check the ``Pull
+ Request`` and ``Issue comment`` boxes.
+
+#. Click the green "Update Webhook" button to save your changes.
+
+On the Jenkins side, you should set up the job's GitHub project URL like so::
+
+ - job:
+ name: jenkins-slave-chef-pull-requests
+
+ ...
+
+ properties:
+ - github:
+ url: https://github.com/ceph/jenkins-slave-chef
+
+This will tell the Jenkins GitHub Pull Requests plugin that it should
+associate the incoming webhooks with this particular job.
+
+You should also use the ``triggers`` setting for the job, like so::
+
+ - job:
+ name: jenkins-slave-chef-pull-requests
+
+ ...
+
+ triggers:
+ - github-pull-request:
+ cron: '* * * * *'
+ admin-list:
+ - alfredodeza
+ - ktdreyer
+ org-list:
+ - ceph
+ trigger-phrase: 'retest this please'
+ only-trigger-phrase: false
+ github-hooks: true
+ permit-all: false
+ auto-close-on-fail: false
+
+"Document" Jobs
+---------------
+Some jobs don't actually run code; they simply build a project's documentation
+and upload the docs to ceph.com. One example is the "teuthology-docs-build"
+job.
+
+For these jobs, note that the destination directory must be created on the
+ceph.com web server before the ``rsync`` command will succeed.
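The failure mode can be reproduced locally: ``rsync`` will create the final
path component of the destination, but not any missing parent directories.
A sketch using temporary directories:

```shell
# rsync creates the last path component but not missing parents.
src=$(mktemp -d); dst=$(mktemp -d)
echo hi > "$src/index.html"

# One missing level: rsync creates "docs" itself and succeeds.
rsync -a "$src/" "$dst/docs/" && echo "one level: ok"

# Two missing levels: the parent "deep" does not exist, so rsync fails.
rsync -a "$src/" "$dst/deep/docs/" 2>/dev/null || echo "two levels: failed"
```

Hence the destination tree has to be prepared on the web server (e.g. with
``mkdir -p``) before the job's ``rsync`` runs.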
+
+Polling and GitHub
+------------------
+Jenkins can periodically poll Git repos on github.com for changes, but this
+is slow and inefficient. It's best to use GitHub's webhooks instead.
+
+See the "jenkins-job-builder" job as an example.
+
+1. Set up the ``triggers`` section::
+
+ triggers:
+ - github
+
+2. Visit the GitHub repository's "settings" page, e.g.
+   https://github.com/ceph/ceph-build/settings/hooks, and add a new web hook.
+
+ - The Payload URL should be ``https://jenkins.ceph.com/github-webhook/``
+ (note the trailing slash)
+ - The ``Content type`` should be ``application/x-www-form-urlencoded``
+ - ``Secret`` should be blank
+ - Select ``Just send the push event``.
+
+Testing JJB changes by hand, before merging to main
+---------------------------------------------------
+
+Sometimes it's useful to test a JJB change by hand prior to merging a pull
+request.
+
+1. Install ``jenkins-job-builder`` on your local computer.
+
+2. Create ``$HOME/.jenkins_jobs.ini`` on your local computer::
+
+ [jenkins]
+ user=ktdreyer
+ password=a8b767bb9cf0938dc7f40603f33987e5
+ url=https://jenkins.ceph.com/
+
+Where ``user`` is your Jenkins (i.e. GitHub) account username and ``password``
+is your Jenkins API token. (Your API token can be found on your user
+configuration page, e.g. https://jenkins.ceph.com/user/ktdreyer/configure.)
+
+3. Switch to the Git branch with the JJB changes that you wish to test::
+
+ git checkout <branch with your changes>
+
+Let's say this git branch makes a change in the ``my-cool-job`` job.
+
+4. Run JJB to test the syntax of your changes::
+
+ jenkins-jobs --conf ~/.jenkins_jobs.ini test my-cool-job/config/definitions/my-cool-job.yml
+
+ If everything goes ok, this will cause JJB to output the XML of your job(s).
+ If there is a problem, JJB will print an error/backtrace instead.
+
+5. Run JJB to push your changes live to the job on the main Jenkins
+   controller::
+
+ jenkins-jobs --conf ~/.jenkins_jobs.ini update my-cool-job/config/definitions/my-cool-job.yml
+
+6. Run a throwaway build with your change, and verify that your change didn't
+ break anything and does what you want it to do.
+
+(Note: if anyone merges anything to main during this time, Jenkins will reset
+all jobs to the state of the main branch, and your customizations will be
+wiped out. This "by-hand" testing procedure is only intended for short-lived
+tests.)
+
+Assigning a job to a different Jenkins Master
+---------------------------------------------
+
+We found one Jenkins controller wasn't enough to handle all the jobs we were
+demanding of it. The CI now supports multiple Jenkins controllers. If you wish to
+run your job on a different Jenkins controller:
+
+1. Create a ``config/JENKINS_URL`` file in your job directory containing only
+ the FQDN of the target Jenkins controller::
+
+ # Example
+ $ cat my-cool-job/config/JENKINS_URL
+ 2.jenkins.ceph.com
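For instance, assigning the hypothetical ``my-cool-job`` to the second
controller is just a matter of writing the marker file:

```shell
# Work in a scratch directory; in practice this happens in your
# ceph-build checkout.
cd "$(mktemp -d)"

# Create the marker file that selects the target Jenkins controller.
mkdir -p my-cool-job/config
echo "2.jenkins.ceph.com" > my-cool-job/config/JENKINS_URL
cat my-cool-job/config/JENKINS_URL
```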
+
+A note on inclusive language
+----------------------------
+
+Like many software projects, the Ceph project has undertaken the task of
+migrating to more inclusive language. In the Ceph CI:
+
+* ``master`` branches are now ``main``
+* ``slave`` is now ``builder``
+* when referring to the main Jenkins server, ``master`` is now ``controller``
+
+Remaining references (like the Jenkins ``ssh-slaves`` plugin) are hardcoded
+and could not be changed.
--- /dev/null
+# these should never make it to version control
+files/ssl/dev
+files/ssl/prod
+tmp
+
+# Top level YAML files are ignored, so that users can simply copy
+# from examples/* to the top level dir and not worry about committing them
+/*.yml
--- /dev/null
+[defaults]
+callback_plugins = callbacks
+retry_files_enabled = False
+stdout_callback=debug
+
+[ssh_connection]
+pipelining=True
+
--- /dev/null
+---
+## Instead of trying to keep 4 separate playbooks up to date, let's do it all here.
+## The only difference from using multiple playbooks is we need to specify `-e libvirt=true` and/or `-e permanent=true` if the builder will be permanent/static.
+## Tested on: CentOS 7, CentOS 8, Xenial, Bionic, Focal, Leap 15.1 using ansible 2.8.5
+##
+## Example:
+## define labels in inventory "jenkins_labels" dict, keyed by fqdn
+##
+## ansible-playbook -vvv -M ./library/ builder.yml -e '{"api_uri": "https://jenkins.ceph.com"}' --limit braggi01*
+##
+## secrets files jenkins.ceph.com.apitoken.yml and 2.jenkins.ceph.com.apitoken.yml must
+## exist in ANSIBLE_SECRETS_PATH
+
+- hosts: all
+ vars:
+ venv: "/home/{{ ansible_user }}/.venv"
+ tasks:
+ - name: install python3-venv deb
+ apt:
+ name: python3-venv
+ state: latest
+ update_cache: yes
+ when: ansible_os_family == "Debian"
+ become: true
+
+ - name: "clean up any existing {{ venv }}"
+ ansible.builtin.file:
+ path: "{{ venv }}"
+ state: absent
+ tags: always
+
+ - name: Create "{{ venv }}"
+ ansible.builtin.command: "python3 -mvenv {{ venv }}"
+ args:
+ creates: "{{ venv }}/bin/python"
+ tags: always
+
+ - name: Install python-jenkins, six
+ ansible.builtin.command:
+ cmd: "{{ venv }}/bin/pip install python-jenkins six"
+ become_user: "{{ ansible_user }}"
+ tags: always
+
+ - name: switch python interpreters
+ set_fact:
+ ansible_python_interpreter: "{{ venv }}/bin/python"
+ tags: always
+
+- hosts: all
+ become: true
+ user: ubuntu # This should be overridden on the CLI (e.g., -e user=centos). It doesn't matter on a mita/prado builder because the playbook is run locally by root.
+ vars:
+ libvirt: false # Should vagrant be installed?
+ permanent: false # Is this a permanent builder? Since the ephemeral (non-permanent) tasks get run more often, we'll default to false.
+ jenkins_user: 'jenkins-build'
+ api_user: 'ceph-jenkins'
+ api_uri: 'https://jenkins.ceph.com'
+ jenkins_credentials_uuid: 'jenkins-build'
+ nodename: '{{ nodename }}'
+ label: "{{ jenkins_labels[inventory_hostname] }}"
+ grant_sudo: true
+ osc_user: 'username'
+ osc_pass: 'password'
+ container_mirror: 'docker-mirror.front.sepia.ceph.com:5000'
+ secrets_path: "{{ lookup('env', 'ANSIBLE_SECRETS_PATH') | default('/etc/ansible/secrets', true) }}"
+ java_version: 'java-21'
+
+ tasks:
+ - name: "Include appropriate jenkins API token"
+ # sets 'token'
+ include_vars: "{{ secrets_path | mandatory }}/{{ api_uri | replace('https://', '')}}.apitoken.yml"
+ no_log: true
+ tags:
+ always
+
+ # Sometimes, builders would connect to Jenkins and try to run an apt transaction right away. Except apt-daily/unattended-upgrades has the dpkg lock so the Jenkins job would fail.
+ - name: Uninstall unattended-upgrades to avoid conflicts
+ package:
+ name: unattended-upgrades
+ state: absent
+ when: ansible_os_family == "Debian"
+ ignore_errors: yes
+
+ - name: Update package cache (Debian)
+ apt:
+ update_cache: yes
+ cache_valid_time: 3600
+ dpkg_options: "force-confold,force-confdef"
+ lock_timeout: 300
+ when: ansible_os_family == "Debian"
+ timeout: 300
+ register: apt_update
+
+ - name: Upgrade all packages (Debian)
+ apt:
+ upgrade: yes
+ autoclean: yes
+ autoremove: yes
+ dpkg_options: "force-confold,force-confdef"
+ lock_timeout: 300
+ when: ansible_os_family == "Debian"
+ timeout: 600
+
+ - name: Update and upgrade all packages (RedHat EL8)
+ yum:
+ name: '*'
+ state: latest
+ update_cache: yes
+ when:
+ - ansible_os_family == "RedHat"
+ - ansible_distribution_major_version|int <= 8
+ tags: always
+
+ - name: Update and upgrade all packages (RedHat EL9)
+ dnf:
+ name: '*'
+ state: latest
+ update_cache: yes
+ when:
+ - ansible_os_family == "RedHat"
+ - ansible_distribution_major_version|int == 9
+ tags: always
+
+ - name: Update and upgrade all packages (Suse)
+ zypper:
+ name: '*'
+ state: latest
+ update_cache: yes
+ when: ansible_os_family == "Suse"
+ tags: always
+
+ ## DEFINE PACKAGE LISTS BELOW
+ # Universal DEBs
+ - set_fact:
+ universal_debs:
+ - git
+ - fakeroot
+ - debhelper
+ - reprepro
+ - devscripts
+ - pbuilder
+ - pkg-config
+ - libtool
+ - autotools-dev
+ - automake
+ - libssl-dev
+ - libffi-dev
+ - default-jdk
+ - default-jre
+ - openjdk-21-jdk
+ - debian-keyring
+ - debian-archive-keyring
+ - software-properties-common
+ # jenkins-job-builder job:
+ - libyaml-dev
+ - jq
+ - tmpreaper
+ - podman
+ tmp_cleaner_name: tmpreaper
+ tmp_cleaner_args: "--runtime=0 14d /tmp/"
+ when: ansible_os_family == "Debian"
+ tags: always
+
+ # Libvirt DEBs (Bionic and older)
+ - set_fact:
+ libvirt_debs:
+ - qemu-kvm
+ - libvirt-bin
+ - libvirt-dev
+ - vagrant
+ when:
+ - ansible_os_family == "Debian"
+ - ansible_distribution_major_version|int <= 18
+ - libvirt|bool
+ tags: always
+
+ # Libvirt DEBs (Focal and newer)
+ - set_fact:
+ libvirt_debs:
+ - qemu-kvm
+ - libvirt-daemon-system
+ - libvirt-clients
+ - libvirt-dev
+ - vagrant
+ when:
+ - ansible_os_family == "Debian"
+ - ansible_distribution_major_version|int >= 20
+ - libvirt|bool
+ tags: always
+
+ # python2 DEBs
+ - set_fact:
+ python2_debs:
+ - python
+ - python-dev
+ - python-pip
+ - python-virtualenv
+ when:
+ - ansible_os_family == "Debian"
+ - ansible_distribution_major_version|int <= 18
+ tags: always
+
+ # python3 DEBs (We only install python2 *and* python3 on Bionic)
+ - set_fact:
+ python3_debs:
+ - python3
+ - python3-dev
+ - python3-pip
+ - python3-venv
+ - python3-virtualenv
+ when:
+ - ansible_os_family == "Debian"
+ - ansible_distribution_major_version|int >= 18
+ tags: always
+
+ # chroot DEBs (Xenial and older)
+ - set_fact:
+ chroot_deb: dchroot
+ when:
+ - ansible_os_family == "Debian"
+ - ansible_distribution_major_version|int <= 16
+ tags: always
+
+ # chroot DEBs (Bionic and newer)
+ - set_fact:
+ chroot_deb: schroot
+ when:
+ - ansible_os_family == "Debian"
+ - ansible_distribution_major_version|int >= 18
+ tags: always
+
+ # Universal RPMs
+ - set_fact:
+ universal_rpms:
+ - wget
+ - createrepo
+ - java-21-openjdk
+ - git
+ - libtool
+ #- rpm-sign
+ - autoconf
+ - automake
+ - cmake
+ - binutils
+ - bison
+ - flex
+ - gcc
+ - gcc-c++
+ - gettext
+ - libtool
+ - make
+ - patch
+ - pkgconfig
+ - redhat-rpm-config
+ - rpm-build
+ - rpmdevtools
+ - openssl-devel
+ - libffi-devel
+ - tmpwatch
+ tmp_cleaner_name: tmpwatch
+ tmp_cleaner_args: "14d /tmp/"
+ when: ansible_os_family == "RedHat"
+ tags: always
+
+ # Libvirt RPMs
+ - set_fact:
+ libvirt_rpms:
+ - qemu-kvm
+ - libvirt-devel
+ - libguestfs
+ - libvirt
+ - libguestfs-tools
+ - vagrant
+ when:
+ - ansible_os_family == "RedHat"
+ - libvirt|bool
+ tags: always
+
+ # EL7 RPMs
+ - set_fact:
+ epel_rpms:
+ - jq
+ - python-pip
+ - python-devel
+ - python-virtualenv
+ - mock
+ - docker
+ container_service_name: docker
+ container_certs_path: "/etc/docker/certs.d/{{ container_mirror }}"
+ when:
+ - ansible_os_family == "RedHat"
+ - ansible_distribution_major_version|int <= 7
+ tags: always
+
+ # EL8 RPMs
+ - set_fact:
+ epel_rpms:
+ - jq
+ - python3-pip
+ - python3-devel
+ - python3-virtualenv
+ - mock
+ - podman
+ container_service_name: podman
+ container_certs_path: "/etc/containers/certs.d/{{ container_mirror }}"
+ hackery_packages:
+ - gcc
+ - libguestfs-tools-c
+ - libvirt
+ - libvirt-devel
+ - libxml2-devel
+ - libxslt-devel
+ - make
+ - ruby-devel
+ when:
+ - ansible_os_family == "RedHat"
+ - ansible_distribution_major_version|int == 8
+ tags: always
+
+ # EL9 RPMs
+ - set_fact:
+ epel_rpms:
+ - jq
+ - python3-pip
+ - python3-devel
+ - podman
+ - skopeo
+ container_service_name: podman
+ container_certs_path: "/etc/containers/certs.d/{{ container_mirror }}"
+ hackery_packages:
+ - gcc
+ - libguestfs-tools-c
+ - libvirt
+ - libvirt-devel
+ - libxml2-devel
+ - libxslt-devel
+ - make
+ - ruby-devel
+ when:
+ - ansible_os_family == "RedHat"
+ - ansible_distribution_major_version|int == 9
+ tags: always
+
+ # This package removed in EL9
+ # This has to be a "list" otherwise it gets rendered as an empty string and the yum ansible module doesn't like that.
+ - set_fact:
+ lsb_package:
+ - redhat-lsb-core
+ when:
+ - ansible_os_family == "RedHat"
+ - ansible_distribution_major_version|int <= 8
+ tags: always
+
+ # OpenSUSE RPMs
+ - set_fact:
+ zypper_rpms:
+ - autoconf
+ - automake
+ - binutils
+ - bison
+ - cmake
+ - ccache
+ - createrepo
+ - flex
+ - gcc
+ - gcc-c++
+ - gettext-runtime
+ - git
+ - java-1_8_0-openjdk
+ - jq
+ - libffi-devel
+ - libopenssl-devel
+ - libtool
+ - lsb-release
+ - make
+ - patch
+ - pkg-config
+ - python2-pip
+ - python2-virtualenv
+ - python3-pip
+ - python3-virtualenv
+ - rpm-build
+ - rpmdevtools
+ - tig
+ - wget
+ # obs requirements
+ - osc
+ - build
+ tmp_cleaner_name: tmpwatch
+ tmp_cleaner_args: "14d /tmp/"
+ when: ansible_os_family == "Suse"
+ tags: always
+
+ # OpenSUSE Libvirt RPMs (We've never tried this to date so more might be needed)
+ - set_fact:
+ zypper_libvirt_rpms:
+ - libvirt
+ - libvirt-devel
+ - qemu
+ - kvm
+ - vagrant
+ - ruby-devel
+ when:
+ - ansible_os_family == "Suse"
+ - libvirt|bool
+ tags: always
+
+ ## Let's make sure we don't accidentally set up a permanent builder from Sepia as ephemeral
+ - set_fact:
+ permanent: true
+ with_items: "{{ ansible_all_ipv4_addresses }}"
+ when: "item.startswith('172.21.') or item.startswith('8.43')"
+ tags: always
+
+ ## Let's make sure nodename gets set using our Sepia hostnames if the builder's in Sepia
+ - set_fact:
+ nodename: "{{ ansible_hostname }}"
+ with_items: "{{ ansible_all_ipv4_addresses }}"
+ when: "item.startswith('172.21.') or item.startswith('8.43')"
+ tags: always
+
+  ## EPHEMERAL BUILDER TASKS
+  # We would occasionally have issues with name resolution on ephemeral builders
+  # so we force them to use Google's DNS servers. This has to be done before
+  # package-related tasks to avoid communication errors with various repos.
+  - name: Ephemeral Builder Tasks
+ block:
+ - name: Uninstall resolvconf on Ubuntu to manually manage resolv.conf
+ apt:
+ name: resolvconf
+ state: absent
+ when: ansible_os_family == "Debian"
+
+ - name: Check for NetworkManager conf
+ stat:
+ path: /etc/NetworkManager/NetworkManager.conf
+ register: nm_conf
+
+ - name: Tell NetworkManager to leave resolv.conf alone on CentOS
+ lineinfile:
+ dest: /etc/NetworkManager/NetworkManager.conf
+ regexp: '^dns='
+ line: 'dns=none'
+ state: present
+ when: ansible_os_family == "RedHat" and nm_conf.stat.exists
+
+ - name: Tell dhclient to leave resolv.conf alone on Ubuntu
+ lineinfile:
+ dest: /etc/dhcp/dhclient.conf
+ regexp: 'prepend domain-name-servers'
+ line: 'supersede domain-name-servers 8.8.8.8;'
+ state: present
+ when: ansible_os_family == "Debian"
+
+ - name: Use Google DNS for name resolution
+ lineinfile:
+ dest: /etc/resolv.conf
+ regexp: '^nameserver'
+ line: 'nameserver 8.8.8.8'
+ state: present
+
+ - name: Set Hostname with hostname command
+ hostname:
+ name: "ceph-builders"
+ when: ansible_os_family != "Suse"
+
+ # https://github.com/ansible/ansible/issues/42726
+ - name: Set Hostname on OpenSUSE Leap
+ command: 'hostname ceph-builders'
+ when: ansible_os_family == "Suse"
+
+ - name: Ensure that 127.0.1.1 is present with an actual hostname
+ lineinfile:
+ backup: yes
+ dest: /etc/hosts
+ line: '127.0.1.1 ceph-builders'
+
+ - name: Update etc cloud templates for debian /etc/hosts
+ lineinfile:
+ backup: yes
+ dest: /etc/cloud/templates/hosts.debian.tmpl
+ line: '127.0.1.1 ceph-builders'
+ when: ansible_os_family == "Debian"
+
+ - name: Update /etc/cloud templates for Red Hat /etc/hosts
+ lineinfile:
+ backup: yes
+ dest: /etc/cloud/templates/hosts.redhat.tmpl
+ line: '127.0.1.1 ceph-builders'
+ failed_when: false
+ when: ansible_os_family == "RedHat"
+
+ - name: Update /etc/cloud templates for Suse /etc/hosts
+ lineinfile:
+ backup: yes
+ dest: /etc/cloud/templates/hosts.suse.tmpl
+ line: '127.0.1.1 ceph-builders'
+ failed_when: false
+ when: ansible_os_family == "Suse"
+
+ - name: Stop and disable daily apt activities
+ command: "{{ item }}"
+ with_items:
+ - systemctl stop apt-daily.timer
+ - systemctl disable apt-daily.timer
+ - systemctl disable apt-daily.service
+ - systemctl daemon-reload
+ when: ansible_os_family == "Debian"
+ # Just in case. This isn't a super important task and might not even be required.
+ ignore_errors: true
+ when: not permanent|bool
+
+ ## VAGRANT REPO TASKS (for libvirt builders)
+ # vagrant doesn't have repositories, this chacra repo will be better to have
+ # around and can get updates as soon as a new vagrant version is published via
+ # chacractl
+ - name: Vagrant/Libvirt Repo Tasks
+ block:
+ - name: Add our vagrant DEB repository
+ apt_repository:
+ repo: "deb [trusted=yes] https://chacra.ceph.com/r/vagrant/latest/HEAD/ubuntu/{{ ansible_distribution_release }}/flavors/default/ {{ ansible_distribution_release }} main"
+ state: present
+ when: ansible_os_family == "Debian"
+
+ - name: Add our vagrant RPM repository
+ yum_repository:
+ name: vagrant
+ description: self-hosted vagrant repo
+ # Although this is a 'CentOS7' repo, the vagrant RPM is OS-version agnostic
+ baseurl: "https://chacra.ceph.com/r/vagrant/latest/HEAD/centos/7/flavors/default/x86_64/"
+ enabled: yes
+ gpgcheck: no
+ when: ansible_os_family == "RedHat"
+ when: libvirt|bool
+
+ ## PACKAGE INSTALLATION TASKS
+ # We do this in one big task to save time and avoid using `with` loops. If a variable isn't defined, it's fine because of the |defaults.
+ - name: Install DEBs
+ apt:
+ name: "{{ universal_debs + libvirt_debs|default([]) + python2_debs|default([]) + python3_debs|default([]) + [ chroot_deb|default('') ] }}"
+ state: latest
+ update_cache: yes
+ when: ansible_os_family == "Debian"
+
+ - name: Install EPEL repo
+ yum:
+ name: epel-release
+ state: latest
+ when:
+ - ansible_os_family == "RedHat"
+ - ansible_distribution_major_version|int <= 8
+
+ - name: Install RPMs without EPEL
+ yum:
+ name: "{{ universal_rpms + libvirt_rpms|default([]) + lsb_package|default([]) }}"
+ state: present
+ disablerepo: epel
+ when: ansible_os_family == "RedHat"
+
+ - name: Install RPMs with EPEL
+ yum:
+ name: "{{ epel_rpms|default([]) }}"
+ state: latest
+ enablerepo: epel
+ when: ansible_os_family == "RedHat"
+
+ - name: Install Suse RPMs
+ zypper:
+ name: "{{ zypper_rpms + zypper_libvirt_rpms|default([]) }}"
+ state: latest
+ update_cache: yes
+ when: ansible_os_family == "Suse"
+
+ ## PODMAN TASKS
+ - name: Check if jenkins-build exists in /etc/subuid
+ command: grep '^jenkins-build:' /etc/subuid
+ register: subuid_check
+ ignore_errors: yes
+ changed_when: false
+
+ - name: Check if jenkins-build exists in /etc/subgid
+ command: grep '^jenkins-build:' /etc/subgid
+ register: subgid_check
+ ignore_errors: yes
+ changed_when: false
+
+ - name: Find highest used subuid
+ shell: "awk -F: '{print $2+$3}' /etc/subuid | sort -n | tail -1"
+ register: highest_subuid
+ changed_when: false
+
+ - name: Set next available UID range
+ set_fact:
+      # int(99999) falls back to 99999 when /etc/subuid is empty, so the
+      # range starts at 100000; the old `| default(100000)` never applied
+      # because `"" | int` already yields 0.
+      new_uid: "{{ (highest_subuid.stdout | int(99999)) + 1 }}"
+
+ - name: Add jenkins-build to /etc/subuid
+ lineinfile:
+ path: /etc/subuid
+ line: "jenkins-build:{{ new_uid }}:65536"
+ create: yes
+ when: subuid_check.rc != 0
+
+ - name: Add jenkins-build to /etc/subgid
+ lineinfile:
+ path: /etc/subgid
+ line: "jenkins-build:{{ new_uid }}:65536"
+ create: yes
+ when: subgid_check.rc != 0
+
+ ## JENKINS USER TASKS
+ - set_fact:
+ jenkins_groups:
+ - "{{ jenkins_user }}"
+ - libvirtd
+ when:
+ - ansible_os_family == "Debian"
+ - ansible_distribution_version == '16.04'
+ - libvirt|bool
+
+ # The group name changed to 'libvirt' in Ubuntu 16.10 and is already 'libvirt' everywhere else
+ - set_fact:
+ jenkins_groups:
+ - "{{ jenkins_user }}"
+ - libvirt
+ when:
+ - not (ansible_os_family == "Debian" and ansible_distribution_version == '16.04')
+ - libvirt|bool
+
+ - name: "Create a {{ jenkins_user }} group"
+ group:
+ name: "{{ jenkins_user }}"
+ state: present
+
+ - name: "Create a {{ jenkins_user }} user"
+ user:
+ name: "{{ jenkins_user }}"
+ # This will add to the jenkins_user and appropriate libvirt group if jenkins_groups is defined.
+ # Otherwise, default to just adding to {{ jenkins_user }} group.
+ groups: "{{ jenkins_groups|default(jenkins_user) }}"
+ state: present
+      comment: "Jenkins Build User"
+
+ - name: "Add {{ jenkins_user }} to mock group"
+ user:
+ name: "{{ jenkins_user }}"
+ groups: mock
+ append: yes
+ when:
+ - ansible_os_family == "RedHat"
+ - ansible_distribution_major_version|int <= 8
+
+ - name: "loginctl enable-linger {{ jenkins_user }}"
+ command: "loginctl enable-linger {{ jenkins_user }}"
+
+ - name: "Create a {{ jenkins_user }} home directory"
+ file:
+ path: "/home/{{ jenkins_user }}/"
+ state: directory
+ owner: "{{ jenkins_user }}"
+
+ - name: Create .ssh directory
+ file:
+ path: "/home/{{ jenkins_user }}/.ssh"
+ state: directory
+ owner: "{{ jenkins_user }}"
+
+ # On a mita/prado provisioned builder, everything gets put into a 'playbook' dir.
+ # Otherwise it can be found in files/ssh/...
+ - set_fact:
+ jenkins_key_file: "{{ lookup('first_found', key_locations, errors='ignore') }}"
+ vars:
+ key_locations:
+ - "{{ playbook_dir }}/../files/ssh/keys/jenkins_build.pub"
+
+ - name: get jenkins_key from key file if found
+ set_fact:
+ jenkins_key: "{{ lookup('file', jenkins_key_file) }}"
+ when: jenkins_key_file != ""
+
+ # And worst case scenario, we just pull the key from github.
+ - name: Set the jenkins key string from github if necessary
+ set_fact:
+      jenkins_key: "{{ lookup('url', 'https://raw.githubusercontent.com/ceph/ceph-build/main/ansible/files/ssh/keys/jenkins_build.pub') }}"
+ when: not jenkins_key is defined
+
+ - name: Set the authorized keys
+ authorized_key:
+ user: "{{ jenkins_user }}"
+ key: "{{ jenkins_key }}"
+
+ - name: "Ensure {{ jenkins_user }} can sudo without a prompt"
+ lineinfile:
+ dest: /etc/sudoers
+ regexp: '^{{ jenkins_user }} ALL'
+ line: '{{ jenkins_user }} ALL=(ALL:ALL) NOPASSWD:ALL'
+ validate: 'visudo -cf %s'
+ when: grant_sudo|bool
+
+ - name: Set utf-8 for LC_ALL
+ lineinfile:
+ dest: "/home/{{ jenkins_user }}/.bashrc"
+ regexp: '^export LC_ALL='
+ line: "export LC_ALL=en_US.UTF-8"
+ create: true
+ state: present
+
+ - name: Set utf-8 for LANG
+ lineinfile:
+ dest: "/home/{{ jenkins_user }}/.bashrc"
+ regexp: '^export LANG='
+ line: "export LANG=en_US.UTF-8"
+
+ - name: Set utf-8 for LANGUAGE
+ lineinfile:
+ dest: "/home/{{ jenkins_user }}/.bashrc"
+ regexp: '^export LANGUAGE='
+ line: "export LANGUAGE=en_US.UTF-8"
+
+ - name: Ensure the build dir exists
+ file:
+ path: "/home/{{ jenkins_user }}/build"
+ state: directory
+ owner: "{{ jenkins_user }}"
+
+ - name: Create .config/osc directory
+ file:
+ path: "/home/{{ jenkins_user }}/.config/osc"
+ state: directory
+ owner: "{{ jenkins_user }}"
+ when: ansible_os_family == "Suse"
+
+ - name: Add oscrc file
+ blockinfile:
+ create: yes
+ block: |
+ [general]
+ apiurl = https://api.opensuse.org
+ build-root = /home/{{ jenkins_user }}/osc/%(repo)s-%(arch)s
+ [https://api.opensuse.org]
+ user = {{ osc_user }}
+ pass = {{ osc_pass }}
+ path: "/home/{{ jenkins_user }}/.config/osc/oscrc"
+ become_user: "{{ jenkins_user }}"
+ when: ansible_os_family == "Suse"
+
+ - name: Ensure the home dir has the right owner permissions
+    command: "chown -R {{ jenkins_user }}:{{ jenkins_user }} /home/{{ jenkins_user }}"
+ tags: chown
+
+ ## DEBIAN GPG KEY TASKS
+ - name: Install Debian GPG Keys on Ubuntu
+ block:
+ - name: Add the Debian Buster Key
+ apt_key:
+ id: 3CBBABEE
+ url: https://ftp-master.debian.org/keys/archive-key-10.asc
+ keyring: /etc/apt/trusted.gpg
+ state: present
+
+ - name: Add the Debian Security Buster Key
+ apt_key:
+ id: CAA96DFA
+ url: https://ftp-master.debian.org/keys/archive-key-10-security.asc
+ keyring: /etc/apt/trusted.gpg
+ state: present
+
+ - name: Add the Debian Buster Stable Key
+ apt_key:
+ id: 77E11517
+ url: https://ftp-master.debian.org/keys/release-10.asc
+ keyring: /etc/apt/trusted.gpg
+ state: present
+
+ - name: Add the Debian Bookworm Key
+ apt_key:
+ id: 350947F8
+ url: https://ftp-master.debian.org/keys/archive-key-12.asc
+ keyring: /etc/apt/trusted.gpg
+ state: present
+
+ - name: Add the Debian Security Bookworm Key
+ apt_key:
+ id: AEC0A8F0
+ url: https://ftp-master.debian.org/keys/archive-key-12-security.asc
+ keyring: /etc/apt/trusted.gpg
+ state: present
+ when: ansible_os_family == "Debian"
+ tags: debian-keys
+
+ ## VAGRANT PLUGIN TASKS
+ - name: Install vagrant-libvirt plugin
+ block:
+ - name: Install the vagrant-libvirt plugin (without args)
+ shell: vagrant plugin install vagrant-libvirt --plugin-version 0.3.0
+ become_user: "{{ jenkins_user }}"
+ when: (ansible_os_family == "RedHat" and ansible_distribution_major_version|int <= 7) or
+ (ansible_os_family == "Debian" and ansible_distribution_major_version|int <= 18)
+
+ - name: Install packages needed to build krb5 from source (EL8)
+ dnf:
+ name: "{{ hackery_packages }}"
+ state: present
+ when: ansible_os_family == "RedHat" and ansible_distribution_major_version|int >= 8
+
+ - name: Build krb5 library from source (EL8)
+ shell: |
+ cd $(mktemp -d)
+ wget https://vault.centos.org/8.4.2105/BaseOS/Source/SPackages/krb5-1.18.2-8.el8.src.rpm
+ rpm2cpio krb5-1.18.2-8.el8.src.rpm | cpio -imdV
+ tar xf krb5-1.18.2.tar.gz; cd krb5-1.18.2/src
+ LDFLAGS='-L/opt/vagrant/embedded/' ./configure
+ make
+ sudo cp lib/libk5crypto.* /opt/vagrant/embedded/lib/
+ wget https://vault.centos.org/8.4.2105/BaseOS/Source/SPackages/libssh-0.9.4-2.el8.src.rpm
+ rpm2cpio libssh-0.9.4-2.el8.src.rpm | cpio -imdV
+ tar xf libssh-0.9.4.tar.xz
+ mkdir build
+ cd build
+ cmake ../libssh-0.9.4 -DOPENSSL_ROOT_DIR=/opt/vagrant/embedded/
+ make
+ cp lib/libssh* /opt/vagrant/embedded/lib64
+ when: ansible_os_family == "RedHat" and ansible_distribution_major_version|int >= 8
+
+ # https://github.com/vagrant-libvirt/vagrant-libvirt/issues/1127#issuecomment-713651332
+ - name: Install the vagrant-libvirt plugin (EL8)
+ command: vagrant plugin install vagrant-libvirt
+ become_user: "{{ jenkins_user }}"
+ environment:
+ CONFIGURE_ARGS: 'with-ldflags=-L/opt/vagrant/embedded/lib with-libvirt-include=/usr/include/libvirt with-libvirt-lib=/usr/lib'
+ GEM_HOME: '~/.vagrant.d/gems'
+ GEM_PATH: '$GEM_HOME:/opt/vagrant/embedded/gems'
+ when: ansible_os_family == "RedHat" and ansible_distribution_major_version|int >= 8
+
+ - name: Install the vagrant-libvirt plugin (Suse)
+ command: vagrant plugin install vagrant-libvirt
+ become_user: "{{ jenkins_user }}"
+ environment:
+ CONFIGURE_ARGS: 'with-libvirt-include=/usr/include/libvirt with-libvirt-lib=/usr/lib64'
+ when: ansible_os_family == "Suse"
+
+ - name: Install the vagrant-libvirt plugin (Focal)
+ command: vagrant plugin install vagrant-libvirt
+ become_user: "{{ jenkins_user }}"
+ environment:
+ CONFIGURE_ARGS: 'with-libvirt-include=/usr/include/libvirt with-libvirt-lib=/usr/lib'
+ when: ansible_os_family == "Debian" and ansible_distribution_major_version|int >= 20
+ when: libvirt|bool
+
+ ## RPMMACROS TASKS
+ - name: rpmmacros Tasks
+ block:
+ - name: Ensure the rpmmacros file exists to fix centos builds
+ file:
+ path: "/home/{{ jenkins_user }}/.rpmmacros"
+ owner: "{{ jenkins_user }}"
+ state: touch
+
+ - name: Write the rpmmacros needed in centos
+ lineinfile:
+ dest: "/home/{{ jenkins_user }}/.rpmmacros"
+ regexp: '^%dist'
+ line: '%dist .el{{ ansible_distribution_major_version }}'
+ when: ansible_os_family == "RedHat" and ansible_distribution_major_version|int <= 7
+
+ ## tmpwatch/tmpreaper TASKS
+ - name: tmpwatch/tmpreaper Tasks
+ block:
+ # In case we're running 'ansible-playbook --tags tmp'
+ - name: "Make sure {{ tmp_cleaner_name }} is installed"
+ package:
+ name: "{{ tmp_cleaner_name }}"
+ state: present
+
+ - name: Disable tmpreaper cron.daily timer
+ file:
+ path: /etc/cron.daily/tmpreaper
+ state: absent
+ when: ansible_os_family == "Debian"
+
+ - name: Create tmp cleaning cronjob
+ cron:
+ name: "Delete /tmp files that have not been accessed in 14 days"
+ special_time: daily
+ job: "{{ tmp_cleaner_name }} {{ tmp_cleaner_args }}"
+ when: permanent|bool
+ tags: tmp
+
+ ## GITCONFIG TASKS
+ - name: Ensure the gitconfig file exists
+ shell: printf "[user]\nname=Ceph CI\nemail=ceph-release-team@redhat.com\n" > /home/{{ jenkins_user }}/.gitconfig
+
+ - name: Ensure the gitconfig file has right permissions
+ file:
+ path: "/home/{{ jenkins_user }}/.gitconfig"
+ owner: "{{ jenkins_user }}"
+
+ # On a mita/prado provisioned builder, everything gets put into a 'playbook' dir.
+ # If the host key file cannot be found locally, fall back to fetching it from GitHub.
+ - set_fact:
+ github_host_key_file: "{{ lookup('first_found', key_locations, errors='ignore') }}"
+ vars:
+ key_locations:
+ # github.com.pub is the output of `ssh-keyscan github.com`
+ - "{{ playbook_dir }}/../files/ssh/hostkeys/github.com.pub"
+ tags: always
+
+ - name: get github host key from file
+ set_fact:
+ github_host_key: "{{ lookup('file', github_host_key_file) }}"
+ when: github_host_key_file != ""
+ tags: always
+
+ - name: get github host key from github if necessary
+ set_fact:
+ github_host_key: "{{ lookup('url', 'https://raw.githubusercontent.com/ceph/ceph-build/main/ansible/files/ssh/hostkeys/github.com.pub') }}"
+ when: github_host_key_file == ""
+ tags: always
+
+ - name: Add github.com host key
+ known_hosts:
+ name: github.com
+ path: '/etc/ssh/ssh_known_hosts'
+ key: "{{ github_host_key }}"
+
+ ## LIBVIRT SERVICE TASKS
+ - name: start libvirt services
+ service:
+ name: "{{ item }}"
+ state: restarted
+ with_items:
+ - libvirtd
+ - libvirt-guests
+ when: libvirt|bool
+
+ - name: Set java alternative for debian
+ block:
+ - name: Get java version alternative
+ shell: >-
+ update-alternatives --query java | awk -F':' '/{{ java_version }}/ && /Alternative/ {print $2}'
+ register: java_alternatives
+ changed_when: false
+
+ - name: Set java version alternative
+ alternatives:
+ name: java
+ path: "{{ java_alternatives.stdout.strip() }}"
+ when:
+ - (ansible_os_family | lower) == 'debian'
+
+ - name: Set java version alternative for RedHat
+ shell:
+ cmd: update-alternatives --set java '{{ java_version }}-openjdk.{{ ansible_architecture }}'
+ when:
+ - (ansible_os_family | lower) == 'redhat'
+
+ ## CONTAINER SERVICE TASKS
+ - name: Container Tasks
+ block:
+ - name: "Create {{ container_certs_path }}"
+ file:
+ path: "{{ container_certs_path }}"
+ state: directory
+
+ - name: "Copy {{ container_mirror }} self-signed cert"
+ copy:
+ dest: "{{ container_certs_path }}/docker-mirror.crt"
+ content: |
+ -----BEGIN CERTIFICATE-----
+ MIIGRTCCBC2gAwIBAgIUPCTsbv8FMCQdzmusdvXTdO8UaKMwDQYJKoZIhvcNAQEL
+ BQAwgbExCzAJBgNVBAYTAlVTMQswCQYDVQQIDAJOQzEUMBIGA1UEBwwLTW9ycmlz
+ dmlsbGUxFjAUBgNVBAoMDVJlZCBIYXQsIEluYy4xDTALBgNVBAsMBENlcGgxKzAp
+ BgNVBAMMImRvY2tlci1taXJyb3IuZnJvbnQuc2VwaWEuY2VwaC5jb20xKzApBgkq
+ hkiG9w0BCQEWHGNlcGgtaW5mcmEtYWRtaW5zQHJlZGhhdC5jb20wHhcNMjAxMTEy
+ MDAwMjM1WhcNMjAxMjEyMDAwMjM1WjCBsTELMAkGA1UEBhMCVVMxCzAJBgNVBAgM
+ Ak5DMRQwEgYDVQQHDAtNb3JyaXN2aWxsZTEWMBQGA1UECgwNUmVkIEhhdCwgSW5j
+ LjENMAsGA1UECwwEQ2VwaDErMCkGA1UEAwwiZG9ja2VyLW1pcnJvci5mcm9udC5z
+ ZXBpYS5jZXBoLmNvbTErMCkGCSqGSIb3DQEJARYcY2VwaC1pbmZyYS1hZG1pbnNA
+ cmVkaGF0LmNvbTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBALUfPWaA
+ Gyuu+McgrPmPBafco3NjOQ2Na8rfLA5X0pz1tTfWgmtwuzhgKR674Nh6yz1WKXmS
+ ic5416dSx6r8NnBXkPSVLP3HlejPki1ohrqm9M1rXdPqvdmzV5TcRvmmLljo1IjI
+ Glwhv+XjJlKPLOUmi4Yk8cmgwVThc9OGC67sve2oDY0+JufFdiMPB5OLi13t9vPz
+ lixFzHXsss4KgD95Ou2PVLQpPCJ4Bxyar5BR0sb4+b2J0b3V3sxg/bvuOdlUuxAy
+ yCogtCTVXCBsERJ3wVI28MsibfBy+tLbNMbIJTZC+LblFOKfxbNiLGNv6z2NQ12h
+ S9C3YCxmgs8b3h9dkQtTj0/7/kpOppLPTvU9v/MOt177biTlbw8QQAjYyZYdXkZT
+ 6LwdQmQQGCIQUUaMoeZgIplxEu7My1Gk3M2dfy/c36+r/olfbuTxPav2y9/wwjV2
+ 2TrmbSTrAxZwFVvlb9wJCpW6jKh+Cl55XS4wFmEdgf5OJC8W2Rsa69pUmFnro+2z
+ d6zXlDXj5lxdqwSu6FF/PkImToUJ2J9hvotejIdRIJ/TfowwVygqC9k3wgRDYRut
+ q/tmorElTMDmwt1sATuvK81WkTZ28d3hcg5Xu9o1qwCQnKRHUeOOyP4M6c0lSvLb
+ lkZsptmUHyslGBlc9MOd6kH4REZH9x2pga2nAgMBAAGjUzBRMB0GA1UdDgQWBBSk
+ 4Vk1KYHJ4VmDAorKCtSx5RVD7TAfBgNVHSMEGDAWgBSk4Vk1KYHJ4VmDAorKCtSx
+ 5RVD7TAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3DQEBCwUAA4ICAQBYaG8PFoE5
+ PSdIwjT2NRV0ARC+xkM/P8Vo4H2tYewSz/wGwdfjpz7NJD/os6Tiff6BWBaD75t0
+ 2X2MLXeGT2vOJ05hoETCJ1PqHSSlBXkH8De925lGfz4lTeS0gz6qZuEWxeN0Utib
+ 5Q3hq7OByS6I8L5kE6L9acFzKqbIOtJOWXXx9J4B7GEUoE+Jk5Vm6yfH4AeGhEbT
+ bQ8J5FbP+zk6iPkXGQdb/3aUBbOCn5OCSmERcTPyK9XzuyBz6wkFjZ9PAvbFLvOI
+ bD1KGIte1Np4jrM4ur924vjZTxm+wVKFDNS64J8t48yN2LUS2pV2zfwC6ACHypf4
+ WhsGpd1hNy+ZGt0dIrgRgKkttNx5VoVaLgzn3ozFz5BXbdHRCXV2BmY36QDzGQqw
+ 2BdKeJ/7INdB9NkGSkJYTvkNAS9YixqATxNsaOMt35HRADUlPQoUqzxIEujJzYdz
+ LVpzeTMNDxASqDG1MRIjNDp6l2xgC+H5wVpm5wn4eGvf4A7GXr35Q1TNRzmHayiP
+ FBp0Epiy+oFS1Xd/WQvMHCQMT4HoKSGf5u0++DpU1E5vN29vrxIOZ4+a9a5kZA95
+ QnsemvTiYf3C1xktkYR9AmUqYqCDTp/5nfqbQibRO0Chpy5UnhAXujkL0ABeaSaz
+ MViiJ2AX7vk2E++MXkBhi4IMyz0Vw2lPhg==
+ -----END CERTIFICATE-----
+ when: ansible_os_family == "RedHat"
+
+ ## JENKINS BUILDER AGENT TASKS
+ # We use SSH for ephemeral builders
+ - name: Register ephemeral builder using SSH
+ jenkins_node:
+ username: "{{ api_user }}"
+ uri: "{{ api_uri }}"
+ password: "{{ token }}"
+ # relies on a convention to set a unique name that allows a reverse
+ # mapping from Jenkins back to whatever service created the current
+ # node
+ name: "{{ ansible_default_ipv4.address }}+{{ nodename }}"
+ label: "{{ label | default('') }}"
+ host: "{{ ansible_default_ipv4.address }}"
+ credentialsId: "{{ jenkins_credentials_uuid }}"
+ remoteFS: '/home/{{ jenkins_user }}/build'
+ executors: '{{ executors|default(1) }}'
+ exclusive: true
+ when: not permanent|bool
+ tags: register
+
+ - name: Register Permanent Builder
+ block:
+ - name: Register permanent builder using JNLP
+ jenkins_node:
+ username: "{{ api_user }}"
+ uri: "{{ api_uri }}"
+ password: "{{ token }}"
+ # relies on a convention to set a unique name that allows a reverse
+ # mapping from Jenkins back to whatever service created the current
+ # node
+ name: "{{ ansible_default_ipv4.address }}+{{ ansible_hostname }}"
+ label: "{{ label }}"
+ host: "{{ ansible_default_ipv4.address }}"
+ credentialsId: "{{ jenkins_credentials_uuid }}"
+ launcher: 'hudson.slaves.JNLPLauncher'
+ remoteFS: '/home/{{ jenkins_user }}/build'
+ executors: '{{ executors|default(1) }}'
+ exclusive: true
+
+ - name: Update ca-trust bundle
+ command:
+ cmd: "update-ca-trust"
+ when:
+ - ansible_os_family == "RedHat"
+ - ansible_distribution_major_version|int <= 7
+
+ - name: Download agent.jar
+ get_url:
+ url: "{{ api_uri }}/jnlpJars/agent.jar"
+ dest: "/home/{{ jenkins_user }}/agent.jar"
+ force: yes
+ register: jar_changed
+
+ - name: look for templates/
+ ansible.builtin.stat:
+ path: templates
+ delegate_to: localhost
+ run_once: true
+ register: template_dir
+
+ - name: choose proper templates dir
+ set_fact:
+ template_path: "{{ 'templates' if template_dir.stat.exists else '../templates' }}"
+
+ - name: echo template_path
+ debug:
+ msg: "template_path: {{ template_path }}"
+
+ - name: Install the systemd unit files for jenkins
+ ansible.builtin.template:
+ src: "{{ template_path }}/systemd/jenkins.{{ item }}.j2"
+ dest: "/etc/systemd/system/jenkins.{{ item }}"
+ force: yes
+ with_items:
+ - service
+ - secret
+ register: unit_files_changed
+
+ - name: Reload systemd unit files (to pick up potential changes)
+ systemd:
+ daemon_reload: yes
+
+ - name: Stop jenkins service
+ service:
+ name: jenkins
+ state: stopped
+
+ - name: Kill any errant slave.jar or agent.jar processes
+ shell: "pkill -f -9 'java.*(slave|agent).jar'"
+ register: result
+ become: true
+ failed_when: result.rc > 1
+
+ - name: Start jenkins service
+ service:
+ name: jenkins
+ state: started
+ enabled: yes
+
+ - name: Restart jenkins service (if necessary)
+ service:
+ name: jenkins
+ state: restarted
+ enabled: yes
+ when: jar_changed is changed or unit_files_changed is changed
+ when: permanent|bool
+ tags: register
--- /dev/null
+---
+## This playbook force reinstalls vagrant and the vagrant-libvirt plugin on CentOS 8. It assumes:
+## - You've run slave.yml
+## - You have `html2text` installed locally and it is in your PATH
+##
+## Example:
+## ansible-playbook -vvv centos8-vagrant.yml --limit braggi21*
+
+- hosts: all
+ become: true
+ user: ubuntu
+ vars:
+ jenkins_user: jenkins-build
+ hackery_packages:
+ - gcc
+ - libguestfs-tools-c
+ - libvirt
+ - libvirt-devel
+ - libxml2-devel
+ - libxslt-devel
+ - make
+ - ruby-devel
+
+ tasks:
+ - name: Get the latest vagrant version
+ shell: curl -s https://releases.hashicorp.com/vagrant/ | html2text | grep '*' | grep -v '../' | head -n 1 | cut -d '_' -f2
+ delegate_to: localhost
+ register: latest_vagrant_version
+
+ - name: Set the latest vagrant version URL
+ set_fact:
+ latest_vagrant_url: "https://releases.hashicorp.com/vagrant/{{ latest_vagrant_version.stdout }}/vagrant_{{ latest_vagrant_version.stdout }}_x86_64.rpm"
+
+ ## Wipe out vagrant stuff
+ # From https://github.com/vagrant-libvirt/vagrant-libvirt/issues/943#issuecomment-463678158
+ - name: Wipe out vagrant stuff
+ shell: |
+ rvm implode
+ sudo gem uninstall --all
+ become_user: "{{ jenkins_user }}"
+ ignore_errors: true
+
+ - name: Wipe out more vagrant stuff
+ file:
+ path: "{{ item }}"
+ state: absent
+ with_items:
+ - "/home/{{ jenkins_user }}/.vagrant.d/gems"
+ - "/home/{{ jenkins_user }}/.vagrant.d/plugins.json"
+
+ - name: Remove vagrant
+ dnf:
+ name: vagrant
+ state: absent
+
+ - name: Install packages
+ dnf:
+ name: "{{ hackery_packages + [ latest_vagrant_url ] }}"
+ state: present
+ disable_gpg_check: yes
+
+ # https://github.com/vagrant-libvirt/vagrant-libvirt/issues/1127#issuecomment-713651332
+ # https://github.com/hashicorp/vagrant/issues/11020#issuecomment-633802295
+ - name: Library hackery
+ shell: |
+ cd $(mktemp -d)
+ wget https://vault.centos.org/8.4.2105/BaseOS/Source/SPackages/krb5-1.18.2-8.el8.src.rpm
+ rpm2cpio krb5-1.18.2-8.el8.src.rpm | cpio -imdV
+ tar xf krb5-1.18.2.tar.gz; cd krb5-1.18.2/src
+ LDFLAGS='-L/opt/vagrant/embedded/' ./configure
+ make
+ sudo cp lib/libk5crypto.* /opt/vagrant/embedded/lib/
+ wget https://vault.centos.org/8.4.2105/BaseOS/Source/SPackages/libssh-0.9.4-2.el8.src.rpm
+ rpm2cpio libssh-0.9.4-2.el8.src.rpm | cpio -imdV
+ tar xf libssh-0.9.4.tar.xz
+ mkdir build
+ cd build
+ cmake ../libssh-0.9.4 -DOPENSSL_ROOT_DIR=/opt/vagrant/embedded/
+ make
+ cp lib/libssh* /opt/vagrant/embedded/lib64
+
+ # https://github.com/vagrant-libvirt/vagrant-libvirt/issues/943#issuecomment-479360033
+ - name: Install the vagrant-libvirt plugin (EL)
+ shell: vagrant plugin install vagrant-libvirt --plugin-version 0.8.1
+ become_user: "{{ jenkins_user }}"
+ environment:
+ CONFIGURE_ARGS: 'with-ldflags=-L/opt/vagrant/embedded/lib with-libvirt-include=/usr/include/libvirt with-libvirt-lib=/usr/lib'
+ GEM_HOME: '~/.vagrant.d/gems'
+ GEM_PATH: '$GEM_HOME:/opt/vagrant/embedded/gems'
--- /dev/null
+---
+
+- hosts: jenkins_controller
+ user: cm
+ become: true
+ roles:
+ - ansible-jenkins
+ vars:
+ nginx_processor_count: 20
+ nginx_connections: 2048
+ ansible_ssh_port: 2222
+ jenkins_port: 8080
+ prefix: '/build'
+ xmx: 8192
+ # Email support
+ #- email:
+ # smtp_host: 'mail.example.com'
+ # smtp_ssl: 'true'
+ # default_email_suffix: '@example.com'
+ plugins:
+ - 'ace-editor'
+ - 'additional-metrics'
+ - 'ant'
+ - 'antisamy-markup-formatter'
+ - 'apache-httpcomponents-client-4-api'
+ - 'applitools-eyes'
+ - 'authentication-tokens'
+ - 'bouncycastle-api'
+ - 'branch-api'
+ - 'build-failure-analyzer'
+ - 'build-history-metrics-plugin'
+ - 'build-monitor-plugin'
+ - 'build-user-vars-plugin'
+ - 'built-on-column'
+ - 'cloudbees-folder'
+ - 'cobertura'
+ - 'code-coverage-api'
+ - 'command-launcher'
+ - 'compress-artifacts'
+ - 'conditional-buildstep'
+ - 'configuration-as-code'
+ - 'copyartifact'
+ - 'credentials'
+ - 'credentials-binding'
+ - 'cvs'
+ - 'dashboard-view'
+ - 'description-setter'
+ - 'display-url-api'
+ - 'docker-commons'
+ - 'docker-workflow'
+ - 'durable-task'
+ - 'dynamic-axis'
+ - 'envinject'
+ - 'envinject-api'
+ - 'external-monitor-job'
+ - 'ghprb'
+ - 'git'
+ - 'git-client'
+ - 'github'
+ - 'github-api'
+ - 'github-branch-source'
+ - 'github-oauth'
+ - 'github-organization-folder'
+ - 'git-server'
+ - 'global-build-stats'
+ - 'handlebars'
+ - 'htmlpublisher'
+ - 'icon-shim'
+ - 'jackson2-api'
+ - 'javadoc'
+ - 'jdk-tool'
+ - 'jenkins-multijob-plugin'
+ - 'jquery-detached'
+ - 'jsch'
+ - 'junit'
+ - 'ldap'
+ - 'lockable-resources'
+ - 'mailer'
+ - 'mapdb-api'
+ - 'mask-passwords'
+ - 'matrix-auth'
+ - 'matrix-project'
+ - 'maven-plugin'
+ - 'momentjs'
+ - 'multiple-scms'
+ - 'naginator'
+ - 'nested-view'
+ - 'pam-auth'
+ - 'parameterized-trigger'
+ - 'pipeline-build-step'
+ - 'pipeline-github-lib'
+ - 'pipeline-graph-analysis'
+ - 'pipeline-input-step'
+ - 'pipeline-milestone-step'
+ - 'pipeline-model-api'
+ - 'pipeline-model-declarative-agent'
+ - 'pipeline-model-definition'
+ - 'pipeline-model-extensions'
+ - 'pipeline-rest-api'
+ - 'pipeline-stage-step'
+ - 'pipeline-stage-tags-metadata'
+ - 'pipeline-stage-view'
+ - 'plain-credentials'
+ - 'postbuildscript'
+ - 'preSCMbuildstep'
+ - 'publish-over'
+ - 'publish-over-ssh'
+ - 'rebuild'
+ - 'resource-disposer'
+ - 'run-condition'
+ - 'scm-api'
+ - 'script-security'
+ - 'short-workspace-path'
+ - 'ssh-agent'
+ - 'ssh-credentials'
+ - 'ssh-slaves'
+ - 'structs'
+ - 'subversion'
+ - 'token-macro'
+ - 'translation'
+ - 'trilead-api'
+ - 'urltrigger'
+ - 'windows-slaves'
+ - 'workflow-aggregator'
+ - 'workflow-api'
+ - 'workflow-basic-steps'
+ - 'workflow-cps'
+ - 'workflow-cps-global-lib'
+ - 'workflow-durable-task-step'
+ - 'workflow-job'
+ - 'workflow-multibranch'
+ - 'workflow-scm-step'
+ - 'workflow-step-api'
+ - 'workflow-support'
+ - 'ws-cleanup'
+ vars_prompt:
+ - name: "okay_with_restart"
+ prompt: "\nWARNING: Some tasks like updating/installing plugins will restart Jenkins.\nAre you okay with restarting the Jenkins service? (y|n)"
+ default: "n"
--- /dev/null
+---
+
+- hosts: all
+ user: vagrant
+ roles:
+ - nginx
+ - grafana
+ vars_files:
+ - vars/load-balance-vars.yml
+ vars:
+ fqdn: "grafana.ceph.com"
+ app_name: "grafana"
+ development_server: true
+ ansible_ssh_port: 22
+ # These are defined in `vars/load-balance-vars.yml` but defaulting to what
+ # is used in production. For local development, update these to match the
+ # hosts used in your local development environment
+ #nginx_hosts:
+ # - fqdn: "grafana.ceph.com"
+ # app_name: "grafana"
+ # proxy_pass: "http://127.0.0.1:3000"
+ # - fqdn: "shaman.ceph.com"
+ # app_name: "shaman"
+ # upstreams:
+ # - name: "shaman"
+ # strategy: "least_conn"
+ # servers:
+ # - "1.shaman.ceph.com"
+ # - "2.shaman.ceph.com"
+
+ # only needed when enabling Github Auth
+ # github_client_id: "111aaa222"
+ # github_client_secret: "qwerty1234"
--- /dev/null
+---
+
+- hosts: all
+ user: vagrant
+ roles:
+ - graphite
+ vars:
+ fqdn: "graphite.ceph.com"
+ app_name: "graphite"
+ development_server: true
+ graphite_api_key: "secret"
--- /dev/null
+[smithi]
+smithi118.front.sepia.ceph.com nodename=smithi118
+smithi119.front.sepia.ceph.com nodename=smithi119
+smithi120.front.sepia.ceph.com nodename=smithi120
+smithi121.front.sepia.ceph.com nodename=smithi121
+smithi122.front.sepia.ceph.com nodename=smithi122
+smithi123.front.sepia.ceph.com nodename=smithi123
+smithi124.front.sepia.ceph.com nodename=smithi124
+smithi125.front.sepia.ceph.com nodename=smithi125
+smithi127.front.sepia.ceph.com nodename=smithi127
+smithi128.front.sepia.ceph.com nodename=smithi128
--- /dev/null
+---
+# Public-facing machines get the port changed to prevent a bit of abuse on the
+# standard one. There are some caveats to this approach, since we are changing
+# the default port we now need to instruct everything else to use the alternate
+# one. This should be run against newly brought up hosts when they are going to
+# be publicly accessible.
+
+# Install python-simplejson on older nodes so Ansible modules can run
+- hosts: all
+ become: yes
+ user: admin
+ gather_facts: false
+ tasks:
+ - name: install python-simplejson
+ raw: sudo apt-get -y install python-simplejson
+ # so that this is ignored on rpm nodes
+ failed_when: false
+
+- hosts: all
+ user: admin
+ become: true
+ tasks:
+
+ - name: uncomment SSH port
+ lineinfile:
+ dest: /etc/ssh/sshd_config
+ regexp: '^#Port '
+ line: 'Port 2222'
+ backrefs: yes
+
+ - name: change default port from 22 if set
+ lineinfile:
+ dest: /etc/ssh/sshd_config
+ regexp: '^Port '
+ line: 'Port 2222'
+ backrefs: yes
+
+ # This requires the firewalld module, which I couldn't get to work.
+ # It lives in the extras modules.
+ #- name: enable the port in the firewall
+ # firewalld:
+ # port: 2222/tcp
+ # permanent: true
+ # state: enabled
+
+ # This is far from ideal: we ignore errors because we cannot make the
+ # task conditional on whether the port was already added.
+ - name: tell selinux that ssh uses a new port
+ command: semanage port -a -t ssh_port_t -p tcp 2222
+ ignore_errors: yes
+
+ # The CentOS Wiki says this should be run, but firewall-cmd was not
+ # present on the remote CentOS 7 box.
+ #- name: configure firewall to add new port
+ # command: firewall-cmd --add-port 2222/tcp --permanent
+
+ # Restart SSH so the new port takes effect (the service name differs per distro)
+ - name: restart ssh
+ service: name=ssh state=restarted
+ when: ansible_pkg_mgr == "apt"
+
+ - name: restart sshd
+ service: name=sshd state=restarted
+ when: ansible_pkg_mgr == "yum"
--- /dev/null
+---
+
+- hosts: kraken
+ roles:
+ - kraken
+ vars:
+ ansible_ssh_user: "centos"
+
+ # bot specific vars
+ venv: "{{ helga_home }}/venv"
+ helga_nick: 'kraken'
+ helga_irc_host: irc.devel.redhat.com
+ helga_irc_port: 6667
+ helga_use_ssl: false
+ helga_cmd_prefix: '!'
+
+ helga_irc_channels:
+ - bogus
+
+ helga_operators: ['alfredodeza', 'ktdreyer', 'andrewschoen']
+ helga_default_plugins:
+ - dubstep
+ - facts
+ - help
+ - manager
+ - meant_to_say
+ - oneliner
+ - operator
+ - stfu
+ # custom ones
+ - norris
+ - redmine
+ - yall
+ - ugrep
+ - wut
+ - jenkins
+ - excuses
+ - poems
+ - reminders
+ - karma
+ - bugzilla
+
+ helga_external_plugins:
+ - git+https://github.com/alfredodeza/helga-norris#egg=helga-norris
+ - git+https://github.com/alfredodeza/helga-yall#egg=helga-yall
+ - git+https://github.com/alfredodeza/helga-ugrep#egg=helga-ugrep
+ - git+https://github.com/alfredodeza/helga-wut#egg=helga-wut
+ - git+https://github.com/alfredodeza/helga-karma#egg=helga-karma
+ - git+https://github.com/alfredodeza/helga-jenkins#egg=helga-jenkins
+ - git+https://github.com/alfredodeza/helga-excuses#egg=helga-excuses
+
+ helga_pypi_plugins:
+ - helga-redmine
+ - helga-poems
+ - helga-reminders
+ - helga-facts
+ - helga-bugzilla
+ - helga-pika
+
+ bugzilla_xmlrpc_url: 'https://bugzilla.redhat.com/xmlrpc.cgi'
+ bugzilla_ticket_url: 'https://bugzilla.redhat.com/%(ticket)s'
+
+ helga_twitter_api_key: "key"
+ helga_twitter_api_secret: "consumer_secret"
+ helga_twitter_oauth_token: "token"
+ helga_twitter_oauth_token_secret: "token_secret"
+ helga_twitter_username: "username"
+
+ # needed for helga-pika
+ rabbitmq_user: ""
+ rabbitmq_password: ""
+ rabbitmq_host: ""
+ rabbitmq_exchange: ""
+ rabbitmq_routing_keys: []
+
+ # needed for helga-redmine
+ redmine_url: ''
+ # This API key corresponds to the Kraken system account in Ceph's Redmine.
+ redmine_api_key: ''
+
+ # needed for helga-jenkins
+ jenkins_url: ''
+ jenkins_credentials: {}
--- /dev/null
+# This playbook requires that you install the roles defined
+# in ./requirements/sensu-requirements.yml. Do this by running:
+#
+# ansible-galaxy install -r requirements/sensu-requirements.yml
+#
+- hosts: sensu-server
+ become: true
+ vars_files:
+ - vars/sensu-vars.yml
+ roles:
+ - role: Mayeu.RabbitMQ
+ rabbitmq_vhost_definitions:
+ - name: "/sensu"
+ rabbitmq_users_definitions:
+ - vhost: "/sensu"
+ user: sensu
+ password: secret
+ configure_priv: ".*"
+ read_priv: ".*"
+ write_priv: ".*"
+ rabbitmq_ssl: false
+ - redis
+ - role: Mayeu.sensu
+ sensu_server_rabbitmq_port: 5672
+ sensu_server_api_password: secret
+ sensu_server_rabbitmq_password: secret
+ sensu_server_dashboard_password: secret
+ sensu_client_subscription_names:
+ - common
+ - rabbitmq
+ # these are custom settings for this client
+ sensu_settings:
+ # used in the rabbitmq-alive check
+ rabbitmq:
+ user: sensu
+ password: secret
+ # we need to escape the / in /sensu
+ vhost: "%2Fsensu"
+
+- hosts: sensu-clients
+ become: true
+ vars_files:
+ - vars/sensu-vars.yml
+ roles:
+ - role: Mayeu.sensu
+ sensu_server_rabbitmq_hostname: "{{ rabbitmq_client_address|mandatory }}"
+ sensu_server_rabbitmq_port: 5672
+ sensu_server_rabbitmq_password: secret
+ sensu_install_server: false
+ sensu_install_uchiwa: false
+ # sensu_client_subscription_names need to be defined in a hosts_var file
+ # relative to your inventory
--- /dev/null
+---
+
+- hosts: all
+ user: vagrant
+ roles:
+ - nginx
+ vars_files:
+ - vars/load-balance-vars.yml
+ vars:
+ app_name: "shaman"
+ fqdn: "shaman.ceph.com"
+ development_server: true
+ # only needed when enabling Github Auth
+ # github_client_id: "111aaa222"
+ # github_client_secret: "qwerty1234"
--- /dev/null
+%dist .el{{ ansible_distribution_major_version }}
--- /dev/null
+docs.ceph.com,newdocs.ceph.com,173.236.253.149 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDXvH756ZpNtOTJrg9Exypn5Yb3ABDTqhBvUOOtj7wxzBbXtOYwrvtmls6wMfuHgAOIxjvNIjADkpRYRAuf+TVA=
+docs.ceph.com,newdocs.ceph.com,173.236.253.149 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCpJb8vcpiCchL+gRfc749x7/xyo4Gr4VltXFwvK361zlFlfUgpULD1aBpLpAth060QzNBczJymuM6JLCW8d4t0nsKVDKL0dcOySvhZ86tnVKANwDt0S75q0Pd80ClesOZsn+awQ1Rq0eT4dMhLql9PWgSggLOTL+kT8NBFovEIvAiol0uv+L4pVWeJh7FhdMokAHGf7UrbZG/EbjCOvveQmSVLnCUnqxm71y8wQQxGOcLZYdzl2hvIHlI5mJPotx8Pl5QzEs34hF9rltiQ0LJp8gLlFL3ydPW7NK88HxZEgSIwx53v1wJqB6bsf/qxrXOy+Cg4i/i4RjT9ij38Ez8L
+docs.ceph.com,newdocs.ceph.com,173.236.253.149 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFay6H/uhGZ9Z4n18vV87OgNp1sIQTuJ4TLMmqPe6Ulk
--- /dev/null
+github.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCj7ndNxQowgcQnjshcLrqPEiiphnt+VTTvDP6mHBL9j1aNUkY4Ue1gvwnGLVlOhGeYrnZaMgRK6+PKCUXaDbC7qtbW8gIkhL7aGCsOr/C56SJMy/BCZfxd1nWzAOxSDPgVsmerOBYfNqltV9/hWCqBywINIR+5dIg6JTJ72pcEpEjcYgXkE2YEFXV1JHnsKgbLWNlhScqb2UmyRkQyytRLtL+38TGxkxCflmO+5Z8CSSNY7GidjMIZ7Q4zMjA2n1nGrlTDkzwDCsw+wqFPGQA179cnfGWOWRVruj16z6XyvxvjJwbz0wQZ75XK5tKSb7FNyeIEs4TT4jk+S4dhPeAUC5y+bDYirYgM4GC7uEnztnZyaVWQ7B381AK4Qdrwt51ZqExKbQpTUNn+EjqoTwvqNj4kqx5QUCI0ThS/YkOxJCXmPUWZbhjpCg56i+2aB6CmK2JGhn57K5mj0MNdBXA4/WnwH6XoPWJzK5Nyu2zB3nAZp+S5hpQs+p1vN1/wsjk=
--- /dev/null
+ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQD4l25l32QJxD9aWZxXcUaVPAClY53yuLwRvnn4V5ZzE3Wd5ddrbbsBhON6iPdb5Vv4d9Q3b1SfKhWjAIZgSeHwD1vcTPt/Kd2nIpwv2qQYjKAvnByS21NNdd5kf1Iz+NFieJlWi+x3vBaDR53OY9OpVnwZj7pZ4jQXHJ4bhVOEG9auxncpy4/HJi70IpnqRFg9GidqcaGsiFUu4+ji1euU2WKnX2hVmIxK/47vKTGSVpDEAPM8OqBueKXU78fh2xeFR76hVWkKpLwhhdr5OAY1dDzuADjFNUT/x76eceXZ8B1aPBsQuhQ9W2oPtQZRHqu8ElXFuhrwQ30pm5IAzimb jenkins-build@jenkins
--- /dev/null
+---
+sensu_client_subscription_names:
+ - common
+ - rabbitmq
--- /dev/null
+---
+sensu_client_subscription_names:
+ - common
--- /dev/null
+# -*- coding: utf-8 -*-
+
+DOCUMENTATION = """
+module: jenkins_node
+short_description: Manage Jenkins nodes
+description:
+ - This module provides some management features to control Jenkins
+ nodes.
+
+options:
+ uri:
+ description:
+ - Base URI for the Jenkins instance
+ required: true
+
+ username:
+ description:
+ - The Jenkins username to log in with.
+ required: true
+
+ password:
+ description:
+ - The Jenkins password (or API token) to log in with.
+ required: true
+
+ operation:
+ description:
+ - Operation to perform
+ required: false
+ default: 'create'
+ choices: [ create, delete, enable, disable ]
+
+ name:
+ description:
+ - Node name
+ required: true
+
+ executors:
+ description:
+ - Number of executors in node
+ required: false
+ default: 2
+
+ description:
+ description:
+ - Description of the node
+ required: false
+ default: null
+
+ label:
+ description:
+ - Space-separated string of labels to associate with the node, e.g. "amd64" or "python"
+ required: false
+ default: null
+
+ exclusive:
+ description:
+ - Mark this node for tied jobs only
+ required: false
+ default: 'no'
+ choices: ['no', 'yes']
+
+ launcher:
+ description:
+ - Launcher method for a remote node (only needed for 'create' operations)
+ required: false
+ default: 'hudson.plugins.sshslaves.SSHLauncher'
+
+ remoteFS:
+ description:
+ - Path to the directory used for builds
+ required: false
+
+ credentialsId:
+ description:
+ - The ID of the credentials used for authentication. Usually found in
+ credentials.xml or via the URL
+ {host}/credential-store/domain/_/credential/{id}. By default this is an
+ SSH user account and key (see "launcher" above).
+
+ host:
+ description:
+ - hostname or IP for the host to connect to the builder
+
+requirements: ['python-jenkins']
+author:
+ - "Alfredo Deza"
+
+"""
+
+EXAMPLES = """
+# Create new node
+- name: Create new node
+ jenkins_node: uri={{ jenkins_uri }} username={{ user }} password={{ password }}
+ name={{ node_name }} operation=create
+
+# Delete an existing node
+- name: Delete a node
+ jenkins_node: uri={{ jenkins_uri }} username={{ user }} password={{ password }}
+ name={{ node_name }} operation=delete
+"""
+import ast
+import traceback
+import xml.etree.ElementTree as ET
+
+HAS_JENKINS_API = True
+try:
+ import jenkins
+except ImportError:
+ HAS_JENKINS_API = False
+
+
+def _jenkins(uri, username, password):
+ return jenkins.Jenkins(uri, username, password)
+
+
+def translate_params(params):
+ sanitized = {}
+ mapping = {
+ 'executors': 'numExecutors',
+ 'description': 'nodeDescription',
+ }
+ for k, v in params.items():
+ key = mapping.get(k, k)
+ sanitized[key] = v
+ return sanitized
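The renaming above amounts to a dict comprehension over a small mapping; a self-contained sketch (with illustrative parameter values) is:

```python
# Sketch of the key renaming performed by translate_params() above:
# Ansible-facing option names are translated to the names python-jenkins
# expects, and unmapped keys pass through unchanged.
mapping = {
    'executors': 'numExecutors',
    'description': 'nodeDescription',
}
params = {'executors': 2, 'label': 'amd64'}  # illustrative input
sanitized = {mapping.get(k, k): v for k, v in params.items()}
# sanitized == {'numExecutors': 2, 'label': 'amd64'}
```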
+
+
+
+#
+# it's not clear to me how ansible passes lists as lists,
+# so convert them if necessary
+#
+def maybe_convert_string_to_list(v):
+ try:
+ string_types = basestring # Python 2
+ except NameError:
+ string_types = str # Python 3
+ if isinstance(v, string_types):
+ try:
+ v = ast.literal_eval(v)
+ except Exception:
+ # no, really; ast makes a best effort, and if it fails,
+ # we didn't need its conversion
+ pass
+ return v
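The conversion above relies on `ast.literal_eval`, which safely parses a Python literal out of a string; a minimal standalone example (illustrative value):

```python
# Ansible may hand a list over as its string repr; ast.literal_eval safely
# turns it back into a real list, and anything unparsable is left alone.
import ast

v = "['amd64', 'python']"  # illustrative value, as Ansible might pass it
try:
    v = ast.literal_eval(v)
except (ValueError, SyntaxError):
    pass  # not a Python literal; keep the original string
# v == ['amd64', 'python']
```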
+
+def sanitize_update_params(kw):
+
+ # this list may be smaller than it needs to be, but these are
+ # the only ones I want to support for now
+ VALID_UPDATE_PARAMS = {
+ # value, if any, is function returning new key and value to use
+ 'name': None,
+ 'remoteFS': None,
+ 'numExecutors': None,
+ 'label': None,
+ }
+ update_kws = dict()
+ invalid = list()
+ for k, v in kw.items():
+ if k not in VALID_UPDATE_PARAMS:
+ invalid.append(k)
+ else:
+ if VALID_UPDATE_PARAMS[k]:
+ k, v = VALID_UPDATE_PARAMS[k](v)
+ update_kws[k] = v
+ return invalid, update_kws
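Ignoring the optional transform functions (all `None` in the table above), the filtering reduces to partitioning the input over a fixed key set; a self-contained sketch:

```python
# Simplified sketch of sanitize_update_params(): supported keys are kept,
# everything else is reported back as invalid instead of being sent to
# Jenkins. VALID mirrors the keys in VALID_UPDATE_PARAMS above.
VALID = {'name', 'remoteFS', 'numExecutors', 'label'}

def split_update_params(kw):
    invalid = sorted(k for k in kw if k not in VALID)
    valid = {k: v for k, v in kw.items() if k in VALID}
    return invalid, valid

invalid, valid = split_update_params({'label': 'amd64', 'host': '10.0.0.1'})
# invalid == ['host'], valid == {'label': 'amd64'}
```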
+
+
+# our own limited implementation of xmltodict, because
+# that module is hard to find in distro packages
+
+def _create_or_append(d, tag, v):
+ if tag not in d:
+ d[tag] = ''
+
+ if not d[tag]:
+ d[tag] = list((v,))
+ else:
+ d[tag].append(v)
+
+
+def xml_to_dict(e):
+ '''
+ XML element to dict. Note that multiple occurrences of
+ the same tag are translated to an item where value is a list.
+ '''
+ d = dict()
+ xml_to_dict_worker(e, d)
+ return d
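A stripped-down, self-contained sketch of the same idea (deliberately simplified: unlike the worker above, it stores leaf text as a plain string rather than a list, and the element names are illustrative):

```python
# Minimal XML -> dict sketch using the same '@attribute' convention as
# xml_to_dict() above: attributes become '@'-prefixed keys, children
# become nested dict entries.
import xml.etree.ElementTree as ET

def tiny_xml_to_dict(e):
    if len(e) == 0 and not e.attrib:
        return {e.tag: e.text}
    d = {}
    for k, v in e.attrib.items():
        d['@' + k] = v
    for child in e:
        d.update(tiny_xml_to_dict(child))
    return {e.tag: d}

root = ET.fromstring('<slave numExecutors="2"><name>builder1</name></slave>')
# tiny_xml_to_dict(root) == {'slave': {'@numExecutors': '2', 'name': 'builder1'}}
```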
+
+
+def xml_to_dict_worker(e, curdict):
+ subd = None
+ if e.attrib or len(e):
+ curdict[e.tag] = subd = dict()
+
+ if e.attrib:
+ for k,v in e.attrib.items():
+ subd['@'+k] = v
+
+ # XXX maybe don't strip?
+ if e.text:
+ e.text = e.text.strip()
+ if not (len(e)):
+ # if subd exists, there were attributes and/or children,
+ # and text goes into a #text item of subd.
+ # If subd does not exist, there are no children or attrs,
+ # and this text goes into curdict[e.tag] directly.
+ #
+ # Note: multiple text strings are weird in etree; since order
+ # matters, they can't all live in e.text, or even e; they appear in
+ # the 'tail' attribute of subsequent nested elements, if those exist.
+ # That's just too much to handle here, so we ignore any but the
+ # first text string. That should be fine for Jenkins anyway.
+ if subd:
+ if e.text:
+ # only create an addr for a non-null e.text
+ _create_or_append(subd, '#text', e.text)
+ else:
+ # but fill curdict[e.tag] even if e.text is None
+ _create_or_append(curdict, e.tag, e.text)
+ return
+
+ # there are children; there must have been no text
+ for c in e:
+ xml_to_dict_worker(c, subd)
+
+
+def _scalar_or_list(v):
+ if v and isinstance(v, list):
+ return v
+ if v:
+ # v might be iterable. Don't iterate it.
+ l = list()
+ l.append(v)
+ return l
+ # v was None
+ return list()
+
+
+def dict_to_xml(d):
+ '''
+ Python dict to xml element, the dual of xml_to_dict()
+ '''
+ if len(d) > 1:
+ raise ValueError("dict_to_xml() expects a dict with exactly one root key")
+ # get first item
+ k,v = next(iter(d.items()))
+ e = ET.Element(k)
+ dict_to_xml_worker(e, v)
+ return e
+
+
+def dict_to_xml_worker(e, value):
+ if isinstance(value, dict):
+ # process entire dict, recursing if necessary
+ for k,v in value.items():
+ if k.startswith('@'):
+ e.set(k[1:], v)
+ else:
+ if isinstance(v, dict):
+ c = ET.Element(k)
+ e.append(c)
+ dict_to_xml_worker(c, v)
+ else:
+ if v is None:
+ c = ET.Element(k)
+ e.append(c)
+ c.text = v
+ else:
+ for s in _scalar_or_list(v):
+ c = ET.Element(k)
+ c.text = s
+ e.append(c)
+ else:
+ # wasn't a dict at the call; just set text and return
+ e.text = value
+
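The reverse direction follows the same convention; a small self-contained sketch (flat dicts only, illustrative tag and attribute names):

```python
# Minimal dict -> XML sketch matching dict_to_xml() above: '@'-prefixed
# keys become attributes, plain keys become child elements with text.
import xml.etree.ElementTree as ET

def tiny_dict_to_xml(tag, d):
    e = ET.Element(tag)
    for k, v in d.items():
        if k.startswith('@'):
            e.set(k[1:], v)
        else:
            child = ET.SubElement(e, k)
            child.text = v
    return e

e = tiny_dict_to_xml('slave', {'@class': 'dumb', 'name': 'builder1'})
s = ET.tostring(e, encoding='unicode')
# s == '<slave class="dumb"><name>builder1</name></slave>'
```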
+
+def create_or_modify(uri, user, password, name, **kw):
+ launcher_params = {}
+ launcher_params['credentialsId'] = kw.pop('credentialsId', None)
+ launcher_params['host'] = kw.pop('host', None)
+ if not all(launcher_params.values()):
+ launcher_params = {}
+ params = translate_params(kw)
+ j = _jenkins(uri, user, password)
+
+ if j.node_exists(name):
+ # if it already exists, we can reconfigure it
+
+ # select valid config keys, transform a few
+ invalid, params = sanitize_update_params(params)
+
+ configstr = j.get_node_config(name)
+ xml_config = ET.fromstring(configstr)
+ config = xml_to_dict(xml_config)
+ for k, v in params.items():
+ config['slave'][k] = v
+ new_xconfig = dict_to_xml(config)
+ new_xconfigstr = ET.tostring(new_xconfig, encoding='unicode')
+
+ j.reconfig_node(name, new_xconfigstr)
+ else:
+ if 'label' in params:
+ params['labels'] = params['label']
+ params.pop('label')
+ j.create_node(name, launcher_params=launcher_params, **params)
+ if not j.node_exists(name):
+ return False, "Failed to create node '%s'." % name
+
+ return True, None
+
+
+def delete(uri, user, password, name, **kw):
+ j = _jenkins(uri, user, password)
+ if not j.node_exists(name):
+ return False, "Could not delete '%s' - unknown node." % name
+ j.delete_node(name)
+ if j.node_exists(name):
+ return False, "Failed to delete node '%s'." % name
+ return True, None
+
+
+def enable(uri, user, password, name, **kw):
+ j = _jenkins(uri, user, password)
+ if not j.node_exists(name):
+ return False, "Could not enable '%s' - unknown node." % name
+ j.enable_node(name)
+ return True, None
+
+
+def disable(uri, user, password, name, **kw):
+ j = _jenkins(uri, user, password)
+ if not j.node_exists(name):
+ return False, "Could not disable '%s' - unknown node." % name
+ j.disable_node(name)
+ return True, None
+
+
+def main():
+ module = AnsibleModule(
+ argument_spec=dict(
+ uri=dict(required=True),
+ username=dict(required=True),
+ password=dict(required=True),
+ operation=dict(default='create', choices=['create', 'delete', 'enable', 'disable']),
+ name=dict(required=True),
+ executors=dict(required=False, default=2),
+ description=dict(required=False, default=None),
+ label=dict(required=False, default=None),
+ host=dict(required=False, default=None),
+ credentialsId=dict(required=False, default=None),
+ launcher=dict(required=False, default='hudson.plugins.sshslaves.SSHLauncher'),
+ remoteFS=dict(required=False, default=None),
+ exclusive=dict(required=False, default='no', type='bool'),
+ ),
+ supports_check_mode=False
+ )
+
+ if not HAS_JENKINS_API:
+ module.fail_json(msg="Could not import python module: jenkins. Please install the python-jenkins package.")
+
+ uri = module.params['uri']
+ username = module.params['username']
+ password = module.params['password']
+ operation = module.params.get('operation', 'create')
+ name = module.params['name']
+ executors = module.params['executors']
+ description = module.params.get('description')
+ label = module.params.get('label')
+ exclusive = module.params.get('exclusive', False)
+ host = module.params.get('host')
+ remoteFS = module.params.get('remoteFS')
+ credentialsId = module.params.get('credentialsId')
+ launcher = module.params.get('launcher', 'hudson.plugins.sshslaves.SSHLauncher')
+
+ api_calls = {
+ 'create': create_or_modify,
+ 'delete': delete,
+ 'enable': enable,
+ 'disable': disable
+ }
+
+ try:
+ func = api_calls[operation]
+ except KeyError:
+ return module.fail_json(
+ msg="operation: %s is not supported. Choose one of: %s" % (
+ operation, ", ".join(api_calls))
+ )
+
+ try:
+ changed, msg = func(
+ uri,
+ username,
+ password,
+ name,
+ executors=executors,
+ description=description,
+ label=label,
+ exclusive=exclusive,
+ host=host,
+ credentialsId=credentialsId,
+ launcher=launcher,
+ remoteFS=remoteFS,
+ )
+ except Exception as ex:
+ # Translate errors from the calls out to Jenkins (network requests in
+ # particular) into meaningful messages that Ansible can report back.
+ if ex.__class__.__name__ == 'HTTPError':
+ msg = "HTTPError %s: %s" % (ex.code, ex.url)
+ else:
+ message = getattr(ex, 'message', None)
+ msg = getattr(ex, 'msg', message)
+ if not msg:
+ msg = str(ex)
+ msg = "%s: %s\n%s" % (ex.__class__.__name__, msg, ''.join(traceback.format_tb(ex.__traceback__)))
+ return module.fail_json(msg=msg)
+
+ args = {'changed': changed}
+ if msg:
+ args['msg'] = msg
+ module.exit_json(**args)
+
+
+# yep, everything: https://docs.ansible.com/developing_modules.html#common-module-boilerplate
+from ansible.module_utils.basic import *
+if __name__ == '__main__':
+ main()
--- /dev/null
+private-registry
+================
+
+Ansible playbook for deploying a self-signed private docker registry container.
+
+## What does it do?
+
+This playbook will generate a self-signed cert and start a private docker
+registry container using that cert. This private docker registry can then
+be used by any client that has the cert.
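+
+As background, the cert the playbook generates (via cfssl) is an ordinary
+self-signed X.509 certificate; an equivalent pair could be produced by hand
+with openssl, for example (illustrative CN and file names):
+
+```
+# Illustrative only: a self-signed cert/key pair similar to what the
+# playbook's cfssl step produces (CN and file names are examples)
+openssl req -x509 -newkey rsa:2048 -nodes \
+  -keyout self.key -out self.crt -days 365 \
+  -subj "/CN=docker-registry"
+openssl x509 -in self.crt -noout -subject
+```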
+
+This directory also includes vagrant files that will spin up two VMs and then
+run the ansible playbook to provision one as a private docker registry and the
+other as a test client to validate that it can use the self-signed cert to
+push an image to the private docker registry on the other node.
+
+## Running Vagrant to Provision and Test
+
+* Edit vagrant_variables.yml and change the `vagrant_box` variable if needed
+* Use `vagrant up` command to deploy and provision the VMs
+
+When the playbook completes successfully, it will have started the private
+docker registry container and used the other VM to push a test image to
+that private registry.
+
+## Running the playbook against an existing machine
+
+When you are ready to provision onto an existing machine, first make sure
+that docker is installed on that machine.
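+
+A quick way to verify this (how docker gets installed varies by distro, so
+this only checks for its presence):
+
+```
+# Check that the target machine already has docker available
+if command -v docker >/dev/null 2>&1; then
+  docker --version
+else
+  echo "docker not installed"
+fi
+```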
+
+In the top directory of this playbook, where the `site.yml` file exists, add
+an `ansible-hosts` file to specify the machine you want to provision. It
+should look something like this:
+
+```
+[registry]
+ceph-docker-registry ansible_host=xx.xx.xx.xx ansible_port=2222 ansible_user=ubuntu
+```
+
+Once this is specified, you are ready to run the playbook with:
+
+```
+ansible-playbook -i ansible-hosts site.yml
+```
+
+Once the playbook is complete you can go out to your machine and do a
+`sudo docker ps` to see the private registry container running.
+
+Any other docker client machine can now push to or pull from this private
+registry if it has the self-signed cert in its docker certs directory. To
+enable this on another machine:
+
+* Create the directory on the client machine to hold the cert
+
+```
+$ sudo mkdir /etc/docker/certs.d/XX.XX.XX.XX\:5000
+```
+
+where `XX.XX.XX.XX` is the ip address of your private registry machine
+
+* Copy the self-signed certificate from the private registry machine and place the cert in the newly created directory
+
+```
+$ scp XX.XX.XX.XX:/var/registry/certs/self.crt /etc/docker/certs.d/XX.XX.XX.XX\:5000/ca.crt
+```
+
+where `XX.XX.XX.XX` is the ip address of your private registry machine
+
+Now you should be able to push images to and pull images from your private docker registry.
+
+* To tag an image before pushing it to the private docker registry
+
+```
+$ docker tag myimage XX.XX.XX.XX\:5000/myimage
+```
+
+* To push the tagged image to the private docker registry
+```
+$ docker push XX.XX.XX.XX\:5000/myimage
+```
+
+* To pull an image from the private docker registry
+```
+$ docker pull XX.XX.XX.XX\:5000/someimage
+```
--- /dev/null
+# -*- mode: ruby -*-
+# vi: set ft=ruby :
+
+require 'yaml'
+VAGRANTFILE_API_VERSION = '2'
+
+config_file=File.expand_path(File.join(File.dirname(__FILE__), 'vagrant_variables.yml'))
+settings=YAML.load_file(config_file)
+
+BOX = settings['vagrant_box']
+SYNC_DIR = settings['vagrant_sync_dir']
+MEMORY = settings['memory']
+TEST_CLIENT_VM = settings['provision_test_client_vm']
+
+ansible_provision = proc do |ansible|
+ ansible.playbook = 'site.yml'
+ ansible.groups = {
+ "registry" => ["docker-registry"]
+ }
+ if TEST_CLIENT_VM then
+ ansible.groups['testclient'] = ["docker-reg-test"]
+ end
+
+ ansible.limit = 'all'
+end
+
+Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
+ config.vm.box = BOX
+ config.ssh.insert_key = false # workaround for https://github.com/mitchellh/vagrant/issues/5048
+
+ # Faster bootup; remove this override if you need the synced folder under libvirt
+ config.vm.provider :libvirt do |v,override|
+ override.vm.synced_folder '.', SYNC_DIR, disabled: true
+ end
+
+ if TEST_CLIENT_VM then
+ config.vm.define "docker-reg-test" do |regtest|
+ regtest.vm.hostname = "docker-reg-test"
+ end
+ end
+
+ config.vm.define "docker-registry" do |registry|
+ registry.vm.hostname = "docker-registry"
+ # Virtualbox
+ registry.vm.provider :virtualbox do |vb|
+ vb.customize ['modifyvm', :id, '--memory', "#{MEMORY}"]
+ end
+
+ # VMware
+ registry.vm.provider :vmware_fusion do |v|
+ v.vmx['memsize'] = "#{MEMORY}"
+ end
+
+ # Libvirt
+ registry.vm.provider :libvirt do |lv|
+ lv.memory = MEMORY
+ end
+
+ # Parallels
+ registry.vm.provider "parallels" do |prl|
+ prl.name = "docker-registry"
+ prl.memory = "#{MEMORY}"
+ end
+
+ # Run the provisioner after the machine comes up
+ registry.vm.provision 'ansible', &ansible_provision
+ end
+end
--- /dev/null
+---
+dummy:
--- /dev/null
+---
+- name: create directory for self-signed SSL cert
+ file:
+ path: /var/registry/certs
+ state: directory
+
+- name: create self-signed cfssl json file
+ template:
+ src: "{{ role_path }}/templates/self-csr.json.j2"
+ dest: ./self-csr.json
+
+- name: get cfssl
+ get_url:
+ url: https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
+ dest: ./cfssl
+ mode: 0755
+
+- name: get cfssljson
+ get_url:
+ url: https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
+ dest: ./cfssljson
+ mode: 0755
+
+- name: gencert
+ shell: ./cfssl gencert -initca self-csr.json | ./cfssljson -bare ca
+
+- name: push self-signed cfssl cert to the ansible server
+ fetch:
+ src: ca.pem
+ dest: fetch/certs/self.crt
+ flat: yes
+
+- name: mv the cert to be accessible by container
+ command: mv ca.pem /var/registry/certs/self.crt
+
+- name: mv the key to be accessible by container
+ command: mv ca-key.pem /var/registry/certs/self.key
+
+- name: start registry container
+ command: >
+ docker run -d --name=docker-registry
+ -p 5000:5000
+ --privileged=true
+ --restart=unless-stopped
+ -v /var/registry:/var/registry
+ -e STORAGE_PATH=/var/registry/data
+ -e REGISTRY_HTTP_TLS_CERTIFICATE=/var/registry/certs/self.crt
+ -e REGISTRY_HTTP_TLS_KEY=/var/registry/certs/self.key
+ registry
--- /dev/null
+{
+ "CN": "docker-registry",
+ "hosts": [
+ "{{ ansible_default_ipv4.address }}",
+ "127.0.0.1"
+ ],
+ "key": {
+ "algo": "rsa",
+ "size": 2048
+ },
+ "names": [
+ {
+ "C": "XX",
+ "L": "Default City",
+ "O": "Default Company Ltd",
+ "ST": "."
+ }
+ ]
+}
--- /dev/null
+---
+- name: create directory for self-signed cert of docker-registry
+ file:
+ path: /etc/docker/certs.d/{{ hostvars['docker-registry']['ansible_default_ipv4']['address'] }}:5000
+ state: directory
+
+- name: copy self-signed cert of docker-registry
+ copy:
+ src: fetch/certs/self.crt
+ dest: /etc/docker/certs.d/{{ hostvars['docker-registry']['ansible_default_ipv4']['address'] }}:5000/ca.crt
+
+- name: pull a small image from docker hub
+ command: docker pull busybox
+
+- name: tag image
+ command: docker tag busybox {{ hostvars['docker-registry']['ansible_default_ipv4']['address'] }}:5000/mybusybox
+
+- name: push tagged image to private registry
+ command: docker push {{ hostvars['docker-registry']['ansible_default_ipv4']['address'] }}:5000/mybusybox
--- /dev/null
+---
+# Defines deployment design and assigns role to server groups
+
+- hosts: registry
+ become: True
+ roles:
+ - docker-registry
+
+- hosts: testclient
+ become: True
+ roles:
+ - test-client
--- /dev/null
+---
+provision_test_client_vm: true
+memory: 1024
+vagrant_box: centos/atomic-host
+# The sync directory changes based on vagrant box
+# Set to /home/vagrant/sync for CentOS 7, /home/{ user }/vagrant for OpenStack, and defaults to /vagrant
+vagrant_sync_dir: /home/vagrant/sync
--- /dev/null
+---
+
+- hosts: localhost
+ vars:
+ # should be passed in the CLI like `--extra-vars "version=1.23.45 branch=main"`
+ version: 0-dev # e.g. 0.78
+ branch: main # any existing branch on Github
+ release: STABLE # STABLE, RELEASE_CANDIDATE, HOTFIX, and SECURITY are valid options
+ tag_name: "v{{ version }}"
+ project: "ceph"
+ clean: true # if re-doing a deployment this deletes the remote branch in Jenkins' git repo
+ force_dch: false # if coming from an rc and releasing a stable you need to force dch
+ debemail: ceph-maintainers@ceph.io
+ debfullname: "Ceph Release Team"
+ pr_checklist: |
+ ## Checklist
+ - Tracker (select at least one)
+ - [ ] References tracker ticket
+ - [ ] Very recent bug; references commit where it was introduced
+ - [ ] New feature (ticket optional)
+ - [x] Doc update (no ticket needed)
+ - [ ] Code cleanup (no ticket needed)
+ - Component impact
+ - [ ] Affects [Dashboard](https://tracker.ceph.com/projects/dashboard/issues/new), opened tracker ticket
+ - [ ] Affects [Orchestrator](https://tracker.ceph.com/projects/orchestrator/issues/new), opened tracker ticket
+ - [x] No impact that needs to be tracked
+ - Documentation (select at least one)
+ - [ ] Updates relevant documentation
+ - [x] No doc update is appropriate
+ - Tests (select at least one)
+ - [ ] Includes [unit test(s)](https://docs.ceph.com/en/latest/dev/developer_guide/tests-unit-tests/)
+ - [ ] Includes [integration test(s)](https://docs.ceph.com/en/latest/dev/developer_guide/testing_integration_tests/)
+ - [ ] Includes bug reproducer
+ - [x] No tests
+ roles:
+ - { role: ceph-release, when: "project == 'ceph'" }
+ - { role: ceph-deploy-release, when: "project == 'ceph-deploy'" }
+ - { role: remoto-release, when: "project == 'remoto'" }
--- /dev/null
+- src: https://github.com/dmick/ansible-playbook-sensu
+ scm: git
+ name: Mayeu.sensu
+ version: remotes/origin/plugin-gem-install
+
+- src: https://github.com/dmick/ansible-playbook-rabbitmq
+ scm: git
+ name: Mayeu.RabbitMQ
+ version: remotes/origin/master
--- /dev/null
+*~
+.vagrant
+Vagrantfile
\ No newline at end of file
--- /dev/null
+ansible-jenkins
+===============
+
+This role will allow you to install a new Jenkins server from scratch or manage an existing instance.
+
+It assumes the following:
+
+1. You've installed a VM with Ubuntu Xenial (16.04)
+2. You're using https://github.com/ceph/ceph-sepia-secrets/ as your ansible inventory
+3. You've already run the ``ansible_managed`` and ``common`` roles from https://github.com/ceph/ceph-cm-ansible
+4. You've already generated github oauth application credentials under the Ceph org
+
+The role is idempotent, but note that the Jenkins service will be restarted when plugins are updated or installed. At the beginning of the playbook run, you will be prompted to confirm that you are okay with restarting the service.
+
+Initial Installation
+--------------------
+
+To set up a new Jenkins server from scratch:
+
+1. ``cd ceph-build/ansible``
+2. ``cp examples/controller.yml .``
+3. ``ansible-playbook controller.yml --limit="new.jenkins.example.com" --extra-vars="{github_oauth_client: 'foo',github_oauth_secret: 'bar'}"``
+4. Continue with https://github.com/ceph/ceph-sepia-secrets/blob/main/jenkins-controller.rst
--- /dev/null
+---
+placeholder: 'placeholder'
+jenkins_port: 8080
+letsencrypt_email: 'ceph-infra@redhat.com'
+# plugins:
+# - 'ldap'
+# - 'github'
+# - 'translation'
+# - 'preSCMbuildstep'
+# - 'gravatar'
--- /dev/null
+-----BEGIN PGP PUBLIC KEY BLOCK-----
+Version: GnuPG v1.4.9 (GNU/Linux)
+
+mQGiBEmFQG0RBACXScOxb6BTV6rQE/tcJopAEWsdvmE0jNIRWjDDzB7HovX6Anrq
+n7+Vq4spAReSFbBVaYiiOx2cGDymj2dyx2i9NAI/9/cQXJOU+RPdDzHVlO1Edksp
+5rKn0cGPWY5sLxRf8s/tO5oyKgwCVgTaB5a8gBHaoGms3nNC4YYf+lqlpwCgjbti
+3u1iMIx6Rs+dG0+xw1oi5FUD/2tLJMx7vCUQHhPRupeYFPoD8vWpcbGb5nHfHi4U
+8/x4qZspAIwvXtGw0UBHildGpqe9onp22Syadn/7JgMWhHoFw5Ke/rTMlxREL7pa
+TiXuagD2G84tjJ66oJP1FigslJzrnG61y85V7THL61OFqDg6IOP4onbsdqHby4VD
+zZj9A/9uQxIn5250AGLNpARStAcNPJNJbHOQuv0iF3vnG8uO7/oscB0TYb8/juxr
+hs9GdSN0U0BxENR+8KWy5lttpqLMKlKRknQYy34UstQiyFgAQ9Epncu9uIbVDgWt
+y7utnqXN033EyYkcWx5EhLAgHkC7wSzeSWABV3JSXN7CeeOif7QiS29oc3VrZSBL
+YXdhZ3VjaGkgPGtrQGtvaHN1a2Uub3JnPohjBBMRAgAjAhsDBgsJCAcDAgQVAggD
+BBYCAwECHgECF4AFAko/7vYCGQEACgkQm30y8tUFguabhgCgi54IQR4rpJZ/uUHe
+ZB879zUWTQwAniQDBO+Zly7Fsvm0Mcvqvl02UzxCtC1Lb2hzdWtlIEthd2FndWNo
+aSA8a29oc3VrZS5rYXdhZ3VjaGlAc3VuLmNvbT6IYAQTEQIAIAUCSj/qbQIbAwYL
+CQgHAwIEFQIIAwQWAgMBAh4BAheAAAoJEJt9MvLVBYLm38gAoIGR2+TQeJaCeEa8
+CQhZYzDoiJkQAJ0cpmD+0VA+leOAr5LEccNVd70Z/dHNy83JARAAAQEAAAAAAAAA
+AAAAAAD/2P/gABBKRklGAAEBAQBgAGAAAP/hAGBFeGlmAABJSSoACAAAAAQAMQEC
+ABkAAAA+AAAAEFEBAAEAAAABQ5AAEVEEAAEAAAASCwAAElEEAAEAAAASCwAAAAAA
+AE1hY3JvbWVkaWEgRmlyZXdvcmtzIDQuMAAA/9sAQwAIBgYHBgUIBwcHCQkICgwU
+DQwLCwwZEhMPFB0aHx4dGhwcICQuJyAiLCMcHCg3KSwwMTQ0NB8nOT04MjwuMzQy
+/9sAQwEJCQkMCwwYDQ0YMiEcITIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIy
+MjIyMjIyMjIyMjIyMjIyMjIyMjIy/8AAEQgArgCWAwEiAAIRAQMRAf/EAB8AAAEF
+AQEBAQEBAAAAAAAAAAABAgMEBQYHCAkKC//EALUQAAIBAwMCBAMFBQQEAAABfQEC
+AwAEEQUSITFBBhNRYQcicRQygZGhCCNCscEVUtHwJDNicoIJChYXGBkaJSYnKCkq
+NDU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6g4SFhoeIiYqS
+k5SVlpeYmZqio6Slpqeoqaqys7S1tre4ubrCw8TFxsfIycrS09TV1tfY2drh4uPk
+5ebn6Onq8fLz9PX29/j5+v/EAB8BAAMBAQEBAQEBAQEAAAAAAAABAgMEBQYHCAkK
+C//EALURAAIBAgQEAwQHBQQEAAECdwABAgMRBAUhMQYSQVEHYXETIjKBCBRCkaGx
+wQkjM1LwFWJy0QoWJDThJfEXGBkaJicoKSo1Njc4OTpDREVGR0hJSlNUVVZXWFla
+Y2RlZmdoaWpzdHV2d3h5eoKDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2
+t7i5usLDxMXGx8jJytLT1NXW19jZ2uLj5OXm5+jp6vLz9PX29/j5+v/aAAwDAQAC
+EQMRAD8A9wEj/wB9vzpfMf8Avt+dRinCpGSeY398/nS72/vH86YKBQBJvb+8fzpd
+7f3j+dMFLQA/e394/nS7j6n86ZSimA7cfU07cfU1HnFOFADtx9aXJ9TTKUUxD8n1
+pc+9Mp1AC5ozSUtAC0maKKADNFJ2ooAoCnCmilzWZQ6lFJSimAopaQUtAC54rOvN
+dsLCTZPPGrdwXAry/wCKHxXfRppND0Mq16Bie5PIi9lHdv5V8/X+rXt/O8tzcyyy
+MclnYkk0avYdl1PqPxT8VtH8NwqwzdXEuSkaHoB61wjftCXhlzHosBjHZpSCa8PW
+O7uhuAkcDueaaYbhOqMMe1L5lcvWx9U+FPjJ4f8AEU0dpdhtLvXOFWdgY3PoH6fg
+cV6MrZGa+EklIOJOPqK9i+G3xem0TytI8QSSXGnHCQXJO57f2Pqn6indrclq+x9G
+5pwNVoLiO4hSaGRZIpFDKynIYHuKmBqyB+aWmg0uaAFopKKAFoozRQBQFLSUorMo
+UUtJSigB1ZHijWovD/hu+1KZlUQxErnu3YfnWsK8k+Pt60PhaxtAxAnuOQO4UE0P
+YaWp8/Xd1LeTz3Mzl5pnLuxPJJNa+i+HDclZZ1yp5C1Q0axa+1BEx8i8mvS7S3WG
+NQo6elcWLxDprljuelgsOp+/IgtdCiVFVYx07CnXHhyNgflA/Ct+1BwOKmkVq8xS
+m9bnr2S0sec6n4UVo2KKNw6Vx9xby2U3lSAj617NcR5J4zmua1/Q4r+1JVQJU5Ui
+uzD4qUXyz2OHFYWM1zR3Ol+CHj0xXX/CKajMTHKd1g7H7r94/oeo9wR3r3tWr4ht
+pZtNv4bmElZ7eVZUIOPmU5/pX2dpOpR6tpFnqMQxHdQpMoz03DOK9ePY8OaszUDU
++oFNSg0yUOopKWgAoo6mikBRpwptLWZY4UoptLTELXin7QbD7PoiFv45Gx+Ar2uv
+FP2g7dfsejXOfm8x48evGf6UmNbnm3hCBls57sJuYnYg9TXSyW2uIgNpJbs5GWDj
+gewrP8EJu0XIHKyNV2+j1txM0MzIQV8oIQN3POSenHTg15VSV6z2+Z7dGNqKt26G
+hpeo6rC3lajZxKOgdG6/hW5NcoIC4HOOhrmbJb1IokuZWkfbmUsQQGz2xW5OAbAE
+Y3d/es5TtJo6oRbjcx7uXVryXbblIIv723JqN9PuUdLhJ2aQf6xW6OP6VBqS6jcQ
+sLS4aOQHCqH2qVx64znP8verWn2d/DKrPOzxbFBWQ5O7HJyOxParv7t7oxcfeasz
+z3xBaC01uZV4RxvAr6X+F0rv8NdCMj7iICAfQBjgV87+NYzHr6jHHlA/rX0H8LFa
+P4baLu3DMTHDDtvOPwr1sM7xR4mK0m/U7lTUyniqyGp1NbtHOmSUuaaDS1JQuaKS
+ikBTpaQUVmUOpabS0xC15f8AF7wxc+IEsJI3VI4Nyrkfxtjn6YH616hWL4ptDd6B
+cBTh0+cH0xUVL8j5dzSk0prm2PD/AAnYvp+ltazDbNHM6yD3BrpEiWToOKzPLktr
+mRiSwlO7J9atQ3bFsCvGnJSlzM+gpLlXKJcpDAegGT+ZqYgPYAgVnz3DpKW8pJW6
+AM2MUranci38vy4gBz/9alGF3dG0ppKzZYs0ilB4BwauOixggVlW9w8sivsWJuhC
+nOanmun83bimtNCZPS5g6/osOta3Zq8giRImMjeoyMAe5Oa940SxTStFsbCM/JbQ
+JGv0AryTTLRLzXIwwDOWVAvfGecCvZx19q9fAttPyPBzCyatuyyhqZTxVdDU612M
+4ESg04GowacDUFjqKQc0UDKlLSUtZFC0UUCmAtMmiSeF4pBlHBVh7U6imI878Y+G
+rbTbGC7tFf5WKyFmz1rimZoy2wZJGQBXter2C6lpc9qw++vy/XtXiMoe3upLeTiS
+Nipry8XSUJJpaHq4Os5JpvUoC4uJr1rYIkJC7vMuGCgj2rZ/4Ry/aHzvtdltO4Ei
+TPTH+NUrhFlUB1yQODVB8RjyRboR6hiAfqM4rCLTPR6aSt8rktzJc2l7HZgRXLOu
+4SQPkKPU1fUtu3SHJUc/WqlvGIULKo3kdhgCr+lwfb9ZtLItxLIAx9upp25pKKMq
+klFN3PTfDenR2mjWbtEvnsm8sVG4buevXpit1aiUAcAYHapVr6GMVGKij5iUnKTk
+ydKnU1ClSikwRIKcD60wU4GpLQ7PpRSUUhlaikpc1kWLRSUUxC5ozSZozTA5Txn8
+QtF8DxRDUDLPeTDdFaQAFyucbiTwo+vXsK8huNZHiVJddtbY2/mysfJL7iAD0JwM
+1h/F+X7V8RtUKybzEUj5PTCjj8KseD2S304WbzxPJ9/CsDjcM4/DvXLjF+6TXc7M
+F/EafY1YNThkVcttccMpqY3trtx8ufeoJbK3acrLECPcUsmj2EUYfyw27kcmvOjY
+9NuS6kc+pxp8seWc8Koq7pWox+HLiHWdQSR0gO90jALYxjAzjnmsxbnTbCXMssMK
+r6nk/h1rH17xLY3lpJa25dw4wWxgfrW9KnOU04owqziotSZ7r4S8daT4xFwunpcR
+S24DPHOoBweMjBOa6pDXzn8JNej0nxdFbSKqwX6/ZtzHG1s5U59yMfjX0SpwcGvc
+Wp4MlZltDUoNV42qYGpYIlFOzUYNOBqSx4opBzRSArUtNqrqWqWOj2D32pXcVrap
+96WVsDPoO5PsOayNC5TXkWKJpZHVI0GWd2AVR7k9K8Z8R/HTazweHNPBHQXd4Ovu
+sY/9mP4V5Trvi7XPETltW1S4uVzkRM2I1+iDCj8qtRYWPfvEXxh8L6GJIrWZ9Vu1
+48u1/wBWD7yHj8s15Jr/AMYfFGtyOlvdDS7c5xFZnace7n5j+n0rz1n3Hmmk4zzV
+KKFcdNPI7s0jM7sxZmY5LE9ST61CsrI25SQR3BwacWzwRUZX0qiblgaheq25bucH
+18w086tqDrta9uCvp5hqng56UDmp5I9iueXcmDsxyzEn1JqROvNQKD7VKCqDJOas
+kuRuMgjp711+n/FHxVpyIsWqNPHEAojukWQEe5PP61wvnEj0B4ApykZJJ7UDPfvD
+Hxp0+/dLfXbYWMp4E8RLxH6jqv616laXtve26XFrPFPC/KyRsGU/iK+MBICcDitn
+QPFWseHbvztMv5YM/eTOUb6qeDRcXKj7CV8ing1434V+NtpezR2niC3W0Y8fa4cm
+PP8AtL1H1Ga9atrqG6gSe3mjmhcZSSNgysPYikTZouCiow4xRRYLmRr+tW/h7Qrv
+Vbkbo7dCwQHBdugUfU4r5Y8TeKtV8Tak15qlwztk+XEDiOFf7qL2H6nvXr3x01n7
+Po2n6SjfNcymaQf7K8D9T+leBzPuT3H8qiC0NHoNeYnvURc4phPNITmrJuLu5pSa
+Z3pRyKYhM80oOaaetAoAkzwKTd6Cm5pBQA8EnqTijOSeOKaT2oHSgY7cTTg1Rilp
+AS7/AGpc4UA96iHJApXbk0wJ0kOeDXTeGPGer+GbtZNOunWMnLwscxv9V/ya5T7q
+D1NOEhQYHU9/Siw0z7A8KeJ7XxToceo2ymNs7JoicmNx1HuOcg0V5j8A9RQnWNKl
+lVARHcruOOfut/7LRTViJaPQ5b42agbrx29uGytrAiAehPJ/nXmrNu2H14NdD481
+A6j401S6zkPMQPoOP6VzQOcj0OaiGxctxpNA5obqfrQKokSlWkzQDzQAMOaSnN1F
+JQAtFFFAAKKKO9AC0UlGaBj0HemjlvrTkOEY03vQIe7YJP4CkTBfn6mkbkA/Wkzg
+YHU9aYGtpl7Pau8kEzxMwwShwcUVUtWIU4oqbXLTFv3M0jSkksWJP481SU/Pk1Ym
+b7/1FViMY96diWKwy5+tITTm+7mo6BC0DrR2oHWgB7dBTac3SmUALS0lKOtACUua
+KKACm0vSk70ASLxF9TSYzznrQf8AVqKTPH40wHHHQfjTM55pTwp9+KQdKALEL7Is
++poph4Cr6Cigdz//2YhgBBMRAgAgBQJKP/cgAhsDBgsJCAcDAgQVAggDBBYCAwEC
+HgECF4AACgkQm30y8tUFgua3awCdFQlChLgn/n4tb4jLe1RgxOxHxosAn2Cn2oNh
+sZ91wUb4d5JuH88TCupsuQINBEmFQG0QCADqAXWgiis4yi96os3QZmK5809ojjTT
+nlICgbztrT55cMVTDBc9SneyRQlC0cS+M1z4Do6lj81sNJdJiBPqTYYA1+exTFvs
+5zCxPInDP3hvqXxHTP142XN1hdzt53R7smn8O0wyO+RCBUb44e9NkusvBd5UP3Je
+449hnpXJ4WO3cVMFm4ghxs7ERlpAi5NTEsVVdM8dqHbZJtk8gbzdAHH0ybiAXmWy
+LFGZDuuKiFAkqm/Wled7id6N+cPx107dwBclwPxzfEYKEqJ1YDDHoDlyfx4012y1
+53e5sGyah/IPBYrrLMfG+Wmiwr5nCX0tmwOcyukuE94hbzJCX2wBdbWLAAMGCACz
+l3cuM4lGt/wr5liM4gotXpZAopY+EnbLIBuOHFXXR7HnyAgST1jH/AUbafvPjyDh
+EkFDyUP14XtHNIAqsN1UpuyYbM90bMPAWXJxrazMsSF+Tv5yIxHiy4cc1pjoqHA2
+kwqIGHmTxYzOPOS19ZWQAtevoTE6pCARphY0dzpscCWaXGs/ZqNAhjL96WLYV1Oo
+Ut+9mTnOcs6Vuxaxp2wN2S5DK1S9gdIxWEc8wMUPiQe8CYk0OySdORIblMs3bGqD
+FoM5HcBAZP1YlXitPH2nIRv0DtOQGMQOCkqUWmQuQAUgKV+YO86lO4S7EhTET/GP
+sQb6P7efm/Cs8wbq/wyIiEkEGBECAAkFAkmFQG0CGwwACgkQm30y8tUFgua2mACe
+JNBW4snDC4OzjKU6QT386/GA9ssAn3vLzSwn8N1xv5MihWGr5kVzvaE2
+=cjdq
+-----END PGP PUBLIC KEY BLOCK-----
--- /dev/null
+# {{ ansible_managed }}
+
+server {
+
+ listen 80;
+ # FIXME
+ #server_name jenkins.domain.tld;
+
+ location / {
+
+ proxy_set_header Host $host;
+ proxy_set_header X-Real-IP $remote_addr;
+ proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+ proxy_set_header X-Forwarded-Proto $scheme;
+
+ # Fix the "It appears that your reverse proxy set up is broken" error.
+ proxy_pass http://127.0.0.1:8080;
+ proxy_read_timeout 90;
+
+ # FIXME
+ #proxy_redirect http://127.0.0.1:8080 https://jenkins.domain.tld;
+ }
+ }
--- /dev/null
+# {{ ansible_managed }}
+
+[jenkins]
+name=Jenkins
+baseurl=http://pkg.jenkins-ci.org/redhat
+gpgcheck=1
--- /dev/null
+<?xml version="1.0" encoding="UTF-8"?><project>
+ <actions/>
+ <description><!-- Managed by Jenkins Job Builder --></description>
+ <keepDependencies>false</keepDependencies>
+ <disabled>false</disabled>
+ <displayName>Jenkins Job Builder</displayName>
+ <blockBuildWhenDownstreamBuilding>false</blockBuildWhenDownstreamBuilding>
+ <blockBuildWhenUpstreamBuilding>false</blockBuildWhenUpstreamBuilding>
+ <concurrentBuild>true</concurrentBuild>
+ <quietPeriod>5</quietPeriod>
+ <assignedNode>trusty</assignedNode>
+ <canRoam>false</canRoam>
+ <scmCheckoutRetryCount>3</scmCheckoutRetryCount>
+ <properties/>
+ <scm class="hudson.plugins.git.GitSCM">
+ <configVersion>2</configVersion>
+ <userRemoteConfigs>
+ <hudson.plugins.git.UserRemoteConfig>
+ <name>origin</name>
+ <refspec>+refs/heads/*:refs/remotes/origin/*</refspec>
+ <url>https://github.com/ceph/ceph-build.git</url>
+ </hudson.plugins.git.UserRemoteConfig>
+ </userRemoteConfigs>
+ <branches>
+ <hudson.plugins.git.BranchSpec>
+ <name>main</name>
+ </hudson.plugins.git.BranchSpec>
+ </branches>
+ <excludedUsers/>
+ <buildChooser class="hudson.plugins.git.util.DefaultBuildChooser"/>
+ <disableSubmodules>false</disableSubmodules>
+ <recursiveSubmodules>false</recursiveSubmodules>
+ <doGenerateSubmoduleConfigurations>false</doGenerateSubmoduleConfigurations>
+ <authorOrCommitter>false</authorOrCommitter>
+ <wipeOutWorkspace>true</wipeOutWorkspace>
+ <pruneBranches>false</pruneBranches>
+ <remotePoll>false</remotePoll>
+ <gitTool>Default</gitTool>
+ <submoduleCfg class="list"/>
+ <relativeTargetDir/>
+ <reference/>
+ <gitConfigName/>
+ <gitConfigEmail/>
+ <skipTag>false</skipTag>
+ <scmName/>
+ <useShallowClone>false</useShallowClone>
+ <ignoreNotifyCommit>false</ignoreNotifyCommit>
+ <extensions>
+ <hudson.plugins.git.extensions.impl.CheckoutOption>
+ <timeout>20</timeout>
+ </hudson.plugins.git.extensions.impl.CheckoutOption>
+ <hudson.plugins.git.extensions.impl.WipeWorkspace/>
+ </extensions>
+ <browser class="hudson.plugins.git.browser.GithubWeb">
+ <url>http://github.com/ceph/ceph-build.git</url>
+ </browser>
+ </scm>
+ <triggers class="vector">
+ <hudson.triggers.SCMTrigger>
+ <spec>0 */3 * * *</spec>
+ </hudson.triggers.SCMTrigger>
+ </triggers>
+ <builders>
+ <hudson.tasks.Shell>
+ <command>bash jenkins-job-builder/build/build</command>
+ </hudson.tasks.Shell>
+ </builders>
+ <publishers/>
+ <buildWrappers/>
+</project>
--- /dev/null
+ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDI0tHxJQ7n+uMiLpsoR6CKAVd0xatgVQuqp/gmnGpZU0kE54a29vPNnEt7/aLitbfyhc57rrbHOT09H3ov74GZKkoVBSbMJUSsK3drbN+58wcuk+HK0htRewmwCfcfi9AkrVbyw6pbPXW/pbjxnxLep52fKmpJJnImZ5eHRV5le9OSAcLA1LHYR4y9R3IOrTp7jgpE205UxZi5OopAx7gkyTsmfydvmq4MjaSwbVOJ7aW/Fdt5FVxNJP3Zl/OrvDoo/1WovoRIDbVQH8JFpLikMSnCqtBVIHDeW6imAKl6dpn9Gf4FxD94+OcurhXo2p0pvSzC4Strg4d2Sxqh4wph jenkins-build
--- /dev/null
+ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDI0tHxJQ7n+uMiLpsoR6CKAVd0xatgVQuqp/gmnGpZU0kE54a29vPNnEt7/aLitbfyhc57rrbHOT09H3ov74GZKkoVBSbMJUSsK3drbN+58wcuk+HK0htRewmwCfcfi9AkrVbyw6pbPXW/pbjxnxLep52fKmpJJnImZ5eHRV5le9OSAcLA1LHYR4y9R3IOrTp7jgpE205UxZi5OopAx7gkyTsmfydvmq4MjaSwbVOJ7aW/Fdt5FVxNJP3Zl/OrvDoo/1WovoRIDbVQH8JFpLikMSnCqtBVIHDeW6imAKl6dpn9Gf4FxD94+OcurhXo2p0pvSzC4Strg4d2Sxqh4wph jenkins-build
--- /dev/null
+- name: restart jenkins
+ service:
+ name: jenkins
+ state: restarted
+ when: okay_with_restart == "y"
+
+- name: restart nginx
+ service:
+ name: nginx
+ state: restarted
+
+- name: stop nginx
+ service:
+ name: nginx
+ state: stopped
--- /dev/null
+---
+- name: Check if Jenkins config exists
+ stat:
+ path: "{{ jenkins_lib }}/config.xml"
+ register: jenkins_config_file
+
+- name: Check if github oauth is already enabled
+ shell: "grep -q github-oauth {{ jenkins_lib }}/config.xml"
+ register: github_oauth_enabled
+ when: jenkins_config_file.stat.exists
+ failed_when: false
+
+- name: Remove AuthorizationStrategy$Unsecured
+ lineinfile:
+ path: "{{ jenkins_lib }}/config.xml"
+ regexp: ".*hudson\\.security\\.AuthorizationStrategy\\$Unsecured.*"
+ state: absent
+ when: jenkins_config_file.stat.exists
+
+- name: Remove SecurityRealm$None
+ lineinfile:
+ path: "{{ jenkins_lib }}/config.xml"
+ regexp: ".*hudson\\.security\\.SecurityRealm\\$None.*"
+ state: absent
+ when: jenkins_config_file.stat.exists
+
+# Jenkins will automatically update the plugin version,
+# remove the ansible blockinfile comments,
+# and encrypt github_oauth_secret when the service is restarted
+- name: Add/update github-oauth settings
+ blockinfile:
+ path: "{{ jenkins_lib }}/config.xml"
+ insertafter: ".*useSecurity.*"
+ block: |2
+ <authorizationStrategy class="org.jenkinsci.plugins.GithubAuthorizationStrategy" plugin="github-oauth@0.25">
+ <rootACL>
+ <organizationNameList class="linked-list">
+ <string></string>
+ </organizationNameList>
+ <adminUserNameList class="linked-list">
+ <string>ktdreyer</string>
+ <string>alfredodeza</string>
+ <string>gregmeno</string>
+ <string>dmick</string>
+ <string>zmc</string>
+ <string>andrewschoen</string>
+ <string>djgalloway</string>
+ <string>ceph-jenkins</string>
+ </adminUserNameList>
+ <authenticatedUserReadPermission>true</authenticatedUserReadPermission>
+ <useRepositoryPermissions>false</useRepositoryPermissions>
+ <authenticatedUserCreateJobPermission>false</authenticatedUserCreateJobPermission>
+ <allowGithubWebHookPermission>true</allowGithubWebHookPermission>
+ <allowCcTrayPermission>false</allowCcTrayPermission>
+ <allowAnonymousReadPermission>true</allowAnonymousReadPermission>
+ <allowAnonymousJobStatusPermission>true</allowAnonymousJobStatusPermission>
+ </rootACL>
+ </authorizationStrategy>
+ <securityRealm class="org.jenkinsci.plugins.GithubSecurityRealm">
+ <githubWebUri>https://github.com</githubWebUri>
+ <githubApiUri>https://api.github.com</githubApiUri>
+ <clientID>{{ github_oauth_client }}</clientID>
+ <clientSecret>{{ github_oauth_secret }}</clientSecret>
+ <oauthScopes>read:org,user:email</oauthScopes>
+ </securityRealm>
+ when: jenkins_config_file.stat.exists and github_oauth_enabled.rc == 1
+ no_log: true
+ notify:
+ - restart jenkins
--- /dev/null
+---
+
+- name: Configure Jenkins service defaults
+ template:
+ src: etc_default.j2
+ dest: '{{ jenkins.config_file }}'
+ backup: yes
+ register: config_changed
+
+- name: Configure Jenkins E-mail
+ when: email is defined
+ template:
+ src: hudson.tasks.Mailer.xml.j2
+ dest: '{{ jenkins_lib }}/hudson.tasks.Mailer.xml'
+ owner: jenkins
+ group: jenkins
+ mode: 0644
+
+- name: "copy JJB config file to {{ jenkins_jobs }}"
+ synchronize:
+ src: jobs/jenkins-job-builder
+ dest: '{{ jenkins_jobs }}'
+ owner: no
+ group: no
+
+- name: "ensure correct ownership of {{ jenkins_jobs }}"
+ file:
+ path: '{{ jenkins_jobs }}'
+ state: directory
+ owner: jenkins
+ group: jenkins
+ recurse: yes
+ notify:
+ - restart jenkins
+
+# Handle plugins
+- name: "{{ startup_delay_s | default(10) }}s delay while starting Jenkins"
+ wait_for:
+ host: localhost
+ port: '{{ jenkins_port }}'
+ delay: '{{ startup_delay_s | default(10) }}'
+ when: jenkins_install.changed or config_changed.changed
+
+- name: "Create Jenkins CLI destination directory: {{ jenkins_dest }}"
+ file:
+ path: '{{ jenkins_dest }}'
+ state: directory
+
+- name: Get Jenkins CLI
+ get_url:
+ url: http://localhost:{{ jenkins_port }}/jnlpJars/jenkins-cli.jar
+ dest: '{{ jenkins.cli_dest }}'
+ mode: 0440
+ register: jenkins_local_cli
+ until: "'OK' in jenkins_local_cli.msg or 'file already exists' in jenkins_local_cli.msg"
+ #retries: 5
+ #delay: 10
+ ignore_errors: true
+
+- name: Get Jenkins updates
+ get_url:
+ url: http://updates.jenkins-ci.org/update-center.json
+ dest: '{{ jenkins.updates_dest }}'
+ force: yes
+ mode: 0440
+ timeout: 30
+ register: jenkins_updates
+
+- name: Update-center Jenkins
+ shell: "cat {{ jenkins.updates_dest }} | sed '1d;$d' | curl -X POST -H 'Accept: application/json' -d @- http://localhost:{{ jenkins_port }}/updateCenter/byId/default/postBack"
+ when: jenkins_updates.changed
+ notify:
+ - 'restart jenkins'
+
+- name: create a jenkins-build user
+ user:
+ name: jenkins-build
+ comment: "Jenkins Build Slave User"
+
+- name: Create .ssh directory
+ file:
+ path: /home/jenkins-build/.ssh
+ state: directory
+
+- name: set the authorized key for jenkins-build from 'ssh/keys/jenkins_build.pub'
+ authorized_key:
+ user: jenkins-build
+ key: "{{ lookup('file', 'ssh/keys/jenkins_build.pub') }}"
+ tags: fix
+
--- /dev/null
+---
+- name: Install DEB dependencies
+ apt:
+ name: "{{ item }}"
+ state: present
+ with_items: "{{ jenkins.dependencies }}"
--- /dev/null
+---
+- include_tasks: repo.yml
+
+- include_tasks: dependencies.yml
+
+- include_tasks: nginx.yml
+
+- include_tasks: letsencrypt.yml
+ tags:
+ - letsencrypt
+
+- include_tasks: ufw.yml
+ tags:
+ - ufw
+
+- name: Install Jenkins
+ apt:
+ name: jenkins
+ state: present
+ register: jenkins_install
+
+- include_tasks: config.yml
+
+- include_tasks: plugins.yml
+ when: okay_with_restart == "y"
+ tags:
+ - plugins
+
+# This should only get run the first time the role is run.
+# The variables should be passed as --extra-vars via ansible-playbook command
+- include_tasks: auth.yml
+ when: github_oauth_client is defined and github_oauth_secret is defined
+ tags:
+ - auth
+
+- include_tasks: config.yml
--- /dev/null
+---
+# letsencrypt doesn't recommend using the Ubuntu-provided letsencrypt package
+# https://github.com/certbot/certbot/issues/3538
+# They do recommend using certbot from their PPA for Xenial
+# https://certbot.eff.org/#ubuntuxenial-nginx
+
+- name: install software-properties-common
+ apt:
+ name: software-properties-common
+ state: latest
+ update_cache: yes
+
+- name: add certbot PPA
+ apt_repository:
+ repo: "ppa:certbot/certbot"
+
+- name: install certbot
+ apt:
+ name: python-certbot-nginx
+ state: latest
+ update_cache: yes
+
+- name: ensure letsencrypt acme-challenge path
+ file:
+ path: "/var/www/{{ inventory_hostname }}"
+ state: "directory"
+ mode: 0755
+
+- name: attempt to renew cert if this is not first playbook run
+ command: "certbot renew"
+ register: certbot_renew
+ ignore_errors: true
+
+- name: create letsencrypt ssl cert
+ command: "certbot certonly --webroot -w /var/www/{{ inventory_hostname }} -d {{ inventory_hostname }} --email {{ letsencrypt_email }} --agree-tos"
+ when: certbot_renew.rc != 0
+
+- name: insert nginx ssl_certificate paths
+ blockinfile:
+ path: /etc/nginx/sites-enabled/jenkins.conf
+ insertbefore: ".*ssl_protocols.*"
+ block: |4
+ ssl_certificate /etc/letsencrypt/live/{{ inventory_hostname }}/fullchain.pem;
+ ssl_certificate_key /etc/letsencrypt/live/{{ inventory_hostname }}/privkey.pem;
+ notify:
+ - restart nginx
+
+- name: setup a cron to attempt to renew the SSL cert on the 1st and 15th of each month
+ cron:
+ name: "renew letsencrypt cert"
+ minute: "0"
+ hour: "0"
+ day: "1,15"
+ job: "certbot renew --renew-hook='service nginx reload'"
--- /dev/null
+---
+- include_tasks: jenkins.yml
+ tags: jenkins
--- /dev/null
+---
+
+ - name: ensure sites-available for nginx
+ file:
+ path: /etc/nginx/sites-available
+ state: directory
+
+ - name: ensure sites-enable for nginx
+ file:
+ path: /etc/nginx/sites-enabled
+ state: directory
+
+ - name: remove default nginx site
+ file:
+ path: /etc/nginx/sites-enabled/default
+ state: absent
+
+ - name: write nginx.conf
+ template:
+ src: templates/nginx.conf
+ dest: /etc/nginx/nginx.conf
+
+ - name: create nginx site config
+ template:
+ src: templates/jenkins.conf
+ dest: /etc/nginx/sites-available/jenkins.conf
+ notify:
+ - restart nginx
+
+ - name: link nginx config
+ file:
+ src: /etc/nginx/sites-available/jenkins.conf
+ dest: /etc/nginx/sites-enabled/jenkins.conf
+ state: link
+ force: yes
+
+ - name: Enable Nginx service
+ service:
+ name: nginx
+ enabled: yes
+ state: started
--- /dev/null
+---
+# Use copy rather than blockinfile here: blockinfile's marker comments
+# would make the XML unparseable
+- name: Make sure Jenkins CLI is enabled
+ copy:
+ dest: "{{ jenkins_lib }}/jenkins.CLI.xml"
+ content: |
+ <?xml version='1.0' encoding='UTF-8'?>
+ <jenkins.CLI>
+ <enabled>true</enabled>
+ </jenkins.CLI>
+
+- name: Temporarily disable public http access
+ service:
+ name: nginx
+ state: stopped
+
+- name: Temporarily disable security so anonymous can use jenkins-cli
+ replace:
+ path: "{{ jenkins_lib }}/config.xml"
+ regexp: '<useSecurity>true</useSecurity>'
+ replace: '<useSecurity>false</useSecurity>'
+ backup: yes
+ register: original_jenkins_config
+
+- name: Restart Jenkins with no security so we can use jenkins-cli
+ service:
+ name: jenkins
+ state: restarted
+
+- name: List plugins
+ shell: java -jar {{ jenkins.cli_dest }} -s http://127.0.0.1:{{ jenkins_port }} list-plugins | cut -f 1 -d ' '
+ when: plugins is defined
+ register: plugins_installed
+ # Jenkins takes a while to come back up
+ retries: 10
+ until: '"503" not in plugins_installed.stderr and plugins_installed.stdout != ""'
+
+- name: Install/update plugins
+ shell: java -jar {{ jenkins.cli_dest }} -s http://127.0.0.1:{{ jenkins_port }} install-plugin {{ item }}
+ when: plugins_installed.changed and plugins_installed.stdout.find(item) == -1
+ with_items: "{{ plugins }}"
+ # This is only here because the postbuildscript plugin is currently deprecated
+ ignore_errors: yes
+
+- name: List plugins to be updated
+ shell: java -jar {{ jenkins.cli_dest }} -s http://127.0.0.1:{{ jenkins_port }} list-plugins | grep ')$' | cut -f 1 -d ' ' | sed ':a;N;$!ba;s/\n/ /g'
+ register: plugins_updates
+
+- name: Update plugins
+ shell: java -jar {{ jenkins.cli_dest }} -s http://127.0.0.1:{{ jenkins_port }} install-plugin {{ item }}
+ with_items: "{{ plugins_updates.stdout.split() }}"
+ when: plugins_updates.stdout != ''
+ ignore_errors: yes
+
+- name: Restore original Jenkins config to re-enable auth
+ copy:
+ src: "{{ original_jenkins_config.backup_file }}"
+ dest: "{{ jenkins_lib }}/config.xml"
+ remote_src: yes
+ owner: jenkins
+ group: jenkins
+
+- name: Restart Jenkins with security
+ service:
+ name: jenkins
+ state: restarted
+
+- name: Re-enable public http access
+ service:
+ name: nginx
+ state: started
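The "List plugins to be updated" task relies on `jenkins-cli list-plugins` marking updatable plugins by appending the newer version in parentheses at the end of the line. A sketch of that pipeline against sample output (the plugin lines are invented):

```shell
# Invented sample of `list-plugins` output: an update is available when
# the line ends with a parenthesized newer version.
sample='ant Ant Plugin 1.8
git Git plugin 3.9.1 (3.9.4)
mailer Mailer Plugin 1.21 (1.23)'

# Same pipeline as the task: keep lines ending in ")", take the short
# plugin name (first field), and join the names onto one line.
updatable=$(printf '%s\n' "$sample" | grep ')$' | cut -f 1 -d ' ' | sed ':a;N;$!ba;s/\n/ /g')
echo "$updatable"   # git mailer
```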
--- /dev/null
+---
+- name: Add Jenkins GPG Key
+ apt_key:
+ url: "https://jenkins-ci.org/debian/jenkins-ci.org.key"
+ state: present
+
+- name: Add the jenkins repo
+ apt_repository:
+ repo: 'deb https://pkg.jenkins.io/debian-stable binary/'
+ state: present
+ update_cache: true
--- /dev/null
+---
+- name: install ufw
+ apt:
+ name: ufw
+ state: latest
+
+- name: allow port 8080 from localhost only
+ ufw:
+ port: 8080
+ src: 127.0.0.1
+ rule: allow
+
+- name: allow custom ssh, http, https, and JNLP slave port
+ ufw:
+ port: "{{ item }}"
+ rule: allow
+ with_items:
+ - 2222
+ - 80
+ - 443
+ - 49187
+
+- name: reload ufw
+ ufw:
+ state: reloaded
+
+- name: start ufw
+ ufw:
+ state: enabled
--- /dev/null
+# {{ ansible_managed }}
+
+# pulled in from the init script; makes things easier.
+NAME=jenkins
+
+# location of java
+JAVA=/usr/bin/java
+
+# From https://jenkins.io/blog/2016/11/21/gc-tuning/
+JAVA_ARGS="-Xmx20g -Xms20g -Djava.awt.headless=true -Dhudson.model.User.SECURITY_243_FULL_DEFENSE=false -Dhudson.model.ParametersAction.keepUndefinedParameters=true -server -XX:+AlwaysPreTouch -Xloggc:/var/log/jenkins/gc-%t.log -XX:+PrintGC -XX:+PrintGCDetails -XX:+UseG1GC -XX:+ExplicitGCInvokesConcurrent -XX:+ParallelRefProcEnabled -XX:+UseStringDeduplication -XX:+UnlockExperimentalVMOptions -XX:G1NewSizePercent=20 -XX:+UnlockDiagnosticVMOptions -XX:G1SummarizeRSetStatsPeriod=1"
+
+PIDFILE=/var/run/$NAME/$NAME.pid
+
+# user and group to be invoked as (default to jenkins)
+JENKINS_USER=$NAME
+JENKINS_GROUP=$NAME
+
+# location of the jenkins war file
+JENKINS_WAR=/usr/share/$NAME/$NAME.war
+
+# jenkins home location
+JENKINS_HOME=/var/lib/$NAME
+
+# set this to false if you don't want Hudson to run by itself
+# in this set up, you are expected to provide a servlet container
+# to host jenkins.
+RUN_STANDALONE=true
+
+# log location. this may be a syslog facility.priority
+JENKINS_LOG=/var/log/$NAME/$NAME.log
+
+# OS LIMITS SETUP
+# comment this out to observe /etc/security/limits.conf
+# this is on by default because http://github.com/jenkinsci/jenkins/commit/2fb288474e980d0e7ff9c4a3b768874835a3e92e
+# reported that Ubuntu's PAM configuration doesn't include pam_limits.so, and as a result the # of file
+# descriptors are forced to 1024 regardless of /etc/security/limits.conf
+MAXOPENFILES=8192
+
+# port for HTTP connector (default 8080; disable with -1)
+HTTP_PORT={{ jenkins_port }}
+
+# port for AJP connector (disabled by default)
+AJP_PORT=-1
+
+# servlet context, important if you want to use apache proxying
+PREFIX={{ prefix }}
+
+JENKINS_ARGS="--webroot=/var/cache/$NAME/war --httpPort=$HTTP_PORT --ajp13Port=$AJP_PORT"
--- /dev/null
+<?xml version='1.0' encoding='UTF-8'?>
+<hudson.tasks.Mailer_-DescriptorImpl plugin="mailer@1.11">
+ <defaultSuffix>{{ email.default_email_suffix }}</defaultSuffix>
+ <smtpHost>{{ email.smtp_host }}</smtpHost>
+ <useSsl>{{ email.smtp_ssl }}</useSsl>
+ <charset>UTF-8</charset>
+</hudson.tasks.Mailer_-DescriptorImpl>
--- /dev/null
+# {{ ansible_managed }}
+
+server {
+ listen 80 default_server;
+ listen 443 default_server ssl;
+
+ server_name {{ inventory_hostname }};
+
+ ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
+ add_header Strict-Transport-Security "max-age=31536000";
+
+ access_log /var/log/nginx/jenkins_access.log;
+ error_log /var/log/nginx/jenkins_error.log;
+
+ location '/.well-known/acme-challenge' {
+ default_type "text/plain";
+ root /var/www/{{ inventory_hostname }};
+ }
+
+ location / {
+ proxy_set_header Host $host;
+ proxy_set_header X-Real-IP $remote_addr;
+ proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+ proxy_set_header X-Forwarded-Proto $scheme;
+
+ proxy_pass http://127.0.0.1:{{ jenkins_port }};
+ proxy_read_timeout 180;
+
+ # http://tracker.ceph.com/issues/18176
+ proxy_buffer_size 128k;
+ proxy_buffers 4 256k;
+ proxy_busy_buffers_size 256k;
+
+ # Redirect all plaintext HTTP to HTTPS
+ if ($scheme != "https") {
+ rewrite ^ https://$host$uri permanent;
+ }
+ }
+}
--- /dev/null
+# {{ ansible_managed }}
+user www-data;
+worker_processes {{ nginx_processor_count }};
+worker_rlimit_nofile 8192;
+
+pid /var/run/nginx.pid;
+
+events {
+ worker_connections {{ nginx_connections }} ;
+ # multi_accept on;
+}
+
+http {
+
+ ##
+ # Basic Settings
+ ##
+
+ #sendfile on;
+ tcp_nopush on;
+ tcp_nodelay on;
+ keepalive_timeout 65;
+ types_hash_max_size 2048;
+ server_tokens off;
+
+ # server_names_hash_bucket_size 64;
+ # server_name_in_redirect off;
+
+ include /etc/nginx/mime.types;
+ default_type application/octet-stream;
+
+ ##
+ # Logging Settings
+ ##
+
+ access_log /var/log/nginx/access.log;
+ error_log /var/log/nginx/error.log;
+
+ ##
+ # Gzip Settings
+ ##
+
+ gzip on;
+ gzip_disable "msie6";
+
+ # gzip_vary on;
+ # gzip_proxied any;
+ # gzip_comp_level 6;
+ # gzip_buffers 16 8k;
+ # gzip_http_version 1.1;
+ # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
+
+ ##
+ # If HTTPS, then set a variable so it can be passed along.
+ ##
+
+ map $scheme $server_https {
+ default off;
+ https on;
+ }
+
+ ##
+ # Virtual Host Configs
+ ##
+
+ include /etc/nginx/conf.d/*.conf;
+ include /etc/nginx/sites-enabled/*;
+}
+
--- /dev/null
+---
+jenkins_dest: /opt/jenkins
+jenkins_lib: /var/lib/jenkins
+jenkins_jobs: '{{ jenkins_lib }}/jobs'
+jenkins:
+ dependencies: # Jenkins dependencies
+ - 'git'
+ - 'curl'
+ - 'nginx'
+ - 'default-jdk'
+ config_file: '/etc/default/jenkins'
+ cli_dest: '{{ jenkins_dest }}/jenkins-cli.jar' # Jenkins CLI destination
+ updates_dest: '{{ jenkins_dest }}/updates_jenkins.json' # Jenkins updates file
--- /dev/null
+---
+
+- name: undo last commit from failed release
+ command: git reset --soft HEAD~1 chdir=ceph-deploy
+ when: (clean and last_commit.stdout == tag_name)
+
+- name: git checkout {{ branch }} branch
+ command: git checkout {{ branch }} chdir=ceph-deploy
+
+- name: remove local tag
+ command: git tag -d v{{ version }} chdir=ceph-deploy
+ ignore_errors: yes
+
+- name: remove remote tag
+ command: git push jenkins :refs/tags/v{{ version }} chdir=ceph-deploy
+ ignore_errors: yes
+
+- name: force push changes to jenkins git repo
+ command: git push -f jenkins {{ branch }} chdir=ceph-deploy
--- /dev/null
+---
+
+- name: check if ceph-deploy repo exists
+ stat: path='./ceph-deploy'
+ register: 'cdep_repo'
+
+- name: clone the ceph-deploy repository
+ git: repo=git@github.com:ceph/ceph-deploy dest=ceph-deploy
+ when: cdep_repo.stat.exists is defined and not cdep_repo.stat.exists
+
+- name: rename origin to jenkins
+ command: git remote rename origin jenkins chdir=ceph-deploy
+ ignore_errors: yes
+
+- name: fetch the latest from remote
+ command: git fetch jenkins chdir=ceph-deploy
+
+- name: ensure local repo is in sync with remote
+ command: git reset --hard jenkins/{{ branch }} chdir=ceph-deploy
+
+- name: check if we are re-pushing the release commit
+ command: git log -1 --pretty=%B chdir=ceph-deploy
+ register: 'last_commit'
+
+ # the previous commit+tag was probably botched, so 'clean' was passed: it
+ # rolls back that commit, deletes the local and remote tags, and force
+ # pushes the new changes
+- include_tasks: clear_version.yml
+ when: (clean and last_commit.stdout == tag_name)
+
+ # if the last commit isn't the release commit we already made, go ahead
+ # and make the changes + tag for the release. Otherwise skip, because it
+ # was already done for this release
+- include_tasks: release.yml
+ when: (tag_name != last_commit.stdout)
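The `clean` rollback decision above hinges on comparing the last commit message against the tag name. A sketch of that check in a throwaway git repo (the version number is made up):

```shell
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email release@example.com
git config user.name "Release Bot"

# Simulate a previous release run that already committed "v1.5.39".
echo 1 > setup.py && git add setup.py
git commit -q -m "v1.5.39"

# The playbook's check: last commit message vs. the tag about to be cut.
tag_name="v1.5.39"
last_commit=$(git log -1 --pretty=%B)
if [ "$last_commit" = "$tag_name" ]; then
  echo "release commit already present; roll back via clear_version.yml"
fi
```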
--- /dev/null
+---
+
+- name: fetch the latest from remote
+ command: git fetch jenkins chdir=ceph-deploy
+
+- name: ensure local repo is in sync with remote
+ command: git reset --hard jenkins/{{ branch }} chdir=ceph-deploy
+
+- name: git checkout {{ branch }} branch
+ command: git checkout {{ branch }} chdir=ceph-deploy
+
+- name: set the debian version
+ command: dch -v {{ version }} -D stable "New upstream release" chdir=ceph-deploy
+ environment:
+ DEBEMAIL: "{{ debemail }}"
+ DEBFULLNAME: "{{ debfullname }}"
+
+- name: set the version in the spec file
+ lineinfile: dest=ceph-deploy/ceph-deploy.spec
+ regexp="Version{{':'}}\s+"
+ line="Version{{':'}} {{ version }}"
+ state=present
+
+- name: commit the version changes
+ command: git commit -a -m "{{ version }}" chdir=ceph-deploy
+
+ # from script: /srv/ceph-build/tag_release.sh
+ # Contents of tag_release.sh
+ # FIXME: this used to be a signed tag:
+ # command: git tag -s "v{{ version }}" -u 17ED316D -m "v{{ version }}" chdir=ceph-deploy
+- name: tag the version
+ command: git tag "v{{ version }}" -m "v{{ version }}" chdir=ceph-deploy
+ environment:
+ GNUPGHOME: ~/build/gnupg.ceph-release
+ DEBEMAIL: "{{ debemail }}"
+ DEBFULLNAME: "{{ debfullname }}"
+
+- name: push changes to jenkins git repo
+ command: git push jenkins {{ branch }} chdir=ceph-deploy
+
+ # FIXME: this used to be set when signing the tag:
+ # environment:
+ # GNUPGHOME: ~/build/gnupg.ceph-release
+- name: push the newly created tag
+ command: git push jenkins v{{ version }} chdir=ceph-deploy
--- /dev/null
+---
+ - name: "update apt cache"
+ action: apt update-cache=yes
--- /dev/null
+---
+- name: ensure a clean clone
+ file:
+ path: ceph
+ state: absent
+
+- name: clone the ceph repository
+ git:
+ repo: https://github.com/ceph/ceph
+ dest: ceph
+ remote: upstream
+ accept_hostkey: yes
+ recursive: false
+
+- name: add releases repo
+ command: git remote add -f releases git@github.com:ceph/ceph-releases.git
+ args:
+ chdir: ceph
+ ignore_errors: yes
+
+- name: add security repo
+ command: git remote add -f security git@github.com:ceph/ceph-private.git
+ args:
+ chdir: ceph
+ ignore_errors: yes
+ when: "release == 'SECURITY'"
+
+- name: git fetch --all
+ command: git fetch --all
+ args:
+ chdir: ceph
+
+# REGULAR / RC / HOTFIX
+# This assumes {{ branch }} has been pushed to {{ branch }}-release and is ready to be built
+- name: "git checkout {{ branch }}-release for non-SECURITY release"
+ command: git checkout -f -B {{ branch }}-release upstream/{{ branch }}-release
+ args:
+ chdir: ceph
+ when:
+ - "release != 'SECURITY'"
+ - tag|bool is true
+
+- name: "git checkout previously existing tag for re-build"
+ command: git checkout -f v{{ version }}
+ args:
+ chdir: ceph
+ when:
+ - "release != 'SECURITY'"
+ - tag|bool is false
+
+# SECURITY
+- name: "git checkout security {{ branch }}-release branch"
+ command: git checkout -f -B {{ branch }}-release security/{{ branch }}-release
+ args:
+ chdir: ceph
+ ignore_errors: yes
+ when: "release == 'SECURITY'"
+
+- name: git submodule update
+ command: git submodule update --init
+ args:
+ chdir: ceph
+
+- name: check if CMakeLists.txt exists
+ stat:
+ path: ceph/CMakeLists.txt
+ register: cmake_lists
+
+- name: replace the version in CMakeLists.txt
+ lineinfile:
+ dest: ceph/CMakeLists.txt
+ regexp: '^ VERSION \d+\.\d+\.\d+$'
+ line: ' VERSION {{ version }}'
+ when:
+ - cmake_lists.stat.exists
+ - tag|bool is true
+
+- set_fact:
+ dch_release_type: rc
+ when: "release == 'RELEASE_CANDIDATE'"
+
+- name: set the debian version
+ command: dch -v {{ version }}-1 -D {{ dch_release_type|default('stable') }} "New upstream release"
+ args:
+ chdir: ceph
+ environment:
+ DEBEMAIL: "{{ debemail }}"
+ DEBFULLNAME: "{{ debfullname }}"
+ when: tag|bool is true
+
+- name: git config user.name
+ command: git config user.name "Ceph Release Team"
+ args:
+ chdir: ceph
+ when: tag|bool is true
+
+- name: git config user.email
+ command: git config user.email "ceph-maintainers@ceph.io"
+ args:
+ chdir: ceph
+ when: tag|bool is true
+
+- name: commit the version changes
+ command: git commit -a -s -m "{{ version }}"
+ args:
+ chdir: ceph
+ when: tag|bool is true
+
+- import_tasks: write_sha1_file.yml
+
+- name: tag the version
+ command: git tag -f "v{{ version }}" -m "v{{ version }}"
+ args:
+ chdir: ceph
+ when: tag|bool is true
+
+- name: push the version commit to ceph-releases.git
+ command: git push -f releases {{ branch }}-release
+ args:
+ chdir: ceph
+ when: tag|bool is true
+
+# the colon prefixed to the v{{ version }} tag pushes an empty ref, deleting the previous remote tag
+# https://git-scm.com/docs/git-push#Documentation/git-push.txt--d
+- name: clear the previous remote tag
+ command: git push releases :v{{ version }}
+ args:
+ chdir: ceph
+ ignore_errors: yes
+ when: tag|bool is true
+
+- name: push the tag to ceph-releases.git
+ command: git push releases v{{ version }}
+ args:
+ chdir: ceph
+ when: tag|bool is true
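The empty-source refspec used above (`git push releases :v{{ version }}`) deletes the remote tag by pushing "nothing" to it. A sketch against a local bare repo standing in for ceph-releases.git (the tag name is invented):

```shell
cd "$(mktemp -d)"
git init -q --bare releases.git      # stand-in for the remote
git init -q work && cd work
git config user.email release@example.com
git config user.name "Release Bot"
echo x > f && git add f && git commit -q -m init
git tag v12.2.0
git remote add releases ../releases.git
git push -q releases v12.2.0         # publish the tag

git push -q releases :v12.2.0        # empty source: delete the remote tag
remaining=$(git ls-remote --tags releases)
echo "tags left: ${remaining:-none}"   # tags left: none
```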
--- /dev/null
+---
+- import_tasks: create.yml
+ when: "stage == 'create'"
+
+- import_tasks: push.yml
+ when:
+ - "stage == 'push'"
+ - "release != 'SECURITY'"
+ - tag|bool is true
+
+# This only runs to prevent the Archive Artifacts plugin from hanging
+- import_tasks: write_sha1_file.yml
+ when:
+ - "stage == 'push'"
+ - "release == 'SECURITY'"
--- /dev/null
+---
+# Note: None of this will get run when "release == 'SECURITY'"
+# We want to make sure packages get pulled, signed, and pushed before publicly
+# pushing the security fix. Pushing tags will be done manually by a human.
+
+- name: Check if a pull request already exists for branch
+ ansible.builtin.uri:
+ url: "https://api.github.com/repos/ceph/ceph/pulls?state=open&head=ceph:{{ branch }}-release&base={{ branch }}"
+ method: GET
+ headers:
+ Accept: "application/vnd.github.v3+json"
+ Authorization: "token {{ token }}"
+ return_content: true
+ register: pr_check
+ no_log: true # avoid logging the token and response details
+
+- name: Fail if a PR already exists
+ ansible.builtin.fail:
+ msg: >-
+ A pull request already exists for branch '{{ branch }}-release':
+ {{ pr_check.json[0].html_url }}
+ when: pr_check.json | length > 0
+
+- name: clone the ceph repository
+ git:
+ repo: git@github.com:ceph/ceph.git
+ dest: ceph
+ remote: upstream
+ accept_hostkey: yes
+ recursive: no
+
+# the colon prefixed to the v{{ version }} tag pushes an empty ref, deleting the previous remote tag
+# https://git-scm.com/docs/git-push#Documentation/git-push.txt--d
+- name: clear the previous remote tag
+ command: git push upstream :v{{ version }}
+ args:
+ chdir: ceph
+ register: push_clear_remote_tag
+ ignore_errors: yes
+ when: tag|bool is true
+ failed_when: >-
+ push_clear_remote_tag is failed and
+ ('remote ref does not exist' not in (push_clear_remote_tag.stderr | default('')))
+
+- name: add releases repo
+ command: git remote add -f releases git@github.com:ceph/ceph-releases.git
+ args:
+ chdir: ceph
+ ignore_errors: yes
+
+- name: git fetch --all
+ command: git fetch --all
+ args:
+ chdir: ceph
+
+- name: "git checkout the version commit from ceph-releases"
+ command: git checkout -f -B {{ branch }}-release releases/{{ branch }}-release
+ args:
+ chdir: ceph
+
+- import_tasks: write_sha1_file.yml
+
+- name: push version commit to BRANCH-release branch
+ command: git push upstream {{ branch }}-release
+ args:
+ chdir: ceph
+
+- name: "create pull request to merge {{ branch }}-release back into {{ branch }}"
+ uri:
+ url: https://api.github.com/repos/ceph/ceph/pulls
+ method: POST
+ status_code: 201
+ headers:
+ Accept: "application/vnd.github.v3+json"
+ Authorization: "token {{ token }}"
+ body:
+ title: "v{{ version }}"
+ body: "{{ pr_checklist }}"
+ head: "{{ branch }}-release"
+ base: "{{ branch }}"
+ body_format: json
+ tags: pr
+ no_log: true
+
+- name: push the newly created tag
+ command: git push upstream v{{ version }}
+ args:
+ chdir: ceph
--- /dev/null
+
+ - name: set the debian version
+ command: dch -v {{ version }}-1 -D stable "New upstream release" chdir=ceph
+ environment:
+ DEBEMAIL: "{{ debemail }}"
+ DEBFULLNAME: "{{ debfullname }}"
--- /dev/null
+
+ - name: set the debian version
+ command: dch -v {{ version }}-1 -D stable "Development release" chdir=ceph
+ environment:
+ DEBEMAIL: "{{ debemail }}"
+ DEBFULLNAME: "{{ debfullname }}"
--- /dev/null
+
+ - name: set the debian version
+ command: dch -v {{ version }}-1 -D stable "New upstream release" chdir=ceph
+ environment:
+ DEBEMAIL: "{{ debemail }}"
+ DEBFULLNAME: "{{ debfullname }}"
+
--- /dev/null
+---
+# These tasks write the version commit's sha1 to a file for
+# ceph-source-dist and ceph-dev-pipeline to later consume.
+#
+# We write it again when running the playbook with stage=push only so
+# the Archive Artifacts plugin doesn't hang indefinitely. We don't want
+# to configure the plugin to allow an empty or missing file.
+
+- name: record commit sha1
+ command: git rev-parse HEAD
+ args:
+ chdir: ceph
+ register: commit_sha
+
+- name: ensure ceph/dist dir exists
+ file:
+ path: ceph/dist
+ state: directory
+
+- name: save commit sha1 to file
+ copy:
+ content: "{{ commit_sha.stdout }}"
+ dest: ceph/dist/sha1
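The tasks above boil down to capturing HEAD's sha1 into `dist/sha1` for later consumers. A shell sketch of the same result (the repo contents are invented):

```shell
cd "$(mktemp -d)"
git init -q ceph-demo && cd ceph-demo
git config user.email release@example.com
git config user.name "Release Bot"
echo x > f && git add f && git commit -q -m init

# Equivalent of record + save: write HEAD's sha1, no trailing newline,
# mirroring the copy module's content behavior.
mkdir -p dist
git rev-parse HEAD | tr -d '\n' > dist/sha1

sha=$(cat dist/sha1)
echo "${#sha}"   # 40: a full hex sha1
```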
--- /dev/null
+---
+
+app_name: "grafana"
+fqdn: "grafana.local"
--- /dev/null
+---
+
+- name: reload systemd
+ become: yes
+ command: systemctl daemon-reload
+
+- name: restart app
+ become: true
+ service:
+ name: grafana-server
+ state: restarted
+ enabled: yes
+
+- name: restart nginx
+ become: true
+ service:
+ name: nginx
+ state: restarted
+ enabled: yes
--- /dev/null
+---
+- name: update apt cache
+ apt:
+ update_cache: yes
+ become: yes
+
+- name: install ssl system requirements
+ become: yes
+ apt:
+ name: "{{ item }}"
+ state: present
+ with_items: "{{ ssl_requirements }}"
+ tags:
+ - packages
+
+- name: install system packages
+ become: yes
+ apt:
+ name: "{{ item }}"
+ state: present
+ with_items: "{{ system_packages }}"
+ tags:
+ - packages
+
+- name: generate pseudo-random password for admin user
+ shell: python -c "import os; print os.urandom(30).encode('base64').rstrip()"
+ register: admin_password
+ changed_when: false
+
+- name: generate pseudo-random password for the database connection
+ shell: python -c "import os; print os.urandom(30).encode('base64').rstrip()"
+ register: db_password
+ changed_when: false
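The two password tasks above generate throwaway credentials from `os.urandom`. A Python-3 shell equivalent (the playbook targets a Python-2 era host), showing the expected shape:

```shell
# 30 random bytes, base64-encoded: 30/3*4 = 40 characters, no padding.
pw=$(python3 -c "import os, base64; print(base64.b64encode(os.urandom(30)).decode())")
echo "${#pw}"   # 40
```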
+
+- name: configure grafana
+ template:
+ src: ../templates/grafana.ini.j2
+ dest: "/etc/grafana/grafana.ini"
+ notify:
+ - restart app
+ become: true
+
+- include_tasks: postgresql.yml
+ tags:
+ - postgresql
+
+- include_tasks: nginx.yml
+
+- name: ensure nginx is running
+ become: true
+ service:
+ name: nginx
+ state: started
+ enabled: yes
+
+- name: ensure grafana is restarted
+ become: true
+ service:
+ name: grafana-server
+ state: restarted
+ enabled: yes
--- /dev/null
+---
+- name: create nginx site config
+ template:
+ src: ../templates/nginx_site.conf
+ dest: /etc/nginx/sites-available/{{ app_name }}.conf
+ become: true
+ notify:
+ - restart nginx
+
+- name: link nginx config
+ file:
+ src: /etc/nginx/sites-available/{{ app_name }}.conf
+ dest: /etc/nginx/sites-enabled/{{ app_name }}.conf
+ state: link
+ become: true
--- /dev/null
+---
+- name: ensure database service is up
+ service:
+ name: postgresql
+ state: started
+ enabled: yes
+ become: yes
+
+- name: allow users to connect locally
+ become: yes
+ lineinfile:
+ # TODO: should not hardcode that version
+ dest: /etc/postgresql/9.5/main/pg_hba.conf
+ regexp: '^host\s+all\s+all\s+127.0.0.1/32'
+ line: 'host all all 127.0.0.1/32 md5'
+ backrefs: yes
+ register: pg_hba_conf
+
+- name: restart postgresql if pg_hba.conf changed
+ service:
+ name: postgresql
+ state: restarted
+ become: true
+ when: pg_hba_conf.changed
+
+- name: make {{ app_name }} user
+ postgresql_user:
+ name: "{{ app_name }}"
+ password: "{{ db_password.stdout }}"
+ role_attr_flags: SUPERUSER
+ login_user: postgres
+ become_user: postgres
+ become: yes
+
+- name: Make {{ app_name }} database
+ postgresql_db:
+ name: "{{ app_name }}"
+ owner: "{{ app_name }}"
+ state: present
+ login_user: postgres
+ become_user: postgres
+ become: yes
+
+- name: ensure database service is up
+ service:
+ name: postgresql
+ state: started
+ enabled: yes
+ become: yes
--- /dev/null
+# {{ ansible_managed }}
+##################### Grafana Configuration Example #####################
+#
+# Everything has defaults so you only need to uncomment things you want to
+# change
+
+# possible values : production, development
+; app_mode = production
+
+#################################### Paths ####################################
+[paths]
+# Path to where grafana can store temp files, sessions, and the sqlite3 db (if that is used)
+#
+;data = /var/lib/grafana
+#
+# Directory where grafana can store logs
+#
+;logs = /var/log/grafana
+
+#################################### Server ####################################
+[server]
+# Protocol (http or https)
+;protocol = http
+
+# The ip address to bind to, empty will bind to all interfaces
+;http_addr =
+
+# The http port to use
+;http_port = 3000
+
+# The public facing domain name used to access grafana from a browser
+;domain = localhost
+
+# Redirect to correct domain if host header does not match domain
+# Prevents DNS rebinding attacks
+;enforce_domain = false
+
+# The full public facing url
+;root_url = %(protocol)s://%(domain)s:%(http_port)s/
+
+# Log web requests
+;router_logging = false
+
+# the path relative working path
+;static_root_path = public
+
+# enable gzip
+;enable_gzip = false
+
+# https certs & key file
+;cert_file =
+;cert_key =
+
+#################################### Database ####################################
+[database]
+# Either "mysql", "postgres" or "sqlite3", it's your choice
+type = postgres
+host = 127.0.0.1:5432
+name = {{ app_name }}
+user = {{ app_name }}
+password = {{ db_password.stdout }}
+
+# For "postgres" only, either "disable", "require" or "verify-full"
+ssl_mode = disable
+
+# For "sqlite3" only, path relative to data_path setting
+;path = grafana.db
+
+#################################### Session ####################################
+[session]
+# Either "memory", "file", "redis", "mysql", "postgres", default is "file"
+;provider = file
+
+# Provider config options
+# memory: does not have any config yet
+# file: session dir path, is relative to grafana data_path
+# redis: config like redis server e.g. `addr=127.0.0.1:6379,pool_size=100,db=grafana`
+# mysql: go-sql-driver/mysql dsn config string, e.g. `user:password@tcp(127.0.0.1:3306)/database_name`
+# postgres: user=a password=b host=localhost port=5432 dbname=c sslmode=disable
+;provider_config = sessions
+
+# Session cookie name
+;cookie_name = grafana_sess
+
+# If you use session in https only, default is false
+;cookie_secure = false
+
+# Session life time, default is 86400
+;session_life_time = 86400
+
+#################################### Analytics ####################################
+[analytics]
+# Server reporting, sends usage counters to stats.grafana.org every 24 hours.
+# No ip addresses are being tracked, only simple counters to track
+# running instances, dashboard and error counts. It is very helpful to us.
+# Change this option to false to disable reporting.
+reporting_enabled = false
+
+# Google Analytics universal tracking code, only enabled if you specify an id here
+;google_analytics_ua_id =
+
+#################################### Security ####################################
+[security]
+# default admin user, created on startup
+admin_user = admin
+
+# default admin password, can be changed before first start of grafana, or in profile settings
+admin_password = {{ admin_password.stdout }}
+
+# used for signing
+;secret_key = SW2YcwTIb9zpOOhoPsMm
+
+# Auto-login remember days
+;login_remember_days = 7
+;cookie_username = grafana_user
+;cookie_remember_name = grafana_remember
+
+# disable gravatar profile images
+;disable_gravatar = false
+
+# data source proxy whitelist (ip_or_domain:port separated by spaces)
+;data_source_proxy_whitelist =
+
+#################################### Users ####################################
+[users]
+# disable user signup / registration
+allow_sign_up = false
+
+# Allow non admin users to create organizations
+allow_org_create = false
+
+# Set to true to automatically assign new users to the default organization (id 1)
+;auto_assign_org = true
+
+# Default role new users will be automatically assigned (if disabled above is set to true)
+;auto_assign_org_role = Viewer
+
+# Background text for the user field on the login page
+;login_hint = email or username
+
+#################################### Anonymous Auth ##########################
+[auth.anonymous]
+# enable anonymous access
+;enabled = false
+
+# specify organization name that should be used for unauthenticated users
+org_name = Ceph
+
+# specify role for unauthenticated users
+;org_role = Viewer
+
+{% if github_client_id is defined %}
+#################################### Github Auth ##########################
+[auth.github]
+enabled = false
+;allow_sign_up = false
+client_id = {{ github_client_id }}
+client_secret = {{ github_client_secret }}
+scopes = user:email,read:org
+;auth_url = https://github.com/login/oauth/authorize
+;token_url = https://github.com/login/oauth/access_token
+;api_url = https://api.github.com/user
+;team_ids =
+allowed_organizations = ceph
+{% endif %}
+#################################### Google Auth ##########################
+[auth.google]
+;enabled = false
+;allow_sign_up = false
+;client_id = some_client_id
+;client_secret = some_client_secret
+;scopes = https://www.googleapis.com/auth/userinfo.profile https://www.googleapis.com/auth/userinfo.email
+;auth_url = https://accounts.google.com/o/oauth2/auth
+;token_url = https://accounts.google.com/o/oauth2/token
+;api_url = https://www.googleapis.com/oauth2/v1/userinfo
+;allowed_domains =
+
+#################################### Auth Proxy ##########################
+[auth.proxy]
+;enabled = false
+;header_name = X-WEBAUTH-USER
+;header_property = username
+;auto_sign_up = true
+
+#################################### Basic Auth ##########################
+[auth.basic]
+;enabled = true
+
+#################################### Auth LDAP ##########################
+[auth.ldap]
+;enabled = false
+;config_file = /etc/grafana/ldap.toml
+
+#################################### SMTP / Emailing ##########################
+[smtp]
+;enabled = false
+;host = localhost:25
+;user =
+;password =
+;cert_file =
+;key_file =
+;skip_verify = false
+;from_address = admin@grafana.localhost
+
+[emails]
+;welcome_email_on_sign_up = false
+
+#################################### Logging ##########################
+[log]
+# Either "console", "file", default is "console"
+# Use comma to separate multiple modes, e.g. "console, file"
+;mode = console, file
+
+# Buffer length of channel, keep it as it is if you don't know what it is.
+;buffer_len = 10000
+
+# Either "Trace", "Debug", "Info", "Warn", "Error", "Critical", default is "Trace"
+;level = Info
+
+# For "console" mode only
+[log.console]
+;level =
+
+# For "file" mode only
+[log.file]
+;level =
+# This enables automated log rotation (a switch for the following options), default is true
+;log_rotate = true
+
+# Max line number of single file, default is 1000000
+;max_lines = 1000000
+
+# Max size shift of single file, default is 28 means 1 << 28, 256MB
+;max_lines_shift = 28
+
+# Segment log daily, default is true
+;daily_rotate = true
+
+# Expired days of log file(delete after max days), default is 7
+;max_days = 7
+
+#################################### AMPQ Event Publisher ##########################
+[event_publisher]
+;enabled = false
+;rabbitmq_url = amqp://localhost/
+;exchange = grafana_events
+
+;#################################### Dashboard JSON files ##########################
+[dashboards.json]
+;enabled = false
+;path = /var/lib/grafana/dashboards
--- /dev/null
+server {
+ listen 443 default_server ssl;
+ server_name {{ fqdn }};
+
+ ssl_certificate /etc/ssl/certs/{{ fqdn }}-bundled.crt;
+ ssl_certificate_key /etc/ssl/private/{{ fqdn }}.key;
+ ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
+ add_header Strict-Transport-Security "max-age=31536000";
+
+ access_log /var/log/nginx/{{ app_name }}-access.log;
+ error_log /var/log/nginx/{{ app_name }}-error.log;
+
+ # Some binaries are gigantic
+ client_max_body_size 2048m;
+
+ location / {
+ proxy_set_header Host $host;
+ proxy_set_header X-Real-IP $remote_addr;
+ proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+ proxy_set_header X-Forwarded-Proto $scheme;
+
+ proxy_pass http://127.0.0.1:3000;
+ proxy_read_timeout 500;
+ }
+
+}
--- /dev/null
+---
+
+system_packages:
+ - grafana
+ - git
+ - g++
+ - gcc
+ - libpq-dev
+ - postgresql
+ - postgresql-common
+ - postgresql-contrib
+ - python-psycopg2
+ - nginx
+ - vim
+ # needed for the ansible apt_repository module
+ - python-apt
+ - python
+
+ssl_requirements:
+ - openssl
+ - libssl-dev
--- /dev/null
+---
+
+app_name: "graphite"
+fqdn: "graphite.local"
--- /dev/null
+---
+
+- name: reload systemd
+ become: yes
+ command: systemctl daemon-reload
+
+- name: restart app
+ become: true
+ service:
+ name: graphite
+ state: restarted
+ enabled: yes
+
+- name: restart carbon
+ service:
+ name: carbon-cache
+ state: restarted
+ enabled: yes
+ become: yes
+
--- /dev/null
+---
+
+- name: enable carbon
+ lineinfile:
+ dest: /etc/default/graphite-carbon
+ regexp: "^CARBON_CACHE_ENABLED=false"
+ line: "CARBON_CACHE_ENABLED=true"
+ state: present
+ become: true
+
+- name: enable whitelisting in carbon
+ lineinfile:
+ dest: /etc/carbon/carbon.conf
+ regexp: "^# USE_WHITELIST = False"
+ line: "USE_WHITELIST = True"
+ state: present
+ backrefs: true
+ become: true
+
+- name: create the rewrite config with the secret api key
+ template:
+ src: ../templates/rewrite-rules.conf.j2
+ dest: "/etc/carbon/rewrite-rules.conf"
+ notify:
+ - restart carbon
+ become: true
+
+- name: create the whitelist/blacklist config allowing the api key only
+ template:
+ src: ../templates/whitelist.conf.j2
+ dest: "/etc/carbon/whitelist.conf"
+ notify:
+ - restart carbon
+ become: true
+
+- name: define the storage schemas
+ template:
+ src: ../templates/storage-schemas.conf.j2
+ dest: "/etc/carbon/storage-schemas.conf"
+ notify:
+ - restart carbon
+ become: true
+
+- name: restart carbon-cache and ensure it is enabled
+  service:
+    name: carbon-cache
+    state: restarted
+    enabled: yes
+  become: yes
--- /dev/null
+---
+
+- name: "Build hosts file"
+ become: yes
+ lineinfile:
+ dest: /etc/hosts
+ regexp: ".*{{ fqdn }}$"
+ line: "127.0.1.1 {{ fqdn }}"
+ state: present
+
+- name: Set Hostname with hostname command
+ become: yes
+ hostname: name="{{ fqdn }}"
+
+- name: update apt cache
+ apt:
+ update_cache: yes
+ become: yes
+
+- name: install ssl system requirements
+ become: yes
+ apt:
+ name: "{{ item }}"
+ state: present
+  with_items: "{{ ssl_requirements }}"
+ tags:
+ - packages
+
+- name: install system packages
+ become: yes
+ apt:
+ name: "{{ item }}"
+ state: present
+  with_items: "{{ system_packages }}"
+ tags:
+ - packages
+
+- name: expose the graphite wsgi entry point as an importable module
+  command: cp /usr/share/graphite-web/graphite.wsgi /usr/lib/python2.7/dist-packages/graphite/graphite_web.py
+ args:
+ creates: "/usr/lib/python2.7/dist-packages/graphite/graphite_web.py"
+ become: true
+
+- include_tasks: carbon.yml
+
+- include_tasks: systemd.yml
+ tags:
+ - systemd
+
+- include_tasks: postgresql.yml
+ tags:
+ - postgresql
+
+- name: ensure graphite is running
+ become: true
+ service:
+ name: graphite
+ state: restarted
+ enabled: yes
--- /dev/null
+---
+- name: ensure database service is up
+ service:
+ name: postgresql
+ state: started
+ enabled: yes
+ become: yes
+
+- name: allow users to connect locally
+ become: yes
+ lineinfile:
+ # TODO: should not hardcode that version
+ dest: /etc/postgresql/9.5/main/pg_hba.conf
+    regexp: '^host\s+all\s+all\s+127\.0\.0\.1/32'
+ line: 'host all all 127.0.0.1/32 md5'
+ backrefs: yes
+ register: pg_hba_conf
+
+- service:
+ name: postgresql
+ state: restarted
+ become: true
+ when: pg_hba_conf.changed
+
+- name: generate pseudo-random password for the database connection
+  shell: python -c "import os; print(os.urandom(30).encode('base64').strip())"
+ register: db_password
+ changed_when: false
+
+- name: make {{ app_name }} user
+ postgresql_user:
+ name: "{{ app_name }}"
+ password: "{{ db_password.stdout }}"
+ role_attr_flags: SUPERUSER
+ login_user: postgres
+ become_user: postgres
+ become: yes
+
+- name: Make {{ app_name }} database
+ postgresql_db:
+ name: "{{ app_name }}"
+ owner: "{{ app_name }}"
+ state: present
+ login_user: postgres
+ become_user: postgres
+ become: yes
+
+- name: create the config file with the db password
+ template:
+ src: ../templates/local_settings.py.j2
+ dest: "/etc/graphite/local_settings.py"
+ notify:
+ - restart app
+ become: true
+
+ # there is a bug where, if you don't run the auth migration first, migrate
+ # fails with "ProgrammingError: relation "auth_user" does not exist"
+- name: run migrate for auth first
+ command: graphite-manage migrate --noinput auth
+ become: true
+
+- name: run migrate to ensure database schema
+ command: graphite-manage migrate --noinput
+ become: true
--- /dev/null
+---
+
+- name: ensure /var/log/graphite dir exists
+ become: true
+ file:
+ path: /var/log/graphite
+ state: directory
+ owner: _graphite
+ group: _graphite
+ recurse: yes
+
+- name: install the systemd unit file for graphite
+ template:
+ src: systemd/graphite.service.j2
+ dest: /etc/systemd/system/graphite.service
+ become: true
+ notify:
+ - reload systemd
+
+- name: ensure graphite is enabled and running
+ become: true
+ service:
+ name: graphite
+    state: started
+ enabled: yes
--- /dev/null
+# {{ ansible_managed }}
+## Graphite local_settings.py
+# Edit this file to customize the default Graphite webapp settings
+#
+# Additional customizations to Django settings can be added to this file as well
+
+#####################################
+# General Configuration #
+#####################################
+# Set this to a long, random unique string to use as a secret key for this
+# install. This key is used for salting of hashes used in auth tokens,
+# CSRF middleware, cookie storage, etc. This should be set identically among
+# instances if used behind a load balancer.
+#SECRET_KEY = 'UNSAFE_DEFAULT'
+
+# In Django 1.5+ set this to the list of hosts your graphite instance is
+# accessible as. See:
+# https://docs.djangoproject.com/en/dev/ref/settings/#std:setting-ALLOWED_HOSTS
+#ALLOWED_HOSTS = [ '*' ]
+
+# Set your local timezone (Django's default is America/Chicago)
+# If your graphs appear to be offset by a couple hours then this probably
+# needs to be explicitly set to your local timezone.
+#TIME_ZONE = 'America/Los_Angeles'
+
+# Override this to provide documentation specific to your Graphite deployment
+#DOCUMENTATION_URL = "http://graphite.readthedocs.org/"
+
+# Logging
+# Set these to True for verbose logging; see: https://answers.launchpad.net/graphite/+question/159731
+LOG_RENDERING_PERFORMANCE = True
+LOG_CACHE_PERFORMANCE = True
+LOG_METRIC_ACCESS = True
+
+# Enable full debug page display on exceptions (Internal Server Error pages)
+#DEBUG = True
+
+# If using RRD files and rrdcached, set to the address or socket of the daemon
+#FLUSHRRDCACHED = 'unix:/var/run/rrdcached.sock'
+
+# This lists the memcached servers that will be used by this webapp.
+# If you have a cluster of webapps you should ensure all of them
+# have the *exact* same value for this setting. That will maximize cache
+# efficiency. Setting MEMCACHE_HOSTS to be empty will turn off use of
+# memcached entirely.
+#
+# You should not use the loopback address (127.0.0.1) here if using clustering
+# as every webapp in the cluster should use the exact same values to prevent
+# unneeded cache misses. Set to [] to disable caching of images and fetched data
+#MEMCACHE_HOSTS = ['10.10.10.10:11211', '10.10.10.11:11211', '10.10.10.12:11211']
+#DEFAULT_CACHE_DURATION = 60 # Cache images and data for 1 minute
+
+
+#####################################
+# Filesystem Paths #
+#####################################
+# Change only GRAPHITE_ROOT if your install is merely shifted from /opt/graphite
+# to somewhere else
+GRAPHITE_ROOT = '/usr/share/graphite-web'
+
+# Most installs done outside of a separate tree such as /opt/graphite will only
+# need to change these three settings. Note that the default settings for each
+# of these is relative to GRAPHITE_ROOT
+CONF_DIR = '/etc/graphite'
+STORAGE_DIR = '/var/lib/graphite/whisper'
+CONTENT_DIR = '/usr/share/graphite-web/static'
+
+# To further or fully customize the paths, modify the following. Note that the
+# default settings for each of these are relative to CONF_DIR and STORAGE_DIR
+#
+## Webapp config files
+#DASHBOARD_CONF = '/opt/graphite/conf/dashboard.conf'
+#GRAPHTEMPLATES_CONF = '/opt/graphite/conf/graphTemplates.conf'
+
+## Data directories
+# NOTE: If any directory is unreadable in DATA_DIRS it will break metric browsing
+WHISPER_DIR = '/var/lib/graphite/whisper'
+#RRD_DIR = '/opt/graphite/storage/rrd'
+#DATA_DIRS = [WHISPER_DIR, RRD_DIR] # Default: set from the above variables
+LOG_DIR = '/var/log/graphite'
+INDEX_FILE = '/var/lib/graphite/search_index' # Search index file
+
+
+#####################################
+# Email Configuration #
+#####################################
+# This is used for emailing rendered Graphs
+# Default backend is SMTP
+#EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
+#EMAIL_HOST = 'localhost'
+#EMAIL_PORT = 25
+#EMAIL_HOST_USER = ''
+#EMAIL_HOST_PASSWORD = ''
+#EMAIL_USE_TLS = False
+# To drop emails on the floor, enable the Dummy backend:
+#EMAIL_BACKEND = 'django.core.mail.backends.dummy.EmailBackend'
+
+
+#####################################
+# Authentication Configuration #
+#####################################
+## LDAP / ActiveDirectory authentication setup
+#USE_LDAP_AUTH = True
+#LDAP_SERVER = "ldap.mycompany.com"
+#LDAP_PORT = 389
+# OR
+#LDAP_URI = "ldaps://ldap.mycompany.com:636"
+#LDAP_SEARCH_BASE = "OU=users,DC=mycompany,DC=com"
+#LDAP_BASE_USER = "CN=some_readonly_account,DC=mycompany,DC=com"
+#LDAP_BASE_PASS = "readonly_account_password"
+#LDAP_USER_QUERY = "(username=%s)" #For Active Directory use "(sAMAccountName=%s)"
+#
+# If you want to further customize the ldap connection options you should
+# directly use ldap.set_option to set the ldap module's global options.
+# For example:
+#
+#import ldap
+#ldap.set_option(ldap.OPT_X_TLS_REQUIRE_CERT, ldap.OPT_X_TLS_ALLOW)
+#ldap.set_option(ldap.OPT_X_TLS_CACERTDIR, "/etc/ssl/ca")
+#ldap.set_option(ldap.OPT_X_TLS_CERTFILE, "/etc/ssl/mycert.pem")
+#ldap.set_option(ldap.OPT_X_TLS_KEYFILE, "/etc/ssl/mykey.pem")
+# See http://www.python-ldap.org/ for further details on these options.
+
+## REMOTE_USER authentication. See: https://docs.djangoproject.com/en/dev/howto/auth-remote-user/
+#USE_REMOTE_USER_AUTHENTICATION = True
+
+# Override the URL for the login link (e.g. for django_openid_auth)
+#LOGIN_URL = '/account/login'
+
+
+##########################
+# Database Configuration #
+##########################
+# By default sqlite is used. If you cluster multiple webapps you will need
+# to setup an external database (such as MySQL) and configure all of the webapp
+# instances to use the same database. Note that this database is only used to store
+# Django models such as saved graphs, dashboards, user preferences, etc.
+# Metric data is not stored here.
+#
+# DO NOT FORGET TO RUN 'manage.py syncdb' AFTER SETTING UP A NEW DATABASE
+#
+# The following built-in database engines are available:
+# django.db.backends.postgresql # Removed in Django 1.4
+# django.db.backends.postgresql_psycopg2
+# django.db.backends.mysql
+# django.db.backends.sqlite3
+# django.db.backends.oracle
+#
+# The default is 'django.db.backends.sqlite3' with file 'graphite.db'
+# located in STORAGE_DIR
+#
+DATABASES = {
+ 'default': {
+ 'NAME': '{{ app_name }}',
+ 'ENGINE': 'django.db.backends.postgresql_psycopg2',
+ 'USER': '{{ app_name }}',
+ 'PASSWORD': '{{ db_password.stdout }}',
+ 'HOST': 'localhost',
+ 'PORT': ''
+ }
+}
+
+
+
+#########################
+# Cluster Configuration #
+#########################
+# (To avoid excessive DNS lookups you want to stick to using IP addresses only in this entire section)
+#
+# This should list the IP address (and optionally port) of the webapp on each
+# remote server in the cluster. These servers must each have local access to
+# metric data. Note that the first server to return a match for a query will be
+# used.
+#CLUSTER_SERVERS = ["10.0.2.2:80", "10.0.2.3:80"]
+
+## These are timeout values (in seconds) for requests to remote webapps
+#REMOTE_STORE_FETCH_TIMEOUT = 6 # Timeout to fetch series data
+#REMOTE_STORE_FIND_TIMEOUT = 2.5 # Timeout for metric find requests
+#REMOTE_STORE_RETRY_DELAY = 60 # Time before retrying a failed remote webapp
+#REMOTE_STORE_USE_POST = False # Use POST instead of GET for remote requests
+#REMOTE_FIND_CACHE_DURATION = 300 # Time to cache remote metric find results
+
+## Prefetch cache
+# set to True to fetch all metrics using a single http request per remote server
+# instead of one http request per target, per remote server.
+# Especially useful when generating graphs with more than 4-5 targets or if
+# there's significant latency between this server and the backends. (>20ms)
+#REMOTE_PREFETCH_DATA = False
+
+# During a rebalance of a consistent hash cluster, after a partition event on a replication > 1 cluster,
+# or in other cases we might receive multiple TimeSeries data for a metric key. Merge them together rather
+# than choosing the "most complete" one (pre-0.9.14 behaviour).
+#REMOTE_STORE_MERGE_RESULTS = True
+
+## Remote rendering settings
+# Set to True to enable rendering of Graphs on a remote webapp
+#REMOTE_RENDERING = True
+# List of IP (and optionally port) of the webapp on each remote server that
+# will be used for rendering. Note that each rendering host should have local
+# access to metric data or should have CLUSTER_SERVERS configured
+#RENDERING_HOSTS = []
+#REMOTE_RENDER_CONNECT_TIMEOUT = 1.0
+
+# If you are running multiple carbon-caches on this machine (typically behind a relay using
+# consistent hashing), you'll need to list the ip address, cache query port, and instance name of each carbon-cache
+# instance on the local machine (NOT every carbon-cache in the entire cluster). The default cache query port is 7002
+# and a common scheme is to use 7102 for instance b, 7202 for instance c, etc.
+#
+# You *should* use 127.0.0.1 here in most cases
+#CARBONLINK_HOSTS = ["127.0.0.1:7002:a", "127.0.0.1:7102:b", "127.0.0.1:7202:c"]
+#CARBONLINK_TIMEOUT = 1.0
+# Using 'query-bulk' queries for carbon
+# It's more effective, but python-carbon 0.9.13 (or latest from 0.9.x branch) is required
+# See https://github.com/graphite-project/carbon/pull/132 for details
+#CARBONLINK_QUERY_BULK = False
+
+#####################################
+# Additional Django Settings #
+#####################################
+# Uncomment the following line for direct access to Django settings such as
+# MIDDLEWARE_CLASSES or APPS
+#from graphite.app_settings import *
+
--- /dev/null
+# {{ ansible_managed }}
+# This file defines regular expression patterns that can be used to
+# rewrite metric names in a search & replace fashion. It consists of two
+# sections, [pre] and [post]. The rules in the pre section are applied to
+# metric names as soon as they are received. The post rules are applied
+# after aggregation has taken place.
+#
+# The general form of each rule is as follows:
+#
+# regex-pattern = replacement-text
+#
+# For example:
+#
+# [post]
+# _sum$ =
+# _avg$ =
+#
+# These rules would strip off a suffix of _sum or _avg from any metric names
+# after aggregation.
+
+[post]
+^{{ graphite_api_key }} =
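+
+# The rule above strips a leading "{{ graphite_api_key }}" from metric names
+# after aggregation, so the shared secret never becomes part of the stored
+# metric paths (assuming clients prefix every metric with the key).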
--- /dev/null
+# {{ ansible_managed }}
+#
+[default]
+pattern = .*
+retentions = 5s:30d,1m:120d,10m:1y
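+
+# Each retention is precision:duration, finest first. A sketch of the Whisper
+# archives the defaults above produce:
+#   5s:30d   one datapoint every 5 seconds, kept for 30 days
+#   1m:120d  downsampled to 1-minute datapoints, kept for 120 days
+#   10m:1y   downsampled to 10-minute datapoints, kept for 1 year
+# Whisper requires each coarser precision to be a whole multiple of the finer
+# ones (here 5s -> 60s -> 600s).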
--- /dev/null
+# graphite web service
+
+enable graphite.service
--- /dev/null
+# {{ ansible_managed }}
+[Unit]
+Description=graphite gunicorn service
+After=network.target
+
+[Service]
+Type=simple
+ExecStart=/usr/bin/gunicorn -b 127.0.0.1:8000 -w 10 -t 30 graphite_web:application
+User=_graphite
+WorkingDirectory=/usr/lib/python2.7/dist-packages/graphite
+StandardOutput=journal
+StandardError=journal
+
+[Install]
+WantedBy=multi-user.target
--- /dev/null
+# {{ ansible_managed }}
+# This file takes a single regular expression per line
+# If USE_WHITELIST is set to True in carbon.conf, only metrics received which
+# match one of these expressions will be persisted. If this file is empty or
+# missing, all metrics will pass through.
+# This file is reloaded automatically when changes are made
+^{{ graphite_api_key }}.*
--- /dev/null
+---
+
+system_packages:
+ - grafana
+ - graphite-web
+ - graphite-api
+ - graphite-carbon
+ - git
+ - g++
+ - gcc
+ - libpq-dev
+ - postgresql
+ - postgresql-common
+ - postgresql-contrib
+ - python-psycopg2
+ - nginx
+ - vim
+ # needed for the ansible apt_repository module
+ - python-apt
+ - python
+ - gunicorn
+
+ssl_requirements:
+ - openssl
+ - libssl-dev
--- /dev/null
+---
+helga_home: /opt/helga
+helga_settings_path: '{{ helga_home }}/bin/settings.d'
+helga_nick: helga
+helga_irc_host: localhost
+helga_irc_port: 6667
+helga_use_ssl: yes
+helga_operators: []
+helga_irc_channels:
+- "#bots"
+helga_timezone: 'UTC'
+helga_default_plugins:
+- dubstep
+- facts
+- help
+- manager
+- meant_to_say
+- oneliner
+- operator
+- poems
+- reminders
+- stfu
+
+helga_external_plugins: []
+helga_cmd_prefix: '!'
+helga_webhooks_port: 8080
+helga_twitter_api_key: null
+helga_twitter_api_secret: null
+helga_twitter_oauth_token: null
+helga_twitter_oauth_secret: null
+helga_twitter_username: null
+helga_system_packages:
+ - python-devel
+ - git
+ - python-virtualenv
+ - mongodb-server
+ - gcc-c++
+
+helga_ssl_requirements:
+ - openssl
+ - openssl-devel
--- /dev/null
+---
+
+- name: restart helga service
+  become: yes
+  service:
+    name: helga
+    state: restarted
+
+# prevents issues when updating systemd files
+- name: reload systemd
+ become: yes
+ command: systemctl daemon-reload
--- /dev/null
+---
+
+- name: Create a home for Helga.
+ become: yes
+ file:
+ path: "{{ helga_home }}"
+ owner: "{{ ansible_user }}"
+ group: "{{ ansible_user }}"
+ state: directory
+ recurse: yes
+
+- name: Install ssl requirements.
+ become: yes
+ yum:
+ name: "{{ item }}"
+ state: present
+  with_items: "{{ helga_ssl_requirements }}"
+ when: helga_use_ssl
+
+- name: Install GCC
+ become: yes
+ yum:
+ name: gcc
+ state: present
+
+- name: Enable EPEL
+ become: yes
+ yum:
+ name: epel-release
+ state: present
+
+- name: Retrieve software requirements.
+ become: yes
+ yum:
+ name: "{{ item }}"
+ state: present
+ with_items: "{{ helga_system_packages }}"
+
+- name: Create a virtualenv with latest pip.
+ pip:
+ name: pip
+ virtualenv: "{{ helga_home }}"
+ extra_args: '--upgrade'
+
+- name: Install Helga.
+ pip:
+ name: helga
+ virtualenv: "{{ helga_home }}"
+
+- name: Install Helga unreleased enhancements.
+ pip:
+ name: "{{ item }}"
+ state: present
+ extra_args: "-e"
+ virtualenv: "{{ helga_home }}"
+ with_items: "{{ helga_external_plugins }}"
+ notify: restart helga service
+
+- name: Install Helga released enhancements.
+ pip:
+ name: "{{ item }}"
+ state: latest
+ virtualenv: "{{ helga_home }}"
+ with_items: "{{ helga_pypi_plugins }}"
+ notify: restart helga service
+
+- name: Create settings directory
+ file:
+ path: "{{ helga_settings_path }}"
+ state: directory
+
+- name: Install base personality.
+ template:
+ src: custom_settings.j2
+ dest: "{{ helga_settings_path }}/00_base_settings.py"
+
+- name: Install personality customizations (files).
+ copy:
+ src: "{{ item }}"
+ dest: "{{ helga_settings_path }}"
+# this one is tricky, because the relative path is relative to
+# roles/common/files
+ with_fileglob:
+ - helga/settings.d/*
+
+- name: Custom settings, ASSEMBLE!
+ assemble:
+ src: "{{ helga_settings_path }}/"
+ dest: "{{ helga_home }}/bin/custom_settings.py"
+
+- name: ensure mongod is running
+ become: true
+ service:
+ name: mongod
+ state: started
+
+- name: ensure mongod is set to start at boot (enabled)
+ become: true
+ service:
+ name: mongod
+ enabled: true
+
+- include_tasks: systemd.yml
--- /dev/null
+---
+
+
+- name: ensure /etc/sysconfig/ dir exists
+ become: true
+ file:
+ path: /etc/sysconfig
+ state: directory
+
+# prevents issues when updating systemd files
+- name: reload systemd
+ become: yes
+ command: systemctl daemon-reload
+
+- name: install the sysconfig environment file for helga
+ template:
+ src: helga.sysconfig.j2
+ dest: /etc/sysconfig/helga
+ become: true
+ notify:
+ - reload systemd
+
+- name: install the systemd unit file for helga
+ template:
+ src: helga.service.j2
+ dest: /etc/systemd/system/helga.service
+ become: true
+ notify:
+ - reload systemd
+
+- name: ensure helga is enabled and running
+ become: true
+ service:
+ name: helga
+    state: started
+ enabled: yes
--- /dev/null
+NICK = '{{ helga_nick }}'
+
+SERVER = {
+ 'HOST': '{{ helga_irc_host }}',
+ 'PORT': {{ helga_irc_port }},
+ 'SSL': {{ helga_use_ssl }},
+}
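+
+# With the role defaults (helga_irc_host: localhost, helga_irc_port: 6667,
+# helga_use_ssl: yes) the block above renders roughly as:
+#   SERVER = {'HOST': 'localhost', 'PORT': 6667, 'SSL': True}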
+
+CHANNELS = [
+ {% for channel in helga_irc_channels %}
+ ('{{ channel }}',),
+ {% endfor %}
+]
+
+OPERATORS = [
+ {% for operator in helga_operators %}
+ '{{ operator }}',
+ {% endfor %}
+]
+
+TIMEZONE = '{{ helga_timezone }}'
+
+ENABLED_PLUGINS = [
+ {% for plugin in helga_default_plugins %}
+ '{{ plugin }}',
+ {% endfor %}
+ {% for plugin in helga_external_plugins %}
+ '{{ plugin }}',
+ {% endfor %}
+]
+
+COMMAND_PREFIX_CHAR = '{{ helga_cmd_prefix }}'
+WEBHOOKS_PORT = {{ helga_webhooks_port }}
+
+# Twitter API
+TWITTER_CONSUMER_KEY = '{{ helga_twitter_api_key }}'
+TWITTER_CONSUMER_SECRET = '{{ helga_twitter_api_secret }}'
+TWITTER_OAUTH_TOKEN = '{{ helga_twitter_oauth_token }}'
+TWITTER_OAUTH_TOKEN_SECRET = '{{ helga_twitter_oauth_secret }}'
+TWITTER_USERNAME = '{{ helga_twitter_username }}'
+
+BUGZILLA_XMLRPC_URL = "{{ bugzilla_xmlrpc_url }}"
+BUGZILLA_TICKET_URL = "{{ bugzilla_ticket_url }}"
+
+FACTS_REQUIRE_NICKNAME = True
+
+RABBITMQ_USER = "{{ rabbitmq_user }}"
+RABBITMQ_PASSWORD= "{{ rabbitmq_password }}"
+RABBITMQ_HOST = "{{ rabbitmq_host }}"
+RABBITMQ_EXCHANGE= "{{ rabbitmq_exchange }}"
+
+RABBITMQ_ROUTING_KEYS = [
+ {% for key in rabbitmq_routing_keys %}
+ '{{ key }}',
+ {% endfor %}
+]
+
+REDMINE_URL = '{{ redmine_url }}'
+# This API key corresponds to the Kraken system account in Ceph's Redmine.
+REDMINE_API_KEY = '{{ redmine_api_key }}'
+
+JENKINS_URL = '{{ jenkins_url }}'
+
+JENKINS_CREDENTIALS = {
+  {% for key, value in jenkins_credentials.items() %}
+ '{{ key }}': {
+ 'username': '{{ value.username }}',
+ 'token': '{{ value.token }}',
+ },
+ {% endfor %}
+}
--- /dev/null
+[Unit]
+Description=helga (kraken) bot service
+After=network.target
+
+[Service]
+Type=simple
+ExecStart={{ helga_home }}/bin/helga
+EnvironmentFile=/etc/sysconfig/helga
+User={{ ansible_ssh_user }}
+WorkingDirectory={{ helga_home }}/src/
+StandardOutput=journal
+StandardError=journal
+
+[Install]
+WantedBy=multi-user.target
--- /dev/null
+HOME=/home/{{ ansible_ssh_user }}
+HELGA_SETTINGS=custom_settings
--- /dev/null
+---
+ssl_support_email: "adeza@redhat.com"
+ssl_webroot_base_path: "/var/www"
+
+ssl_cert_path: "files/ssl/dev/ssl/ssl.crt"
+ssl_key_path: "files/ssl/dev/ssl/ssl.key"
--- /dev/null
+---
+
+- name: restart nginx
+  become: yes
+  service:
+    name: nginx
+    state: restarted
+    enabled: yes
--- /dev/null
+---
+
+- name: install system packages
+ become: yes
+ apt:
+ name: "letsencrypt"
+ state: present
+
+- name: ensure letsencrypt acme-challenge path
+ file:
+ path: "{{ ssl_webroot_base_path }}/{{ item.fqdn }}"
+ state: "directory"
+ mode: 0755
+ become: yes
+  with_items: "{{ nginx_hosts }}"
+
+- name: unlink nginx configs
+ file:
+ path: "/etc/nginx/sites-enabled/{{ item.app_name }}.conf"
+ state: "absent"
+ become: true
+  with_items: "{{ nginx_hosts }}"
+
+- name: create temporary nginx config
+ template:
+ src: "nginx_tmp_site.conf"
+ dest: "/etc/nginx/sites-enabled/{{ item.app_name }}.conf"
+ become: true
+  with_items: "{{ nginx_hosts }}"
+
+- name: restart nginx
+ become: yes
+ service:
+ name: nginx
+ state: restarted
+
+- name: create (or renew) letsencrypt ssl cert
+ command: "letsencrypt certonly --webroot -w {{ ssl_webroot_base_path }}/{{ item.fqdn }} -d {{ item.fqdn }} --email {{ ssl_support_email }} --agree-tos --renew-by-default"
+ become: yes
+  with_items: "{{ nginx_hosts }}"
+
+- name: setup a cron to renew the SSL cert every day
+ cron:
+ name: "renew letsencrypt cert for {{ item.app_name }}"
+ minute: "21"
+ hour: "6,18"
+ job: "letsencrypt renew --agree-tos --email {{ ssl_support_email }}"
+ become: yes
+  with_items: "{{ nginx_hosts }}"
+
+- name: unlink tmp nginx config
+ file:
+ path: "/etc/nginx/sites-enabled/{{ item.app_name }}.conf"
+ state: "absent"
+ become: true
+  with_items: "{{ nginx_hosts }}"
--- /dev/null
+---
+- name: ensure sites-available for nginx
+ file:
+ path: /etc/nginx/sites-available
+ state: directory
+ become: true
+
+- name: ensure there is an nginx user
+ user:
+ name: nginx
+ comment: "Nginx user"
+ become: true
+
+- name: ensure sites-enabled for nginx
+ file:
+ path: /etc/nginx/sites-enabled
+ state: directory
+ become: true
+
+- name: remove default nginx site
+ file:
+ path: /etc/nginx/sites-enabled/default
+ state: absent
+ become: true
+
+- name: write nginx.conf
+ template:
+ src: nginx.conf
+ dest: /etc/nginx/nginx.conf
+ become: true
+
+- name: enable nginx
+ become: true
+ service:
+ name: nginx
+ enabled: true
+
+- name: create nginx site config
+ template:
+ src: "nginx_site.conf"
+ dest: "/etc/nginx/sites-available/{{ item.app_name }}.conf"
+ become: true
+  with_items: "{{ nginx_hosts }}"
+ notify:
+ - restart nginx
+
+- include_tasks: ssl.yml
+ when: development_server == true
+
+- include_tasks: letsencrypt.yml
+ when: development_server == false
+
+- name: link nginx config
+ file:
+ src: "/etc/nginx/sites-available/{{ item.app_name }}.conf"
+ dest: "/etc/nginx/sites-enabled/{{ item.app_name }}.conf"
+ state: link
+ become: true
+  with_items: "{{ nginx_hosts }}"
+
+- name: ensure nginx is restarted
+ become: true
+ service:
+ name: nginx
+ state: restarted
--- /dev/null
+---
+
+- name: ensure ssl certs directory
+ file:
+ dest: /etc/ssl/certs
+ state: directory
+ become: true
+
+- name: ensure ssl private directory
+ file:
+ dest: /etc/ssl/private
+ state: directory
+ become: true
+
+- name: copy SSL cert
+ copy:
+ src: "{{ ssl_cert_path }}"
+ dest: "/etc/ssl/certs/{{ item.fqdn }}-bundled.crt"
+    mode: 0644
+ force: no
+ become: true
+ notify: restart nginx
+ when: nginx_hosts is defined
+  with_items: "{{ nginx_hosts }}"
+
+- name: copy SSL key
+ copy:
+ src: "{{ ssl_key_path }}"
+ dest: "/etc/ssl/private/{{ item.fqdn }}.key"
+ force: no
+ become: true
+ notify: restart nginx
+ when: nginx_hosts is defined
+  with_items: "{{ nginx_hosts }}"
--- /dev/null
+# {{ ansible_managed }}
+user nginx;
+worker_processes 20;
+worker_rlimit_nofile 8192;
+
+pid /var/run/nginx.pid;
+
+events {
+ worker_connections 1024;
+ # multi_accept on;
+}
+
+http {
+
+ ##
+ # Basic Settings
+ ##
+
+ #sendfile on;
+ tcp_nopush on;
+ tcp_nodelay on;
+ keepalive_timeout 65;
+ types_hash_max_size 2048;
+ server_tokens off;
+
+ # server_names_hash_bucket_size 64;
+ # server_name_in_redirect off;
+ include /etc/nginx/mime.types;
+ default_type application/octet-stream;
+
+ ##
+ # Logging Settings
+ ##
+ # Useful for logging load balancing requests
+ log_format upstreamlog '[$time_local] $remote_addr - backend: $upstream_addr $request upstream_response_time $upstream_response_time request_time $request_time';
+ access_log /var/log/nginx/access.log;
+ error_log /var/log/nginx/error.log;
+
+ ##
+ # Gzip Settings
+ ##
+
+ gzip on;
+ gzip_disable "msie6";
+
+ # gzip_vary on;
+ # gzip_proxied any;
+ # gzip_comp_level 6;
+ # gzip_buffers 16 8k;
+ # gzip_http_version 1.1;
+ # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
+
+ ##
+ # If HTTPS, then set a variable so it can be passed along.
+ ##
+
+ map $scheme $server_https {
+ default off;
+ https on;
+ }
+
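+    # Each entry in nginx_hosts may optionally define upstreams; a sketch of
+    # the expected shape (hypothetical values):
+    #   nginx_hosts:
+    #     - app_name: chacra
+    #       fqdn: chacra.ceph.com
+    #       upstreams:
+    #         name: chacra_backends
+    #         strategy: least_conn
+    #         servers: [10.8.0.1, 10.8.0.2]
+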
+ {% for host in nginx_hosts %}
+ {% if host.upstreams is defined %}
+ upstream {{ host.upstreams.name }} {
+ {% if host.upstreams.strategy is defined %}
+ {{ host.upstreams.strategy }};
+ {% endif %}
+ {% for server in host.upstreams.servers %}
+ server {{ server }}:443;
+ {% endfor %}
+ }
+ {% endif %}
+ {% endfor %}
+
+ ##
+ # Virtual Host Configs
+ ##
+
+ include /etc/nginx/conf.d/*.conf;
+ include /etc/nginx/sites-enabled/*;
+}
--- /dev/null
+server {
+ server_name {{ item.fqdn }};
+ location '/.well-known/acme-challenge' {
+ default_type "text/plain";
+ root {{ ssl_webroot_base_path }}/{{ item.fqdn }};
+ }
+ location / {
+ add_header Strict-Transport-Security max-age=31536000;
+ return 301 https://$server_name$request_uri;
+ }
+}
+
+server {
+ listen 443 ssl;
+ server_name {{ item.fqdn }};
+ {% if development_server %}
+ ssl_certificate /etc/ssl/certs/{{ item.fqdn }}-bundled.crt;
+ ssl_certificate_key /etc/ssl/private/{{ item.fqdn }}.key;
+ {% else %}
+ ssl_certificate /etc/letsencrypt/live/{{ item.fqdn }}/fullchain.pem;
+ ssl_certificate_key /etc/letsencrypt/live/{{ item.fqdn }}/privkey.pem;
+ {% endif %}
+ ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
+ ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
+ ssl_prefer_server_ciphers on;
+ add_header Strict-Transport-Security "max-age=31536000";
+
+ access_log /var/log/nginx/{{ item.app_name }}-access.log upstreamlog;
+ error_log /var/log/nginx/{{ item.app_name }}-error.log;
+
+
+ location / {
+ proxy_set_header Host $host;
+ proxy_set_header X-Real-IP $remote_addr;
+ proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+ proxy_set_header X-Forwarded-Proto $scheme;
+
+
+ {% if item.upstreams is defined %}
+ proxy_pass https://{{ item.upstreams.name }};
+ {% elif item.proxy_pass is defined %}
+ proxy_pass {{ item.proxy_pass }};
+ {% endif %}
+ proxy_read_timeout 30;
+ }
+
+}
--- /dev/null
+server {
+ server_name {{ item.fqdn }};
+ location '/.well-known/acme-challenge' {
+ default_type "text/plain";
+ root {{ ssl_webroot_base_path }}/{{ item.fqdn }};
+ }
+}
--- /dev/null
+---
+- name: install Redis server
+ apt:
+ name: redis-server
+ state: latest
+ update_cache: yes
+
+- name: ensure Redis is running
+ service:
+ name: redis-server
+ state: started
+
+- name: enable redis-server to survive reboot
+ service:
+ name: redis-server
+ enabled: yes
--- /dev/null
+---
+
+- name: undo last commit from failed release
+ command: git reset --soft HEAD~1 chdir=remoto
+ when: (clean and last_commit.stdout == tag_name)
+
+- name: git checkout {{ branch }} branch
+ command: git checkout {{ branch }} chdir=remoto
+
+- name: remove local tag
+ command: git tag -d v{{ version }} chdir=remoto
+ ignore_errors: yes
+
+- name: remove remote tag
+ command: git push jenkins :refs/tags/v{{ version }} chdir=remoto
+ ignore_errors: yes
+
+- name: force push changes to jenkins git repo
+ command: git push -f jenkins {{ branch }} chdir=remoto
--- /dev/null
+---
+
+- name: check if remoto repo exists
+ stat: path='./remoto'
+ register: 'cdep_repo'
+
+- name: clone the remoto repository
+ git: repo=git@github.com:ceph/remoto dest=remoto
+  when: not cdep_repo.stat.exists
+
+- name: rename origin to jenkins
+ command: git remote rename origin jenkins chdir=remoto
+ ignore_errors: yes
+
+- name: fetch the latest from remote
+ command: git fetch jenkins chdir=remoto
+
+- name: ensure local repo is in sync with remote
+ command: git reset --hard jenkins/{{ branch }} chdir=remoto
+
+- name: check if we are re-pushing the release commit
+ command: git log -1 --pretty=%B chdir=remoto
+ register: 'last_commit'
+
+ # we probably messed up the previous commit+tag, so 'clean' was passed; this
+ # rolls back that commit, deletes the local and remote tags, and force
+ # pushes the new changes
+- include_tasks: clear_version.yml
+ when: (clean and last_commit.stdout == tag_name)
+
+ # if the last commit wasn't one that we already did, then go ahead and make
+ # the changes + tag for the release. Otherwise, just skip because it was
+ # already done for this release
+- include_tasks: release.yml
+ when: (tag_name != last_commit.stdout)
--- /dev/null
+---
+
+- name: fetch the latest from remote
+ command: git fetch jenkins chdir=remoto
+
+- name: ensure local repo is in sync with remote
+ command: git reset --hard jenkins/{{ branch }} chdir=remoto
+
+- name: git checkout {{ branch }} branch
+ command: git checkout {{ branch }} chdir=remoto
+
+- name: set the debian version
+ command: dch -v {{ version }} -D stable "New upstream release" chdir=remoto
+ environment:
+ DEBEMAIL: "{{ debemail }}"
+ DEBFULLNAME: "{{ debfullname }}"
+
+# we don't have a spec file in remoto, this is being built
+# separately
+#- name: set the version in the spec file
+# lineinfile: dest=remoto/remoto.spec
+# regexp="Version{{':'}}\s+"
+# line="Version{{':'}} {{ version }}"
+# state=present
+
+- name: commit the version changes
+ command: git commit -a -m "{{ version }}" chdir=remoto
+
+ # from script: /srv/ceph-build/tag_release.sh
+ # Contents of tag_release.sh
+ # FIXME: this used to be a signed tag:
+ # command: git tag -s "v{{ version }}" -u 17ED316D -m "v{{ version }}" chdir=remoto
+- name: tag and commit the version
+ command: git tag "v{{ version }}" -m "v{{ version }}" chdir=remoto
+ environment:
+ GNUPGHOME: ~/build/gnupg.ceph-release
+ DEBEMAIL: "{{ debemail }}"
+ DEBFULLNAME: "{{ debfullname }}"
+
+- name: push changes to jenkins git repo
+ command: git push jenkins {{ branch }} chdir=remoto
+
+ # FIXME: this used to be set when signing the tag:
+ # environment:
+ # GNUPGHOME: ~/build/gnupg.ceph-release
+- name: push the newly created tag
+ command: git push jenkins v{{ version }} chdir=remoto
--- /dev/null
+[sensu-server]
+158.69.67.168
+
+[sensu-clients]
+jenkins.ceph.com
+chacra.ceph.com
+prado.ceph.com
--- /dev/null
+{{ api_user }}:{{ token }}
--- /dev/null
+# {{ ansible_managed }}
+[Unit]
+Description=Jenkins Builder
+Wants=network.target
+After=network.target
+
+[Install]
+WantedBy=multi-user.target
+
+[Service]
+Type=simple
+User={{ jenkins_user }}
+ExecStart=/usr/bin/java \
+ -Dfile.encoding=UTF8 \
+ -jar /home/{{ jenkins_user }}/agent.jar \
+ -jnlpUrl {{ api_uri }}/computer/{{ ansible_default_ipv4.address }}+{{ nodename }}/slave-agent.jnlp \
+ -jnlpCredentials @/etc/systemd/system/jenkins.secret
+StandardOutput=journal
+StandardError=journal
+Restart=always
+RestartSec=30
+StartLimitInterval=0
--- /dev/null
+[epel]
+baseurl = http://download.fedoraproject.org/pub/epel/$releasever/$basearch/
+gpgkey = https://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-7
+name = EPEL YUM repo
--- /dev/null
+---
+
+# These values allow the nginx role to fully configure load balancing and
+# regular hosts.
+nginx_hosts:
+ - fqdn: "grafana.ceph.com"
+ app_name: "grafana"
+ proxy_pass: "http://127.0.0.1:3000"
+ - fqdn: "shaman.ceph.com"
+ app_name: "shaman"
+ upstreams:
+ name: "shaman"
+ strategy: "least_conn"
+ servers:
+ - "1.shaman.ceph.com"
+ - "2.shaman.ceph.com"
--- /dev/null
+---
+# Set the rabbitmq address
+rabbitmq_conf_tcp_listeners_address: '0.0.0.0'
+
+sensu_server_rabbitmq_insecure: true
+
+# Sensu client variable
+
+sensu_checks:
+ cpu:
+ command: "check-cpu.rb"
+ interval: 10
+ subscribers:
+ - common
+ disk:
+ command: "check-disk-usage.rb -w 85 -c 95"
+ interval: 10
+ subscribers:
+ - common
+ load:
+ command: "check-load.rb"
+ interval: 10
+ subscribers:
+ - common
+ memory:
+ command: "check-memory.rb -w 85 -c 95"
+ interval: 10
+ subscribers:
+ - common
+ rabbitmq-alive:
+ command: "check-rabbitmq-amqp-alive.rb -u :::rabbitmq.user|guest::: -p :::rabbitmq.password|guest::: -v :::rabbitmq.vhost|%2F:::"
+ interval: 10
+ subscribers:
+ - rabbitmq
+
+sensu_ruby_gem_plugins:
+ - "sensu-plugins-load-checks"
+ - "sensu-plugins-memory-checks"
+ - "sensu-plugins-disk-checks"
+ - "sensu-plugins-cpu-checks"
+ - "sensu-plugins-rabbitmq"
+
+# Dummy sensu_handlers
+sensu_handlers:
+ test_handler:
+ type : pipe
+ command: "echo"
--- /dev/null
+#!/bin/bash
+
+# the following two methods exist in scripts/build_utils.sh
+pkgs=( "tox" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+cd $WORKSPACE/docs/
+$VENV/tox -rv
--- /dev/null
+2.jenkins.ceph.com
--- /dev/null
+- job:
+ name: ceph-ansible-docs-pull-requests
+ disabled: true
+ node: (small && (centos8 || trusty)) || (vagrant && libvirt && smithi)
+ project-type: freestyle
+ defaults: global
+ display-name: 'ceph-ansible: docs pull requests'
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ retry-count: 3
+ properties:
+ - build-discarder:
+ days-to-keep: -1
+ num-to-keep: 10
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ - github:
+ url: https://github.com/ceph/ceph-ansible
+
+ parameters:
+ - string:
+ name: sha1
+ description: "A pull request ID, like 'origin/pr/72/head'"
+
+ triggers:
+ - github-pull-request:
+ allow-whitelist-orgs-as-admins: true
+ org-list:
+ - ceph
+ trigger-phrase: 'jenkins test docs'
+ # This is set so the job can be manually triggered or by the ceph-ansible-pipeline multijob
+ only-trigger-phrase: true
+ github-hooks: true
+ permit-all: true
+ auto-close-on-fail: false
+ status-context: "Docs"
+ started-status: "checking if docs build"
+ success-status: "docs built successfully"
+ failure-status: "docs could not build correctly"
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-ansible
+ browser: auto
+ skip-tag: true
+ timeout: 20
+
+ builders:
+ - shell:
+ !include-raw:
+ - ../../../scripts/build_utils.sh
+ - ../../build/build
--- /dev/null
+#!/bin/bash
+set -e
+
+
+sudo apt-get install jq -y
+
+cd "$WORKSPACE"/ceph-container/ || exit
+export PRERELEASE=false
+ARCH=aarch64 bash -x contrib/build-ceph-base.sh
+
+echo "Now running manifest script"
+BUILD_SERVER_GOARCH=arm64 bash -x contrib/make-ceph-base-manifests.sh
--- /dev/null
+2.jenkins.ceph.com
--- /dev/null
+- job:
+ name: ceph-container-build-ceph-base-push-imgs-arm64
+ node: arm64 && xenial
+ project-type: freestyle
+ defaults: global
+ display-name: ceph-container-build-ceph-base-push-imgs-arm64
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ retry-count: 3
+ properties:
+ - build-discarder:
+ days-to-keep: 1
+ num-to-keep: 1
+ artifact-days-to-keep: 1
+ artifact-num-to-keep: 1
+ - github:
+ url: https://github.com/ceph/ceph-container
+
+ triggers:
+ - timed: '@daily'
+
+ parameters:
+ - string:
+ name: AARCH64_FLAVORS_TO_BUILD
+ description: "arm64 flavor(s) to build"
+ default: "pacific,centos,8 quincy,centos,8 reef,centos,8"
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-container.git
+ branches:
+ - main
+ browser: auto
+ basedir: "ceph-container"
+ timeout: 20
+
+ builders:
+ - shell:
+ !include-raw:
+ - ../../../scripts/build_utils.sh
+ - ../../build/build
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - username-password-separated:
+ credential-id: ceph-container-quay-io
+ username: REGISTRY_USERNAME
+ password: REGISTRY_PASSWORD
--- /dev/null
+#!/bin/bash
+set -e
+
+
+sudo apt-get install jq -y
+
+cd "$WORKSPACE"/ceph-container/ || exit
+export PRERELEASE=false
+ARCH=x86_64 bash -x contrib/build-ceph-base.sh
+
+echo "Now running manifest script"
+BUILD_SERVER_GOARCH=amd64 bash -x contrib/make-ceph-base-manifests.sh
--- /dev/null
+2.jenkins.ceph.com
--- /dev/null
+- job:
+ name: ceph-container-build-ceph-base-push-imgs
+ node: huge && trusty && x86_64
+ project-type: freestyle
+ defaults: global
+ display-name: ceph-container-build-ceph-base-push-imgs
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ retry-count: 3
+ properties:
+ - build-discarder:
+ days-to-keep: 1
+ num-to-keep: 1
+ artifact-days-to-keep: 1
+ artifact-num-to-keep: 1
+ - github:
+ url: https://github.com/ceph/ceph-container
+
+ triggers:
+ - timed: '@daily'
+
+ parameters:
+ - string:
+ name: X86_64_FLAVORS_TO_BUILD
+ description: "x86 flavor(s) to build"
+ default: "pacific,centos,8 quincy,centos,8 reef,centos,8"
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-container.git
+ branches:
+ - main
+ browser: auto
+ basedir: "ceph-container"
+ timeout: 20
+
+ builders:
+ - shell:
+ !include-raw:
+ - ../../../scripts/build_utils.sh
+ - ../../build/build
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - username-password-separated:
+ credential-id: ceph-container-quay-io
+ username: REGISTRY_USERNAME
+ password: REGISTRY_PASSWORD
--- /dev/null
+#!/bin/bash
+set -e
+
+
+cd "$WORKSPACE"/ceph-container/ || exit
+bash -x contrib/build-push-ceph-container-imgs-arm64.sh
--- /dev/null
+2.jenkins.ceph.com
--- /dev/null
+- job:
+ name: ceph-container-build-push-imgs-arm64
+ node: arm64 && xenial
+ project-type: freestyle
+ defaults: global
+ display-name: ceph-container-build-push-imgs-arm64
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ retry-count: 3
+ properties:
+ - build-discarder:
+ days-to-keep: -1
+ num-to-keep: -1
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ - github:
+ url: https://github.com/ceph/ceph-container
+
+ triggers:
+ - github
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-container.git
+ branches:
+ - main
+ browser: auto
+ basedir: "ceph-container"
+ timeout: 20
+
+ builders:
+ - shell:
+ !include-raw:
+ - ../../../scripts/build_utils.sh
+ - ../../build/build
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - username-password-separated:
+ credential-id: ceph-container-quay-io
+ username: REGISTRY_USERNAME
+ password: REGISTRY_PASSWORD
--- /dev/null
+#!/bin/bash
+set -e
+
+
+cd "$WORKSPACE"/ceph-container/ || exit
+bash -x contrib/build-push-ceph-container-imgs.sh
--- /dev/null
+2.jenkins.ceph.com
--- /dev/null
+- job:
+ name: ceph-container-build-push-imgs-devel-nightly
+ node: huge && trusty && x86_64
+ project-type: freestyle
+ defaults: global
+ display-name: ceph-container-build-push-imgs-devel-nightly
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ retry-count: 3
+ properties:
+ - build-discarder:
+ days-to-keep: -1
+ num-to-keep: -1
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ - github:
+ url: https://github.com/ceph/ceph-container
+
+ triggers:
+ - timed: '@daily'
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-container.git
+ branches:
+ - main
+ browser: auto
+ basedir: "ceph-container"
+ timeout: 20
+
+ builders:
+ - inject:
+ properties-content: |
+ DEVEL=true
+ - shell:
+ !include-raw:
+ - ../../../scripts/build_utils.sh
+ - ../../build/build
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - username-password-separated:
+ credential-id: ceph-container-quay-io
+ username: REGISTRY_USERNAME
+ password: REGISTRY_PASSWORD
--- /dev/null
+#!/bin/bash
+set -e
+
+
+cd "$WORKSPACE"/ceph-container/ || exit
+bash -x contrib/build-push-ceph-container-imgs.sh
--- /dev/null
+2.jenkins.ceph.com
--- /dev/null
+- job:
+ name: ceph-container-build-push-imgs
+ node: huge && trusty && x86_64
+ project-type: freestyle
+ defaults: global
+ display-name: ceph-container-build-push-imgs
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ retry-count: 3
+ properties:
+ - build-discarder:
+ days-to-keep: -1
+ num-to-keep: -1
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ - github:
+ url: https://github.com/ceph/ceph-container
+
+ triggers:
+ - github
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-container.git
+ branches:
+ - main
+ browser: auto
+ basedir: "ceph-container"
+ timeout: 20
+
+ builders:
+ - shell:
+ !include-raw:
+ - ../../../scripts/build_utils.sh
+ - ../../build/build
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - username-password-separated:
+ credential-id: ceph-container-quay-io
+ username: REGISTRY_USERNAME
+ password: REGISTRY_PASSWORD
--- /dev/null
+#!/bin/bash
+
+set -e
+set -x
+
+function generate_filelist(){
+ if [[ -z "$pull_request_id" || "${ghprbCommentBody:-}" = "jenkins flake8 all" ]]
+ then
+ find . -name '*.py'
+ else
+ curl -XGET "https://api.github.com/repos/ceph/ceph-container/pulls/$pull_request_id/files" |
+ jq '.[].filename' | # just the files please
+ tr -d '"' | # remove the quoting from JSON
+ grep ".py$" # just the python
+ fi
+
+}
+
+function check(){
+ local file
+ local failed=0
+ while read -r filename; do
+ pushd "$(dirname "$filename")"
+ file=$(basename "$filename")
+ sudo docker run --rm -v "$(pwd)"/"$file":/apps/"$file":z docker.io/alpine/flake8 "$file" || failed=1
+ popd
+ done
+ return $failed
+}
+
+function main() {
+ # install some of our dependencies if running on a jenkins builder
+ if [[ -n "$HUDSON_URL" ]]
+ then
+ sudo yum -y install epel-release
+ sudo yum -y install docker jq
+ sudo systemctl start docker || sudo systemctl start podman
+ pull_request_id=${ghprbPullId:-$2}
+ workspace=${WORKSPACE:-$1}
+ else
+ if ! command -v docker > /dev/null || ! command -v jq > /dev/null
+ then
+ echo "docker and/or jq are missing, please install them"
+ exit 1
+ fi
+ pull_request_id=${ghprbPullId:-$2}
+ workspace=${WORKSPACE:-$1}
+ fi
+
+
+ # pull the image that check() actually runs
+ sudo docker pull docker.io/alpine/flake8
+ pushd "$workspace/ceph-container"
+ ret=0
+ generate_filelist | check || ret=$?
+ popd
+ exit $ret
+}
+
+main "$@"
--- /dev/null
+2.jenkins.ceph.com
--- /dev/null
+- scm:
+ name: ceph-container
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-container.git
+ branches:
+ - ${sha1}
+ refspec: +refs/pull/*:refs/remotes/origin/pr/*
+ browser: auto
+ timeout: 20
+ skip-tag: true
+ wipe-workspace: false
+ basedir: "ceph-container"
+
+- job:
+ name: ceph-container-flake8
+ node: small && centos9
+ defaults: global
+ display-name: 'ceph-container-flake8'
+ properties:
+ - build-discarder:
+ days-to-keep: 15
+ num-to-keep: 30
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ - github:
+ url: https://github.com/ceph/ceph-container/
+
+ parameters:
+ - string:
+ name: sha1
+ description: "A pull request ID, like 'origin/pr/72/head'"
+
+ triggers:
+ - github-pull-request:
+ admin-list:
+ - alfredodeza
+ - ktdreyer
+ - gmeno
+ - zcerza
+ org-list:
+ - ceph
+ trigger-phrase: 'jenkins flake8'
+ only-trigger-phrase: false
+ github-hooks: true
+ permit-all: true
+ auto-close-on-fail: false
+ status-context: "Testing: for sloppy python"
+ started-status: "Running: flake8"
+ success-status: "OK - nice work"
+ failure-status: "FAIL - please clean up for merge"
+
+ scm:
+ - ceph-container
+
+ builders:
+ - shell:
+ !include-raw:
+ - ../../build/build
+
--- /dev/null
+#!/bin/bash
+
+set -e
+set -x
+
+IGNORE_THESE_CODES="SC1091,SC2009,SC2001"
+IGNORE_THESE_FILES="variables_entrypoint.sh" # pipe-separated file names, e.g. foo|bar|foobar; skipping these avoids shellcheck complaints about unused vars (see: SC2034)
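+# Example (hypothetical entries): to also skip e.g. entrypoint.sh, extend the
+# pattern with a pipe, which 'grep -vE' below treats as alternation:
+#   IGNORE_THESE_FILES="variables_entrypoint.sh|entrypoint.sh"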
+
+function generate_filelist(){
+ if [[ -z "$pull_request_id" || "${ghprbCommentBody:-}" = "jenkins lint all" ]]
+ then
+ find . -name '*.sh' | grep -vE "$IGNORE_THESE_FILES"
+ else
+ curl -XGET "https://api.github.com/repos/ceph/ceph-container/pulls/$pull_request_id/files" |
+ jq -r '.[] | select(.status != "removed") | .filename' | # just the files please (not removed)
+ grep ".sh$" | # just the bash
+ grep -vE "$IGNORE_THESE_FILES"
+ fi
+
+}
+
+function check(){
+ local file
+ local failed=0
+ while read -r filename; do
+ pushd "$(dirname "$filename")"
+ file=$(basename "$filename")
+ sudo docker run --rm -v "$(pwd)"/"$file":/"$file":z koalaman/shellcheck --external-sources --exclude "$IGNORE_THESE_CODES" /"$file" || failed=1
+ popd
+ done
+ return $failed
+}
+
+function main() {
+ # install some of our dependencies if running on a jenkins builder
+ if [[ -n "$HUDSON_URL" ]]
+ then
+ sudo yum -y install epel-release
+ sudo yum -y install docker jq
+ sudo systemctl start docker || sudo systemctl start podman
+ pull_request_id=${ghprbPullId:-$2}
+ workspace=${WORKSPACE:-$1}
+ else
+ if ! command -v docker > /dev/null || ! command -v jq > /dev/null
+ then
+ echo "docker and/or jq are missing, please install them"
+ exit 1
+ fi
+ pull_request_id=${ghprbPullId:-$2}
+ workspace=${WORKSPACE:-$1}
+ fi
+
+
+ sudo docker pull koalaman/shellcheck
+ pushd "$workspace/ceph-container"
+ ret=0
+ generate_filelist | check || ret=$?
+ popd
+ exit $ret
+}
+
+main "$@"
--- /dev/null
+2.jenkins.ceph.com
--- /dev/null
+- scm:
+ name: ceph-container
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-container.git
+ branches:
+ - ${sha1}
+ refspec: +refs/pull/*:refs/remotes/origin/pr/*
+ browser: auto
+ timeout: 20
+ skip-tag: true
+ wipe-workspace: false
+ basedir: "ceph-container"
+
+- job:
+ name: ceph-container-lint
+ node: small && centos9
+ defaults: global
+ display-name: 'ceph-container-lint'
+ properties:
+ - build-discarder:
+ days-to-keep: 15
+ num-to-keep: 30
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ - github:
+ url: https://github.com/ceph/ceph-container/
+
+ parameters:
+ - string:
+ name: sha1
+ description: "A pull request ID, like 'origin/pr/72/head'"
+
+ triggers:
+ - github-pull-request:
+ admin-list:
+ - alfredodeza
+ - ktdreyer
+ - gmeno
+ org-list:
+ - ceph
+ trigger-phrase: 'jenkins lint'
+ only-trigger-phrase: false
+ github-hooks: true
+ permit-all: true
+ auto-close-on-fail: false
+ status-context: "Testing: for sloppy bash"
+ started-status: "Running: shellchecker"
+ success-status: "OK - nice work"
+ failure-status: "FAIL - please clean up for merge"
+
+ scm:
+ - ceph-container
+
+ builders:
+ - shell:
+ !include-raw:
+ - ../../build/build
+
--- /dev/null
+#!/bin/bash
+set -e
+
+sudo apt-get install jq -y
+
+cd "$WORKSPACE"/ceph-container/ || exit
+TMPNAME=$(mktemp)
+
+ARCH=x86_64 \
+ TEST_BUILD_ONLY=true \
+ PRERELEASE=true \
+ FORCE_BUILD=true \
+ X86_64_FLAVORS_TO_BUILD=${X86_64_FLAVORS_TO_BUILD} \
+ AARCH64_FLAVORS_TO_BUILD="" \
+ FULL_BUILD_TAG_TMPFILE=${TMPNAME} \
+ bash -x contrib/build-ceph-base.sh
+
+imagename=$(<${TMPNAME})
+
+# strip leading path components, sub _ for : in name
+imagetag=${imagename##*/}
+imagetag=${imagetag//:/_}
+imagetag=quay.ceph.io/ceph/prerelease:${imagetag}
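+# For illustration (hypothetical image name), the expansions above turn
+#   imagename=quay.ceph.io/ceph/ceph:v18.2.0
+# into
+#   imagetag=quay.ceph.io/ceph/prerelease:ceph_v18.2.0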
+
+docker tag "${imagename}" "${imagetag}"
+echo "${QUAY_CEPH_IO_PASSWORD}" | docker login --username "${QUAY_CEPH_IO_USERNAME}" --password-stdin quay.ceph.io
+docker push "${imagetag}"
+docker rmi "${imagename}"
--- /dev/null
+2.jenkins.ceph.com
--- /dev/null
+- job:
+ name: ceph-container-prerelease-build
+ node: huge && trusty && x86_64
+ project-type: freestyle
+ defaults: global
+ display-name: 'ceph-container-prerelease-build: build prerelease container images and push to quay.ceph.io'
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ retry-count: 3
+ properties:
+ - build-discarder:
+ days-to-keep: 1
+ num-to-keep: 1
+ artifact-days-to-keep: 1
+ artifact-num-to-keep: 1
+ - github:
+ url: https://github.com/ceph/ceph-container
+
+ parameters:
+ - string:
+ name: BRANCH
+ description: "Branch of ceph-container.git to use"
+ default: main
+
+ - string:
+ name: X86_64_FLAVORS_TO_BUILD
+ description: "x86 flavor(s) to build"
+ default: "reef,centos,8"
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-container.git
+ branches:
+ - ${BRANCH}
+ browser: auto
+ basedir: "ceph-container"
+ timeout: 20
+
+ builders:
+ - shell:
+ !include-raw:
+ - ../../../scripts/build_utils.sh
+ - ../../build/build
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - username-password-separated:
+ credential-id: release-build-quay-ceph-io
+ username: QUAY_CEPH_IO_USERNAME
+ password: QUAY_CEPH_IO_PASSWORD
+ - username-password-separated:
+ credential-id: download-ceph-com-prerelease
+ username: PRERELEASE_USERNAME
+ password: PRERELEASE_PASSWORD
--- /dev/null
+#!/bin/bash
+
+# the following two methods exist in scripts/build_utils.sh
+pkgs=( "tox" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+set_centos_python3_version "python3.9"
+install_python_packages $TEMPVENV "pkgs[@]" "pip==22.0.4"
+
+# XXX this might not be needed
+source $VENV/activate
+
+WORKDIR=$(mktemp -td tox.XXXXXXXXXX)
+
+prune_stale_vagrant_running_vms
+delete_libvirt_vms
+clear_libvirt_networks
+restart_libvirt_services
+update_vagrant_boxes
+
+if ! timeout 3h $VENV/tox -rv -e=$SCENARIO --workdir=$WORKDIR; then
+ echo "ERROR: Job didn't complete successfully or got stuck for more than 3h."
+ exit 1
+fi
--- /dev/null
+#!/bin/bash
+# TODO: there has to be a better way to do this. This script just looks for
+# every Vagrantfile in the test scenarios and destroys whatever VMs are left.
+
+cd "$WORKSPACE"/ceph-ansible/tests/functional
+
+scenarios=$(find . -name Vagrantfile | xargs -r dirname)
+
+for scenario in $scenarios; do
+ cd $scenario
+ vagrant destroy -f
+ cd -
+done
--- /dev/null
+2.jenkins.ceph.com
--- /dev/null
+- project:
+ name: ceph-container-prs-auto
+ test:
+ - all_daemons
+ - lvm_osds
+ - collocation
+ jobs:
+ - 'ceph-container-prs-auto'
+
+- job-template:
+ name: 'ceph-container-prs-ceph_ansible-{test}'
+ id: 'ceph-container-prs-auto'
+ node: vagrant&&libvirt&&centos8&&(braggi||adami)
+ concurrent: true
+ defaults: global
+ display-name: 'ceph-container: Pull Requests [ceph_ansible-{test}]'
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ retry-count: 3
+ properties:
+ - github:
+ url: https://github.com/ceph/ceph-container
+ - build-discarder:
+ days-to-keep: 15
+ num-to-keep: -1
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+
+ parameters:
+ - string:
+ name: sha1
+ description: "A pull request ID, like 'origin/pr/72/head'"
+
+ triggers:
+ - github-pull-request:
+ cancel-builds-on-update: true
+ allow-whitelist-orgs-as-admins: true
+ org-list:
+ - ceph
+ skip-build-phrase: '^jenkins do not test.*|.*\[skip ci\].*'
+ trigger-phrase: 'jenkins test ceph_ansible-{test}'
+ only-trigger-phrase: true
+ github-hooks: true
+ permit-all: true
+ auto-close-on-fail: false
+ status-context: "Testing: ceph_ansible-{test}"
+ started-status: "Running: ceph_ansible-{test}"
+ success-status: "OK - ceph_ansible-{test}"
+ failure-status: "FAIL - ceph_ansible-{test}"
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-container.git
+ branches:
+ - ${{sha1}}
+ refspec: +refs/pull/*:refs/remotes/origin/pr/*
+ browser: auto
+ timeout: 20
+ skip-tag: true
+ wipe-workspace: false
+
+ builders:
+ - inject:
+ properties-content: |
+ SCENARIO=ceph_ansible-{test}
+ - shell:
+ !include-raw-escape:
+ - ../../../scripts/build_utils.sh
+ - ../../build/build
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - username-password-separated:
+ credential-id: ceph-container-quay-io
+ username: REGISTRY_USERNAME
+ password: REGISTRY_PASSWORD
+
+ publishers:
+ - postbuildscript:
+ builders:
+ - role: SLAVE
+ build-on:
+ - FAILURE
+ - ABORTED
+ build-steps:
+ - shell: !include-raw: ../../build/teardown
+
+- job-template:
+ name: 'ceph-container-prs-ceph_ansible-{test}'
+ id: 'ceph-container-prs-trigger'
+ node: vagrant&&libvirt&&centos8
+ concurrent: true
+ defaults: global
+ display-name: 'ceph-container: Pull Requests [ceph_ansible-{test}]'
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ retry-count: 3
+ properties:
+ - github:
+ url: https://github.com/ceph/ceph-container
+ - build-discarder:
+ days-to-keep: 15
+ num-to-keep: -1
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+
+ parameters:
+ - string:
+ name: sha1
+ description: "A pull request ID, like 'origin/pr/72/head'"
+
+ triggers:
+ - github-pull-request:
+ cancel-builds-on-update: true
+ allow-whitelist-orgs-as-admins: true
+ org-list:
+ - ceph
+ skip-build-phrase: '^jenkins do not test.*|.*\[skip ci\].*'
+ trigger-phrase: 'jenkins test ceph_ansible-{test}'
+ only-trigger-phrase: true
+ github-hooks: true
+ permit-all: true
+ auto-close-on-fail: false
+ status-context: "Testing: ceph_ansible-{test}"
+ started-status: "Running: ceph_ansible-{test}"
+ success-status: "OK - ceph_ansible-{test}"
+ failure-status: "FAIL - ceph_ansible-{test}"
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-container.git
+ branches:
+ - ${{sha1}}
+ refspec: +refs/pull/*:refs/remotes/origin/pr/*
+ browser: auto
+ timeout: 20
+ skip-tag: true
+ wipe-workspace: false
+
+ builders:
+ - inject:
+ properties-content: |
+ SCENARIO=ceph_ansible-{test}
+ - shell:
+ !include-raw-escape:
+ - ../../../scripts/build_utils.sh
+ - ../../build/build
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - username-password-separated:
+ credential-id: ceph-container-quay-io
+ username: REGISTRY_USERNAME
+ password: REGISTRY_PASSWORD
+
+ publishers:
+ - postbuildscript:
+ builders:
+ - role: SLAVE
+ build-on:
+ - FAILURE
+ - ABORTED
+ build-steps:
+ - shell: !include-raw: ../../build/teardown
--- /dev/null
+#!/bin/bash
+# vim: ts=4 sw=4 expandtab
+set -ex
+HOST=$(hostname --short)
+echo "Building on $(hostname)"
+echo " DIST=${DIST}"
+echo " BPTAG=${BPTAG}"
+echo " KEYID=${KEYID}"
+echo " WS=$WORKSPACE"
+echo " PWD=$(pwd)"
+echo "*****"
+env
+echo "*****"
+
+if test "$(id -u)" != 0 ; then
+ SUDO=sudo
+fi
+
+get_rpm_dist
+
+BRANCH=$(branch_slash_filter $BRANCH)
+
+# Normalize variables across rpm/deb builds
+NORMAL_DISTRO=$DISTRO
+NORMAL_DISTRO_VERSION=$RELEASE
+NORMAL_ARCH=$ARCH
+
+chacra_endpoint="ceph/${BRANCH}/${SHA1}/${DISTRO}/${RELEASE}"
+
+SHAMAN_URL="https://shaman.ceph.com/api/search/?project=ceph&distros=centos/${RELEASE}/${ARCH}&sha1=${SHA1}&ref=${BRANCH}&flavor=${FLAVOR}"
+
+loop=0
+ready=false
+while ((loop < 15)); do
+ if [[ $(curl -s "$SHAMAN_URL" | jq -r '.[0].status') == 'ready' ]] ; then ready=true; break; fi
+ ((loop = loop + 1))
+ sleep 60
+done
+
+if [[ "$ready" == "false" ]] ; then
+ echo "FAIL: timed out waiting for shaman repo to be built: https://shaman.ceph.com/api/repos/${chacra_endpoint}/flavors/${FLAVOR}/"
+ echo
+ echo "NOTE: You should only use this job if there was already a successful ceph-dev* build job!"
+ exit 1
+fi
+
+SHA1=$(curl -s "$SHAMAN_URL" | jq -r '.[0].sha1')
+
+pushd $WORKSPACE/ceph-container
+$SUDO -E CI_CONTAINER=true BASEOS_REGISTRY="quay.io/centos" SHA1=${SHA1} OSD_FLAVOR=${FLAVOR} CONTAINER_FLAVOR=${BRANCH},${DISTRO},${RELEASE} \
+ /bin/bash ./contrib/build-push-ceph-container-imgs.sh
+popd
+$SUDO rm -rf $WORKSPACE/ceph-container
+
+# update shaman with the completed build status
+if $NOTIFY_SHAMAN; then
+ update_build_status "completed" "ceph" $NORMAL_DISTRO $NORMAL_DISTRO_VERSION $NORMAL_ARCH
+fi
--- /dev/null
+#!/bin/bash -ex
+
+# The ceph-container dir is supposed to get deleted in the build_rpm script.
+# We used to add '|| true' to the container build so the dir would still get
+# deleted even if it failed. This changed in https://github.com/ceph/ceph-build/pull/1603
+# So now we need to delete the directory or the Wipe Workspace plugin will fail on the next build.
+cd $WORKSPACE
+sudo rm -rf ceph-container
+
+get_rpm_dist
+# note: the failed_build_status call relies on normalized variable names that
+# are inferred by the builds themselves. If the build fails before these are
+# set, they will be posted with empty values
+BRANCH=$(branch_slash_filter $BRANCH)
+
+# Normalize variables across rpm/deb builds
+NORMAL_DISTRO=$DISTRO
+NORMAL_DISTRO_VERSION=$RELEASE
+NORMAL_ARCH=$ARCH
+
+# update shaman with the failed build status
+failed_build_status "ceph" $NORMAL_DISTRO $NORMAL_DISTRO_VERSION $NORMAL_ARCH
--- /dev/null
+- job:
+ name: ceph-dev-container-only
+ node: built-in
+ project-type: matrix
+ defaults: global
+ display-name: 'ceph-dev-container-only: Builds a quay.ceph.io/ceph-ci container given a BRANCH'
+ block-downstream: false
+ block-upstream: false
+ concurrent: true
+ properties:
+ - build-discarder:
+ days-to-keep: 30
+ artifact-days-to-keep: 30
+
+ scm:
+ - git:
+ url: git@github.com:ceph/ceph-container.git
+ basedir: ceph-container
+ credentials-id: 'jenkins-build'
+ branches:
+ - $CONTAINER_BRANCH
+ skip-tag: true
+ wipe-workspace: true
+
+ execution-strategy:
+ combination-filter: |
+ DIST == AVAILABLE_DIST && ARCH == AVAILABLE_ARCH &&
+ (ARCH == "x86_64" || (ARCH == "arm64" && ["xenial", "bionic", "centos7", "centos8"].contains(DIST)))
+ axes:
+ - axis:
+ type: label-expression
+ name: MACHINE_SIZE
+ values:
+ - gigantic
+ - axis:
+ type: label-expression
+ name: AVAILABLE_ARCH
+ values:
+ - x86_64
+ - arm64
+ - axis:
+ type: label-expression
+ name: AVAILABLE_DIST
+ values:
+ - centos8
+ - centos9
+ - axis:
+ type: dynamic
+ name: DIST
+ values:
+ - DISTROS
+ - axis:
+ type: dynamic
+ name: ARCH
+ values:
+ - ARCHS
+
+ parameters:
+ - string:
+ name: BRANCH
+ description: "The git branch (or tag) to build. NOTE: This branch must already be built and packages pushed to a chacra node!"
+ default: main
+
+ - string:
+ name: SHA1
+ description: "Change to a specific SHA1 if desired."
+ default: "latest"
+
+ - string:
+ name: DISTROS
+ description: "A list of distros to build for. Available options are: centos8 or centos9"
+ default: "centos8"
+
+ - string:
+ name: ARCHS
+ description: "A list of architectures to build for. Available options are: x86_64, and arm64"
+ default: "x86_64 arm64"
+
+ - choice:
+ name: FLAVOR
+ choices:
+ - default
+ - crimson
+ - jaeger
+ default: "default"
+ description: "Type of Ceph build, choices are: crimson, jaeger, default. Defaults to: 'default'"
+
+ - string:
+ name: CONTAINER_BRANCH
+ description: "For CI_CONTAINER: Branch of ceph-container to use"
+ default: main
+
+ - string:
+ name: CONTAINER_REPO_HOSTNAME
+ description: "For CI_CONTAINER: Name of container repo server (i.e. 'quay.io')"
+ default: "quay.ceph.io"
+
+ - string:
+ name: CONTAINER_REPO_ORGANIZATION
+ description: "For CI_CONTAINER: Name of container repo organization (i.e. 'ceph-ci')"
+ default: "ceph-ci"
+
+ - bool:
+ name: NOTIFY_SHAMAN
+ description: "Should we tell shaman this container built and change the corresponding build to READY?"
+ default: true
+
+
+ builders:
+ - shell:
+ !include-raw:
+ - ../../../scripts/build_utils.sh
+ - ../../build/build_rpm
+
+ publishers:
+ - postbuildscript:
+ builders:
+ - role: SLAVE
+ build-on:
+ - FAILURE
+ - ABORTED
+ build-steps:
+ - shell:
+ !include-raw:
+ - ../../../scripts/build_utils.sh
+ - ../../build/failure
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - text:
+ credential-id: shaman-api-key
+ variable: SHAMAN_API_KEY
+ - username-password-separated:
+ credential-id: quay-ceph-io-ceph-ci
+ username: CONTAINER_REPO_USERNAME
+ password: CONTAINER_REPO_PASSWORD
+ - build-name:
+ name: "#${{BUILD_NUMBER}} ${{BRANCH}}, ${{DISTROS}}, ${{ARCH}}, ${{FLAVOR}}"
--- /dev/null
+#!/bin/bash -ex
+
+# update shaman with the triggered build status. At this point there aren't any
+# architectures or distro information, so we just report this with the current
+# build information
+BRANCH=$(branch_slash_filter ${GIT_BRANCH})
+SHA1=${GIT_COMMIT}
+
+update_build_status "queued" "ceph"
--- /dev/null
+- job:
+ disabled: true
+ name: ceph-dev-trigger
+ node: built-in
+ project-type: freestyle
+ defaults: global
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ properties:
+ - build-discarder:
+ days-to-keep: 1
+ num-to-keep: 10
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ - github:
+ url: https://github.com/ceph/ceph
+ discard-old-builds: true
+
+ triggers:
+ - github
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph
+ browser: auto
+ branches:
+ - 'origin/main'
+ - 'origin/quincy'
+ - 'origin/reef'
+ - 'origin/squid'
+ - 'origin/tentacle'
+ skip-tag: true
+ timeout: 20
+ wipe-workspace: true
+
+ builders:
+ # build quincy on:
+ # default: focal centos8 leap15
+ - conditional-step:
+ condition-kind: regex-match
+ regex: .*quincy.*
+ label: '${GIT_BRANCH}'
+ on-evaluation-failure: dont-run
+ steps:
+ - shell:
+ !include-raw:
+ - ../../../scripts/build_utils.sh
+ - ../../build/notify
+ - trigger-builds:
+ - project: 'ceph-dev'
+ predefined-parameters: |
+ BRANCH=${GIT_BRANCH}
+ FORCE=True
+ DISTROS=focal centos8 leap15
+ # build reef on:
+ # default: jammy focal centos8 centos9
+ - conditional-step:
+ condition-kind: regex-match
+ regex: .*reef.*
+ label: '${GIT_BRANCH}'
+ on-evaluation-failure: dont-run
+ steps:
+ - shell:
+ !include-raw:
+ - ../../../scripts/build_utils.sh
+ - ../../build/notify
+ - trigger-builds:
+ - project: 'ceph-dev'
+ predefined-parameters: |
+ BRANCH=${GIT_BRANCH}
+ FORCE=True
+ DISTROS=jammy focal centos8 centos9
+ # build squid on:
+ # default: jammy focal centos8 centos9
+ - conditional-step:
+ condition-kind: regex-match
+ regex: .*squid.*
+ label: '${GIT_BRANCH}'
+ on-evaluation-failure: dont-run
+ steps:
+ - shell:
+ !include-raw:
+ - ../../../scripts/build_utils.sh
+ - ../../build/notify
+ - trigger-builds:
+ - project: 'ceph-dev'
+ predefined-parameters: |
+ BRANCH=${GIT_BRANCH}
+ FORCE=True
+ DISTROS=jammy focal centos8 centos9
+ # build tentacle on:
+ # default: jammy focal centos8 centos9
+ # crimson: centos8 centos9
+ - conditional-step:
+ condition-kind: regex-match
+ regex: .*tentacle.*
+ label: '${GIT_BRANCH}'
+ on-evaluation-failure: dont-run
+ steps:
+ - shell:
+ !include-raw:
+ - ../../../scripts/build_utils.sh
+ - ../../build/notify
+ - trigger-builds:
+ - project: 'ceph-dev'
+ predefined-parameters: |
+ BRANCH=${GIT_BRANCH}
+ FORCE=True
+ DISTROS=jammy focal centos8 centos9
+ - project: 'ceph-dev'
+ predefined-parameters: |
+ BRANCH=${GIT_BRANCH}
+ FORCE=True
+ DISTROS=centos9
+ FLAVOR=crimson
+ # build main on:
+ # default: jammy focal centos8 centos9
+ # crimson: centos9
+ - conditional-step:
+ condition-kind: regex-match
+ regex: .*main.*
+ label: '${GIT_BRANCH}'
+ on-evaluation-failure: dont-run
+ steps:
+ - shell:
+ !include-raw:
+ - ../../../scripts/build_utils.sh
+ - ../../build/notify
+ - trigger-builds:
+ - project: 'ceph-dev'
+ predefined-parameters: |
+ BRANCH=${GIT_BRANCH}
+ FORCE=True
+ DISTROS=jammy focal centos8 centos9
+ - project: 'ceph-dev'
+ predefined-parameters: |
+ BRANCH=${GIT_BRANCH}
+ FORCE=True
+ DISTROS=centos9
+ FLAVOR=crimson
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - text:
+ credential-id: shaman-api-key
+ variable: SHAMAN_API_KEY
--- /dev/null
+#!/bin/bash
+
+set -ex
+
+# the following two methods exist in scripts/build_utils.sh
+pkgs=( "tox" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+# trims leading slashes
+BRANCH=$(branch_slash_filter ${GIT_BRANCH})
+
+# create the docs build with tox
+$VENV/tox -rv -e docs
+
+# publish docs to http://docs.ceph.com/ceph-medic/$BRANCH/ and create
+# a `$BRANCH` dir, because the project has stable branches whose docs
+# may differ from other versions (similar, but not identical, to what
+# the Ceph project does)
+mkdir -p "/var/ceph-medic/docs/$BRANCH"
+rsync -auv --delete .tox/docs/tmp/html/* "/var/ceph-medic/docs/$BRANCH/"
+
--- /dev/null
+- job:
+ name: ceph-medic-docs
+ node: docs
+ project-type: freestyle
+ defaults: global
+ display-name: 'ceph-medic: docs build'
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ retry-count: 3
+ properties:
+ - build-discarder:
+ days-to-keep: -1
+ num-to-keep: 10
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ - github:
+ url: https://github.com/ceph/ceph-medic
+
+ triggers:
+ - github
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-medic
+ branches:
+ - main
+ # as more stable branches are published, they need to be
+ # added here
+ #- stable-1.0
+ browser: auto
+ skip-tag: true
+ timeout: 20
+
+ builders:
+ - shell:
+ !include-raw:
+ - ../../../scripts/build_utils.sh
+ - ../../build/build
--- /dev/null
+#!/bin/bash
+
+# the following two methods exist in scripts/build_utils.sh
+pkgs=( "tox" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+sudo yum install -y epel-release
+sudo yum --enablerepo epel install -y python36
+
+cd "$WORKSPACE/ceph-medic"
+
+export TOX_SKIP_ENV=py37
+$VENV/tox -rv
--- /dev/null
+- scm:
+ name: ceph-medic
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-medic
+ branches:
+ - ${sha1}
+ refspec: +refs/pull/*:refs/remotes/origin/pr/*
+ browser: auto
+ timeout: 20
+ basedir: "ceph-medic"
+ skip-tag: true
+ wipe-workspace: true
+
+
+- job:
+ name: ceph-medic-pull-requests
+ description: Runs tox tests for ceph-medic on each GitHub PR
+ project-type: freestyle
+ node: python3 && centos7
+ block-downstream: false
+ block-upstream: false
+ defaults: global
+ display-name: 'ceph-medic: Pull Requests'
+ quiet-period: 5
+ retry-count: 3
+
+
+ properties:
+ - build-discarder:
+ days-to-keep: 15
+ num-to-keep: 30
+ artifact-days-to-keep: 15
+ artifact-num-to-keep: 15
+ - github:
+ url: https://github.com/ceph/ceph-medic/
+
+ parameters:
+ - string:
+ name: sha1
+ description: "A pull request ID, like 'origin/pr/72/head'"
+
+ triggers:
+ - github-pull-request:
+ admin-list:
+ - dmick
+ - ktdreyer
+ - andrewschoen
+ - zmc
+ org-list:
+ - ceph
+ only-trigger-phrase: false
+ github-hooks: true
+ permit-all: true
+ auto-close-on-fail: false
+
+ scm:
+ - ceph-medic
+
+ builders:
+ - shell:
+ !include-raw:
+ - ../../../scripts/build_utils.sh
+ - ../../build/build
--- /dev/null
+#!/bin/bash
+
+set -ex
+
+# Sanity-check:
+[ -z "$GIT_BRANCH" ] && echo Missing GIT_BRANCH variable && exit 1
+[ -z "$JOB_NAME" ] && echo Missing JOB_NAME variable && exit 1
+
+
+sudo yum -y install epel-release
+sudo yum -y install fedpkg mock
+
+# Attempt the build. If it fails, print the mock logs to STDOUT.
+make rpm || ( tail -n +1 {root,build}.log && exit 1 )
+
+# Chacra time
+
+pkgs=( "chacractl>=0.0.21" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+make_chacractl_config
+
+BRANCH=$(branch_slash_filter $GIT_BRANCH)
+
+## Upload the created RPMs to chacra
+chacra_endpoint="ceph-medic/${BRANCH}/${GIT_COMMIT}/centos/7"
+
+[ "$FORCE" = true ] && chacra_flags="--force" || chacra_flags=""
+
+# push binaries to chacra
+ls *.rpm | $VENV/chacractl binary ${chacra_flags} create ${chacra_endpoint}/noarch/
+
+# start repo creation
+$VENV/chacractl repo update ${chacra_endpoint}
+
+echo Check the status of the repo at: https://shaman.ceph.com/api/repos/${chacra_endpoint}
--- /dev/null
+- job:
+ name: ceph-medic-release
+ project-type: matrix
+ defaults: global
+ display-name: 'ceph-medic-release'
+ block-downstream: false
+ block-upstream: false
+ concurrent: true
+ properties:
+ - github:
+ url: https://github.com/ceph/ceph-medic
+ parameters:
+ - string:
+ name: BRANCH
+ description: "The git branch (or tag) to build"
+ default: "main"
+
+ - string:
+ name: DISTROS
+ description: "A list of distros to build for. Available options are: centos7, centos6"
+ default: "centos7"
+
+ - string:
+ name: ARCHS
+ description: "A list of architectures to build for. Available options are: x86_64, and arm64"
+ default: "x86_64"
+
+ - bool:
+ name: FORCE
+ description: "
+If this is unchecked, then nothing is built or pushed if the binaries already exist in chacra. This is the default.
+
+If this is checked, then the binaries will be built and pushed to chacra even if they already exist in chacra."
+
+ - string:
+ name: BUILD_VIRTUALENV
+ description: "Base parent path for virtualenv locations, set to avoid issues with extremely long paths that are incompatible with tools like pip. Defaults to '/tmp/' (note the trailing slash, which is required)."
+ default: "/tmp/"
+
+ execution-strategy:
+ combination-filter: DIST==AVAILABLE_DIST && ARCH==AVAILABLE_ARCH && (ARCH=="x86_64" || (ARCH == "arm64" && (DIST == "xenial" || DIST == "centos7")))
+ axes:
+ - axis:
+ type: label-expression
+ name: MACHINE_SIZE
+ values:
+ - small
+ - axis:
+ type: label-expression
+ name: AVAILABLE_ARCH
+ values:
+ - x86_64
+ - arm64
+ - axis:
+ type: label-expression
+ name: AVAILABLE_DIST
+ values:
+ - centos7
+ - xenial
+ - bionic
+ - axis:
+ type: dynamic
+ name: DIST
+ values:
+ - DISTROS
+ - axis:
+ type: dynamic
+ name: ARCH
+ values:
+ - ARCHS
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-medic
+ skip-tag: true
+ branches:
+ - $BRANCH
+ wipe-workspace: false
+
+ builders:
+ - shell: |
+ echo "Cleaning up top-level workarea (shared among workspaces)"
+ sudo rm -rf dist
+ sudo rm -rf venv
+ sudo rm -rf release
+ # rpm build scripts
+ - shell:
+ !include-raw:
+ - ../../../scripts/build_utils.sh
+ - ../../build/build_rpm
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - text:
+ credential-id: chacractl-key
+ variable: CHACRACTL_KEY
--- /dev/null
+#!/bin/bash
+
+set -ex
+
+# Sanity-check:
+[ -z "$GIT_BRANCH" ] && echo Missing GIT_BRANCH variable && exit 1
+[ -z "$JOB_NAME" ] && echo Missing JOB_NAME variable && exit 1
+
+# Strip "-rpm" off the job name to get our package's name
+PACKAGE=${JOB_NAME%-rpm}
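+# e.g. (illustrative) JOB_NAME=ceph-medic-rpm yields PACKAGE=ceph-medic, since
+# the ${JOB_NAME%-rpm} expansion removes a trailing '-rpm' suffix and leaves
+# the name untouched when the suffix is absent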
+
+sudo yum -y install epel-release
+sudo yum -y install fedpkg mock
+
+# Attempt the build. If it fails, print the mock logs to STDOUT.
+make rpm || ( tail -n +1 {root,build}.log && exit 1 )
+
+# Chacra time
+
+pkgs=( "chacractl>=0.0.21" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+# ask shaman which chacra instance to use
+chacra_url=$(curl -u $SHAMAN_API_USER:$SHAMAN_API_KEY https://shaman.ceph.com/api/nodes/next/)
+# create the .chacractl config file using global variables
+make_chacractl_config $chacra_url
+
+BRANCH=$(branch_slash_filter $GIT_BRANCH)
+
+## Upload the created RPMs to chacra
+chacra_endpoint="${PACKAGE}/${BRANCH}/${GIT_COMMIT}/centos/7"
+
+[ "$FORCE" = true ] && chacra_flags="--force" || chacra_flags=""
+
+# push binaries to chacra
+ls *.rpm | $VENV/chacractl binary ${chacra_flags} create ${chacra_endpoint}/noarch/
+
+# start repo creation
+$VENV/chacractl repo update ${chacra_endpoint}
+
+echo Check the status of the repo at: https://shaman.ceph.com/api/repos/${chacra_endpoint}
--- /dev/null
+- job:
+ name: ceph-medic-rpm
+ node: 'centos7 && x86_64 && small && !sepia'
+ project-type: freestyle
+ defaults: global
+ disabled: false
+ display-name: 'ceph-medic: RPMs'
+ description: 'Build RPMs for every ceph-medic Git branch'
+ concurrent: true
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ retry-count: 3
+ properties:
+ - build-discarder:
+ days-to-keep: 1
+ num-to-keep: 10
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ - github:
+ url: https://github.com/ceph/ceph-medic
+ discard-old-builds: true
+
+ triggers:
+ - github
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-medic
+ browser: auto
+ skip-tag: true
+ timeout: 20
+ wipe-workspace: true
+
+ builders:
+ - shell:
+ !include-raw:
+ - ../../../scripts/build_utils.sh
+ - ../../build/build
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - text:
+ credential-id: shaman-api-key
+ variable: SHAMAN_API_KEY
--- /dev/null
+# the following two methods exist in scripts/build_utils.sh
+pkgs=( "tox" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+WORKDIR=$(mktemp -td tox.XXXXXXXXXX)
+
+cd $WORKSPACE/tests/functional
+
+delete_libvirt_vms
+clear_libvirt_networks
+restart_libvirt_services
+
+CEPH_MEDIC_DEV_BRANCH=$CEPH_MEDIC_BRANCH CEPH_ANSIBLE_BRANCH=$CEPH_ANSIBLE_BRANCH $VENV/tox -rv -e $SCENARIO --workdir=$WORKDIR -- --provider=libvirt
--- /dev/null
+- project:
+ name: ceph-medic-tests
+ # disabled because it is broken with current ceph-ansible main and there
+ # is no active development going on
+ disabled: true
+ scenario:
+ - ansible2.3-nightly_centos7
+ jobs:
+ - 'ceph-medic-tests-{scenario}'
+
+- job-template:
+ name: 'ceph-medic-tests-{scenario}'
+ node: vagrant && libvirt
+ project-type: freestyle
+ defaults: global
+ disabled: true
+ display-name: 'ceph-medic: Tests [{scenario}]'
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ retry-count: 3
+ properties:
+ - github:
+ url: https://github.com/ceph/ceph-medic
+ - build-discarder:
+ days-to-keep: -1
+ num-to-keep: -1
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+
+ parameters:
+ - string:
+ name: CEPH_MEDIC_BRANCH
+ description: "The ceph-medic branch to test"
+ default: main
+
+ - string:
+ name: CEPH_ANSIBLE_BRANCH
+ description: "The ceph-ansible branch to test"
+ default: main
+
+ triggers:
+ - timed: '@daily'
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-medic.git
+ branches:
+ - $CEPH_MEDIC_BRANCH
+ browser: auto
+ timeout: 20
+
+ builders:
+ - inject:
+ properties-content: |
+ SCENARIO={scenario}
+ - shell:
+ !include-raw-escape:
+ - ../../../scripts/build_utils.sh
+ - ../../build/build
+
+ publishers:
+ - email:
+ recipients: aschoen@redhat.com adeza@redhat.com
--- /dev/null
+- job:
+ name: ceph-pr-arm-trigger
+ node: built-in
+ # disabled for now because this is not passing the right BRANCH to
+ # `ceph-dev` which causes failures there
+ disabled: true
+ project-type: freestyle
+ defaults: global
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ properties:
+ - build-discarder:
+ days-to-keep: 15
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ - github:
+ url: https://github.com/ceph/ceph
+ discard-old-builds: true
+
+ triggers:
+ - github-pull-request:
+ allow-whitelist-orgs-as-admins: true
+ org-list:
+ - ceph
+ trigger-phrase: 'jenkins test arm'
+ only-trigger-phrase: false
+ github-hooks: true
+ permit-all: true
+ auto-close-on-fail: false
+ status-context: "arm build"
+ started-status: "building on arm"
+ success-status: "successfully built on arm"
+ failure-status: "could not build on arm"
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph
+ browser: auto
+ skip-tag: true
+ shallow-clone: true
+ timeout: 20
+ wipe-workspace: true
+
+ builders:
+ - trigger-builds:
+ # 'ceph-dev' uses ceph.git, which is where this PR lives
+ - project: 'ceph-dev'
+ predefined-parameters: |
+ # XXX unsure if $GIT_BRANCH will translate correctly to the actual
+ # source of the PR
+ BRANCH=${GIT_BRANCH}
+ FORCE=True
+ DISTROS=bionic xenial centos7
+ ARCHS="arm64"
+ # Do not post to chacra
+ THROWAWAY=True
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
--- /dev/null
+#!/bin/bash
+
+set -ex
+
+# This job is meant to be triggered from a GitHub Pull Request; only when the
+# job is executed that way do a few "special" variables become available. This
+# build script tries to use those first, and falls back to a placeholder value
+# so that manually triggered builds can still work.
+PR_ID=$ghprbPullId
+
+# fallback to just using 'manual' if that ID is not available, for manually
+# triggered builds
+if [ -z "$ghprbPullId" ]; then
+ PR_ID="manual"
+fi
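+# (note: the same fallback could be written with a single default expansion,
+#   PR_ID=${ghprbPullId:-manual}
+# shown here only for illustration; the explicit check above is what runs)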
+
+./admin/build-doc
+
+# publish docs to http://docs.ceph.com/ceph-prs/$PR_ID/
+mkdir -p "/var/ceph-prs/$PR_ID"
+rsync -auv --delete build-doc/output/html/* "/var/ceph-prs/$PR_ID/"
+
+set +e
+set +x
+
+# Cleanup docs rendered 90+ days ago
+find /var/ceph-prs/ -mindepth 1 -maxdepth 1 -mtime +90 -exec rm -rvf {} \;
+
+echo
+echo "Docs available to preview at:"
+echo
+echo " http://docs.ceph.com/ceph-prs/$PR_ID/"
+echo
--- /dev/null
+- job:
+ name: ceph-pr-render-docs
+ disabled: true
+ display-name: 'ceph: Pull Requests Render Docs'
+ node: docs
+ project-type: freestyle
+ defaults: global
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ properties:
+ - github:
+ url: https://github.com/ceph/ceph
+ - build-discarder:
+ days-to-keep: 14
+ discard-old-builds: true
+
+ triggers:
+ - github-pull-request:
+ allow-whitelist-orgs-as-admins: true
+ org-list:
+ - ceph
+ cancel-builds-on-update: true
+ # this job is only triggered by explicitly asking for it
+ only-trigger-phrase: true
+ trigger-phrase: 'jenkins render docs.*'
+ github-hooks: true
+ permit-all: true
+ auto-close-on-fail: false
+ status-context: "Docs: render build"
+ started-status: "Docs: building to render"
+ success-status: "OK - docs rendered"
+ failure-status: "Docs: render failed with errors"
+ success-comment: "Doc render available at https://ceph--${ghprbPullId}.org.readthedocs.build/en/${ghprbPullId}/"
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph
+ browser: auto
+ branches:
+ - ${sha1}
+ refspec: +refs/pull/${ghprbPullId}/*:refs/remotes/origin/pr/${ghprbPullId}/*
+ skip-tag: true
+ timeout: 20
+ wipe-workspace: true
+
+ builders:
+ - shell:
+ !include-raw: ../../build/build
--- /dev/null
+#!/bin/bash
+set -ex
+
+# Only do actual work when we are a DEB distro
+if test -f /etc/redhat-release ; then
+ exit 0
+fi
+
+cd $WORKSPACE
+
+# append '-rc' to the ref if we are doing a release-candidate build
+chacra_ref="$BRANCH"
+[ "$RC" = true ] && chacra_ref="$BRANCH-rc"
+[ "$TEST" = true ] && chacra_ref="test"
+
+ARCH=$(dpkg-architecture -qDEB_BUILD_ARCH)
+DISTRO=""
+case $DIST in
+ jessie|wheezy)
+ DISTRO="debian"
+ ;;
+ *)
+ DISTRO="ubuntu"
+ ;;
+esac
+
+debian_version=${VERSION}-1
+
+BPVER=$(gen_debian_version $debian_version $DIST)
+
+chacra_endpoint="diamond/${BRANCH}/${SHA1}/${DISTRO}/${DIST}"
+chacra_check_url="${chacra_endpoint}/diamond_${BPVER}_${ARCH}.deb"
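+# example expansion with hypothetical values (BRANCH=main, SHA1=abc123,
+# DIST=xenial, BPVER=4.0.0-1~xenial, ARCH=amd64) the check URL becomes:
+#   diamond/main/abc123/ubuntu/xenial/diamond_4.0.0-1~xenial_amd64.deb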
+
+if [ "$THROWAWAY" = false ] ; then
+ # this exists in scripts/build_utils.sh
+ check_binary_existence $VENV $chacra_check_url
+fi
+
+HOST=$(hostname --short)
+echo "Building on $(hostname)"
+echo " DIST=${DIST}"
+echo " ARCH=${ARCH}"
+echo " WS=$WORKSPACE"
+echo " PWD=$(pwd)"
+echo "*****"
+env
+echo "*****"
+
+# Use pbuilder
+echo "Building debs"
+
+pbuilddir="/srv/debian-base"
+
+sudo pbuilder --clean
+
+mkdir -p dist/deb
+
+echo "Building debs for $DIST"
+sudo pbuilder build \
+ --distribution $DIST \
+ --basetgz $pbuilddir/$DIST.tgz \
+ --buildresult dist/deb/ \
+ --debbuildopts "-j$(grep -c processor /proc/cpuinfo)" \
+ dist/diamond_$VERSION.dsc
+
+# Make sure we execute at the top level directory
+cd "$WORKSPACE"
+
+[ "$FORCE" = true ] && chacra_flags="--force" || chacra_flags=""
+
+if [ "$THROWAWAY" = false ] ; then
+ # push binaries to chacra
+ find dist/deb/ | egrep "\.(changes|deb|dsc|gz)$" | egrep -v "(Packages|Sources|Contents)" | $VENV/chacractl binary ${chacra_flags} create ${chacra_endpoint}/${ARCH}/
+
+ # start repo creation
+ $VENV/chacractl repo update ${chacra_endpoint}
+
+ echo Check the status of the repo at: https://shaman.ceph.com/api/repos/${chacra_endpoint}
+fi
--- /dev/null
+#!/bin/bash
+set -ex
+
+# Only do actual work when we are an RPM distro
+if [[ ! -f /etc/redhat-release && ! -f /usr/bin/zypper ]] ; then
+ exit 0
+fi
+
+cd $WORKSPACE
+
+get_rpm_dist
+dist=$DIST
+[ -z "$dist" ] && echo no dist && exit 1
+echo dist $dist
+
+chacra_endpoint="diamond/${BRANCH}/${SHA1}/${DISTRO}/${RELEASE}"
+chacra_check_url="${chacra_endpoint}/${ARCH}/diamond-${VERSION}-0.${DIST}.${ARCH}.rpm"
+
+if [ "$THROWAWAY" = false ] ; then
+ # this exists in scripts/build_utils.sh
+ check_binary_existence $VENV $chacra_check_url
+fi
+
+HOST=$(hostname --short)
+echo "Building on $(hostname)"
+echo " DIST=${DIST}"
+echo " ARCH=${ARCH}"
+echo " WS=$WORKSPACE"
+echo " PWD=$(pwd)"
+echo "*****"
+env
+echo "*****"
+
+# Install the dependencies
+sudo yum-builddep -y dist/diamond.spec
+
+# Create the source rpm
+echo "Building SRPM"
+rpmbuild \
+ --define "_sourcedir ./dist" \
+ --define "_specdir ." \
+ --define "_builddir ." \
+ --define "_srcrpmdir ." \
+ --define "_rpmdir ." \
+ --define "dist .any" \
+ --define "fedora 21" \
+ --define "rhel 7" \
+ --nodeps -bs dist/diamond.spec
+SRPM=$(readlink -f *.src.rpm)
+
+# Build the binaries
+echo "Building RPMs"
+sudo mock -r epel-${RELEASE}-${ARCH} --resultdir=./dist/rpm/"%(dist)s"/"%(target_arch)s"/ ${SRPM}
+
+# Make sure we execute at the top level directory
+cd "$WORKSPACE"
+
+[ "$FORCE" = true ] && chacra_flags="--force" || chacra_flags=""
+
+if [ "$THROWAWAY" = false ] ; then
+ # push binaries to chacra
+ find dist/rpm/$DIST/ | egrep '\.rpm$' | $VENV/chacractl binary ${chacra_flags} create ${chacra_endpoint}/$ARCH/
+
+ # start repo creation
+ $VENV/chacractl repo update ${chacra_endpoint}
+
+ echo Check the status of the repo at: https://shaman.ceph.com/api/repos/${chacra_endpoint}
+fi
--- /dev/null
+#!/bin/bash
+#
+# Ceph distributed storage system
+#
+# Copyright (C) 2016 Red Hat <contact@redhat.com>
+#
+# Author: Boris Ranto <branto@redhat.com>
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+set -ex
+HOST=$(hostname --short)
+echo "Building on $(hostname)"
+echo " DIST=${DIST}"
+echo " BPTAG=${BPTAG}"
+echo " KEYID=${KEYID}"
+echo " WS=$WORKSPACE"
+echo " PWD=$(pwd)"
+echo " BUILD SOURCE=$COPYARTIFACT_BUILD_NUMBER_CEPH_SETUP"
+echo "*****"
+env
+echo "*****"
+
+if test $(id -u) != 0 ; then
+ SUDO=sudo
+fi
+export LC_ALL=C # the following is vulnerable to i18n
+
+if test -f /etc/redhat-release ; then
+ $SUDO yum install -y redhat-lsb-core
+fi
+
+if which apt-get > /dev/null ; then
+ $SUDO apt-get install -y lsb-release
+fi
+
+case $(lsb_release -si) in
+CentOS|Fedora|SUSE*|RedHatEnterpriseServer)
+ case $(lsb_release -si) in
+ SUSE*)
+ $SUDO zypper -n install yum-utils
+ ;;
+ *)
+ $SUDO yum install -y yum-utils mock
+ ;;
+ esac
+ ;;
+*)
+ echo "$(lsb_release -si) is unknown, dependencies will have to be installed manually."
+ ;;
+esac
+
+pkgs=( "chacractl>=0.0.21" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+# ask shaman which chacra instance to use
+chacra_url=$(curl -f -u $SHAMAN_API_USER:$SHAMAN_API_KEY https://shaman.ceph.com/api/nodes/next/)
+# create the .chacractl config file using global variables
+make_chacractl_config $chacra_url
--- /dev/null
+#!/bin/sh -x
+# This file sets up the tgz base images needed for pbuilder on a given host. It
+# has some hard-coded values like `/srv/debian-base` because the image gets
+# rebuilt every time this file is executed - it is completely ephemeral. Any
+# Debian host that uses pbuilder needs this. Since it is not idempotent it
+# makes everything a bit slower. ## FIXME ##
+
+set -e
+
+# Only run when we are a Debian or Debian-based distro
+if test -f /etc/redhat-release ; then
+ exit 0
+fi
+
+setup_pbuilder
--- /dev/null
+#!/bin/bash
+set -ex
+
+# Only do actual work when we are a DEB distro
+if test -f /etc/redhat-release ; then
+ exit 0
+fi
--- /dev/null
+#!/bin/bash
+set -ex
+
+# only do work if we are a RPM distro
+if [[ ! -f /etc/redhat-release && ! -f /usr/bin/zypper ]] ; then
+ exit 0
+fi
--- /dev/null
+- job:
+ name: diamond-build
+ project-type: matrix
+ defaults: global
+ display-name: 'diamond-build'
+ block-downstream: false
+ block-upstream: false
+ concurrent: true
+ properties:
+ - github:
+ url: https://github.com/ceph/Diamond
+ execution-strategy:
+ combination-filter: DIST==AVAILABLE_DIST && ARCH==AVAILABLE_ARCH && (ARCH=="x86_64" || (ARCH == "arm64" && (DIST == "xenial" || DIST == "centos7")))
+ axes:
+ - axis:
+ type: label-expression
+ name: MACHINE_SIZE
+ values:
+ - small
+ - axis:
+ type: label-expression
+ name: AVAILABLE_ARCH
+ values:
+ - x86_64
+ - arm64
+ - axis:
+ type: label-expression
+ name: AVAILABLE_DIST
+ values:
+ - centos6
+ - centos7
+ - trusty
+ - xenial
+ - jessie
+ - precise
+ - wheezy
+ - axis:
+ type: dynamic
+ name: DIST
+ values:
+ - DISTROS
+ - axis:
+ type: dynamic
+ name: ARCH
+ values:
+ - ARCHS
+
+ builders:
+ - shell: |
+ echo "Cleaning up top-level workarea (shared among workspaces)"
+ rm -rf dist
+ rm -rf venv
+ rm -rf release
+ - copyartifact:
+ project: diamond-setup
+ filter: 'dist/**'
+ which-build: last-successful
+ - inject:
+ properties-file: ${WORKSPACE}/dist/sha1
+ - inject:
+ properties-file: ${WORKSPACE}/dist/branch
+ - inject:
+ properties-file: ${WORKSPACE}/dist/version
+ # debian build scripts
+ - shell:
+ !include-raw:
+ - ../../build/validate_deb
+ - ../../../scripts/build_utils.sh
+ - ../../build/setup
+ - ../../build/setup_pbuilder
+ - ../../build/build_deb
+ # rpm build scripts
+ - shell:
+ !include-raw:
+ - ../../build/validate_rpm
+ - ../../../scripts/build_utils.sh
+ - ../../build/setup
+ - ../../build/build_rpm
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - text:
+ credential-id: shaman-api-key
+ variable: SHAMAN_API_KEY
--- /dev/null
+#!/bin/bash -ex
+
+## Get the basic setup/info
+HOST=$(hostname --short)
+echo "Building on ${HOST}"
+echo " DIST=${DIST}"
+echo " BPTAG=${BPTAG}"
+echo " WS=$WORKSPACE"
+echo " PWD=$(pwd)"
+echo " BRANCH=$BRANCH"
+echo " SHA1=$GIT_COMMIT"
+
+if [ -z "$BRANCH" ] ; then
+ echo "No git branch was supplied"
+ exit 1
+fi
+
+echo "Building version $(git describe) Branch $BRANCH"
+
+## Make sure the repo is clean
+# Remove all untracked files
+echo "Cleaning up the repo"
+git clean -fxd
+
+# Make sure the dist directory is clean
+rm -rf dist
+mkdir -p dist
+
+## Install any setup-time deps
+# We need this for mk-build-deps
+sudo apt-get install -y equivs
+# Run the install-deps.sh upstream script if it exists
+if [ -x install-deps.sh ]; then
+ echo "Ensuring dependencies are installed"
+ ./install-deps.sh
+fi
+
+## Get the version
+VERSION=$(./version.sh)
+
+## Build the source tarball
+echo "Building source distribution"
+python setup.py sdist
+
+## Prepare the spec file for build
+sed -e "s/@VERSION@/${VERSION}/g" < diamond.spec.in > dist/diamond.spec
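+# e.g. (hypothetical version number) a line 'Version: @VERSION@' in
+# diamond.spec.in is written out as 'Version: 4.0.0' in dist/diamond.spec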
+
+## Prepare the debian files
+# Bump the changelog
+dch -v "$VERSION" "New release ($VERSION)"
+
+# Install build-time dependencies
+yes | sudo mk-build-deps --install debian/control
+
+# Create .dsc and source tarball
+sudo dpkg-buildpackage -S -us -uc
+
+cp ../diamond_$VERSION* dist/
+
+## Save these so that we can later inject them into the build script
+cat > dist/sha1 << EOF
+SHA1=${GIT_COMMIT}
+EOF
+
+cat > dist/branch << EOF
+BRANCH=${BRANCH}
+EOF
+
+cat > dist/version << EOF
+VERSION=${VERSION}
+EOF
--- /dev/null
+- job:
+ name: diamond-setup
+ description: "This job step checks out the branch and builds the tarballs, diffs, and dsc that are passed to the diamond-build step.\r\n\r\nNotes:\r\nJob needs to run on a relatively recent debian system. The Restrict where run feature is used to specify an appropriate label.\r\nThe clear workspace before checkout box for the git plugin is used."
+ # we do not need to pin this to trusty anymore for the new jenkins instance
+ # FIXME: unpin when this gets ported over
+ node: small && trusty
+ display-name: 'diamond-setup'
+ block-downstream: false
+ block-upstream: false
+ concurrent: true
+ properties:
+ - build-discarder:
+ days-to-keep: -1
+ num-to-keep: 25
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ - github:
+ url: https://github.com/ceph/Diamond
+
+ parameters:
+ - string:
+ name: BRANCH
+ description: "The git branch (or tag) to build"
+
+ scm:
+ - git:
+ url: git@github.com:ceph/Diamond.git
+ # Use the SSH key attached to the ceph-jenkins GitHub account.
+ credentials-id: 'jenkins-build'
+ branches:
+ - $BRANCH
+ skip-tag: true
+ wipe-workspace: true
+
+ builders:
+ - shell:
+ !include-raw: ../../build/build
+
+ publishers:
+ - archive:
+ artifacts: 'dist/**'
+ allow-empty: false
+ latest-only: false
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
--- /dev/null
+- job:
+ name: diamond
+ description: 'This is the main diamond build task which builds for testing purposes.'
+ project-type: multijob
+ defaults: global
+ concurrent: true
+ display-name: 'diamond'
+ block-downstream: false
+ block-upstream: false
+ properties:
+ - build-discarder:
+ days-to-keep: -1
+ num-to-keep: 25
+ artifact-days-to-keep: 25
+ artifact-num-to-keep: 25
+ - github:
+ url: https://github.com/ceph/Diamond
+
+ parameters:
+ - string:
+ name: BRANCH
+ description: "The git branch (or tag) to build"
+ default: main
+
+ - string:
+ name: DISTROS
+ description: "A list of distros to build for. Available options are: xenial, centos7, centos6, trusty, precise, wheezy, and jessie"
+ default: "centos7 trusty"
+
+ - string:
+ name: ARCHS
+ description: "A list of architectures to build for. Available options are: x86_64, and arm64"
+ default: "x86_64"
+
+ - bool:
+ name: THROWAWAY
+ description: "
+Default: False. When True it will not POST binaries to chacra. Artifacts will not be around for long. Useful to test builds."
+ default: false
+
+ - bool:
+ name: FORCE
+ description: "
+If this is unchecked, then nothing is built or pushed if the binaries already exist in chacra. This is the default.
+
+If this is checked, then the binaries will be built and pushed to chacra even if they already exist in chacra."
+
+ - string:
+ name: DIAMOND_BUILD_VIRTUALENV
+ description: "Base parent path for virtualenv locations, set to avoid issues with extremely long paths that are incompatible with tools like pip. Defaults to '/tmp/' (note the trailing slash, which is required)."
+ default: "/tmp/"
+
+ builders:
+ - multijob:
+ name: 'diamond setup phase'
+ condition: SUCCESSFUL
+ projects:
+ - name: diamond-setup
+ current-parameters: true
+ exposed-scm: false
+ - multijob:
+ name: 'diamond build phase'
+ condition: SUCCESSFUL
+ projects:
+ - name: diamond-build
+ current-parameters: true
+ exposed-scm: false
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
--- /dev/null
+#!/bin/bash
+
+set -ex
+
+# the following two methods exist in scripts/build_utils.sh
+pkgs=( "tox" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+# create the docs build with tox
+$VENV/tox -rv -e docs
+
+# publish docs to http://docs.ceph.com/docs/teuthology
+rsync -auv --delete .tox/docs/tmp/html/* /var/teuthology/docs/
--- /dev/null
+- job:
+ name: teuthology-docs
+ disabled: true
+ node: docs
+ project-type: freestyle
+ defaults: global
+ display-name: 'Teuthology: Docs Build'
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ retry-count: 3
+ properties:
+ - build-discarder:
+ days-to-keep: -1
+ num-to-keep: -1
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ - github:
+ url: https://github.com/ceph/teuthology
+
+ triggers:
+ - github
+
+ scm:
+ - git:
+ url: https://github.com/ceph/teuthology.git
+ branches:
+ - main
+ browser: auto
+ timeout: 20
+
+ builders:
+ - shell:
+ !include-raw:
+ - ../../../scripts/build_utils.sh
+ - ../../setup/setup
+ - ../../build/build
--- /dev/null
+#!/bin/bash
+
+set -ex
+
+APT_DEPS="libmysqlclient-dev libevent-dev libffi-dev libssl-dev pkg-config libvirt-dev"
+# We don't have tty-less sudo on docs.ceph.com; these deps must be installed
+# manually.
+#sudo apt install $APT_DEPS
--- /dev/null
+#!/bin/bash
+
+set -ex
+
+# the following two methods exist in scripts/build_utils.sh
+pkgs=( "tox" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+# strip everything up to and including the first slash (e.g. the 'origin/' remote prefix)
+BRANCH=${GIT_BRANCH#*/}
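+# e.g. (illustrative) GIT_BRANCH=origin/stable-6.0 yields BRANCH=stable-6.0,
+# since ${GIT_BRANCH#*/} removes the shortest leading match of '*/'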
+
+# create the docs build with tox
+cd $WORKSPACE/docs/
+$VENV/tox -rv
+
+# publish docs to http://docs.ceph.com/docs/ceph-ansible/$BRANCH/ and create
+# a `$BRANCH` dir, because the project has stable branches whose docs
+# may differ from other versions (similar, but not identical, to what
+# the Ceph project does)
+mkdir -p "/var/ceph-ansible/docs/$BRANCH"
+rsync -auv --delete .tox/docs/tmp/html/* "/var/ceph-ansible/docs/$BRANCH/"
--- /dev/null
+- job:
+ name: ceph-ansible-docs
+ node: docs
+ project-type: freestyle
+ defaults: global
+ display-name: 'ceph-ansible: docs build'
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ retry-count: 3
+ properties:
+ - build-discarder:
+ days-to-keep: -1
+ num-to-keep: 10
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ - github:
+ url: https://github.com/ceph/ceph-ansible
+
+ triggers:
+ - github
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-ansible
+ branches:
+ - main
+ - stable-2.1
+ - stable-2.2
+ - stable-3.0
+ - stable-3.1
+ - stable-3.2
+ - stable-4.0
+ - stable-5.0
+ - stable-6.0
+ browser: auto
+ skip-tag: true
+ timeout: 20
+
+ builders:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/build
--- /dev/null
+#!/bin/bash
+
+# Propagate the change to the necessary Ansible Galaxy repos,
+# e.g. https://github.com/ceph/ansible-ceph-common
+bash "$WORKSPACE"/ceph-ansible/contrib/push-roles-to-ansible-galaxy.sh
--- /dev/null
+2.jenkins.ceph.com
--- /dev/null
+- job:
+ name: ceph-ansible-galaxy
+ node: small && trusty
+ project-type: freestyle
+ defaults: global
+ display-name: 'ceph-ansible: Update galaxy roles'
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ retry-count: 3
+ properties:
+ - build-discarder:
+ days-to-keep: -1
+ num-to-keep: -1
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ - github:
+ url: https://github.com/ceph/ceph-ansible
+
+ triggers:
+ - github
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-ansible.git
+ branches:
+ - main
+ browser: auto
+ basedir: "ceph-ansible"
+ timeout: 20
+
+ builders:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/build
+
+ wrappers:
+ - ssh-agent-credentials:
+ # "jenkins-build" SSH key, needed for access to ceph-ansible.git
+ users:
+ - 'jenkins-build'
--- /dev/null
+#!/bin/bash
+
+# these helper functions exist in scripts/build_utils.sh
+pkgs=( "tox==4.2.8" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+set_centos_python3_version "python3.9"
+install_python_packages $TEMPVENV "pkgs[@]" "pip==22.0.4"
+
+# XXX this might not be needed
+source $VENV/activate
+
+WORKDIR=$(mktemp -td tox.XXXXXXXXXX)
+
+prune_stale_vagrant_running_vms
+delete_libvirt_vms
+clear_libvirt_networks
+restart_libvirt_services
+update_vagrant_boxes
+
+# This was initially in teardown, but the Jenkins slave process sometimes
+# crashes before teardown runs, leaving leftovers from the previous build.
+# Ensure no fetch directory from a previous build is present before the test
+# is launched.
+pushd $WORKSPACE/tests
+scenarios=$(find . -name Vagrantfile | xargs -r dirname)
+for scenario in $scenarios; do
+ pushd $scenario
+ rm -rf fetch/
+ popd
+done
+popd
+# For the same reason, clean the fact cache
+rm -rf $HOME/ansible/facts/*
+
+# Skip scenario combinations that do not exist on the target branch.
+[[ "$ghprbTargetBranch" != stable-4.0 && "$SCENARIO" == podman ]] ||
+[[ "$ghprbTargetBranch" =~ stable-4.0|stable-3 && "$SCENARIO" =~ cephadm|cephadm_adopt ]] ||
+[[ "$ghprbTargetBranch" != stable-3.2 && "$SCENARIO" == shrink_osd_legacy ]] ||
+[[ "$ghprbTargetBranch" =~ stable-3 && "$SCENARIO" =~ filestore_to_bluestore|subset_update ]] ||
+[[ "$ghprbTargetBranch" == stable-5.0 && "$DEPLOYMENT" == "non_container" && "$SCENARIO" == update ]] ||
+start_tox $TEMPVENV
+
+# The update scenario on stable-5.0 must be re-enabled once 15.2.8 is out
--- /dev/null
+#!/bin/bash
+
+cd $WORKSPACE/tests
+
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+
+echo "========= VAGRANT DEBUGGING ========="
+sudo virsh list --all
+for net in $(sudo virsh net-list --name); do sudo virsh net-dhcp-leases ${net}; done
+sudo journalctl -u libvirtd --pager-end --no-pager
+echo "======= END VAGRANT DEBUGGING ======="
+
+# teardown_vagrant_tests exists in scripts/build_utils.sh
+COLLECT_LOGS_PLAYBOOK_PATH="$WORKSPACE/tests/functional/collect-logs.yml"
+teardown_vagrant_tests $VENV $COLLECT_LOGS_PLAYBOOK_PATH
+
+# clean fact cache
+rm -rf $HOME/ansible/facts/*
--- /dev/null
+2.jenkins.ceph.com
--- /dev/null
+- project:
+ name: ceph-ansible-prs-braggi-adami
+ builder_labels: 'vagrant && libvirt && (braggi||adami)'
+ distribution:
+ - centos
+ deployment:
+ - container
+ - non_container
+ scenario:
+ - all_daemons
+ - all_in_one
+ - update
+ - subset_update
+ - purge
+ - switch_to_containers
+ exclude:
+ - deployment: container
+ scenario: switch_to_containers
+ - deployment: non_container
+ scenario: podman
+ jobs:
+ - 'ceph-ansible-prs-auto'
+
+- project:
+ name: ceph-ansible-prs
+ builder_labels: 'vagrant && libvirt && (smithi || braggi)'
+ distribution:
+ - centos
+ deployment:
+ - container
+ - non_container
+ scenario:
+ - lvm_osds
+ - collocation
+ - lvm_batch
+ - external_clients
+ - rbdmirror
+ jobs:
+ - 'ceph-ansible-prs-auto'
+
+- project:
+ name: ceph-ansible-prs-docker2podman
+ builder_labels: 'vagrant && libvirt && (smithi || braggi)'
+ distribution:
+ - centos
+ deployment:
+ - container
+ scenario:
+ - docker_to_podman
+ jobs:
+ - 'ceph-ansible-prs-common-trigger'
+
+- project:
+ name: ceph-ansible-prs-cephadm
+ builder_labels: 'vagrant && libvirt && (adami || braggi)'
+ distribution:
+ - centos
+ deployment:
+ - container
+ scenario:
+ - cephadm
+ - cephadm_adopt
+ jobs:
+ - 'ceph-ansible-prs-common-trigger'
+
+- project:
+ name: ceph-ansible-prs-purge-dashboard
+ builder_labels: 'vagrant && libvirt && (adami || braggi)'
+ distribution:
+ - centos
+ deployment:
+ - container
+ - non_container
+ scenario:
+ - purge_dashboard
+ jobs:
+ - 'ceph-ansible-prs-common-trigger'
+
+- project:
+ name: ceph-ansible-prs-common-trigger
+ builder_labels: 'vagrant && libvirt && (smithi || braggi)'
+ distribution:
+ - centos
+ deployment:
+ - container
+ - non_container
+ scenario:
+ - add_mdss
+ - add_mgrs
+ - add_mons
+ - add_osds
+ - add_rbdmirrors
+ - add_rgws
+ - rgw_multisite
+ - shrink_mon
+ - shrink_mgr
+ - shrink_osd_multiple
+ - shrink_osd_single
+ - shrink_osd_legacy
+ - shrink_rgw
+ - shrink_mds
+ - shrink_rbdmirror
+ - lvm_auto_discovery
+ - filestore_to_bluestore
+ jobs:
+ - 'ceph-ansible-prs-common-trigger'
+
+- job-template:
+ name: 'ceph-ansible-prs-{distribution}-{deployment}-{scenario}'
+ id: 'ceph-ansible-prs-auto'
+ node: '{builder_labels}'
+ concurrent: true
+ defaults: global
+ display-name: 'ceph-ansible: Pull Requests [{distribution}-{deployment}-{scenario}]'
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ retry-count: 3
+ properties:
+ - github:
+ url: https://github.com/ceph/ceph-ansible
+ - build-discarder:
+ days-to-keep: 90
+ num-to-keep: -1
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+
+ parameters:
+ - string:
+ name: sha1
+ description: "A pull request ID, like 'origin/pr/72/head'"
+
+ triggers:
+ - github-pull-request:
+ cancel-builds-on-update: true
+ allow-whitelist-orgs-as-admins: true
+ org-list:
+ - ceph
+ skip-build-phrase: '^jenkins do not test.*|.*\[skip ci\].*'
+ trigger-phrase: '^jenkins test {distribution}-{deployment}-{scenario}|jenkins test all.*'
+ black-list-labels:
+ - draft
+ only-trigger-phrase: false
+ github-hooks: true
+ permit-all: true
+ auto-close-on-fail: false
+ status-context: "Testing: {distribution}-{deployment}-{scenario}"
+ started-status: "Running: {distribution}-{deployment}-{scenario}"
+ success-status: "OK - {distribution}-{deployment}-{scenario}"
+ failure-status: "FAIL - {distribution}-{deployment}-{scenario}"
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-ansible.git
+ branches:
+ - ${{sha1}}
+ refspec: +refs/pull/*:refs/remotes/origin/pr/*
+ browser: auto
+ timeout: 20
+ skip-tag: true
+ wipe-workspace: false
+
+ builders:
+ - inject:
+ properties-content: |
+ DISTRIBUTION={distribution}
+ DEPLOYMENT={deployment}
+ SCENARIO={scenario}
+ - conditional-step:
+ condition-kind: shell
+ condition-command: |
+ #!/bin/bash
+ # Returns 1 if only .rst and README files were modified
+ echo "Checking if only rst and READMEs were modified"
+ git show HEAD | grep -qo ^Merge:
+ if [ $? -eq 0 ]; then
+ git diff --name-only $(git show HEAD | grep ^Merge: | cut -d ':' -f2) | grep -v '\.rst\|README'
+ if [ $? -eq 1 ]; then
+ echo "Only docs were modified. Skipping the rest of the job."
+ exit 1
+ fi
+ fi
+ on-evaluation-failure: dont-run
+ steps:
+ - shell:
+ !include-raw-escape:
+ - ../../../scripts/build_utils.sh
+ - ../../build/build
+
+ publishers:
+ - postbuildscript:
+ builders:
+ - role: SLAVE
+ build-on:
+ - FAILURE
+ - ABORTED
+ build-steps:
+ - shell:
+ !include-raw-escape:
+ - ../../../scripts/build_utils.sh
+ - ../../build/teardown
+
+ - archive:
+ artifacts: 'logs/**'
+ allow-empty: true
+ latest-only: false
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - username-password-separated:
+ credential-id: ceph-ansible-upstream-ci
+ username: DOCKER_HUB_USERNAME
+ password: DOCKER_HUB_PASSWORD
+
+- job-template:
+ name: 'ceph-ansible-prs-{distribution}-{deployment}-{scenario}'
+ id: 'ceph-ansible-prs-common-trigger'
+ node: '{builder_labels}'
+ concurrent: true
+ defaults: global
+ display-name: 'ceph-ansible: Pull Requests [{distribution}-{deployment}-{scenario}]'
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ retry-count: 3
+ properties:
+ - github:
+ url: https://github.com/ceph/ceph-ansible
+ - build-discarder:
+ days-to-keep: 90
+ num-to-keep: -1
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+
+ parameters:
+ - string:
+ name: sha1
+ description: "A pull request ID, like 'origin/pr/72/head'"
+
+ triggers:
+ - github-pull-request:
+ cancel-builds-on-update: true
+ allow-whitelist-orgs-as-admins: true
+ org-list:
+ - ceph
+ skip-build-phrase: '^jenkins do not test.*|.*\[skip ci\].*'
+ trigger-phrase: '^jenkins test {distribution}-{deployment}-{scenario}|jenkins test all.*'
+ only-trigger-phrase: true
+ github-hooks: true
+ permit-all: true
+ auto-close-on-fail: false
+ status-context: "Testing: {distribution}-{deployment}-{scenario}"
+ started-status: "Running: {distribution}-{deployment}-{scenario}"
+ success-status: "OK - {distribution}-{deployment}-{scenario}"
+ failure-status: "FAIL - {distribution}-{deployment}-{scenario}"
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-ansible.git
+ branches:
+ - ${{sha1}}
+ refspec: +refs/pull/*:refs/remotes/origin/pr/*
+ browser: auto
+ timeout: 20
+ skip-tag: true
+ wipe-workspace: false
+
+ builders:
+ - inject:
+ properties-content: |
+ DISTRIBUTION={distribution}
+ DEPLOYMENT={deployment}
+ SCENARIO={scenario}
+ - conditional-step:
+ condition-kind: shell
+ condition-command: |
+ #!/bin/bash
+ # Returns 1 if only .rst and README files were modified
+ echo "Checking if only rst and READMEs were modified"
+ git show HEAD | grep -qo ^Merge:
+ if [ $? -eq 0 ]; then
+ git diff --name-only $(git show HEAD | grep ^Merge: | cut -d ':' -f2) | grep -v '\.rst\|README'
+ if [ $? -eq 1 ]; then
+ echo "Only docs were modified. Skipping the rest of the job."
+ exit 1
+ fi
+ fi
+ on-evaluation-failure: dont-run
+ steps:
+ - shell:
+ !include-raw-escape:
+ - ../../../scripts/build_utils.sh
+ - ../../build/build
+
+ publishers:
+ - postbuildscript:
+ builders:
+ - role: SLAVE
+ build-on:
+ - FAILURE
+ - ABORTED
+ build-steps:
+ - shell:
+ !include-raw-escape:
+ - ../../../scripts/build_utils.sh
+ - ../../build/teardown
+ - naginator:
+ max-failed-builds: 3
+ regular-expression: "not yet ready for SSH"
+ - archive:
+ artifacts: 'logs/**'
+ allow-empty: true
+ latest-only: false
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - username-password-separated:
+ credential-id: ceph-ansible-upstream-ci
+ username: DOCKER_HUB_USERNAME
+ password: DOCKER_HUB_PASSWORD
--- /dev/null
+#!/bin/bash
+
+set -ex
+
+# Sanity-check:
+[ -z "$GIT_BRANCH" ] && echo Missing GIT_BRANCH variable && exit 1
+[ -z "$JOB_NAME" ] && echo Missing JOB_NAME variable && exit 1
+
+# Strip "-rpm" off the job name to get our package's name
+PACKAGE=${JOB_NAME%-rpm}
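+# For illustration (hypothetical value): ${JOB_NAME%-rpm} removes a trailing
+# "-rpm", so JOB_NAME="ceph-ansible-rpm" yields PACKAGE="ceph-ansible".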
+
+sudo yum -y install epel-release
+sudo yum -y install fedpkg mock
+
+# Attempt the build. If it fails, print the mock logs to STDOUT.
+make rpm || ( tail -n +1 {root,build}.log && exit 1 )
+
+# Chacra time
+
+pkgs=( "chacractl>=0.0.21" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+# ask shaman which chacra instance to use
+chacra_url=`curl -u $SHAMAN_API_USER:$SHAMAN_API_KEY https://shaman.ceph.com/api/nodes/next/`
+# create the .chacractl config file using global variables
+make_chacractl_config $chacra_url
+
+BRANCH=`branch_slash_filter $GIT_BRANCH`
+
+## Upload the created RPMs to chacra
+chacra_endpoint="${PACKAGE}/${BRANCH}/${GIT_COMMIT}/centos/7"
+
+[ "$FORCE" = true ] && chacra_flags="--force" || chacra_flags=""
+
+# push binaries to chacra
+ls *.rpm | $VENV/chacractl binary ${chacra_flags} create ${chacra_endpoint}/noarch/
+
+# start repo creation
+$VENV/chacractl repo update ${chacra_endpoint}
+
+echo Check the status of the repo at: https://shaman.ceph.com/api/repos/${chacra_endpoint}
--- /dev/null
+2.jenkins.ceph.com
--- /dev/null
+- job:
+ name: ceph-ansible-rpm
+ node: 'centos8 && x86_64 && small && !sepia'
+ project-type: freestyle
+ defaults: global
+ disabled: false
+ display-name: 'ceph-ansible: RPMs'
+ description: 'Build RPMs for every ceph-ansible Git branch'
+ concurrent: true
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ retry-count: 3
+ properties:
+ - build-discarder:
+ days-to-keep: 1
+ num-to-keep: 10
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ - github:
+ url: https://github.com/ceph/ceph-ansible
+ discard-old-builds: true
+
+ triggers:
+ - github
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-ansible
+ browser: auto
+ skip-tag: true
+ timeout: 20
+ wipe-workspace: true
+
+ builders:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/build
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - text:
+ credential-id: shaman-api-key
+ variable: SHAMAN_API_KEY
+ - text:
+ credential-id: chacractl-key
+ variable: CHACRACTL_KEY
--- /dev/null
+#!/bin/bash
+
+# these helper functions exist in scripts/build_utils.sh
+pkgs=( "tox" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+set_centos_python3_version "python3.9"
+install_python_packages $TEMPVENV "pkgs[@]" "pip==22.0.4"
+
+# XXX this might not be needed
+source $VENV/activate
+
+WORKDIR=$(mktemp -td tox.XXXXXXXXXX)
+
+prune_stale_vagrant_running_vms
+delete_libvirt_vms
+clear_libvirt_networks
+restart_libvirt_services
+update_vagrant_boxes
+
+# For the same reason, clean the fact cache
+rm -rf $HOME/ansible/facts/*
+
+start_tox $TEMPVENV
--- /dev/null
+#!/bin/bash
+
+cd $WORKSPACE/tests
+
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+# teardown_vagrant_tests exists in scripts/build_utils.sh
+teardown_vagrant_tests $VENV
+
+# clean fact cache
+rm -rf $HOME/ansible/facts/*
--- /dev/null
+2.jenkins.ceph.com
--- /dev/null
+
+- job:
+ name: 'ceph-ansible-scenario'
+ node: vagrant&&libvirt
+ concurrent: true
+ defaults: global
+ display-name: 'ceph-ansible: individual scenario testing'
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ retry-count: 3
+ properties:
+ - build-discarder:
+ days-to-keep: 15
+ num-to-keep: 30
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ - github:
+ url: https://github.com/ceph/ceph-ansible
+
+ parameters:
+ - string:
+ name: SCENARIO
+ description: "A full scenario name for ceph-ansible testing, like jewel-ansible2.2-purge_cluster"
+ - string:
+ name: BRANCH
+ description: "The ceph-ansible branch to test against"
+ default: "main"
+ - string:
+ name: CEPH_DEV_BRANCH
+ description: "The ceph dev branch to test against if using a dev-* scenario"
+ default: "main"
+ - string:
+ name: CEPH_DEV_SHA1
+ description: "The ceph sha1 to test against if using a dev-* scenario"
+ default: "latest"
+ - string:
+ name: CEPH_DOCKER_REGISTRY
+ description: "The docker registry used for containerized scenarios"
+ default: "docker.io"
+ - string:
+ name: CEPH_DOCKER_IMAGE
+ description: "The docker image used for containerized scenarios"
+ default: "ceph/daemon"
+ - string:
+ name: CEPH_DOCKER_IMAGE_TAG
+ description: "The docker image tag used for containerized scenarios"
+ default: "latest"
+ - string:
+ name: RELEASE
+ description: "The ceph release version used"
+ default: "dev"
+ - string:
+ name: DEPLOYMENT
+ description: "Type of deployment: container or non_container"
+ default: "non_container"
+ - string:
+ name: DISTRIBUTION
+ description: "The distribution used (ubuntu or centos)"
+ default: "centos"
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-ansible.git
+ branches:
+ - $BRANCH
+ refspec: +refs/pull/*:refs/remotes/origin/pr/*
+ browser: auto
+ timeout: 20
+ skip-tag: true
+ wipe-workspace: true
+
+ builders:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/build
+
+ publishers:
+ - postbuildscript:
+ builders:
+ - role: SLAVE
+ build-on:
+ - FAILURE
+ - ABORTED
+ build-steps:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/teardown
+
+ - archive:
+ artifacts: 'logs/**'
+ allow-empty: true
+ latest-only: false
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - username-password-separated:
+ credential-id: ceph-ansible-upstream-ci
+ username: DOCKER_HUB_USERNAME
+ password: DOCKER_HUB_PASSWORD
--- /dev/null
+#!/bin/bash
+use_percentage=$(df -m ~ | grep -v Filesystem | awk '{ print $5 }' | cut -d '%' -f1)
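+# Sketch of the pipeline above with hypothetical df output: a data line such
+# as "/dev/sda1 100 92 8 92% /home" has "92%" in field 5, and cut strips the
+# trailing "%" to leave 92.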
+if [ $use_percentage -gt 90 ]; then
+ rm -rf ~/.ccache
+ for dir in $(ls ~/build/workspace/); do
+ # Use -z "${dir}" rather than ${dir+x} because we also want to treat an
+ # empty string as unset
+ if [ -z "${dir}" ] || [ -z "${JOB_NAME}" ]; then
+ echo "Either \$dir or \$JOB_NAME isn't set. Not cleaning up job directories."
+ else
+ if [ "$dir" != "$JOB_NAME" ]; then
+ rm -rf ~/build/workspace/$dir
+ fi
+ fi
+ done
+fi
--- /dev/null
+- project:
+ name: ceph-api-nightly
+ ceph_branch:
+ - main
+ - tentacle
+ - squid
+ - reef
+ test_suite:
+ - backend:
+ test_suite_script: run-backend-api-tests.sh
+ test_deps_script: install-backend-api-test-deps.sh
+ - e2e:
+ test_suite_script: run-frontend-e2e-tests.sh
+ test_deps_script: install-e2e-test-deps.sh
+ jobs:
+ - '{name}-{ceph_branch}-{test_suite}'
+
+- job-template:
+ name: '{name}-{ceph_branch}-{test_suite}'
+ display-name: '{name}-{ceph_branch}-{test_suite}'
+ project-type: freestyle
+ defaults: global
+ concurrent: true
+ node: huge && bionic && x86_64
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ retry-count: 3
+ properties:
+ - build-discarder:
+ days-to-keep: 15
+ num-to-keep: 300
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ - github:
+ url: https://github.com/ceph/ceph/
+ - rebuild:
+ auto-rebuild: true
+ - inject:
+ properties-content: |
+ TERM=xterm
+ ceph_build: "export FOR_MAKE_CHECK=1; timeout 2h ./src/script/run-make.sh --cmake-args '-DWITH_TESTS=OFF -DENABLE_GIT_VERSION=OFF'"
+
+ triggers:
+ - timed: '@midnight'
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph.git
+ branches:
+ - '{ceph_branch}'
+ browser: auto
+ timeout: 20
+ skip-tag: true
+ shallow-clone: true
+ wipe-workspace: true
+
+ builders:
+ - shell:
+ !include-raw-escape:
+ - ../../build/cleanup
+ - shell: "export NPROC=$(nproc); {ceph_build}"
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/dashboard/{test_deps_script}
+ - shell: |
+ export CYPRESS_ARGS="--record --key $CYPRESS_RECORD_KEY --tag $JOB_NAME" COMMIT_INFO_MESSAGE="$JOB_NAME"
+ export APPLITOOLS_BATCH_ID="${{JOB_NAME}}_${{BUILD_TAG}}"
+ export APPLITOOLS_BATCH_NAME="Nightly-${{GIT_BRANCH#*/}}"
+ export APPLITOOLS_BRANCH_NAME="${{GIT_BRANCH#*/}}"
+ mkdir -p .applitools
+ echo "$APPLITOOLS_BATCH_ID" > .applitools/BATCH_ID
+ cd src/pybind/mgr/dashboard; timeout 2h ./{test_suite_script}
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - text:
+ credential-id: cd-cypress-record-key
+ variable: CYPRESS_RECORD_KEY
+ - text:
+ credential-id: cd-applitools-api-key
+ variable: APPLITOOLS_API_KEY
+ - raw:
+ xml: |
+ <com.applitools.jenkins.ApplitoolsBuildWrapper plugin="applitools-eyes@1.13">
+ <serverURL>https://eyes.applitools.com</serverURL>
+ <notifyByCompletion>true</notifyByCompletion>
+ <applitoolsApiKey/>
+ </com.applitools.jenkins.ApplitoolsBuildWrapper>
+
+ - ansicolor
+
+ publishers:
+ - archive:
+ artifacts: 'build/out/*.log, build/run/1/out/*.log, build/run/2/out/*.log'
+ allow-empty: true
+ latest-only: false
--- /dev/null
+#!/bin/bash
+
+set -e
+
+"$WORKSPACE/scripts/setup_uv.sh"
+PATH=$PATH:$HOME/.local/bin
+uv venv
+
+pkgs=( "ansible" "ansible-core" "jenkins-job-builder>=6.4.3" "urllib3" "pyopenssl" "ndg-httpsclient" "pyasn1" "xmltodict" )
+VENV=./.venv/bin
+uv pip install "${pkgs[@]}"
+
+command -v dpkg >/dev/null && (
+ dpkg -l shellcheck >/dev/null 2>&1 || sudo apt install -y shellcheck
+)
+command -v dnf >/dev/null && (
+ rpm -q shellcheck >/dev/null 2>&1 || sudo dnf install -y shellcheck
+)
+rm -rf xml
+# Test every project's job definitions in the current repository and update
+# the jobs they define (projects should always define their definitions)
+find . -maxdepth 1 -path ./.git -prune -o -type d -print | while read -r dir; do
+ definitions_dir="$dir/config/definitions"
+ if [ -d "$definitions_dir" ]; then
+ echo "found definitions directory: $definitions_dir"
+
+ # Test the definitions
+ $VENV/jenkins-jobs test "$definitions_dir" --config-xml -o ./xml > /dev/null
+ fi
+done
+
+find ./xml -name '*.xml' | while read -r path; do
+ uv run "$WORKSPACE/scripts/shellcheck_job.py" -v "$path"
+done
+
+# install ansible-galaxy roles for playbook syntax check
+for reqs in $WORKSPACE/ansible/requirements/*; do
+ $VENV/ansible-galaxy install -r $reqs -p $WORKSPACE/ansible/roles --force
+done
+
+# To avoid moving everything into examples, including files that are not
+# relevant as examples, we copy the needed directories on the fly here
+cp -r $WORKSPACE/ansible/vars $WORKSPACE/ansible/examples/
+cp -r $WORKSPACE/ansible/roles $WORKSPACE/ansible/examples/
+cp -r $WORKSPACE/ansible/files $WORKSPACE/ansible/examples/
+cp -r $WORKSPACE/ansible/library $WORKSPACE/ansible/examples/
+cp -r $WORKSPACE/ansible/templates $WORKSPACE/ansible/examples/
+cp $WORKSPACE/ansible/release.yml $WORKSPACE/ansible/examples/
+
+
+# Syntax-check each Ansible playbook
+for playbook in $WORKSPACE/ansible/examples/*.yml; do
+ $VENV/ansible-playbook -i '127.0.0.1,' $playbook --syntax-check
+done
--- /dev/null
+- job:
+ name: ceph-build-pull-requests
+ node: huge
+ project-type: freestyle
+ defaults: global
+ concurrent: true
+ display-name: 'ceph-build: Pull Requests'
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ retry-count: 3
+ properties:
+ - build-discarder:
+ days-to-keep: 15
+ num-to-keep: 30
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ - github:
+ url: https://github.com/ceph/ceph-build
+
+ parameters:
+ - string:
+ name: sha1
+ description: "commit id or a refname, like 'origin/pr/72/head'"
+
+ triggers:
+ - github-pull-request:
+ org-list:
+ - ceph
+ trigger-phrase: '.*retest.*'
+ only-trigger-phrase: false
+ github-hooks: true
+ permit-all: false
+ auto-close-on-fail: false
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-build.git
+ branches:
+ - ${{sha1}}
+ refspec: +refs/pull/${{ghprbPullId}}/*:refs/remotes/origin/pr/${{ghprbPullId}}/*
+ browser: auto
+ timeout: 20
+ skip-tag: true
+ wipe-workspace: false
+
+ builders:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/build
--- /dev/null
+#!/bin/bash
+set -ex
+
+build_debs ${VENV} ${vers} ${debian_version}
--- /dev/null
+#!/bin/bash
+# vim: ts=4 sw=4 expandtab
+set -ex
+
+# create a release directory for ceph-build tools
+mkdir -p release
+cp -a dist release/${vers}
+
+echo "Building RPMs"
+
+# The below contents ported from /srv/ceph-build/build_rpms.sh ::
+# $bindir/build_rpms.sh ./release $vers
+#
+
+releasedir="./release"
+cephver=$vers
+
+cd $releasedir/$cephver || exit 1
+
+# This is needed because the 'version' this job gets from upstream contains
+# characters that are not legal in an RPM filename. They are already converted
+# in the spec file, which is what is consumed to create the RPM binary. Parse
+# the values from the spec file so they can be reported as part of the build
+# metadata
+RPM_RELEASE=`grep Release ceph.spec | sed 's/Release:[ \t]*//g' | cut -d '%' -f 1`
+RPM_VERSION=`grep Version ceph.spec | sed 's/Version:[ \t]*//g'`
+PACKAGE_MANAGER_VERSION="$RPM_VERSION-$RPM_RELEASE"
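+# Sketch with hypothetical spec contents: a line "Version: 18.2.0" yields
+# RPM_VERSION=18.2.0, and "Release: 1%{?dist}" yields RPM_RELEASE=1 (sed
+# strips the tag, cut drops the %{?dist} macro), giving 18.2.0-1.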
+
+BUILDAREA=$(setup_rpm_build_area ./rpm/$dist)
+build_rpms $BUILDAREA "${CEPH_EXTRA_RPMBUILD_ARGS}"
+
+# Make sure we execute at the top level directory
+cd "$WORKSPACE"
+
+[ "$FORCE" = true ] && chacra_flags="--force" || chacra_flags=""
+
+if [ "$THROWAWAY" = false ] ; then
+ # push binaries to chacra
+ find release/${vers}/rpm/*/SRPMS | grep rpm | $VENV/chacractl binary ${chacra_flags} create ${chacra_endpoint}/source
+ find release/${vers}/rpm/*/RPMS/* | grep rpm | $VENV/chacractl binary ${chacra_flags} create ${chacra_endpoint}/${ARCH}
+ # extract cephadm if it exists
+ if [ -f ${BUILDAREA}/RPMS/noarch/cephadm-*.rpm ] ; then
+ rpm2cpio ${BUILDAREA}/RPMS/noarch/cephadm-*.rpm | cpio -i --to-stdout *sbin/cephadm > cephadm
+ echo cephadm | $VENV/chacractl binary ${chacra_flags} create ${chacra_endpoint}/${ARCH}/flavors/${FLAVOR}
+ fi
+ # write json file with build info
+ cat > $WORKSPACE/repo-extra.json << EOF
+{
+ "version":"$vers",
+ "package_manager_version":"$PACKAGE_MANAGER_VERSION",
+ "build_url":"$BUILD_URL",
+ "root_build_cause":"$ROOT_BUILD_CAUSE",
+ "node_name":"$NODE_NAME",
+ "job_name":"$JOB_NAME"
+}
+EOF
+ chacra_repo_endpoint="${chacra_endpoint}/flavors/${FLAVOR}"
+ # POST the repo-extra.json contents to chacra
+ curl -X POST -H "Content-Type:application/json" --data "@$WORKSPACE/repo-extra.json" -u $CHACRACTL_USER:$CHACRACTL_KEY https://chacra.ceph.com/repos/${chacra_repo_endpoint}/extra/
+ # start repo creation
+ $VENV/chacractl repo update ${chacra_repo_endpoint}
+fi
+
+# unlike ceph-dev-*, ceph-build can't really build containers inline; the containers need
+# to be built from signed packages, and the signing is a semi-manual process when a build
+# is vetted. See the Ceph Release Process documentation on docs.ceph.com.
--- /dev/null
+#!/bin/bash -ex
+
+# note: the failed_build_status call relies on normalized variable names that
+# are inferred by the builds themselves. If the build fails before these are
+# set, they will be posted with empty values
+BRANCH=`branch_slash_filter $BRANCH`
+
+# update shaman with the failed build status
+failed_build_status "ceph" $NORMAL_DISTRO $NORMAL_DISTRO_VERSION $NORMAL_ARCH
--- /dev/null
+#!/bin/bash
+
+set -ex
+HOST=$(hostname --short)
+echo "Building on $(hostname)"
+echo " DIST=${DIST}"
+echo " BPTAG=${BPTAG}"
+echo " KEYID=${KEYID}"
+echo " WS=$WORKSPACE"
+echo " PWD=$(pwd)"
+echo " BUILD SOURCE=$COPYARTIFACT_BUILD_NUMBER_CEPH_SETUP"
+echo "*****"
+env
+echo "*****"
+
+if test $(id -u) != 0 ; then
+ SUDO=sudo
+fi
+export LC_ALL=C # the following is vulnerable to i18n
+
+$SUDO apt-get install -y lsb-release
+
+BRANCH=`branch_slash_filter $BRANCH`
+
+cd $WORKSPACE
+
+mv ceph-build/ansible/ceph/dist .
+rm -rf ceph-build
+
+BPTAG=`get_bptag $DIST`
+
+chacra_ref="$BRANCH"
+vers=`cat ./dist/version`
+
+# We used to detect the $distro variable by inspecting the host, but that is
+# not accurate because we use pbuilder on Ubuntu hosts to build everything.
+# It would cause POSTing binaries to incorrect chacra endpoints
+# like project/ref/ubuntu/jessie/.
+distro=""
+case $DIST in
+ bookworm|bullseye|buster|stretch|jessie|wheezy)
+ distro="debian"
+ ;;
+ *)
+ distro="ubuntu"
+ ;;
+esac
+
+debian_version=${vers}-1
+
+bpvers=`gen_debian_version $debian_version $DIST`
+
+# Normalize variables across rpm/deb builds
+NORMAL_DISTRO=$distro
+NORMAL_DISTRO_VERSION=$DIST
+NORMAL_ARCH=$ARCH
+
+# create build status in shaman
+update_build_status "started" "ceph" $NORMAL_DISTRO $NORMAL_DISTRO_VERSION $NORMAL_ARCH
+
+pkgs=( "chacractl>=0.0.21" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+chacra_url=https://chacra.ceph.com/
+make_chacractl_config
+
+FLAVOR="default"
+
+# look for a specific package to tell if we can avoid the build
+chacra_endpoint="ceph/${chacra_ref}/${SHA1}/${distro}/${DIST}/${ARCH}/flavors/${FLAVOR}"
+chacra_repo_endpoint="ceph/${chacra_ref}/${SHA1}/${distro}/${DIST}/flavors/${FLAVOR}"
+DEB_ARCH=`dpkg-architecture | grep DEB_BUILD_ARCH\= | cut -d '=' -f 2`
+chacra_check_url="${chacra_endpoint}/librados2_${bpvers}_${DEB_ARCH}.deb"
+
+if [ "$THROWAWAY" = false ] ; then
+ # this exists in scripts/build_utils.sh
+ # TODO if this exits we need to post to shaman a success
+ check_binary_existence $VENV $chacra_check_url
+fi
--- /dev/null
+#!/bin/sh -x
+# This file sets up the tgz images needed for pbuilder on a given host. It has
+# some hard-coded values like `/srv/debian-base` because the image gets built
+# every time this file is executed - completely ephemeral. If a Debian host
+# will use pbuilder, it will need this. Since it is not idempotent, it makes
+# everything a bit slower. ## FIXME ##
+
+set -e
+
+# Only run when we are a Debian or Debian-based distro
+if test -f /etc/redhat-release ; then
+ exit 0
+fi
+
+setup_pbuilder use_gcc
--- /dev/null
+#!/bin/bash
+
+set -ex
+HOST=$(hostname --short)
+echo "Building on $(hostname)"
+echo " DIST=${DIST}"
+echo " BPTAG=${BPTAG}"
+echo " KEYID=${KEYID}"
+echo " WS=$WORKSPACE"
+echo " PWD=$(pwd)"
+echo " BUILD SOURCE=$COPYARTIFACT_BUILD_NUMBER_CEPH_SETUP"
+echo "*****"
+env
+echo "*****"
+
+if test $(id -u) != 0 ; then
+ SUDO=sudo
+fi
+export LC_ALL=C # the following is vulnerable to i18n
+
+mv ceph-build/ansible/ceph/dist .
+rm -rf ceph-build
+
+# unpack the tar.gz that contains the debian dir
+cd dist
+tar xzf *.orig.tar.gz
+cd $(basename *.orig.tar.gz .orig.tar.gz | sed s/_/-/)
+pwd
+
+get_rpm_dist
+setup_rpm_build_deps
+
+if [[ $DISTRO == "centos" && "$RELEASE" =~ 8|9 ]] ;
+then
+ podman login -u $CONTAINER_REPO_USERNAME -p $CONTAINER_REPO_PASSWORD $CONTAINER_REPO_HOSTNAME/$CONTAINER_REPO_ORGANIZATION
+fi
+
+
+BRANCH=`branch_slash_filter $BRANCH`
+
+if [[ ! -f /etc/redhat-release && ! -f /usr/bin/zypper ]] ; then
+ exit 0
+fi
+
+cd $WORKSPACE
+
+# Normalize variables across rpm/deb builds
+NORMAL_DISTRO=$DISTRO
+NORMAL_DISTRO_VERSION=$RELEASE
+NORMAL_ARCH=$ARCH
+
+# create build status in shaman
+update_build_status "started" "ceph" $NORMAL_DISTRO $NORMAL_DISTRO_VERSION $NORMAL_ARCH
+
+pkgs=( "chacractl>=0.0.21" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+# create the .chacractl config file using global variables
+make_chacractl_config
+
+dist=$DIST
+[ -z "$dist" ] && echo no dist && exit 1
+echo dist $dist
+
+vers=`cat ./dist/version`
+chacra_ref="$BRANCH"
+
+FLAVOR="default"
+
+chacra_endpoint="ceph/${chacra_ref}/${SHA1}/${DISTRO}/${RELEASE}"
+chacra_check_url="${chacra_endpoint}/${ARCH}/flavors/${FLAVOR}/librados2-${vers}-0.${DIST}.${ARCH}.rpm"
+
+
+if [ "$THROWAWAY" = false ] ; then
+ # this exists in scripts/build_utils.sh
+ # TODO if this exits we need to post to shaman a success
+ check_binary_existence $VENV $chacra_check_url
+fi
--- /dev/null
+#!/bin/bash
+set -ex
+
+# Only do actual work when we are a DEB distro
+if test -f /etc/redhat-release ; then
+ exit 0
+fi
--- /dev/null
+#!/bin/bash
+set -ex
+
+# only do work if we are a RPM distro
+if [[ ! -f /etc/redhat-release && ! -f /usr/bin/zypper ]] ; then
+ exit 0
+fi
--- /dev/null
+- job:
+ name: ceph-build
+ project-type: matrix
+ defaults: global
+ display-name: 'ceph-build'
+ block-downstream: false
+ block-upstream: false
+ concurrent: true
+ properties:
+ - github:
+ url: https://github.com/ceph/ceph
+ execution-strategy:
+ combination-filter: |
+ DIST == AVAILABLE_DIST && ARCH == AVAILABLE_ARCH &&
+ (ARCH == "x86_64" || (ARCH == "arm64" && ["bionic", "focal", "jammy", "noble", "centos9"].contains(DIST)))
+ axes:
+ - axis:
+ type: label-expression
+ name: MACHINE_SIZE
+ values:
+ - gigantic
+ - axis:
+ type: label-expression
+ name: AVAILABLE_ARCH
+ values:
+ - x86_64
+ - arm64
+ - axis:
+ type: label-expression
+ name: AVAILABLE_DIST
+ values:
+ - trusty
+ - xenial
+ - bionic
+ - focal
+ - jammy
+ - noble
+ - centos7
+ - centos8
+ - centos9
+ - jessie
+ - stretch
+ - buster
+ - bullseye
+ - bookworm
+ - precise
+ - centos6
+ - axis:
+ type: dynamic
+ name: DIST
+ values:
+ - DISTROS
+ - axis:
+ type: dynamic
+ name: ARCH
+ values:
+ - ARCHS
+
+ builders:
+ - conditional-step:
+ condition-kind: or
+ condition-operands:
+ - condition-kind: and
+ condition-operands:
+ - condition-kind: regex-match
+ regex: (reef|squid|tentacle)
+ label: '${{BRANCH}}'
+ - condition-kind: regex-match
+ regex: (focal|jammy|noble|centos9|buster|bullseye|bookworm)
+ label: '${{DIST}}'
+ on-evaluation-failure: dont-run
+ steps:
+ - shell: |
+ echo "Cleaning up top-level workarea (shared among workspaces)"
+ rm -rf dist
+ rm -rf venv
+ rm -rf release
+ - copyartifact:
+ project: ceph-setup
+ filter: 'ceph-build/ansible/ceph/dist/**'
+ which-build: multijob-build
+ - inject:
+ properties-file: ${{WORKSPACE}}/ceph-build/ansible/ceph/dist/sha1
+ - inject:
+ properties-file: ${{WORKSPACE}}/ceph-build/ansible/ceph/dist/other_envvars
+ # debian build scripts
+ - shell:
+ !include-raw-verbatim:
+ - ../../build/validate_deb
+ - ../../../scripts/build_utils.sh
+ - ../../build/setup_deb
+ - ../../build/setup_pbuilder
+ - ../../build/build_deb
+ - ../../../scripts/status_completed
+ # rpm build scripts
+ - shell:
+ !include-raw-verbatim:
+ - ../../build/validate_rpm
+ - ../../../scripts/build_utils.sh
+ - ../../build/setup_rpm
+ - ../../build/build_rpm
+ - ../../../scripts/status_completed
+ publishers:
+ - postbuildscript:
+ builders:
+ - role: SLAVE
+ build-on:
+ - FAILURE
+ - ABORTED
+ build-steps:
+ - inject:
+ properties-file: ${{WORKSPACE}}/build_info
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/failure
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - text:
+ credential-id: chacractl-key
+ variable: CHACRACTL_KEY
+ - text:
+ credential-id: shaman-api-key
+ variable: SHAMAN_API_KEY
+ - username-password-separated:
+ credential-id: quay.ceph.io-ceph-prerelease
+ username: CONTAINER_REPO_USERNAME
+ password: CONTAINER_REPO_PASSWORD
--- /dev/null
+- job:
+ name: ceph-cbt-lint
+ display-name: 'ceph-cbt: lint tests'
+ node: python3
+ project-type: freestyle
+ defaults: global
+ concurrent: true
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+
+ properties:
+ - github:
+ url: https://github.com/ceph/cbt/
+ - build-discarder:
+ days-to-keep: 7
+ num-to-keep: 30
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ discard-old-builds: true
+
+ parameters:
+ - string:
+ name: ghprbPullId
+ description: "the GitHub pull id, like '72' in 'cbt/pull/72'"
+
+ triggers:
+ - github-pull-request:
+ allow-whitelist-orgs-as-admins: true
+ org-list:
+ - ceph
+ cancel-builds-on-update: true
+ only-trigger-phrase: false
+ trigger-phrase: 'jenkins test cbt lint'
+ github-hooks: true
+ permit-all: true
+ auto-close-on-fail: false
+ status-context: "ceph-cbt tox testing"
+ started-status: "ceph-cbt tox running"
+ success-status: "ceph-cbt tox OK"
+ failure-status: "ceph-cbt tox failed"
+
+ scm:
+ - git:
+ url: https://github.com/ceph/cbt
+ branches:
+ - origin/pr/${{ghprbPullId}}/merge
+ refspec: +refs/pull/${{ghprbPullId}}/*:refs/remotes/origin/pr/${{ghprbPullId}}/*
+ timeout: 20
+ shallow-clone: true
+ wipe-workspace: true
+
+ builders:
+ - shell: |
+ virtualenv -q --python python3 venv
+ . venv/bin/activate
+ pip install tox
+ pip install git+https://github.com/ceph/githubcheck.git
+ sha1=$(git rev-parse refs/remotes/origin/pr/${{ghprbPullId}}/head)
+ tox -e pep8 | github-check \
+ --lint \
+ --lint-tox-dir=. \
+ --lint-preamble=pep8:flake8 \
+ --owner "ceph" \
+ --repo "cbt" \
+ --pkey-file $GITHUB_CHECK_PKEY_PEM \
+ --app-id "62865" \
+ --install-id "8465036" \
+ --name "cbt-lint" \
+ --sha $sha1 \
+ --external-id $BUILD_ID \
+ --details-url $BUILD_URL \
+ --title cbt-lint
+
+ wrappers:
+ - credentials-binding:
+ - file:
+ credential-id: cephacheck.2020-04-29.private-key.pem
+ variable: GITHUB_CHECK_PKEY_PEM
--- /dev/null
+- project:
+ name: ceph-dashboard-cephadm-e2e-nightly
+ ceph_branch:
+ - main
+ - tentacle
+ - squid
+ - reef
+ jobs:
+ - '{name}-{ceph_branch}'
+
+- job-template:
+ name: '{name}-{ceph_branch}'
+ display-name: '{name}-{ceph_branch}'
+ project-type: freestyle
+ defaults: global
+ concurrent: true
+ node: huge && focal && x86_64
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ retry-count: 3
+ workspace: /home/jenkins-build/build/workspace/{name}/ceph
+ properties:
+ - build-discarder:
+ days-to-keep: 15
+ num-to-keep: 300
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ - github:
+ url: https://github.com/ceph/ceph/
+ - rebuild:
+ auto-rebuild: true
+ - inject:
+ properties-content: |
+ TERM=xterm
+
+ triggers:
+ - timed: '@midnight'
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph.git
+ branches:
+ - '{ceph_branch}'
+ browser: auto
+ timeout: 20
+ skip-tag: true
+ shallow-clone: true
+ wipe-workspace: true
+
+ - git:
+ url: https://github.com/ceph/ceph-build.git
+ branches:
+ - main
+ basedir: ceph-build
+
+ builders:
+ - shell:
+ !include-raw-escape:
+ - ../../../scripts/dashboard/install-e2e-test-deps.sh
+ - ../../../scripts/dashboard/install-cephadm-e2e-deps.sh
+ - shell: |
+ export CYPRESS_ARGS="--record --key $CYPRESS_RECORD_KEY --tag $JOB_NAME" COMMIT_INFO_MESSAGE="$JOB_NAME"
+ timeout 7200 ./src/pybind/mgr/dashboard/ci/cephadm/run-cephadm-e2e-tests.sh
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - text:
+ credential-id: cd-cypress-record-key
+ variable: CYPRESS_RECORD_KEY
+ - ansicolor
+
+ publishers:
+ - archive:
+ artifacts: 'logs/**'
+ allow-empty: true
+ latest-only: false
+
+ - postbuildscript:
+ builders:
+ - role: SLAVE
+ build-on:
+ - SUCCESS
+ - UNSTABLE
+ - FAILURE
+ - ABORTED
+ build-steps:
+ - shell: "ceph-build/ceph-dashboard-cephadm-e2e/build/cleanup"
--- /dev/null
+#!/usr/bin/env bash
+set +x
+echo "Starting cleanup..."
+kcli delete plan -y ceph || true
+kcli delete network ceph-dashboard -y
+kcli delete pool ceph-dashboard -y
+sudo rm -rf ${HOME}/.kcli
+docker container prune -f
+echo "Cleanup completed."
--- /dev/null
+- job:
+ name: ceph-dashboard-cephadm-e2e
+ project-type: freestyle
+ defaults: global
+ concurrent: true
+ node: huge && focal && x86_64
+ display-name: 'ceph: Dashboard + Cephadm E2E'
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ retry-count: 3
+ properties:
+ - build-discarder:
+ days-to-keep: 15
+ num-to-keep: 300
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ - github:
+ url: https://github.com/ceph/ceph/
+ - rebuild:
+ auto-rebuild: true
+ - inject:
+ properties-content: |
+ TERM=xterm
+
+ parameters:
+ - string:
+ name: sha1
+ description: "commit id or a refname, like 'origin/pr/72/head'"
+
+ triggers:
+ - github-pull-request:
+ cancel-builds-on-update: true
+ allow-whitelist-orgs-as-admins: true
+ org-list:
+ - ceph
+ white-list-labels:
+ - cephadm
+ - dashboard
+ black-list-target-branches:
+ - luminous
+ - mimic
+ - nautilus
+ trigger-phrase: 'jenkins test dashboard cephadm'
+ skip-build-phrase: '^jenkins do not test.*'
+ only-trigger-phrase: false
+ github-hooks: true
+ permit-all: true
+ auto-close-on-fail: false
+ status-context: "ceph dashboard cephadm e2e tests"
+ started-status: "running ceph dashboard cephadm e2e tests"
+ success-status: "ceph dashboard cephadm e2e tests succeeded"
+ failure-status: "ceph dashboard cephadm e2e tests failed"
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph.git
+ branches:
+ - origin/pr/${{ghprbPullId}}/merge
+ refspec: +refs/pull/${{ghprbPullId}}/*:refs/remotes/origin/pr/${{ghprbPullId}}/*
+ browser: auto
+ timeout: 20
+ skip-tag: true
+ shallow-clone: true
+ wipe-workspace: true
+
+ - git:
+ url: https://github.com/ceph/ceph-build.git
+ branches:
+ - main
+ basedir: ceph-build
+
+ builders:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/dashboard/install-e2e-test-deps.sh
+ - ../../../scripts/dashboard/install-cephadm-e2e-deps.sh
+ - shell: |
+ export CYPRESS_ARGS="--record --key $CYPRESS_RECORD_KEY --tag $ghprbTargetBranch" COMMIT_INFO_MESSAGE="$ghprbPullTitle"
+ export NVM_DIR="$HOME/.nvm"
+ [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
+ [ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion"
+
+ # use the node version from dashboard/frontend
+ nvm use "$(cat src/pybind/mgr/dashboard/frontend/.nvmrc)"
+
+ timeout 7200 ./src/pybind/mgr/dashboard/ci/cephadm/run-cephadm-e2e-tests.sh
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - text:
+ credential-id: cd-cypress-record-key
+ variable: CYPRESS_RECORD_KEY
+ - ansicolor
+
+ publishers:
+ - archive:
+ artifacts: 'logs/**'
+ allow-empty: true
+ latest-only: false
+
+ - junit:
+ results: 'src/pybind/mgr/dashboard/frontend/cypress/reports/results-*.xml'
+ allow-empty: true
+
+ - postbuildscript:
+ builders:
+ - role: SLAVE
+ build-on:
+ - SUCCESS
+ - UNSTABLE
+ - FAILURE
+ - ABORTED
+ build-steps:
+ - shell: "${{WORKSPACE}}/ceph-build/ceph-dashboard-cephadm-e2e/build/cleanup"
--- /dev/null
+- job:
+ name: ceph-dashboard-pull-requests
+ project-type: freestyle
+ defaults: global
+ concurrent: true
+ node: huge && bionic && x86_64
+ display-name: 'ceph: dashboard Pull Requests'
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ retry-count: 3
+ properties:
+ - build-discarder:
+ days-to-keep: 15
+ num-to-keep: 300
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ - github:
+ url: https://github.com/ceph/ceph/
+ - rebuild:
+ auto-rebuild: true
+ - inject:
+ properties-content: |
+ TERM=xterm
+
+ parameters:
+ - string:
+ name: sha1
+ description: "commit id or a refname, like 'origin/pr/72/head'"
+
+ triggers:
+ - github-pull-request:
+ cancel-builds-on-update: true
+ allow-whitelist-orgs-as-admins: true
+ org-list:
+ - ceph
+ white-list-labels:
+ - dashboard
+ black-list-target-branches:
+ - luminous
+ trigger-phrase: 'jenkins test dashboard'
+ skip-build-phrase: '^jenkins do not test.*'
+ only-trigger-phrase: false
+ github-hooks: true
+ permit-all: true
+ auto-close-on-fail: false
+ status-context: "ceph dashboard tests"
+ started-status: "running ceph dashboard tests"
+ success-status: "ceph dashboard tests succeeded"
+ failure-status: "ceph dashboard tests failed"
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph.git
+ branches:
+ - origin/pr/${{ghprbPullId}}/merge
+ refspec: +refs/pull/${{ghprbPullId}}/*:refs/remotes/origin/pr/${{ghprbPullId}}/*
+ browser: auto
+ timeout: 20
+ skip-tag: true
+ shallow-clone: true
+ wipe-workspace: true
+
+ builders:
+ - shell: "export FOR_MAKE_CHECK=1; timeout 2h ./src/script/run-make.sh --cmake-args '-DWITH_TESTS=OFF -DENABLE_GIT_VERSION=OFF'"
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/dashboard/install-e2e-test-deps.sh
+ - shell: |
+ export CYPRESS_ARGS="--record --key $CYPRESS_RECORD_KEY --tag $ghprbTargetBranch" COMMIT_INFO_MESSAGE="$ghprbPullTitle"
+ export APPLITOOLS_BATCH_ID="PR-${{ghprbPullId}}_${{BUILD_TAG}}"
+ export APPLITOOLS_BATCH_NAME="PR-${{ghprbPullId}}"
+ export APPLITOOLS_BRANCH_NAME="$ghprbSourceBranch"
+ export APPLITOOLS_PARENT_BRANCH_NAME="$ghprbTargetBranch"
+ mkdir -p .applitools
+ echo "$APPLITOOLS_BATCH_ID" > .applitools/BATCH_ID
+ cd src/pybind/mgr/dashboard; timeout 7200 ./run-frontend-e2e-tests.sh
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - text:
+ credential-id: cd-cypress-record-key
+ variable: CYPRESS_RECORD_KEY
+ - text:
+ credential-id: cd-applitools-api-key
+ variable: APPLITOOLS_API_KEY
+ - raw:
+ xml: |
+ <com.applitools.jenkins.ApplitoolsBuildWrapper plugin="applitools-eyes@1.13">
+ <serverURL>https://eyes.applitools.com</serverURL>
+ <notifyByCompletion>true</notifyByCompletion>
+ <applitoolsApiKey/>
+ </com.applitools.jenkins.ApplitoolsBuildWrapper>
+ - ansicolor
+
+ publishers:
+ - archive:
+ artifacts: 'build/out/*.log, build/run/1/out/*.log, build/run/2/out/*.log'
+ allow-empty: true
+ latest-only: false
+
+ - junit:
+ results: 'src/pybind/mgr/dashboard/frontend/cypress/reports/results-*.xml'
+ allow-empty: true
--- /dev/null
+#!/bin/bash
+
+# This is the script that runs inside Jenkins.
+# http://jenkins.ceph.com/job/ceph-deploy/
+
+set -x
+set -e
+
+HOST=$(hostname --short)
+echo "Building on ${HOST}"
+echo " DIST=${DIST}"
+echo " BPTAG=${BPTAG}"
+echo " WS=$WORKSPACE"
+echo " PWD=$(pwd)"
+echo " BRANCH=$BRANCH"
+echo " SHA1=$GIT_COMMIT"
+
+# FIXME A very naive way to just list the RPM $DIST that we currently support.
+# We should be a bit more lenient to allow any rhel/centos/sles/suse
+rpm_dists="rhel el7 centos7 el8 centos8 centos"
+deb_dists="xenial bionic focal stretch buster"
+
+# A helper to match an item in a list of items, like python's `if item in list`
+listcontains() {
+ for word in $2; do
+ [[ $word = $1 ]] && return 0
+ done
+ return 1
+}
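+# e.g. `listcontains el7 "rhel el7 centos7"` returns 0 (found), while
+# `listcontains fedora "rhel el7 centos7"` returns 1 (not found)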
+
+if listcontains $DIST "$rpm_dists"
+then
+ # Tag tree and update version number in change log and
+ # in setup.py before building.
+
+ # this exists in scripts/build_utils.sh
+ get_rpm_dist
+
+ REPO=rpm-repo
+ BUILDAREA=./rpmbuild
+ DIST=el6
+ RPM_BUILD=$(lsb_release -s -c)
+
+ [ "$TEST" = true ] && chacra_ref="test" || chacra_ref="$BRANCH"
+ chacra_endpoint="ceph-deploy/${chacra_ref}/${GIT_COMMIT}/${DISTRO}/${DISTRO_VERSION}"
+
+ # check_binary_existence exists in scripts/build_utils.sh; ceph-deploy
+ # binaries have no architecture, so we POST them to 'noarch' for rpms
+ check_binary_existence $VENV $chacra_endpoint/noarch
+
+ if [ ! -e setup.py ] ; then
+ echo "Are we in the right directory"
+ exit 1
+ fi
+
+ # Create Tarball
+ if [ "$RELEASE" = 7 ]; then
+ python setup.py sdist --formats=bztar
+ else
+ python3 setup.py sdist --formats=bztar
+ fi
+
+ # Build RPM
+ mkdir -p rpmbuild/{BUILD,BUILDROOT,RPMS,SOURCES,SPECS,SRPMS}
+ BUILDAREA=`readlink -fn ${BUILDAREA}` ### rpm wants absolute path
+ cp ceph-deploy.spec ${BUILDAREA}/SPECS
+ cp dist/*.tar.bz2 ${BUILDAREA}/SOURCES
+ echo "buildarea is: ${BUILDAREA}"
+ rpmbuild -ba --define "_topdir ${BUILDAREA}" --define "_unpackaged_files_terminate_build 0" ${BUILDAREA}/SPECS/ceph-deploy.spec
+
+ [ "$FORCE" = true ] && chacra_flags="--force" || chacra_flags=""
+
+ find ${BUILDAREA}/SRPMS | grep rpm | $VENV/chacractl binary ${chacra_flags} create ${chacra_endpoint}/source
+ find ${BUILDAREA}/RPMS/* | grep rpm | $VENV/chacractl binary ${chacra_flags} create ${chacra_endpoint}/noarch
+
+ exit 0
+
+elif listcontains $DIST "$deb_dists"
+then
+
+ # Tag tree and update version number in change log and
+ # in setup.py before building.
+
+ REPO=debian-repo
+ COMPONENT=main
+ DEB_DIST="sid xenial bionic focal stretch buster"
+ DEB_BUILD=$(lsb_release -s -c)
+ #XXX only releases until we fix this
+ RELEASE=1
+ DISTRO=""
+ case $DIST in
+ jessie|stretch|buster)
+ DISTRO="debian"
+ ;;
+ *)
+ DISTRO="ubuntu"
+ ;;
+ esac
+
+ [ "$TEST" = true ] && chacra_ref="test" || chacra_ref="$BRANCH"
+ # ceph-deploy isn't architecture dependent, so we use 'all' as the architecture
+ # and 'universal' to signal this is not specific to any distro version
+ chacra_endpoint="ceph-deploy/${chacra_ref}/${GIT_COMMIT}/${DISTRO}/universal/all"
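+ # e.g. with the hypothetical values BRANCH=main and GIT_COMMIT=abc123 on an
+ # ubuntu host, the endpoint becomes: ceph-deploy/main/abc123/ubuntu/universal/all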
+
+ # this exists in scripts/build_utils.sh
+ check_binary_existence $VENV $chacra_endpoint
+
+ if [ ! -d debian ] ; then
+ echo "Are we in the right directory"
+ exit 1
+ fi
+
+ # Apply backport tag if release build
+ if [ $RELEASE -eq 1 ] ; then
+ DEB_VERSION=$(dpkg-parsechangelog | sed -rne 's,^Version: (.*),\1, p')
+ BP_VERSION=${DEB_VERSION}${BPTAG}
+ dch -D $DIST --force-distribution -b -v "$BP_VERSION" "$comment"
+ dpkg-source -b .
+ fi
+
+ # Build Package
+ echo "Building for dist: $DEB_BUILD"
+ # we no longer sign the .dsc or .changes files (done by default with
+ # the `-k$KEYID` flag), so explicitly tell the tool not to sign them
+ dpkg-buildpackage -uc -us
+ if [ $? -ne 0 ] ; then
+ echo "Build failed"
+ exit 2
+ fi
+
+ [ "$FORCE" = true ] && chacra_flags="--force" || chacra_flags=""
+
+ # push binaries to chacra
+ # the binaries are created in one directory up from $WORKSPACE
+ find ../ | egrep "*\.(changes|deb|dsc|gz)$" | egrep -v "(Packages|Sources|Contents)" | $VENV/chacractl binary ${chacra_flags} create ${chacra_endpoint}
+
+ echo "Done"
+
+else
+ echo "Can't determine build host type, I suck. Sorry."
+ exit 4
+fi
--- /dev/null
+#!/bin/bash
+
+# This is the script that runs inside Jenkins.
+# http://jenkins.ceph.com/job/ceph-deploy/
+
+set -x
+set -e
+
+# ensure dependencies are installed on RPM hosts, because they are required and
+# we are not building in a contained environment as we do for DEB builds
+
+# TODO: use Mock to build ceph-deploy rpm's and avoid this
+
+if test -f /etc/redhat-release ; then
+ get_rpm_dist
+ if [ "$RELEASE" = 7 ]; then
+ rpm_deps="python-devel python-virtualenv python-mock python-tox pytest"
+ else
+ rpm_deps="python3-devel python3-virtualenv python3-mock python3-tox python3-pytest"
+ fi
+ sudo yum install -y $rpm_deps
+fi
+
+pkgs=( "chacractl>=0.0.21" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+# ask shaman which chacra instance to use
+chacra_url=`curl -u $SHAMAN_API_USER:$SHAMAN_API_KEY https://shaman.ceph.com/api/nodes/next/`
+# create the .chacractl config file using global variables
+make_chacractl_config $chacra_url
--- /dev/null
+- job:
+ name: ceph-deploy-build
+ node: small && trusty
+ project-type: matrix
+ defaults: global
+ display-name: 'ceph-deploy-build'
+ concurrent: true
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ retry-count: 3
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-deploy.git
+ branches:
+ - $BRANCH
+ browser: auto
+ skip-tag: true
+ timeout: 20
+ wipe-workspace: true
+
+ axes:
+ - axis:
+ type: label-expression
+ name: ARCH
+ values:
+ - x86_64
+ - axis:
+ type: label-expression
+ name: DIST
+ values:
+ - bionic
+ - centos7
+ - centos8
+
+ builders:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/setup
+ - ../../build/build
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - text:
+ credential-id: chacractl-key
+ variable: CHACRACTL_KEY
+ - text:
+ credential-id: shaman-api-key
+ variable: SHAMAN_API_KEY
--- /dev/null
+#!/bin/bash
+
+set -ex
+
+# the following two methods exist in scripts/build_utils.sh
+pkgs=( "tox" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+# create the docs build with tox
+$VENV/tox -rv -e docs
+
+# publish docs to http://docs.ceph.com/docs/ceph-deploy
+rsync -auv --delete .tox/docs/tmp/html/* /var/ceph-deploy/docs/
--- /dev/null
+- job:
+ name: ceph-deploy-docs
+ node: docs
+ project-type: freestyle
+ defaults: global
+ display-name: 'ceph-deploy: docs build'
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ retry-count: 3
+ properties:
+ - build-discarder:
+ days-to-keep: -1
+ num-to-keep: 10
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ - github:
+ url: https://github.com/ceph/ceph-deploy
+
+ triggers:
+ - github
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-deploy
+ branches:
+ - main
+ browser: auto
+ skip-tag: true
+ timeout: 20
+
+ builders:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/build
--- /dev/null
+cd $WORKSPACE/ceph-deploy && $VENV/tox -rv
--- /dev/null
+#!/bin/bash
+
+# the following two methods exist in scripts/build_utils.sh
+pkgs=( "ansible" "tox" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+
+# run ansible to get this current host to meet our requirements, specifying
+# a local connection and 'localhost' as the host on which to execute
+cd "$WORKSPACE/ceph-build/ceph-deploy-pull-requests/setup/playbooks"
+$VENV/ansible-playbook -i "localhost," -c local setup.yml
--- /dev/null
+# multiple scm requires definition of each scm with `name` so that they can be
+# referenced later in `job`
+# reference: http://docs.openstack.org/infra/jenkins-job-builder/scm.html
+- scm:
+ name: ceph-deploy
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-deploy.git
+ branches:
+ - ${{sha1}}
+ refspec: +refs/pull/*:refs/remotes/origin/pr/*
+ browser: auto
+ timeout: 20
+ skip-tag: true
+ wipe-workspace: false
+ basedir: "ceph-deploy"
+
+- scm:
+ name: ceph-build
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-build.git
+ browser-url: https://github.com/ceph/ceph-build
+ timeout: 20
+ skip-tag: true
+ wipe-workspace: false
+ basedir: "ceph-build"
+
+
+- job:
+ name: ceph-deploy-pull-requests
+ node: python3
+ project-type: freestyle
+ defaults: global
+ concurrent: true
+ display-name: 'ceph-deploy: Pull Requests'
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ retry-count: 3
+ properties:
+ - build-discarder:
+ days-to-keep: 15
+ num-to-keep: 30
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ - github:
+ url: https://github.com/ceph/ceph-deploy/
+
+ parameters:
+ - string:
+ name: sha1
+ description: "A pull request ID, like 'origin/pr/72/head'"
+
+ triggers:
+ - github-pull-request:
+ org-list:
+ - ceph
+ trigger-phrase: ''
+ only-trigger-phrase: false
+ github-hooks: true
+ permit-all: false
+ auto-close-on-fail: false
+
+ scm:
+ - ceph-deploy
+ - ceph-build
+
+ builders:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/setup
+ - ../../build/build
--- /dev/null
+---
+
+- hosts: localhost
+ user: jenkins-build
+ become: yes
+
+ tasks:
+ - include: tasks/ubuntu.yml
+ when: ansible_distribution == "Ubuntu"
+
+ # TODO: maybe add RPM or Debian handling?
--- /dev/null
+---
+ - name: "update apt repo"
+ action: apt update_cache=yes
+
+ - name: install python requirements
+ action: apt pkg={{ item }}
+ with_items:
+ - python-software-properties
+ - python-dev
+ - python2.7
+ - python3.5
+
+ - name: install pip
+ action: easy_install name=pip
--- /dev/null
+#!/bin/bash
+
+set -ex
+
+if [ "$TAG" = false ] ; then
+ echo "Assuming tagging process has succeeded before because TAG was set to false"
+ exit 0
+fi
+
+# the following two methods exist in scripts/build_utils.sh
+pkgs=( "ansible" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+# run ansible to do all the tagging and release, specifying
+# a local connection and 'localhost' as the host on which to execute
+cd "$WORKSPACE/ceph-build/ansible/"
+$VENV/ansible-playbook -i "localhost," -c local release.yml --extra-vars="version=$VERSION branch=$BRANCH release=stable clean=true project=ceph-deploy"
--- /dev/null
+- scm:
+ name: ceph-build
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-build.git
+ browser: auto
+ timeout: 20
+ skip-tag: true
+ wipe-workspace: true
+ basedir: "ceph-build"
+ branches:
+ - origin/main
+
+- job:
+ name: ceph-deploy-tag
+ description: "This job clones ceph-deploy and sets the right version from the tag, pushing back to ceph-deploy.git"
+ display-name: 'ceph-deploy-tag'
+ node: 'trusty&&small'
+ block-downstream: false
+ block-upstream: false
+ properties:
+ - build-discarder:
+ days-to-keep: -1
+ num-to-keep: 25
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ - github:
+ url: https://github.com/ceph/ceph-deploy
+
+ parameters:
+ - string:
+ name: BRANCH
+ description: "The git branch (or tag) to build"
+ default: "main"
+ - string:
+ name: VERSION
+ description: "The version for release, e.g. 1.5.30"
+ scm:
+ - ceph-build
+
+ builders:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/build
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - ssh-agent-credentials:
+ # "jenkins-build" SSH key, needed so we can push to
+ # ceph-deploy.git
+ user: 'jenkins-build'
--- /dev/null
+- job:
+ name: ceph-deploy
+ project-type: multijob
+ defaults: global
+ display-name: 'ceph-deploy'
+ concurrent: true
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ retry-count: 3
+
+ parameters:
+ - string:
+ name: BRANCH
+ description: "The git branch or tag to build. Defaults to main"
+ default: "main"
+
+ - bool:
+ name: TEST
+ description: "
+If this is unchecked, then the builds will be pushed to chacra with the correct ref. This is the default.
+
+If this is checked, then the builds will be pushed to chacra under the 'test' ref."
+
+ - bool:
+ name: TAG
+ description: "When this is checked, Jenkins will remove the previous tag and recreate it again, changing the control files and committing again. When this is unchecked, Jenkins will not do any commit or tag operations. If you've already created the private tag separately, then leave this unchecked.
+Defaults to checked."
+ default: true
+
+ - bool:
+ name: FORCE
+ description: "
+If this is unchecked, then nothing is built or pushed if the binaries already exist in chacra. This is the default.
+
+If this is checked, then the binaries will be built and pushed to chacra even if they already exist in chacra."
+
+ - string:
+ name: VERSION
+ description: "The version for release, e.g. 0.94.4"
+
+ builders:
+ - multijob:
+ name: 'ceph-deploy tag phase'
+ condition: SUCCESSFUL
+ projects:
+ - name: ceph-deploy-tag
+ current-parameters: true
+ exposed-scm: false
+
+ - multijob:
+ name: 'ceph-deploy build phase'
+ condition: SUCCESSFUL
+ projects:
+ - name: ceph-deploy-build
+ current-parameters: true
+ exposed-scm: false
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
--- /dev/null
+#!/bin/bash
+set -ex
+
+build_debs $VENV ${vers} ${debian_version}
--- /dev/null
+#!/bin/bash
+set -ex
+
+# We need Ubuntu Jammy to cross-compile Ceph for Windows.
+# "DIST" will be set to "windows", so we're currently overriding it with
+# a hardcoded value.
+tmp_pbuild_script=$(mktemp /tmp/build_mingw_pbuild.XXXXXX)
+cat << EOF > $tmp_pbuild_script
+#!/bin/sh
+# Used by the build script
+apt-get install -y sudo git automake wget
+
+cd /mnt/ceph
+CMAKE_BUILD_TYPE=Release BUILD_ZIP=1 CLEAN_BUILD=1 timeout 3h ./win32_build.sh
+EOF
+chmod a+x $tmp_pbuild_script
+sudo pbuilder execute \
+ --bindmounts "$(pwd):/mnt/ceph" \
+ --distribution "jammy" \
+ --basetgz $basedir/jammy.tgz \
+ -- $tmp_pbuild_script
+rm $tmp_pbuild_script
+
+if [ "$THROWAWAY" = false ]; then
+ # push binaries to chacra
+ chacra_binary="$VENV/chacractl binary"
+ chacra_create="$chacra_binary create"
+ [ "$FORCE" = true ] && chacra_binary="$chacra_binary --force"
+
+ find build -name "*.zip" |
+ $chacra_create ${chacra_binary_endpoint}
+
+ # write json file with build info
+ cat > $WORKSPACE/repo-extra.json << EOF
+{
+ "version":"$vers",
+ "package_manager_version":"",
+ "build_url":"$BUILD_URL",
+ "root_build_cause":"$ROOT_BUILD_CAUSE",
+ "node_name":"$NODE_NAME",
+ "job_name":"$JOB_NAME"
+}
+EOF
+ # post the json to repo-extra json to chacra
+ curl -X POST \
+ -H "Content-Type:application/json" \
+ --data "@$WORKSPACE/repo-extra.json" \
+ -u $CHACRACTL_USER:$CHACRACTL_KEY \
+ ${chacra_url}repos/${chacra_repo_endpoint}/extra/
+ # start repo creation
+ $VENV/chacractl repo update ${chacra_repo_endpoint}
+
+ echo Check the status of the repo at: https://shaman.ceph.com/api/repos/${chacra_repo_endpoint}/
+fi
+
+# pbuilder will leave root-owned files in shared workspaces
+sudo chown -R jenkins-build ${WORKSPACE}/dist
--- /dev/null
+#!/bin/bash
+set -ex
+
+case $RELEASE_BRANCH in
+tentacle|squid|reef)
+ OBSREPO="openSUSE_Leap_15.3"
+ ;;
+*)
+ echo "Release '$RELEASE_BRANCH' is not supported by openSUSE"
+ exit 1
+ ;;
+esac
+
+OBSPROJ="filesystems:ceph:$RELEASE_BRANCH:upstream"
+OBSARCH="x86_64"
+BUILDHOME=$HOME/osc/$OBSREPO-$OBSARCH/home/abuild
+
+rm -rf $OBSPROJ
+osc co $OBSPROJ
+
+rm $OBSPROJ/ceph/ceph-*.tar.bz2
+rm $OBSPROJ/ceph/ceph.spec
+
+cp -a dist/ceph-*.tar.bz2 $OBSPROJ/ceph/.
+cp -a dist/ceph.spec $OBSPROJ/ceph/.
+cp -a dist/rpm/*.patch $OBSPROJ/ceph/. || true
+
+echo "Building RPMs"
+
+(
+ cd $OBSPROJ/ceph
+ osc build --trust-all-projects --clean $OBSREPO $OBSARCH
+)
+
+
+RPM_RELEASE=$(grep Release $OBSPROJ/ceph/ceph.spec | sed 's/Release:[ \t]*//g' | cut -d '%' -f 1)
+RPM_VERSION=$(grep Version $OBSPROJ/ceph/ceph.spec | sed 's/Version:[ \t]*//g')
+PACKAGE_MANAGER_VERSION="$RPM_VERSION-$RPM_RELEASE"
+
+
+chacra_binary="$VENV/chacractl binary"
+[ "$FORCE" = true ] && chacra_binary="$chacra_binary --force"
+
+chacra_create="$chacra_binary create"
+if [ "$THROWAWAY" = false ] ; then
+ # push binaries to chacra
+ find $BUILDHOME/rpmbuild/SRPMS | grep "\.rpm$" |
+ $chacra_create ${chacra_endpoint}/source/flavors/${FLAVOR}
+ find $BUILDHOME/rpmbuild/RPMS | grep "\.rpm$" |
+ $chacra_create ${chacra_endpoint}/${ARCH}/flavors/${FLAVOR}
+ # write json file with build info
+ cat > $WORKSPACE/repo-extra.json << EOF
+{
+ "version":"$vers",
+ "package_manager_version":"$PACKAGE_MANAGER_VERSION",
+ "build_url":"$BUILD_URL",
+ "root_build_cause":"$ROOT_BUILD_CAUSE",
+ "node_name":"$NODE_NAME",
+ "job_name":"$JOB_NAME"
+}
+EOF
+ chacra_repo_endpoint="${chacra_endpoint}/flavors/${FLAVOR}"
+ # post the json to repo-extra json to chacra
+ curl -X POST -H "Content-Type:application/json" --data "@$WORKSPACE/repo-extra.json" -u $CHACRACTL_USER:$CHACRACTL_KEY ${chacra_url}repos/${chacra_repo_endpoint}/extra/
+ # start repo creation
+ $VENV/chacractl repo update ${chacra_repo_endpoint}
+
+ echo Check the status of the repo at: https://shaman.ceph.com/api/repos/${chacra_endpoint}/flavors/${FLAVOR}/
+fi
--- /dev/null
+#!/bin/bash
+# vim: ts=4 sw=4 expandtab
+set -ex
+
+# set to "true" or "false" so that both string comparisons
+# and 'if $CI_CONTAINER' work as expected. Conventions vary across the
+# set of shell scripts and repos involved.
+CI_CONTAINER=${CI_CONTAINER:-false}
+
+maybe_reset_ci_container
+
+# create a release directory for ceph-build tools
+mkdir -p release
+cp -a dist release/${vers}
+
+echo "Building RPMs"
+
+# The below contents ported from /srv/ceph-build/build_rpms.sh ::
+# $bindir/build_rpms.sh ./release $vers
+#
+
+releasedir="./release"
+cephver=$vers
+raw_version=`echo $vers | cut -d '-' -f 1`
+
+cd $releasedir/$cephver || exit 1
+
+# modify the spec file so that it understands we are dealing with a different directory
+sed -i "s/^%setup.*/%setup -q -n %{name}-$vers/" ceph.spec
+# it is entirely possible that `%setup` is not even used, but rather `%autosetup`
+sed -i "s/^%autosetup.*/%autosetup -p1 -n %{name}-$vers/" ceph.spec
+# This is a fallback to the spec rules that may have altered sections that want
+# to force a non-sha1 naming. This is only needed in development binary
+# building.
+sed -i "s/%{name}-%{version}/ceph-$vers/" ceph.spec
+
+# This is needed because the 'version' this job gets from upstream contains
+# characters that are not legal for an RPM file name. These are already
+# converted in the spec file, which is what is consumed to create the RPM
+# binary. Parse these values here so that they can be reported as part of the
+# build metadata.
+RPM_RELEASE=`grep Release ceph.spec | sed 's/Release:[ \t]*//g' | cut -d '%' -f 1`
+RPM_VERSION=`grep Version ceph.spec | sed 's/Version:[ \t]*//g'`
+PACKAGE_MANAGER_VERSION="$RPM_VERSION-$RPM_RELEASE"
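+# e.g. hypothetical spec values "Version: 18.2.0" and "Release: 0.1%{?dist}"
+# yield PACKAGE_MANAGER_VERSION="18.2.0-0.1"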
+
+BUILDAREA=$(setup_rpm_build_area ./rpm/$dist)
+build_rpms $BUILDAREA "${CEPH_EXTRA_RPMBUILD_ARGS}"
+build_ceph_release_rpm $BUILDAREA true
+
+# Make sure we execute at the top level directory
+cd "$WORKSPACE"
+
+[ "$FORCE" = true ] && chacra_flags="--force" || chacra_flags=""
+
+if [ "$THROWAWAY" = false ] ; then
+ # push binaries to chacra
+ find release/${vers}/rpm/*/SRPMS | grep rpm | $VENV/chacractl binary ${chacra_flags} create ${chacra_endpoint}/source/flavors/${FLAVOR}
+ find release/${vers}/rpm/*/RPMS/* | grep rpm | $VENV/chacractl binary ${chacra_flags} create ${chacra_endpoint}/${ARCH}/flavors/${FLAVOR}
+ # extract cephadm if it exists
+    if [ -f ${BUILDAREA}/RPMS/noarch/cephadm-*.rpm ] ; then
+        rpm2cpio ${BUILDAREA}/RPMS/noarch/cephadm-*.rpm | cpio -i --to-stdout *sbin/cephadm > cephadm
+        echo cephadm | $VENV/chacractl binary ${chacra_flags} create ${chacra_endpoint}/${ARCH}/flavors/${FLAVOR}
+    fi
+ # write json file with build info
+ cat > $WORKSPACE/repo-extra.json << EOF
+{
+ "version":"$vers",
+ "package_manager_version":"$PACKAGE_MANAGER_VERSION",
+ "build_url":"$BUILD_URL",
+ "root_build_cause":"$ROOT_BUILD_CAUSE",
+ "node_name":"$NODE_NAME",
+ "job_name":"$JOB_NAME"
+}
+EOF
+ chacra_repo_endpoint="${chacra_endpoint}/flavors/${FLAVOR}"
+    # post the repo-extra json to chacra
+ curl -X POST -H "Content-Type:application/json" --data "@$WORKSPACE/repo-extra.json" -u $CHACRACTL_USER:$CHACRACTL_KEY ${chacra_url}repos/${chacra_repo_endpoint}/extra/
+ # start repo creation
+ $VENV/chacractl repo update ${chacra_repo_endpoint}
+
+ echo Check the status of the repo at: https://shaman.ceph.com/api/repos/${chacra_endpoint}/flavors/${FLAVOR}/
+fi
--- /dev/null
+#!/bin/bash -ex
+
+# note: the failed_build_status call relies on normalized variable names that
+# are inferred by the builds themselves. If the build fails before these are
+# set, they will be posted with empty values.
+BRANCH=`branch_slash_filter $BRANCH`
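`branch_slash_filter` is defined in scripts/build_utils.sh and is not shown here. Purely as an illustration of the idea (an assumption, not the real implementation), making a branch name slash-safe for URLs and paths could look like:

```shell
# Illustrative only: replace every "/" in a branch name with "-" so the
# value can be embedded in chacra endpoints and file paths. The real
# helper lives in scripts/build_utils.sh and may behave differently.
demo_branch="wip/my-feature"
demo_filtered=${demo_branch//\//-}
echo "$demo_filtered"   # wip-my-feature
```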
+
+# update shaman with the failed build status
+failed_build_status "ceph" $NORMAL_DISTRO $NORMAL_DISTRO_VERSION $NORMAL_ARCH
--- /dev/null
+#!/bin/bash
+
+set -ex
+HOST=$(hostname --short)
+echo "Building on $(hostname)"
+echo " DIST=${DIST}"
+echo " BPTAG=${BPTAG}"
+echo " KEYID=${KEYID}"
+echo " WS=$WORKSPACE"
+echo " PWD=$(pwd)"
+echo " BUILD SOURCE=$COPYARTIFACT_BUILD_NUMBER_CEPH_SETUP"
+echo "*****"
+env
+echo "*****"
+
+if test $(id -u) != 0 ; then
+ SUDO=sudo
+fi
+export LC_ALL=C # the following is vulnerable to i18n
+
+$SUDO apt-get install -y lsb-release
+
+BRANCH=`branch_slash_filter $BRANCH`
+
+cd $WORKSPACE
+
+BPTAG=`get_bptag $DIST`
+
+chacra_ref="$BRANCH"
+vers=`cat ./dist/version`
+
+# We used to detect the $distro variable by inspecting the host, but this is
+# not accurate because we are using pbuilder and just Ubuntu to build
+# everything. That would cause POSTing binaries to incorrect chacra endpoints
+# like project/ref/ubuntu/jessie/.
+distro=""
+case $DIST in
+ jessie|wheezy)
+ distro="debian"
+ ;;
+ *)
+ distro="ubuntu"
+ ;;
+esac
+
+debian_version=${vers}-1
+
+bpvers=`gen_debian_version $debian_version $DIST`
+
+# Normalize variables across rpm/deb builds
+NORMAL_DISTRO=$distro
+NORMAL_DISTRO_VERSION=$DIST
+NORMAL_ARCH=$ARCH
+
+# create build status in shaman
+update_build_status "started" "ceph" $NORMAL_DISTRO $NORMAL_DISTRO_VERSION $NORMAL_ARCH
+
+pkgs=( "chacractl>=0.0.21" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+# ask shaman which chacra instance to use
+chacra_url=`curl -u $SHAMAN_API_USER:$SHAMAN_API_KEY https://shaman.ceph.com/api/nodes/next/`
+# create the .chacractl config file using global variables
+make_chacractl_config $chacra_url
+
+# look for a specific package to tell if we can avoid the build
+chacra_endpoint="ceph/${chacra_ref}/${SHA1}/${distro}/${DIST}/${ARCH}/flavors/${FLAVOR}"
+chacra_repo_endpoint="ceph/${chacra_ref}/${SHA1}/${distro}/${DIST}/flavors/${FLAVOR}"
+DEB_ARCH=`dpkg-architecture | grep DEB_BUILD_ARCH\= | cut -d '=' -f 2`
+chacra_check_url="${chacra_endpoint}/librados2_${bpvers}_${DEB_ARCH}.deb"
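The DEB_ARCH extraction above greps a single variable out of dpkg-architecture's KEY=value output. A sketch with a hypothetical two-line output (`demo_output` is made up for illustration):

```shell
# dpkg-architecture prints KEY=value pairs, one per line; select the
# DEB_BUILD_ARCH line and keep everything after the "=".
demo_output="DEB_BUILD_GNU_TYPE=x86_64-linux-gnu
DEB_BUILD_ARCH=amd64"
demo_arch=$(echo "$demo_output" | grep DEB_BUILD_ARCH= | cut -d '=' -f 2)
echo "$demo_arch"   # amd64
```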
+
+if [ "$THROWAWAY" = false ] ; then
+ # this exists in scripts/build_utils.sh
+ # TODO if this exits we need to post to shaman a success
+ check_binary_existence $VENV $chacra_check_url
+fi
--- /dev/null
+#!/bin/bash
+
+set -ex
+HOST=$(hostname --short)
+echo "Building on $(hostname)"
+echo " DIST=${DIST}"
+echo " BPTAG=${BPTAG}"
+echo " KEYID=${KEYID}"
+echo " WS=$WORKSPACE"
+echo " PWD=$(pwd)"
+echo " BUILD SOURCE=$COPYARTIFACT_BUILD_NUMBER_CEPH_SETUP"
+echo "*****"
+env
+echo "*****"
+
+if test $(id -u) != 0 ; then
+ SUDO=sudo
+fi
+export LC_ALL=C # the following is vulnerable to i18n
+
+BRANCH=`branch_slash_filter $BRANCH`
+
+cd ${WORKSPACE}/dist
+vers=$(cat version)
+raw_version=`echo $vers | cut -d '-' -f 1`
+RELEASE_BRANCH=$(release_from_version $raw_version)
+
+# unpack the tar.gz that contains the source
+tar xzf *.orig.tar.gz
+cd $(basename *.orig.tar.gz .orig.tar.gz | sed s/_/-/)
+pwd
+
+raw_version_major=$(echo $vers | cut -d '.' -f 1)
+if [ 0${raw_version_major} -lt 16 ]; then
+    echo "Ceph release $RELEASE_BRANCH does not support Windows"
+ exit 1
+fi
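The `0${raw_version_major}` prefix in the check above guards against an empty value: a bare "0" is still a valid integer operand for `-lt`, so the test cannot error out when the version string is malformed. A sketch with a hypothetical version string:

```shell
# cut on "." yields the major version; prefixing "0" keeps the numeric
# comparison valid even if the extracted field were empty.
demo_major=$(echo "16.2.9-45-gdeadbee" | cut -d '.' -f 1)
echo "$demo_major"   # 16
```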
+
+# Normalize variables across rpm/deb builds
+NORMAL_DISTRO=$DIST
+NORMAL_DISTRO_VERSION="1809"
+NORMAL_ARCH=$ARCH
+
+# create build status in shaman
+update_build_status "started" "ceph" $NORMAL_DISTRO $NORMAL_DISTRO_VERSION $NORMAL_ARCH
+
+pkgs=( "chacractl>=0.0.21" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+# ask shaman which chacra instance to use
+chacra_url=`curl -u $SHAMAN_API_USER:$SHAMAN_API_KEY https://shaman.ceph.com/api/nodes/next/`
+# create the .chacractl config file using global variables
+make_chacractl_config $chacra_url
+
+FLAVOR="default"
+
+chacra_ref="$BRANCH"
+chacra_endpoint="ceph/${chacra_ref}/${SHA1}/${DIST}/${NORMAL_DISTRO_VERSION}"
+chacra_binary_endpoint="${chacra_endpoint}/${ARCH}/flavors/${FLAVOR}"
+chacra_repo_endpoint="${chacra_endpoint}/flavors/${FLAVOR}"
+chacra_check_url="${chacra_binary_endpoint}/ceph.zip"
+
+if [ "$THROWAWAY" = false ] ; then
+ # this exists in scripts/build_utils.sh
+ # TODO if this exits we need to post to shaman a success
+ check_binary_existence $VENV $chacra_check_url
+fi
+
+# We need Ubuntu Jammy to cross-compile Ceph for Windows.
+# "DIST" will be set to "windows", so we're currently overriding it with
+# a hardcoded value.
+DIST="jammy"
+setup_pbuilder use_gcc
+DIST="$NORMAL_DISTRO"
--- /dev/null
+#!/bin/bash
+
+set -ex
+HOST=$(hostname --short)
+echo "Building on $(hostname)"
+echo " DIST=${DIST}"
+echo " BPTAG=${BPTAG}"
+echo " KEYID=${KEYID}"
+echo " WS=$WORKSPACE"
+echo " PWD=$(pwd)"
+echo " BUILD SOURCE=$COPYARTIFACT_BUILD_NUMBER_CEPH_SETUP"
+echo "*****"
+env
+echo "*****"
+
+#DIR=/tmp/install-deps.$$
+#trap "rm -fr $DIR" EXIT
+#mkdir -p $DIR
+if test $(id -u) != 0 ; then
+ SUDO=sudo
+fi
+export LC_ALL=C # the following is vulnerable to i18n
+
+cd dist
+ORIGTAR=(*.orig.tar.gz)
+ORIGDIR=${ORIGTAR%.orig.tar.gz}
+ORIGDIR=${ORIGDIR//_/-}
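The two parameter expansions above can be sketched on a hypothetical tarball name (`demo_*` names are illustrative): `%.orig.tar.gz` strips that suffix, and `//_/-` replaces every underscore with a dash.

```shell
# Suffix removal with %pattern, then global substitution with //pat/repl.
demo_tar="ceph_17.2.6.orig.tar.gz"
demo_dir=${demo_tar%.orig.tar.gz}
demo_dir=${demo_dir//_/-}
echo "$demo_dir"   # ceph-17.2.6
```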
+tar xzf $ORIGTAR
+cd $ORIGDIR
+pwd
+
+BRANCH=`branch_slash_filter $BRANCH`
+
+cd $WORKSPACE
+
+vers=$(cat ./dist/version)
+raw_version=`echo $vers | cut -d '-' -f 1`
+
+RELEASE_BRANCH=$(release_from_version $raw_version)
+case $RELEASE_BRANCH in
+tentacle|squid|reef)
+    DISTRO=opensuse
+    RELEASE="15.3"
+    ;;
+*)
+    echo "Release '$RELEASE_BRANCH' is not supported by openSUSE"
+    exit 1
+    ;;
+esac
+
+DIST=leap${RELEASE%%.*}
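The `%%.*` expansion above removes the longest match of `.*` from the end of the value, leaving only the major version, so a release like "15.3" yields a DIST of "leap15". A minimal sketch with an illustrative variable:

```shell
# %%pattern strips the longest trailing match, here everything from the
# first dot onward, leaving the openSUSE Leap major version.
demo_release="15.3"
demo_dist="leap${demo_release%%.*}"
echo "$demo_dist"   # leap15
```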
+
+NORMAL_DISTRO=$DISTRO
+NORMAL_DISTRO_VERSION=$RELEASE
+NORMAL_ARCH=$ARCH
+
+# create build status in shaman
+update_build_status "started" "ceph" $NORMAL_DISTRO $NORMAL_DISTRO_VERSION $NORMAL_ARCH
+
+pkgs=( "chacractl>=0.0.21" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+# ask shaman which chacra instance to use
+chacra_url=`curl -u $SHAMAN_API_USER:$SHAMAN_API_KEY https://shaman.ceph.com/api/nodes/next/`
+# create the .chacractl config file using global variables
+make_chacractl_config $chacra_url
+
+dist=$DIST
+[ -z "$dist" ] && echo no dist && exit 1
+echo dist $dist
+
+chacra_ref="$BRANCH"
+chacra_endpoint="ceph/${chacra_ref}/${SHA1}/${DISTRO}/${RELEASE}"
+chacra_check_url="${chacra_endpoint}/${ARCH}/flavors/${FLAVOR}/librados2-${vers}-0.${DIST}.${ARCH}.rpm"
+
+
+if [ "$THROWAWAY" = false ] ; then
+ # this exists in scripts/build_utils.sh
+ # TODO if this exits we need to post to shaman a success
+ check_binary_existence $VENV $chacra_check_url
+fi
--- /dev/null
+#!/bin/sh -x
+# This script sets up the base tgz images needed for pbuilder on a given host.
+# It has some hard-coded values like `/srv/debian-base` because the image gets
+# rebuilt every time this script runs - completely ephemeral. Any Debian host
+# that will use pbuilder needs this. Since it is not idempotent, it makes
+# everything a bit slower. ## FIXME ##
+
+set -e
+
+# Only run when we are a Debian or Debian-based distro
+if test -f /etc/redhat-release ; then
+ exit 0
+fi
+
+setup_pbuilder use_gcc
--- /dev/null
+#!/bin/bash
+
+set -ex
+HOST=$(hostname --short)
+echo "Building on $(hostname)"
+echo " DIST=${DIST}"
+echo " BPTAG=${BPTAG}"
+echo " KEYID=${KEYID}"
+echo " WS=$WORKSPACE"
+echo " PWD=$(pwd)"
+echo " BUILD SOURCE=$COPYARTIFACT_BUILD_NUMBER_CEPH_SETUP"
+echo "*****"
+env
+echo "*****"
+
+if test $(id -u) != 0 ; then
+ SUDO=sudo
+fi
+export LC_ALL=C # the following is vulnerable to i18n
+
+$SUDO yum install -y yum-utils
+
+get_rpm_dist
+
+BRANCH=`branch_slash_filter $BRANCH`
+
+if [[ ! -f /etc/redhat-release && ! -f /usr/bin/zypper ]] ; then
+ exit 0
+fi
+
+# Normalize variables across rpm/deb builds
+NORMAL_DISTRO=$DISTRO
+NORMAL_DISTRO_VERSION=$RELEASE
+NORMAL_ARCH=$ARCH
+
+# create build status in shaman
+update_build_status "started" "ceph" $NORMAL_DISTRO $NORMAL_DISTRO_VERSION $NORMAL_ARCH
+
+# unpack the tar.gz that contains the debian dir
+cd dist
+tar xzf *.orig.tar.gz
+cd $(basename *.orig.tar.gz .orig.tar.gz | sed s/_/-/)
+pwd
+
+setup_rpm_build_deps
+
+if [[ $CI_CONTAINER == "true" && $DISTRO == "centos" && "$RELEASE" =~ 8|9 ]] ;
+then
+ podman login -u $CONTAINER_REPO_USERNAME -p $CONTAINER_REPO_PASSWORD $CONTAINER_REPO_HOSTNAME/$CONTAINER_REPO_ORGANIZATION
+fi
+
+cd $WORKSPACE
+
+pkgs=( "chacractl>=0.0.21" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+# ask shaman which chacra instance to use
+chacra_url=`curl -u $SHAMAN_API_USER:$SHAMAN_API_KEY https://shaman.ceph.com/api/nodes/next/`
+# create the .chacractl config file using global variables
+make_chacractl_config $chacra_url
+
+dist=$DIST
+[ -z "$dist" ] && echo no dist && exit 1
+echo dist $dist
+
+vers=`cat ./dist/version`
+chacra_ref="$BRANCH"
+
+chacra_endpoint="ceph/${chacra_ref}/${SHA1}/${DISTRO}/${RELEASE}"
+chacra_check_url="${chacra_endpoint}/${ARCH}/flavors/${FLAVOR}/librados2-${vers}-0.${DIST}.${ARCH}.rpm"
+
+
+if [ "$THROWAWAY" = false ] ; then
+ # this exists in scripts/build_utils.sh
+ # TODO if this exits we need to post to shaman a success
+ check_binary_existence $VENV $chacra_check_url
+fi
--- /dev/null
+#!/bin/bash
+set -ex
+
+# Only do actual work when we are a DEB distro
+( source /etc/os-release
+ case $ID in
+ debian|ubuntu)
+ exit 0
+ ;;
+ *)
+ exit 1
+ ;;
+ esac) || exit 0
+
+if [ "${DIST}" == "windows" ]; then
+ exit 0
+fi
--- /dev/null
+#!/bin/bash
+set -ex
+
+# We're currently using pbuilder.
+( source /etc/os-release
+ case $ID in
+ ubuntu)
+ exit 0
+ ;;
+ *)
+ exit 1
+ ;;
+ esac) || exit 0
+
+if [ "${DIST}" != "windows" ]; then
+ exit 0
+fi
--- /dev/null
+#!/bin/bash
+set -ex
+
+# only do work if we are a SUSE distro
+( source /etc/os-release
+ case $ID in
+ opensuse*|suse|sles)
+ exit 0
+ ;;
+ *)
+ exit 1
+ ;;
+ esac) || exit 0
+
+if [ "${DIST}" == "windows" ]; then
+ exit 0
+fi
--- /dev/null
+#!/bin/bash
+set -ex
+
+# only do work if we are a RPM distro
+( source /etc/os-release
+ case $ID in
+ centos|rhel|fedora)
+ exit 0
+ ;;
+ *)
+ exit 1
+ ;;
+ esac) || exit 0
+
+if [ "${DIST}" == "windows" ]; then
+ exit 0
+fi
--- /dev/null
+- job:
+ name: ceph-dev-build
+ node: built-in
+ project-type: matrix
+ defaults: global
+ display-name: 'ceph-dev-build'
+ block-downstream: false
+ block-upstream: false
+ concurrent: true
+ properties:
+ - github:
+ url: https://github.com/ceph/ceph
+ - build-discarder:
+ days-to-keep: 14
+ artifact-days-to-keep: 14
+
+ execution-strategy:
+ combination-filter: |
+ DIST == AVAILABLE_DIST && ARCH == AVAILABLE_ARCH &&
+ (ARCH == "x86_64" || (ARCH == "arm64" && ["xenial", "bionic", "centos7", "centos8", "centos9"].contains(DIST)))
+ axes:
+ - axis:
+ type: label-expression
+ name: MACHINE_SIZE
+ values:
+ - gigantic
+ - axis:
+ type: label-expression
+ name: AVAILABLE_ARCH
+ values:
+ - x86_64
+ - arm64
+ - axis:
+ type: label-expression
+ name: AVAILABLE_DIST
+ values:
+ - trusty
+ - xenial
+ - bionic
+ - focal
+ - jammy
+ - noble
+ - centos7
+ - centos8
+ - centos9
+ - jessie
+ - precise
+ - centos6
+ - leap15
+ - windows
+ - axis:
+ type: dynamic
+ name: DIST
+ values:
+ - DISTROS
+ - axis:
+ type: dynamic
+ name: ARCH
+ values:
+ - ARCHS
+
+
+
+ builders:
+ - shell: |
+ echo "Cleaning up top-level workarea (shared among workspaces)"
+ rm -rf dist
+ rm -rf venv
+ rm -rf release
+ - copyartifact:
+ project: ceph-dev-setup
+ filter: 'dist/**'
+ which-build: multijob-build
+ - inject:
+ properties-file: ${{WORKSPACE}}/dist/sha1
+ - inject:
+ properties-file: ${{WORKSPACE}}/dist/branch
+ - inject:
+ properties-file: ${{WORKSPACE}}/dist/other_envvars
+ # debian build scripts
+ - shell:
+ !include-raw-verbatim:
+ - ../../build/validate_deb
+ - ../../../scripts/build_utils.sh
+ - ../../build/setup_deb
+ - ../../build/setup_pbuilder
+ - ../../build/build_deb
+ - ../../../scripts/status_completed
+ # rpm build scripts
+ - shell:
+ !include-raw-verbatim:
+ - ../../build/validate_rpm
+ - ../../../scripts/build_utils.sh
+ - ../../build/setup_rpm
+ - ../../build/build_rpm
+ - ../../../scripts/build_container
+ - ../../../scripts/status_completed
+ # osc build scripts
+ - shell:
+ !include-raw-verbatim:
+ - ../../build/validate_osc
+ - ../../../scripts/build_utils.sh
+ - ../../build/setup_osc
+ - ../../build/build_osc
+ - ../../../scripts/status_completed
+ # mingw build scripts (targeting Windows)
+ - shell:
+ !include-raw-verbatim:
+ - ../../build/validate_mingw
+ - ../../../scripts/build_utils.sh
+ - ../../build/setup_mingw
+ - ../../build/build_mingw
+ - ../../../scripts/status_completed
+
+ publishers:
+ - postbuildscript:
+ builders:
+ - role: SLAVE
+ build-on:
+ - FAILURE
+ - ABORTED
+ build-steps:
+ - inject:
+ properties-file: ${{WORKSPACE}}/build_info
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/failure
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - text:
+ credential-id: chacractl-key
+ variable: CHACRACTL_KEY
+ - text:
+ credential-id: shaman-api-key
+ variable: SHAMAN_API_KEY
+ - username-password-separated:
+ credential-id: quay-ceph-io-ceph-ci
+ username: CONTAINER_REPO_USERNAME
+ password: CONTAINER_REPO_PASSWORD
+ - username-password-separated:
+ credential-id: dgalloway-docker-hub
+ username: DOCKER_HUB_USERNAME
+ password: DOCKER_HUB_PASSWORD
+ - build-name:
+ name: "#${{BUILD_NUMBER}} ${{BRANCH}}, ${{SHA1}}, ${{DISTROS}}, ${{FLAVOR}}"
--- /dev/null
+#!/bin/bash -ex
+
+# update shaman with the triggered build status. At this point there is no
+# architecture or distro information, so we just report this with the current
+# build information.
+BRANCH=`branch_slash_filter ${GIT_BRANCH}`
+SHA1=${GIT_COMMIT}
+
+update_build_status "queued" "ceph"
--- /dev/null
+- job:
+ name: 'ceph-dev-cron'
+ node: built-in
+ project-type: freestyle
+ defaults: global
+ concurrent: true
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ properties:
+ - build-discarder:
+ days-to-keep: -1
+ num-to-keep: 20
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ - github:
+ url: https://github.com/ceph/ceph
+ discard-old-builds: true
+
+ triggers:
+ - pollscm:
+ cron: |
+ TZ=Etc/UTC
+ H 14 * * *
+ H 20 * * *
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph
+ browser: auto
+ branches:
+ - origin/main
+ - origin/tentacle
+ - origin/squid
+ - origin/reef
+ skip-tag: true
+ timeout: 20
+ wipe-workspace: true
+
+ builders:
+ # build reef on:
+ # default: jammy focal centos9 windows
+ - conditional-step:
+ condition-kind: regex-match
+ regex: .*reef.*
+ label: '${{GIT_BRANCH}}'
+ on-evaluation-failure: dont-run
+ steps:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/notify
+ - trigger-builds:
+ - project: 'ceph-dev'
+ predefined-parameters: |
+ BRANCH=${{GIT_BRANCH}}
+ FORCE=True
+ DISTROS=jammy focal centos9 windows
+ # build squid on:
+ # default: noble jammy centos9 windows
+ - conditional-step:
+ condition-kind: regex-match
+ regex: .*squid.*
+ label: '${{GIT_BRANCH}}'
+ on-evaluation-failure: dont-run
+ steps:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/notify
+ - trigger-builds:
+ - project: 'ceph-dev'
+ predefined-parameters: |
+ BRANCH=${{GIT_BRANCH}}
+ FORCE=True
+ DISTROS=noble jammy centos9 windows
+ # build tentacle on:
+ # default: noble jammy centos9 windows
+ # crimson: centos9
+ - conditional-step:
+ condition-kind: regex-match
+ regex: .*tentacle.*
+ label: '${{GIT_BRANCH}}'
+ on-evaluation-failure: dont-run
+ steps:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/notify
+ - trigger-builds:
+ - project: 'ceph-dev'
+ predefined-parameters: |
+ BRANCH=${{GIT_BRANCH}}
+ FORCE=True
+ DISTROS=noble jammy centos9 windows
+ - project: 'ceph-dev'
+ predefined-parameters: |
+ BRANCH=${{GIT_BRANCH}}
+ FORCE=True
+ DISTROS=centos9
+ FLAVOR=crimson-debug
+ ARCHS=x86_64
+ # build main on:
+ # default: noble jammy centos9 windows
+ # crimson-debug: centos9
+ # crimson-release: centos9
+ - conditional-step:
+ condition-kind: regex-match
+ regex: .*main.*
+ label: '${{GIT_BRANCH}}'
+ on-evaluation-failure: dont-run
+ steps:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/notify
+ - trigger-builds:
+ - project: 'ceph-dev'
+ predefined-parameters: |
+ BRANCH=${{GIT_BRANCH}}
+ FORCE=True
+ DISTROS=noble jammy centos9 windows
+ - project: 'ceph-dev'
+ predefined-parameters: |
+ BRANCH=${{GIT_BRANCH}}
+ FORCE=True
+ DISTROS=centos9
+ FLAVOR=crimson-debug
+ ARCHS=x86_64
+ - project: 'ceph-dev'
+ predefined-parameters: |
+ BRANCH=${{GIT_BRANCH}}
+ FORCE=True
+ DISTROS=centos9
+ FLAVOR=crimson-release
+ ARCHS=x86_64
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - text:
+ credential-id: shaman-api-key
+ variable: SHAMAN_API_KEY
--- /dev/null
+#!/bin/bash
+set -ex
+
+build_debs ${VENV} ${vers} ${debian_version}
--- /dev/null
+#!/bin/bash
+set -ex
+
+# We need Ubuntu Jammy to cross-compile Ceph for Windows.
+# "DIST" will be set to "windows", so we're currently overriding it with
+# a hardcoded value.
+tmp_pbuild_script=$(mktemp /tmp/build_mingw_pbuild.XXXXXX)
+cat << EOF > $tmp_pbuild_script
+#!/bin/sh
+# Used by the build script
+apt-get install -y sudo git automake wget
+
+cd /mnt/ceph
+CMAKE_BUILD_TYPE=Release BUILD_ZIP=1 CLEAN_BUILD=1 timeout 3h ./win32_build.sh
+EOF
+chmod a+x $tmp_pbuild_script
+sudo pbuilder execute \
+ --bindmounts "$(pwd):/mnt/ceph" \
+ --distribution "jammy" \
+ --basetgz $basedir/jammy.tgz \
+ -- $tmp_pbuild_script
+rm $tmp_pbuild_script
+
+if [ "$THROWAWAY" = false ]; then
+ # push binaries to chacra
+    chacra_binary="$VENV/chacractl binary"
+    [ "$FORCE" = true ] && chacra_binary="$chacra_binary --force"
+    chacra_create="$chacra_binary create"
+
+ find build -name "*.zip" |
+ $chacra_create ${chacra_binary_endpoint}
+
+ # write json file with build info
+ cat > $WORKSPACE/repo-extra.json << EOF
+{
+ "version":"$vers",
+ "package_manager_version":"",
+ "build_url":"$BUILD_URL",
+ "root_build_cause":"$ROOT_BUILD_CAUSE",
+ "node_name":"$NODE_NAME",
+ "job_name":"$JOB_NAME"
+}
+EOF
+    # post the repo-extra json to chacra
+ curl -X POST \
+ -H "Content-Type:application/json" \
+ --data "@$WORKSPACE/repo-extra.json" \
+ -u $CHACRACTL_USER:$CHACRACTL_KEY \
+ ${chacra_url}repos/${chacra_repo_endpoint}/extra/
+ # start repo creation
+ $VENV/chacractl repo update ${chacra_repo_endpoint}
+
+ echo Check the status of the repo at: https://shaman.ceph.com/api/repos/${chacra_repo_endpoint}/
+fi
+
+# pbuilder will leave root-owned files in shared workspaces
+sudo chown -R jenkins-build ${WORKSPACE}/dist
--- /dev/null
+../../ceph-dev-build/build/build_osc
\ No newline at end of file
--- /dev/null
+#!/bin/bash
+# vim: ts=4 sw=4 expandtab
+
+# PS4 is expanded as the prefix for -x traces
+PS4="\$(date) :: "
+set -ex
+
+
+# set to "true" or "false" so that both string comparisons
+# and 'if $CI_CONTAINER' work as expected. Conventions vary across the
+# set of shell scripts and repos involved.
+CI_CONTAINER=${CI_CONTAINER:-false}
+
+maybe_reset_ci_container
+
+# create a release directory for ceph-build tools
+mkdir -p release
+cp -a dist release/${vers}
+
+echo "Building RPMs"
+
+# The contents below were ported from /srv/ceph-build/build_rpms.sh ::
+# $bindir/build_rpms.sh ./release $vers
+#
+
+releasedir="./release"
+cephver=$vers
+raw_version=`echo $vers | cut -d '-' -f 1`
+
+cd $releasedir/$cephver || exit 1
+
+# modify the spec file so that it understands we are dealing with a different directory
+sed -i "s/^%setup.*/%setup -q -n %{name}-$vers/" ceph.spec
+# it is entirely possible that `%setup` is not even used, but rather, autosetup
+sed -i "s/^%autosetup.*/%autosetup -p1 -n %{name}-$vers/" ceph.spec
+# This is a fallback to the spec rules that may have altered sections that want
+# to force a non-sha1 naming. This is only needed in development binary
+# building.
+sed -i "s/%{name}-%{version}/ceph-$vers/" ceph.spec
+
+# This is needed because the 'version' this job gets from upstream contains chars
+# that are not legal for an RPM file. These are already converted in the spec file, which
+# is what is consumed to create the RPM binary. Parse those values here so that they can
+# be reported as part of the build metadata.
+RPM_RELEASE=`grep Release ceph.spec | sed 's/Release:[ \t]*//g' | cut -d '%' -f 1`
+RPM_VERSION=`grep Version ceph.spec | sed 's/Version:[ \t]*//g'`
+PACKAGE_MANAGER_VERSION="$RPM_VERSION-$RPM_RELEASE"
+
+BUILDAREA=$(setup_rpm_build_area ./rpm/$dist)
+build_rpms ${BUILDAREA} "${CEPH_EXTRA_RPMBUILD_ARGS}"
+[ "$SCCACHE" = true ] && sccache -s
+build_ceph_release_rpm ${BUILDAREA} true
+
+# Make sure we execute at the top level directory
+cd "$WORKSPACE"
+
+[ "$FORCE" = true ] && chacra_flags="--force" || chacra_flags=""
+
+if [ "$THROWAWAY" = false ] ; then
+ # push binaries to chacra
+ find release/${vers}/rpm/*/SRPMS | grep rpm | $VENV/chacractl binary ${chacra_flags} create ${chacra_endpoint}/source/flavors/${FLAVOR}
+ find release/${vers}/rpm/*/RPMS/* | grep rpm | $VENV/chacractl binary ${chacra_flags} create ${chacra_endpoint}/${ARCH}/flavors/${FLAVOR}
+ # extract cephadm if it exists
+ if [ -f ${BUILDAREA}/RPMS/noarch/cephadm-*.rpm ] ; then
+ rpm2cpio ${BUILDAREA}/RPMS/noarch/cephadm-*.rpm | cpio -i --to-stdout *sbin/cephadm > cephadm
+ echo cephadm | $VENV/chacractl binary ${chacra_flags} create ${chacra_endpoint}/${ARCH}/flavors/${FLAVOR}
+ fi
+ # write json file with build info
+ cat > $WORKSPACE/repo-extra.json << EOF
+{
+ "version":"$vers",
+ "package_manager_version":"$PACKAGE_MANAGER_VERSION",
+ "build_url":"$BUILD_URL",
+ "root_build_cause":"$ROOT_BUILD_CAUSE",
+ "node_name":"$NODE_NAME",
+ "job_name":"$JOB_NAME"
+}
+EOF
+ chacra_repo_endpoint="${chacra_endpoint}/flavors/${FLAVOR}"
+    # post the repo-extra json to chacra
+ curl -X POST -H "Content-Type:application/json" --data "@$WORKSPACE/repo-extra.json" -u $CHACRACTL_USER:$CHACRACTL_KEY ${chacra_url}repos/${chacra_repo_endpoint}/extra/
+ # start repo creation
+ $VENV/chacractl repo update ${chacra_repo_endpoint}
+
+ echo Check the status of the repo at: https://shaman.ceph.com/api/repos/${chacra_endpoint}/flavors/${FLAVOR}/
+fi
--- /dev/null
+#!/bin/bash -ex
+
+# note: the failed_build_status call relies on normalized variable names that
+# are inferred by the builds themselves. If the build fails before these are
+# set, they will be posted with empty values.
+BRANCH=`branch_slash_filter $BRANCH`
+
+# update shaman with the failed build status
+failed_build_status "ceph" $NORMAL_DISTRO $NORMAL_DISTRO_VERSION $NORMAL_ARCH
--- /dev/null
+#!/bin/bash
+
+set -ex
+HOST=$(hostname --short)
+echo "Building on $(hostname)"
+echo " DIST=${DIST}"
+echo " BPTAG=${BPTAG}"
+echo " KEYID=${KEYID}"
+echo " WS=$WORKSPACE"
+echo " PWD=$(pwd)"
+echo " BUILD SOURCE=$COPYARTIFACT_BUILD_NUMBER_CEPH_SETUP"
+echo "*****"
+env
+echo "*****"
+
+if test $(id -u) != 0 ; then
+ SUDO=sudo
+fi
+export LC_ALL=C # the following is vulnerable to i18n
+
+$SUDO apt-get install -y lsb-release
+
+BRANCH=`branch_slash_filter $BRANCH`
+
+cd $WORKSPACE
+
+BPTAG=`get_bptag $DIST`
+
+chacra_ref="$BRANCH"
+vers=`cat ./dist/version`
+
+# We used to detect the $distro variable by inspecting the host, but this is
+# not accurate because we are using pbuilder and just Ubuntu to build
+# everything. That would cause POSTing binaries to incorrect chacra endpoints
+# like project/ref/ubuntu/jessie/.
+distro=""
+case $DIST in
+ jessie|wheezy)
+ distro="debian"
+ ;;
+ *)
+ distro="ubuntu"
+ ;;
+esac
+
+debian_version=${vers}-1
+
+bpvers=`gen_debian_version $debian_version $DIST`
+
+# Normalize variables across rpm/deb builds
+NORMAL_DISTRO=$distro
+NORMAL_DISTRO_VERSION=$DIST
+NORMAL_ARCH=$ARCH
+
+# create build status in shaman
+update_build_status "started" "ceph" $NORMAL_DISTRO $NORMAL_DISTRO_VERSION $NORMAL_ARCH
+
+pkgs=( "chacractl>=0.0.21" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+# ask shaman which chacra instance to use
+chacra_url=`curl -u $SHAMAN_API_USER:$SHAMAN_API_KEY https://shaman.ceph.com/api/nodes/next/`
+# create the .chacractl config file using global variables
+make_chacractl_config $chacra_url
+
+# look for a specific package to tell if we can avoid the build
+chacra_endpoint="ceph/${chacra_ref}/${SHA1}/${distro}/${DIST}/${ARCH}/flavors/${FLAVOR}"
+chacra_repo_endpoint="ceph/${chacra_ref}/${SHA1}/${distro}/${DIST}/flavors/${FLAVOR}"
+DEB_ARCH=`dpkg-architecture | grep DEB_BUILD_ARCH\= | cut -d '=' -f 2`
+chacra_check_url="${chacra_endpoint}/librados2_${bpvers}_${DEB_ARCH}.deb"
+
+if [ "$THROWAWAY" = false ] ; then
+ # this exists in scripts/build_utils.sh
+ # TODO if this exits we need to post to shaman a success
+ check_binary_existence $VENV $chacra_check_url
+fi
--- /dev/null
+#!/bin/bash
+
+set -ex
+HOST=$(hostname --short)
+echo "Building on $(hostname)"
+echo " DIST=${DIST}"
+echo " BPTAG=${BPTAG}"
+echo " KEYID=${KEYID}"
+echo " WS=$WORKSPACE"
+echo " PWD=$(pwd)"
+echo " BUILD SOURCE=$COPYARTIFACT_BUILD_NUMBER_CEPH_SETUP"
+echo "*****"
+env
+echo "*****"
+
+if test $(id -u) != 0 ; then
+ SUDO=sudo
+fi
+export LC_ALL=C # the following is vulnerable to i18n
+
+BRANCH=`branch_slash_filter $BRANCH`
+
+cd ${WORKSPACE}/dist
+vers=$(cat version)
+raw_version=`echo $vers | cut -d '-' -f 1`
+RELEASE_BRANCH=$(release_from_version $raw_version)
+
+# unpack the tar.gz that contains the source
+tar xzf *.orig.tar.gz
+cd $(basename *.orig.tar.gz .orig.tar.gz | sed s/_/-/)
+pwd
+
+raw_version_major=$(echo $vers | cut -d '.' -f 1)
+if [ 0${raw_version_major} -lt 16 ]; then
+    echo "Ceph release $RELEASE_BRANCH does not support Windows"
+ exit 1
+fi
+
+# Normalize variables across rpm/deb builds
+NORMAL_DISTRO=$DIST
+NORMAL_DISTRO_VERSION="1809"
+NORMAL_ARCH=$ARCH
+
+# create build status in shaman
+update_build_status "started" "ceph" $NORMAL_DISTRO $NORMAL_DISTRO_VERSION $NORMAL_ARCH
+
+pkgs=( "chacractl>=0.0.21" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+# ask shaman which chacra instance to use
+chacra_url=`curl -u $SHAMAN_API_USER:$SHAMAN_API_KEY https://shaman.ceph.com/api/nodes/next/`
+# create the .chacractl config file using global variables
+make_chacractl_config $chacra_url
+
+FLAVOR="default"
+
+chacra_ref="$BRANCH"
+chacra_endpoint="ceph/${chacra_ref}/${SHA1}/${DIST}/${NORMAL_DISTRO_VERSION}"
+chacra_binary_endpoint="${chacra_endpoint}/${ARCH}/flavors/${FLAVOR}"
+chacra_repo_endpoint="${chacra_endpoint}/flavors/${FLAVOR}"
+chacra_check_url="${chacra_binary_endpoint}/ceph.zip"
+
+if [ "$THROWAWAY" = false ] ; then
+ # this exists in scripts/build_utils.sh
+ # TODO if this exits we need to post to shaman a success
+ check_binary_existence $VENV $chacra_check_url
+fi
+
+# We need Ubuntu Jammy to cross-compile Ceph for Windows.
+# "DIST" will be set to "windows", so we're currently overriding it with
+# a hardcoded value.
+DIST="jammy"
+setup_pbuilder use_gcc
+DIST="$NORMAL_DISTRO"
--- /dev/null
+../../ceph-dev-build/build/setup_osc
\ No newline at end of file
--- /dev/null
+#!/bin/sh -x
+# This script sets up the base tgz images needed for pbuilder on a given host.
+# It has some hard-coded values like `/srv/debian-base` because the image gets
+# rebuilt every time this script runs - completely ephemeral. Any Debian host
+# that will use pbuilder needs this. Since it is not idempotent, it makes
+# everything a bit slower. ## FIXME ##
+
+set -e
+
+# Only run when we are a Debian or Debian-based distro
+if test -f /etc/redhat-release ; then
+ exit 0
+fi
+
+setup_pbuilder use_gcc
+
+if [ "$SCCACHE" = true ] ; then
+ setup_pbuilderrc
+ setup_sccache_pbuilder_hook
+fi
--- /dev/null
+#!/bin/bash
+
+set -ex
+HOST=$(hostname --short)
+echo "Building on $(hostname)"
+echo " DIST=${DIST}"
+echo " BPTAG=${BPTAG}"
+echo " KEYID=${KEYID}"
+echo " WS=$WORKSPACE"
+echo " PWD=$(pwd)"
+echo " BUILD SOURCE=$COPYARTIFACT_BUILD_NUMBER_CEPH_SETUP"
+echo "*****"
+env
+echo "*****"
+
+if test $(id -u) != 0 ; then
+ SUDO=sudo
+fi
+export LC_ALL=C # the following is vulnerable to i18n
+
+$SUDO yum install -y yum-utils
+
+get_rpm_dist
+
+BRANCH=`branch_slash_filter $BRANCH`
+
+if [[ ! -f /etc/redhat-release && ! -f /usr/bin/zypper ]] ; then
+ exit 0
+fi
+
+# Normalize variables across rpm/deb builds
+NORMAL_DISTRO=$DISTRO
+NORMAL_DISTRO_VERSION=$RELEASE
+NORMAL_ARCH=$ARCH
+
+# create build status in shaman
+update_build_status "started" "ceph" $NORMAL_DISTRO $NORMAL_DISTRO_VERSION $NORMAL_ARCH
+
+# unpack the tar.gz that contains the debian dir
+cd dist
+tar xzf *.orig.tar.gz
+cd $(basename *.orig.tar.gz .orig.tar.gz | sed s/_/-/)
+pwd
+
+setup_rpm_build_deps
+
+cd $WORKSPACE
+
+pkgs=( "chacractl>=0.0.21" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+# ask shaman which chacra instance to use
+chacra_url=`curl -u $SHAMAN_API_USER:$SHAMAN_API_KEY https://shaman.ceph.com/api/nodes/next/`
+# create the .chacractl config file using global variables
+make_chacractl_config $chacra_url
+
+dist=$DIST
+[ -z "$dist" ] && echo no dist && exit 1
+echo dist $dist
+
+vers=`cat ./dist/version`
+chacra_ref="$BRANCH"
+
+chacra_endpoint="ceph/${chacra_ref}/${SHA1}/${DISTRO}/${RELEASE}"
+chacra_check_url="${chacra_endpoint}/${ARCH}/flavors/${FLAVOR}/librados2-${vers}-0.${DIST}.${ARCH}.rpm"
+
+
+if [ "$THROWAWAY" = false ] ; then
+ # this exists in scripts/build_utils.sh
+ # TODO if this exits we need to post to shaman a success
+ check_binary_existence $VENV $chacra_check_url
+fi
+
+if [ "$SCCACHE" = true ] ; then
+ write_sccache_conf
+ write_aws_credentials
+ install_sccache
+fi
--- /dev/null
+../../ceph-dev-build/build/validate_deb
\ No newline at end of file
--- /dev/null
+#!/bin/bash
+set -ex
+
+# We're currently using pbuilder.
+( source /etc/os-release
+ case $ID in
+ ubuntu)
+ exit 0
+ ;;
+ *)
+ exit 1
+ ;;
+ esac) || exit 0
+
+if [ "${DIST}" != "windows" ]; then
+ exit 0
+fi
--- /dev/null
+../../ceph-dev-build/build/validate_osc
\ No newline at end of file
--- /dev/null
+../../ceph-dev-build/build/validate_rpm
\ No newline at end of file
--- /dev/null
+- job:
+ name: ceph-dev-new-build
+ node: built-in
+ project-type: matrix
+ defaults: global
+ display-name: 'ceph-dev-new-build'
+ block-downstream: false
+ block-upstream: false
+ concurrent: true
+ properties:
+ - github:
+ url: https://github.com/ceph/ceph-ci
+ - build-discarder:
+ days-to-keep: 90
+ artifact-days-to-keep: 90
+
+ parameters:
+ - copyartifact-build-selector:
+ name: SETUP_BUILD_ID
+ which-build: multijob-build
+
+ execution-strategy:
+ combination-filter: |
+ DIST == AVAILABLE_DIST && ARCH == AVAILABLE_ARCH &&
+ (ARCH == "x86_64" || (ARCH == "arm64" && ["xenial", "bionic", "centos7", "centos8", "centos9"].contains(DIST)))
+ axes:
+ - axis:
+ type: label-expression
+ name: MACHINE_SIZE
+ values:
+ - gigantic
+ - axis:
+ type: label-expression
+ name: AVAILABLE_ARCH
+ values:
+ - x86_64
+ - arm64
+ - axis:
+ type: label-expression
+ name: AVAILABLE_DIST
+ values:
+ - trusty
+ - xenial
+ - bionic
+ - focal
+ - jammy
+ - noble
+ - jessie
+ - precise
+ - centos6
+ - centos7
+ - centos8
+ - centos9
+ - leap15
+ - windows
+ - axis:
+ type: dynamic
+ name: DIST
+ values:
+ - DISTROS
+ - axis:
+ type: dynamic
+ name: ARCH
+ values:
+ - ARCHS
+
+ builders:
+ - shell: |
+ echo "Cleaning up top-level workarea (shared among workspaces)"
+ ls -la
+ sudo rm -rf dist
+ rm -rf venv
+ sudo rm -rf release
+ - shell: !include-raw-verbatim:
+ - ../../../scripts/setup_container_runtime.sh
+ - copyartifact:
+ project: ceph-dev-new-setup
+ filter: 'dist/**'
+ which-build: build-param
+ param: SETUP_BUILD_ID
+ - inject:
+ properties-file: ${{WORKSPACE}}/dist/sha1
+ - inject:
+ properties-file: ${{WORKSPACE}}/dist/branch
+ - inject:
+ properties-file: ${{WORKSPACE}}/dist/other_envvars
+ # debian build scripts
+ - shell:
+ !include-raw-verbatim:
+ - ../../build/validate_deb
+ - ../../../scripts/build_utils.sh
+ - ../../build/setup_deb
+ - ../../build/setup_pbuilder
+ - ../../build/build_deb
+ - ../../../scripts/status_completed
+ # rpm build scripts
+ - shell:
+ !include-raw-verbatim:
+ - ../../build/validate_rpm
+ - ../../../scripts/build_utils.sh
+ - ../../../scripts/setup_sccache.sh
+ - ../../build/setup_rpm
+ - ../../build/build_rpm
+ - ../../../scripts/build_container
+ - ../../../scripts/status_completed
+ # osc build scripts
+ - shell:
+ !include-raw-verbatim:
+ - ../../build/validate_osc
+ - ../../../scripts/build_utils.sh
+ - ../../build/setup_osc
+ - ../../build/build_osc
+ - ../../../scripts/status_completed
+ # mingw build scripts (targeting Windows)
+ - shell:
+ !include-raw-verbatim:
+ - ../../build/validate_mingw
+ - ../../../scripts/build_utils.sh
+ - ../../build/setup_mingw
+ - ../../build/build_mingw
+ - ../../../scripts/status_completed
+
+ publishers:
+ - postbuildscript:
+ builders:
+ - role: SLAVE
+ build-on:
+ - FAILURE
+ - ABORTED
+ build-steps:
+ - inject:
+ properties-file: ${{WORKSPACE}}/build_info
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/failure
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - text:
+ credential-id: chacractl-key
+ variable: CHACRACTL_KEY
+ - text:
+ credential-id: shaman-api-key
+ variable: SHAMAN_API_KEY
+ - username-password-separated:
+ credential-id: quay-ceph-io-ceph-ci
+ username: CONTAINER_REPO_USERNAME
+ password: CONTAINER_REPO_PASSWORD
+ - username-password-separated:
+ credential-id: dgalloway-docker-hub
+ username: DOCKER_HUB_USERNAME
+ password: DOCKER_HUB_PASSWORD
+ - username-password-separated:
+ credential-id: ibm-cloud-sccache-bucket
+ username: AWS_ACCESS_KEY_ID
+ password: AWS_SECRET_ACCESS_KEY
+ - build-name:
+ name: "#${{BUILD_NUMBER}} ${{BRANCH}}, ${{SHA1}}, ${{DISTROS}}, ${{FLAVOR}}"
--- /dev/null
+#!/bin/bash -ex
+# -*- mode:sh; tab-width:4; sh-basic-offset:4; indent-tabs-mode:nil -*-
+# vim: softtabstop=4 shiftwidth=4 expandtab
+
+# Since this job is now pulling from ceph-ci.git, the tags that exist in
+# ceph.git are missing, yet Ceph's versioning scheme needs them to construct
+# the actual version. This isn't a problem when building from ceph.git, so
+# fetch the tags explicitly:
+git fetch --tags https://github.com/ceph/ceph.git
+
+# split on '/' to get just 'wip-mybranch' when input is like: origin/wip-mybranch
+BRANCH=`branch_slash_filter $BRANCH`
+SHA1=${GIT_COMMIT}
+
+HOST=$(hostname --short)
+echo "Building on ${HOST}"
+echo " DIST=${DIST}"
+echo " BPTAG=${BPTAG}"
+echo " WS=$WORKSPACE"
+echo " PWD=$(pwd)"
+echo " BRANCH=$BRANCH"
+echo " SHA1=$GIT_COMMIT"
+
+if [ -z "$BRANCH" ] ; then
+ echo "No git branch was supplied"
+ exit 1
+fi
+
+echo "Building version $(git describe --abbrev=8) Branch $BRANCH"
+
+rm -rf dist
+rm -rf release
+
+# fix version/release. Hack needed only for the spec
+# file for rc candidates.
+#export force=force
+#sed -i 's/^Version:.*/Version: 0.80/' ceph.spec.in
+#sed -i 's/^Release:.*/Release: rc1%{?dist}/' ceph.spec.in
+#sed -i 's/^Source0:.*/Source0: http:\/\/ceph.com\/download\/%{name}-%{version}-rc1.tar.bz2/' ceph.spec.in
+#sed -i 's/^%setup.*/%setup -q -n %{name}-%{version}-rc1/' ceph.spec.in
+
+
+# run submodule updates regardless
+echo "Running submodule update ..."
+git submodule update --init --quiet
+
+# When using autotools/autoconf it is possible to see output from `git diff`
+# since some macros can be copied over to the ceph source, triggering this
+# check. This is why this check now is done just before running autogen.sh
+# which calls `aclocal -I m4 --install` that copies a system version of
+# ltsugar.m4 that can be different from the one included in the ceph source
+# tree.
+if git diff --quiet ; then
+ echo repository is clean
+else
+ echo
+ echo "**** REPOSITORY IS DIRTY ****"
+ echo
+ git diff
+ if [ "$force" != "force" ]; then
+ echo "add 'force' argument if you really want to continue."
+ exit 1
+ fi
+ echo "forcing."
+fi
+
+# This is a dev release, so enable some debug cmake configs. Note: it has been
+# this way since at least 35e1a715. It's difficult to tell when, or even if,
+# ceph was ever properly built with debugging configurations for QA, because
+# the switch to cmake brought corresponding changes in ceph that make this
+# challenging to evaluate.
+#
+# It's likely that it was wrongly assumed that cmake would set the build type
+# to Debug because the ".git" directory would be present. This is not the case
+# because the "make-dist" script (executed below) creates a git tarball that is
+# used for the actual untar/build. See also:
+#
+# https://github.com/ceph/ceph/pull/53800
+#
+# Addendum and possibly temporary restriction: only enable these for branches
+# ending in "-debug".
+if [[ "$BRANCH" == *-debug ]]; then
+ CEPH_EXTRA_CMAKE_ARGS+=" -DCMAKE_BUILD_TYPE=Debug -DWITH_CEPH_DEBUG_MUTEX=ON"
+ printf 'Added debug cmake configs to branch %s. CEPH_EXTRA_CMAKE_ARGS: %s\n' "$BRANCH" "$CEPH_EXTRA_CMAKE_ARGS"
+else
+ printf 'No cmake debug options added to branch %s.\n' "$BRANCH"
+fi
+
+ceph_build_args_from_flavor ${FLAVOR}
+
+mkdir -p release
+
+# Contents below used to come from /srv/release_tarball.sh and
+# was called like::
+#
+# $bindir/release_tarball.sh release release/version
+
+releasedir='release'
+versionfile='release/version'
+
+cephver=`git describe --abbrev=8 --match "v*" | sed s/^v//`
+echo current version $cephver
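For instance, `git describe --abbrev=8 --match "v*"` emits something like `v19.3.0-1808-g12345678` (hypothetical output), and the `sed` drops the leading `v`:

```shell
echo "v19.3.0-1808-g12345678" | sed s/^v//   # prints: 19.3.0-1808-g12345678
```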
+
+srcdir=`pwd`
+
+setup_container_runtime
+if command -v podman; then
+ PODMAN=podman
+elif [[ "$(groups)" =~ .*\ docker\ .* ]]; then
+ PODMAN=docker
+else
+ PODMAN="sudo docker"
+fi
+
+if [ -d "$releasedir/$cephver" ]; then
+ echo "$releasedir/$cephver already exists; reuse that release tarball"
+else
+ # Create a container image to provide debian-specific utilities, so that this job can run on any container-capable host
+ printf "FROM ubuntu:24.04\nRUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y dpkg-dev devscripts && apt-get clean && rm -rf /var/lib/apt/lists/*" | $PODMAN build -t ubuntu_builder -
+ $PODMAN run --rm -v $PWD:/ceph:z ubuntu_builder:latest bash -c "cd /ceph && dch -v $cephver-1 'autobuilder'"
+
+ # declare an associative array to map file extensions to tar flags
+ declare -A compression=( ["bz2"]="j" ["gz"]="z" ["xz"]="J" )
+ for cmp in "${!compression[@]}"; do
+ rm -f ceph-*.tar.$cmp
+ done
+ echo building tarball
+ ./make-dist $cephver
+ for cmp in "${!compression[@]}"; do
+ extension="tar.$cmp"
+        vers=$(ls ceph-*.$extension | cut -c 6- | sed "s/\.$extension$//" || true)
+ flag="${compression[$cmp]}"
+ extract_flags="${flag}xf"
+ compress_flags="${flag}cf"
+ if [ "$vers" != "" ]; then break; fi
+ done
+ echo tarball vers $vers
+
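The associative array maps each tarball extension to its tar compression flag; a standalone demo of the same lookup (archive name hypothetical):

```shell
declare -A compression=( ["bz2"]="j" ["gz"]="z" ["xz"]="J" )
ext="xz"
flag="${compression[$ext]}"
echo "tar ${flag}xf archive.tar.$ext"   # prints: tar Jxf archive.tar.xz
```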
+ echo extracting
+ mkdir -p $releasedir/$cephver/rpm
+ cp rpm/*.patch $releasedir/$cephver/rpm || true
+ cd $releasedir/$cephver
+
+ tar $extract_flags $srcdir/ceph-$vers.$extension
+
+ [ "$vers" != "$cephver" ] && mv ceph-$vers ceph-$cephver
+
+ tar zcf ceph_$cephver.orig.tar.gz ceph-$cephver
+ cp -a ceph_$cephver.orig.tar.gz ceph-$cephver.tar.gz
+
+ tar jcf ceph-$cephver.tar.bz2 ceph-$cephver
+
+ # copy debian dir, too. Prevent errors with `true` when using cmake
+ cp -a $srcdir/debian debian || true
+ cd $srcdir
+
+ # copy in spec file, too. If using cmake, the spec file
+ # will already exist.
+ cp ceph.spec $releasedir/$cephver || true
+fi
+
+
+if [ -n "$versionfile" ]; then
+ echo $cephver > $versionfile
+ echo "wrote $cephver to $versionfile"
+fi
+
+vers=`cat release/version`
+
+
+(
+ cd release/$vers
+ mkdir -p ceph-$vers/debian
+ cp -r debian/* ceph-$vers/debian/
+ $PODMAN run --rm -v $PWD:/ceph:z ubuntu_builder:latest bash -c "cd /ceph && dpkg-source -b ceph-$vers"
+)
+
+mkdir -p dist
+# Debian Source Files
+mv release/$vers/*.dsc dist/.
+mv release/$vers/*.diff.gz dist/. || true
+mv release/$vers/*.orig.tar.gz dist/.
+# RPM Source Files
+mkdir -p dist/rpm/
+mv release/$vers/rpm/*.patch dist/rpm/ || true
+mv release/$vers/ceph.spec dist/.
+mv release/$vers/*.tar.* dist/.
+# Parameters
+mv release/version dist/.
+
+
+if [ "$DWZ" = false ] ; then
+ CEPH_EXTRA_RPMBUILD_ARGS="${CEPH_EXTRA_RPMBUILD_ARGS} --without dwz"
+fi
+
+if [ "$SCCACHE" = true ] ; then
+ CEPH_EXTRA_RPMBUILD_ARGS="${CEPH_EXTRA_RPMBUILD_ARGS} --with sccache"
+fi
+write_dist_files
--- /dev/null
+#!/bin/bash -ex
+
+# update shaman with the failed build status. At this point there aren't any
+# architectures or distro information, so we just report this with the current
+# (ceph-dev-setup) build information that includes log and build urls
+BRANCH=`branch_slash_filter $BRANCH`
+SHA1=${GIT_COMMIT}
+
+failed_build_status "ceph"
--- /dev/null
+- job:
+ name: ceph-dev-new-setup
+  description: "This job step checks out the branch and builds the tarballs, diffs, and dsc that are passed to the ceph-dev-build step.\r\n\r\nNotes:\r\nJob needs to run on a relatively recent debian system. The Restrict where run feature is used to specify an appropriate label.\r\nThe 'clear workspace before checkout' option of the git plugin is used."
+ node: huge && !arm64
+ display-name: 'ceph-dev-new-setup'
+ block-downstream: false
+ block-upstream: false
+ concurrent: true
+ properties:
+ - build-discarder:
+ days-to-keep: -1
+ num-to-keep: 100
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: 50
+ - github:
+ url: https://github.com/ceph/ceph-ci
+ - copyartifact:
+ projects: ceph-dev-new-build,ceph-dev-new,ceph-dev-pipeline
+
+ parameters:
+ - string:
+ name: BRANCH
+ description: "The git branch (or tag) to build"
+
+ scm:
+ - git:
+ url: git@github.com:ceph/ceph-ci.git
+ # Use the SSH key attached to the ceph-jenkins GitHub account.
+ credentials-id: 'jenkins-build'
+ branches:
+ - $BRANCH
+ timeout: 20
+ skip-tag: true
+ wipe-workspace: true
+
+ builders:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/setup_container_runtime.sh
+ - ../../../scripts/build_utils.sh
+ - ../../build/build
+
+ publishers:
+ - archive:
+ artifacts: 'dist/**'
+ allow-empty: false
+ latest-only: false
+
+ - postbuildscript:
+ builders:
+ - role: SLAVE
+ build-on:
+ - FAILURE
+ - ABORTED
+ build-steps:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/failure
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - text:
+ credential-id: shaman-api-key
+ variable: SHAMAN_API_KEY
--- /dev/null
+#!/bin/bash -ex
+
+# update shaman with the triggered build status. At this point there aren't any
+# architectures or distro information, so we just report this with the current
+# build information
+BRANCH=`branch_slash_filter ${GIT_BRANCH}`
+SHA1=${GIT_COMMIT}
+
+update_build_status "queued" "ceph"
+
--- /dev/null
+- job:
+ name: ceph-dev-new-trigger
+ node: built-in
+ disabled: true
+ project-type: freestyle
+ defaults: global
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ properties:
+ - build-discarder:
+ num-to-keep: 100
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ - github:
+ url: https://github.com/ceph/ceph-ci
+ discard-old-builds: true
+
+ triggers:
+ - github
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-ci
+ browser: auto
+ skip-tag: true
+ timeout: 20
+ wipe-workspace: true
+ choosing-strategy: ancestry
+ maximum-age: 7
+
+ builders:
+ # build reef on:
+ # default: jammy focal centos9 windows
+ - conditional-step:
+ condition-kind: regex-match
+ regex: .*reef.*
+ label: '${{GIT_BRANCH}}'
+ on-evaluation-failure: dont-run
+ steps:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/notify
+ - trigger-builds:
+ - project: 'ceph-dev-new'
+ predefined-parameters: |
+ BRANCH=${{GIT_BRANCH}}
+ FORCE=True
+ DISTROS=jammy focal centos9 windows
+ # build squid on:
+ # default: noble jammy centos9 windows
+ - conditional-step:
+ condition-kind: regex-match
+ regex: .*squid.*
+ label: '${{GIT_BRANCH}}'
+ on-evaluation-failure: dont-run
+ steps:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/notify
+ - trigger-builds:
+ - project: 'ceph-dev-new'
+ predefined-parameters: |
+ BRANCH=${{GIT_BRANCH}}
+ FORCE=True
+ DISTROS=noble jammy centos9 windows
+ # build tentacle on:
+ # default: noble jammy centos9 windows
+ # crimson: centos9
+ - conditional-step:
+ condition-kind: regex-match
+ regex: .*tentacle.*
+ label: '${{GIT_BRANCH}}'
+ on-evaluation-failure: dont-run
+ steps:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/notify
+ - trigger-builds:
+ - project: 'ceph-dev-new'
+ predefined-parameters: |
+ BRANCH=${{GIT_BRANCH}}
+ FORCE=True
+ DISTROS=noble jammy centos9 windows
+ - project: 'ceph-dev-new'
+ predefined-parameters: |
+ BRANCH=${{GIT_BRANCH}}
+ FORCE=True
+ DISTROS=centos9
+ FLAVOR=crimson-debug
+ ARCHS=x86_64
+      # If no release name (or *-only marker) is found in the branch, build
+      # the default distro set plus the crimson-debug flavor on centos9.
+      # regex matching with 'on-evaluation-failure: run' doesn't work here, so triple negative it is.
+ - conditional-step:
+ condition-kind: shell
+ condition-command: |
+ echo "${{GIT_BRANCH}}" | grep -v '\(reef\|squid\|tentacle\|centos9-only\|crimson-only\)'
+ on-evaluation-failure: dont-run
+ steps:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/notify
+ - trigger-builds:
+ - project: 'ceph-dev-new'
+ predefined-parameters: |
+ BRANCH=${{GIT_BRANCH}}
+ FORCE=True
+ DISTROS=noble jammy centos9 windows
+ - trigger-builds:
+ - project: 'ceph-dev-new'
+ predefined-parameters: |
+ BRANCH=${{GIT_BRANCH}}
+ FORCE=True
+ DISTROS=centos9
+ FLAVOR=crimson-debug
+ ARCHS=x86_64
+ # build only centos9, no crimson
+ - conditional-step:
+ condition-kind: regex-match
+ regex: .*centos9-only.*
+ label: '${{GIT_BRANCH}}'
+ on-evaluation-failure: dont-run
+ steps:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/notify
+ - trigger-builds:
+ - project: 'ceph-dev-new'
+ predefined-parameters: |
+ BRANCH=${{GIT_BRANCH}}
+ FORCE=True
+ DISTROS=centos9
+ ARCHS=x86_64
+ # Build only the `crimson` flavour, don't waste resources on the default one.
+ # Useful for the crimson's bug-hunt at Sepia
+ # crimson-debug: centos9
+ # crimson-release: centos9
+ - conditional-step:
+ condition-kind: regex-match
+ regex: .*crimson-only.*
+ label: '${{GIT_BRANCH}}'
+ on-evaluation-failure: dont-run
+ steps:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/notify
+ - trigger-builds:
+ - project: 'ceph-dev-new'
+ predefined-parameters: |
+ BRANCH=${{GIT_BRANCH}}
+ FORCE=True
+ DISTROS=centos9
+ FLAVOR=crimson-debug
+ ARCHS=x86_64
+ - trigger-builds:
+ - project: 'ceph-dev-new'
+ predefined-parameters: |
+ BRANCH=${{GIT_BRANCH}}
+ FORCE=True
+ DISTROS=centos9
+ FLAVOR=crimson-release
+ ARCHS=x86_64
+ # sccache
+ - conditional-step:
+ condition-kind: regex-match
+ regex: .*sccache.*
+ label: '${{GIT_BRANCH}}'
+ on-evaluation-failure: dont-run
+ steps:
+ - shell: echo skipping
+
+ wrappers:
+ - build-name:
+ name: "#${{BUILD_NUMBER}} ${{GIT_BRANCH}}"
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - text:
+ credential-id: shaman-api-key
+ variable: SHAMAN_API_KEY
--- /dev/null
+- job:
+ name: ceph-dev-new
+ description: 'This job builds branches from https://github.com/ceph/ceph-ci for testing purposes.'
+ node: built-in
+ project-type: multijob
+ defaults: global
+ concurrent: true
+ display-name: 'ceph-dev-new'
+ block-downstream: false
+ block-upstream: false
+ properties:
+ - build-discarder:
+ days-to-keep: -1
+ num-to-keep: 25
+ artifact-days-to-keep: 25
+ artifact-num-to-keep: 25
+ - github:
+ url: https://github.com/ceph/ceph-ci
+
+ parameters:
+ - string:
+ name: BRANCH
+ description: "The git branch (or tag) to build"
+ default: main
+
+ - string:
+ name: DISTROS
+ description: "A list of distros to build for. Available options are: centos9, centos8, centos7, centos6, noble, jammy, focal, bionic, xenial, trusty, precise, wheezy, jessie, and windows"
+ default: "noble jammy centos9 windows"
+
+ - string:
+ name: ARCHS
+ description: "A list of architectures to build for. Available options are: x86_64, and arm64"
+ default: "x86_64 arm64"
+
+ - bool:
+ name: THROWAWAY
+ description: "Default: False. When True it will not POST binaries to chacra. Artifacts will not be around for long. Useful to test builds."
+ default: false
+
+ - bool:
+ name: FORCE
+ description: "If this is unchecked, then then nothing is built or pushed if they already exist in chacra. This is the default. If this is checked, then the binaries will be built and pushed to chacra even if they already exist in chacra."
+
+ - string:
+ name: CEPH_BUILD_VIRTUALENV
+ description: "Base parent path for virtualenv locations, set to avoid issues with extremely long paths that are incompatible with tools like pip. Defaults to '/tmp/' (note the trailing slash, which is required)."
+ default: "/tmp/"
+
+ - choice:
+ name: FLAVOR
+ choices:
+ - default
+ - crimson-debug
+ - crimson-release
+ default: "default"
+ description: "Type of Ceph build, choices are: crimson-debug, crimson-release, default. Defaults to: 'default'"
+
+ - string:
+ name: CI_CONTAINER
+        description: 'Build a container with the development release of Ceph. Note: this must be the string "false" or "true" so that it can be used both to execute a command and to satisfy a string comparison'
+ default: "true"
+
+ - string:
+ name: CONTAINER_REPO_HOSTNAME
+ description: "For CI_CONTAINER: Name of container repo server (i.e. 'quay.io')"
+ default: "quay-quay-quay.apps.os.sepia.ceph.com"
+
+ - string:
+ name: CONTAINER_REPO_ORGANIZATION
+ description: "For CI_CONTAINER: Name of container repo organization (i.e. 'ceph-ci')"
+ default: "ceph-ci"
+
+ - bool:
+ name: DWZ
+ description: "Use dwz to make debuginfo packages smaller"
+ default: true
+
+ - bool:
+ name: SCCACHE
+ description: "Use sccache"
+ default: false
+
+ builders:
+ - multijob:
+ name: 'ceph dev setup phase'
+ condition: SUCCESSFUL
+ projects:
+ - name: ceph-dev-new-setup
+ current-parameters: true
+ exposed-scm: false
+ - copyartifact:
+ project: ceph-dev-new-setup
+ filter: dist/sha1
+ which-build: multijob-build
+ - inject:
+ properties-file: ${{WORKSPACE}}/dist/sha1
+ - copyartifact:
+ project: ceph-dev-new-setup
+ filter: dist/branch
+ which-build: multijob-build
+ - inject:
+ properties-file: ${{WORKSPACE}}/dist/branch
+ - multijob:
+ name: 'ceph dev build phase'
+ condition: SUCCESSFUL
+ projects:
+ - name: ceph-dev-new-build
+ current-parameters: true
+ exposed-scm: false
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - build-name:
+ name: "#${{BUILD_NUMBER}} ${{BRANCH}}, ${{SHA1}}, ${{DISTROS}}, ${{FLAVOR}}"
--- /dev/null
+JOB = "ceph-dev-pipeline"
+VALID_PARAMETERS = [
+ "CEPH_BUILD_BRANCH",
+ "ARCHS",
+ "CI_COMPILE",
+ "CI_CONTAINER",
+ "CI_PIPELINE",
+ "DISTROS",
+ "DWZ",
+ "FLAVOR",
+ "SCCACHE",
+]
+def params = []
+
+pipeline {
+ agent any
+ stages {
+ stage("Prepare parameters") {
+ steps {
+ script {
+ def trailer = sh(
+ script: "echo \"$head_commit\" | git interpret-trailers --parse",
+ returnStdout: true,
+ )
+ println("trailer: ${trailer}")
+ def paramsMap = [:]
+ for (item in trailer.split("\n")) {
+ def matcher = item =~ /(.+): (.+)/
+ if (matcher.matches()) {
+ key = matcher[0][1].replace("-", "_").toUpperCase()
+ value = matcher[0][2]
+ paramsMap[key] = value
+ }
+ }
+ def branch = env.ref.replace("refs/heads/", "")
+ params.push(string(name: "BRANCH", value: branch))
+ println("Looking for parameters: ${VALID_PARAMETERS}")
+ for (key in VALID_PARAMETERS) {
+ value = paramsMap[key]
+ if ( value ) {
+ params.push(string(name: key, value: value))
+ println("${key}=${value}")
+ }
+ }
+ }
+ }
+ }
+ stage("Trigger job") {
+ steps {
+ script {
+ build(
+ job: JOB,
+ parameters: params
+ )
+ }
+ }
+ }
+ }
+}
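The trailer extraction this Jenkinsfile performs can be reproduced from a shell; `git interpret-trailers --parse` prints only the trailer lines of a commit message (the message below is hypothetical):

```shell
printf 'Try the new pipeline\n\nDistros: centos9\nArchs: x86_64\n' \
    | git interpret-trailers --parse
# prints the trailer lines only, e.g.:
#   Distros: centos9
#   Archs: x86_64
```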
--- /dev/null
+- job:
+ name: ceph-dev-pipeline-trigger
+ project-type: pipeline
+ quiet-period: 1
+ concurrent: true
+ pipeline-scm:
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-build
+ branches:
+ - main
+ shallow-clone: true
+ submodule:
+ disable: true
+ wipe-workspace: true
+ script-path: ceph-dev-pipeline-trigger/build/Jenkinsfile
+ lightweight-checkout: true
+ do-not-fetch-tags: true
+
+ triggers:
+ - generic-webhook-trigger:
+ token: ceph-dev-pipeline-trigger
+ token-credential-id: pipeline-trigger-token
+ print-contrib-var: true
+ header-params:
+ - key: X_GitHub_Event
+ value: ""
+ post-content-params:
+ - type: JSONPath
+ key: head_commit
+ value: $.head_commit.message
+ - type: JSONPath
+ key: ref
+ value: $.ref
+ - type: JSONPath
+ key: pusher
+ value: $.pusher.name
+ regex-filter-text: $head_commit
+ regex-filter-expression: "(?i)CI-PIPELINE: true"
+ cause: "Push to $ref by $pusher"
--- /dev/null
+import groovy.transform.Field
+
+@Field String ceph_build_repo = "https://github.com/ceph/ceph-build"
+@Field String ceph_build_branch = "main"
+@Field String base_node_label = "gigantic"
+@Field Map ubuntu_releases = [
+ "noble": "24.04",
+ "jammy": "22.04",
+ "focal": "20.04",
+]
+@Field Map debian_releases = [
+ "bookworm": "12",
+ "bullseye": "11",
+]
+@Field Map build_matrix = [:]
+@Field List container_distros = [
+ 'centos9',
+ 'rocky10',
+]
+
+def get_os_info(dist) {
+ def os = [
+ "name": dist,
+ "version": dist,
+ "version_name": dist,
+ "pkg_type": "NONE",
+ ]
+ def matcher = dist =~ /^(centos|rhel|rocky|fedora)(\d+)/
+ if ( matcher.find() ) {
+ os.name = matcher.group(1)
+ os.version = os.version_name = matcher.group(2)
+ os.pkg_type = "rpm"
+ } else if ( debian_releases.keySet().contains(dist) ) {
+ os.name = "debian"
+ os.version = debian_releases[dist]
+ os.pkg_type = "deb"
+ } else if ( ubuntu_releases.keySet().contains(dist) ) {
+ os.name = "ubuntu"
+        os.version = ubuntu_releases[dist]
+ os.pkg_type = "deb"
+ }
+ // We need to set matcher to null right after using it to avoid a java.io.NotSerializableException
+ matcher = null
+ return os
+}
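To make the classification explicit, here is a hedged shell re-sketch of `get_os_info()`'s `pkg_type` logic (an approximation for illustration, not the code the pipeline runs):

```shell
# Approximates the Groovy get_os_info() distro classification above
get_pkg_type() {
    case "$1" in
        centos*|rhel*|rocky*|fedora*) echo rpm ;;   # rpm-based distros
        bookworm|bullseye)            echo deb ;;   # debian_releases
        noble|jammy|focal)            echo deb ;;   # ubuntu_releases
        *)                            echo NONE ;;
    esac
}

get_pkg_type centos9   # prints: rpm
get_pkg_type noble     # prints: deb
```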
+
+@Field String ceph_release_spec_template = '''
+Name: ceph-release
+Version: 1
+Release: 0%{?dist}
+Summary: Ceph Development repository configuration
+Group: System Environment/Base
+License: GPLv2
+URL: ${project_url}
+Source0: ceph.repo
+BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-root-%(%{__id_u} -n)
+BuildArch: noarch
+
+%description
+This package contains the Ceph repository GPG key as well as configuration
+for yum and up2date.
+
+%prep
+
+%setup -q -c -T
+install -pm 644 %{SOURCE0} .
+
+%build
+
+%install
+rm -rf %{buildroot}
+%if 0%{defined suse_version}
+install -dm 755 %{buildroot}/%{_sysconfdir}/zypp
+install -dm 755 %{buildroot}/%{_sysconfdir}/zypp/repos.d
+install -pm 644 %{SOURCE0} \
+ %{buildroot}/%{_sysconfdir}/zypp/repos.d
+%else
+install -dm 755 %{buildroot}/%{_sysconfdir}/yum.repos.d
+install -pm 644 %{SOURCE0} \
+ %{buildroot}/%{_sysconfdir}/yum.repos.d
+%endif
+
+%clean
+
+%post
+
+%postun
+
+%files
+%defattr(-,root,root,-)
+%if 0%{defined suse_version}
+/etc/zypp/repos.d/*
+%else
+/etc/yum.repos.d/*
+%endif
+
+%changelog
+* Mon Apr 28 2025 Zack Cerza <zack@cerza.org> 1-1
+'''
+
+@Field String ceph_release_repo_template = '''
+[Ceph]
+name=Ceph packages for \\$basearch
+baseurl=${repo_base_url}/\\$basearch
+enabled=1
+gpgcheck=0
+type=rpm-md
+gpgkey=https://download.ceph.com/keys/autobuild.asc
+
+[Ceph-noarch]
+name=Ceph noarch packages
+baseurl=${repo_base_url}/noarch
+enabled=1
+gpgcheck=0
+type=rpm-md
+gpgkey=https://download.ceph.com/keys/autobuild.asc
+
+[ceph-source]
+name=Ceph source packages
+baseurl=${repo_base_url}/SRPMS
+enabled=1
+gpgcheck=0
+type=rpm-md
+gpgkey=https://download.ceph.com/keys/autobuild.asc
+'''
+
+@NonCPS
+def get_ceph_release_spec_text(project_url) {
+ def engine = new groovy.text.SimpleTemplateEngine()
+ def template = engine.createTemplate(ceph_release_spec_template)
+ def text = template.make(["project_url": project_url])
+ return text.toString()
+}
+
+@NonCPS
+def get_ceph_release_repo_text(base_url) {
+ def engine = new groovy.text.SimpleTemplateEngine()
+ def template = engine.createTemplate(ceph_release_repo_template)
+ def text = template.make(["repo_base_url": base_url])
+ return text.toString()
+}
+
+pipeline {
+ agent any
+ stages {
+ stage("source distribution") {
+ steps {
+ script {
+ if ( ! env.SETUP_BUILD_ID ) {
+ def setup_build = build(
+ job: env.SETUP_JOB,
+ parameters: [
+ string(name: "BRANCH", value: env.BRANCH),
+ // Below are just for ceph-source-dist
+ string(name: "SHA1", value: env.SHA1),
+ string(name: "CEPH_REPO", value: env.CEPH_REPO),
+ string(name: "CEPH_BUILD_BRANCH", value: env.CEPH_BUILD_BRANCH),
+ // Below are only for actual releases
+ string(name: 'RELEASE_TYPE', value: env.RELEASE_TYPE ?: ''),
+ string(name: 'RELEASE_BUILD', value: env.RELEASE_BUILD ?: ''),
+ string(name: 'VERSION', value: env.VERSION ?: '')
+ ]
+ )
+ env.SETUP_BUILD_ID = setup_build.getNumber()
+ }
+ println "SETUP_BUILD_ID=${env.SETUP_BUILD_ID}"
+ env.SETUP_BUILD_URL = new URI([env.JENKINS_URL, "job", env.SETUP_JOB, env.SETUP_BUILD_ID].join("/")).normalize()
+ println "${env.SETUP_BUILD_URL}"
+ }
+ }
+ }
+ stage("parallel build") {
+ matrix {
+ agent {
+ label "(installed-os-centos9||installed-os-noble)&&${ARCH}&&${base_node_label}"
+ }
+ when {
+ beforeAgent true
+ allOf {
+ expression { env.DISTROS.contains(env.DIST) }
+ expression { env.ARCHS.contains(env.ARCH) }
+ expression { env.FLAVORS.contains(env.FLAVOR) }
+ anyOf {
+ environment name: "CI_COMPILE", value: "true"
+ allOf {
+ environment name: "CI_CONTAINER", value: "true"
+ environment name: "DIST", value: "centos9"
+ }
+ }
+ }
+ }
+ axes {
+ axis {
+ name 'DIST'
+ values 'centos9', 'centos10', 'rocky9', 'rocky10', 'focal', 'jammy', 'noble', 'bookworm'
+ }
+ axis {
+ name 'ARCH'
+ values 'x86_64', 'arm64'
+ }
+          axis {
+ name 'FLAVOR'
+ values 'default', 'crimson-release', 'crimson-debug'
+ }
+ }
+ // crimson is only supported on centos9 x86_64
+ excludes {
+ exclude {
+ axis {
+ name 'FLAVOR'
+ values 'crimson-release', 'crimson-debug'
+ }
+ axis {
+ name 'DIST'
+ notValues 'centos9'
+ }
+ }
+ exclude {
+ axis {
+ name 'FLAVOR'
+ values 'crimson-release', 'crimson-debug'
+ }
+ axis {
+ name 'ARCH'
+ notValues 'x86_64'
+ }
+ }
+ }
+ stages {
+ stage("node") {
+ steps {
+ script {
+ build_matrix["${DIST}_${ARCH}"] = env.CI_COMPILE.toBoolean()
+ println("Building: DIST=${env.DIST} ARCH=${env.ARCH} FLAVOR=${env.FLAVOR}")
+ def node_shortname = env.NODE_NAME.split('\\+')[-1]
+ def node_url = new URI([env.JENKINS_URL, "computer", env.NODE_NAME].join("/")).normalize()
+ println("Node: ${node_shortname}")
+ println("${node_url}")
+ }
+ sh './scripts/setup_container_runtime.sh'
+ sh "cat /etc/os-release"
+ }
+ }
+ stage("checkout ceph-build") {
+ steps {
+ checkout scmGit(
+ branches: [[name: env.CEPH_BUILD_BRANCH]],
+ userRemoteConfigs: [[url: ceph_build_repo]],
+ extensions: [
+ [$class: 'CleanBeforeCheckout']
+ ],
+ )
+ }
+ }
+ stage("copy artifacts") {
+ steps {
+ script {
+ def artifact_filter = "dist/sha1,dist/version,dist/other_envvars,dist/ceph-*.tar.bz2"
+ def os = get_os_info(env.DIST)
+ if ( env.CI_COMPILE && os.pkg_type == "deb" ) {
+ artifact_filter += ",dist/ceph_*.diff.gz,dist/ceph_*.dsc"
+ }
+ println artifact_filter
+ copyArtifacts(
+ projectName: env.SETUP_JOB,
+ selector: specific(buildNumber: env.SETUP_BUILD_ID),
+ filter: artifact_filter,
+ )
+ }
+ sh 'sudo journalctl --show-cursor -n 0 --no-pager | tail -n1 | cut -d\" \" -f3 > $WORKSPACE/cursor'
+ script {
+ def sha1_trimmed = env.SHA1.trim().toLowerCase()
+ def sha1_props = readProperties file: "${WORKSPACE}/dist/sha1"
+ sha1_from_artifact = sha1_props.SHA1.trim().toLowerCase()
+ if ( env.SHA1 && sha1_from_artifact != sha1_trimmed ) {
+ error message: "SHA1 from artifact (${sha1_from_artifact}) does not match parameter value (${sha1_trimmed})"
+ } else if ( ! env.SHA1 ) {
+ env.SHA1 = sha1_from_artifact
+ }
+ println "SHA1=${sha1_trimmed}"
+ env.VERSION = readFile(file: "${WORKSPACE}/dist/version").trim()
+ script {
+                        // In a release build, dist/other_envvars contains CEPH_REPO and chacra_url
+                        // as written during ceph-source-dist, so we don't have to define
+                        // ceph-releases.git or chacra.ceph.com as parameters for ceph-dev-pipeline.
+ def props = readProperties file: "${WORKSPACE}/dist/other_envvars"
+ for (p in props) {
+ env."${p.key}" = p.value
+ }
+ }
+ def branch_ui_value = env.BRANCH
+ def sha1_ui_value = env.SHA1
+ if (env.CEPH_REPO?.find(/https?:\/\/github.com\//)) {
+ // If this is a release build, link to ceph-release.git's $BRANCH-release branch
+ def suffix = (env.RELEASE_BUILD?.trim() == "true") ? "-release" : ""
+
+ def branch_url = "${env.CEPH_REPO}/tree/${env.BRANCH}${suffix}"
+ branch_ui_value = "<a href=\"${branch_url}\">${env.BRANCH}${suffix}</a>"
+ def commit_url = "${env.CEPH_REPO}/commit/${env.SHA1}"
+ sha1_ui_value = "<a href=\"${commit_url}\">${env.SHA1}</a>"
+ }
+ def shaman_url = "https://shaman.ceph.com/builds/ceph/${env.BRANCH}/${env.SHA1}"
+ def build_description = """\
+ BRANCH=${branch_ui_value}<br />
+ SHA1=${sha1_ui_value}<br />
+ VERSION=${env.VERSION}<br />
+ DISTROS=${env.DISTROS}<br />
+ ARCHS=${env.ARCHS}<br />
+ FLAVORS=${env.FLAVORS}<br />
+ <a href="${env.SETUP_BUILD_URL}">SETUP_BUILD_ID=${env.SETUP_BUILD_ID}</a><br />
+ <a href="${shaman_url}">shaman builds for this branch+commit</a>
+ """.stripIndent()
+ buildDescription build_description
+ }
+ sh "sha256sum dist/*"
+ sh "cat dist/sha1 dist/version"
+ sh '''#!/bin/bash
+ set -ex
+ cd dist
+ mkdir ceph
+ tar --strip-components=1 -C ceph -xjf ceph-${VERSION}.tar.bz2 ceph-${VERSION}/{container,ceph.spec,ceph.spec.in,debian,Dockerfile.build,do_cmake.sh,install-deps.sh,run-make-check.sh,make-debs.sh,make-dist,make-srpm.sh,src/script}
+ '''
+ }
+ }
+ stage("check for built packages") {
+ when {
+ environment name: 'THROWAWAY', value: 'false'
+ expression { return build_matrix["${DIST}_${ARCH}"] == true }
+ }
+ environment {
+ CHACRACTL_KEY = credentials('chacractl-key')
+ SHAMAN_API_KEY = credentials('shaman-api-key')
+ }
+ steps {
+ script {
+ sh './scripts/setup_chacractl.sh'
+ def chacra_url = sh(
+ script: '''grep url ~/.chacractl | cut -d'"' -f2''',
+ returnStdout: true,
+ ).trim()
+ def os = get_os_info(env.DIST)
+ def chacra_endpoint = "ceph/${env.BRANCH}/${env.SHA1}/${os.name}/${os.version_name}/${env.ARCH}/flavors/${env.FLAVOR}/"
+ def chacractl_rc = sh(
+ script: "$HOME/.local/bin/chacractl exists binaries/${chacra_endpoint}",
+ returnStatus: true,
+ )
+ if ( chacractl_rc == 0 && env.FORCE != "true" ) {
+ println("Skipping compilation since chacra already has artifacts. To override, use THROWAWAY=true (to skip this check) or FORCE=true (to re-upload artifacts).")
+ build_matrix["${DIST}_${ARCH}"] = false
+ }
+ }
+ }
+ }
+ stage("builder container") {
+ environment {
+ CONTAINER_REPO_CREDS = credentials('quay-ceph-io-ceph-ci')
+ DOCKER_HUB_CREDS = credentials('dgalloway-docker-hub')
+ }
+ when {
+ expression { return build_matrix["${DIST}_${ARCH}"] == true }
+ }
+ steps {
+ script {
+ env.CEPH_BUILDER_IMAGE = "${env.CONTAINER_REPO_HOSTNAME}/${env.CONTAINER_REPO_ORGANIZATION}/ceph-build"
+ sh '''#!/bin/bash
+ set -ex
+ podman login -u ${CONTAINER_REPO_CREDS_USR} -p ${CONTAINER_REPO_CREDS_PSW} ${CONTAINER_REPO_HOSTNAME}/${CONTAINER_REPO_ORGANIZATION}
+ podman login -u ${DOCKER_HUB_CREDS_USR} -p ${DOCKER_HUB_CREDS_PSW} docker.io
+ '''
+ def ceph_builder_tag_short = "${env.BRANCH}.${env.DIST}.${ARCH}.${FLAVOR}"
+ def ceph_builder_tag = "${env.SHA1[0..6]}.${ceph_builder_tag_short}"
+ sh """#!/bin/bash -ex
+ podman pull ${env.CEPH_BUILDER_IMAGE}:${ceph_builder_tag} || \
+ podman pull ${env.CEPH_BUILDER_IMAGE}:${ceph_builder_tag_short} || \
+ true
+ """
+ sh """#!/bin/bash
+ set -ex
+ echo > .env
+ [[ $FLAVOR == crimson* ]] && echo "WITH_CRIMSON=true" >> .env || true
+ cd dist/ceph
+ python3 src/script/build-with-container.py --env-file=${env.WORKSPACE}/.env --image-repo=${env.CEPH_BUILDER_IMAGE} --tag=${ceph_builder_tag} --image-variant=packages -d ${DIST} -e build-container
+ podman tag ${env.CEPH_BUILDER_IMAGE}:${ceph_builder_tag} ${env.CEPH_BUILDER_IMAGE}:${ceph_builder_tag_short}
+ """
+ sh """#!/bin/bash -ex
+ podman push ${env.CEPH_BUILDER_IMAGE}:${ceph_builder_tag_short}
+ podman push ${env.CEPH_BUILDER_IMAGE}:${ceph_builder_tag}
+ """
+ }
+ }
+ }
+ stage("build") {
+ environment {
+ SHAMAN_API_KEY = credentials('shaman-api-key')
+ SCCACHE_BUCKET_CREDS = credentials('ibm-cloud-sccache-bucket')
+ }
+ when {
+ expression { return build_matrix["${DIST}_${ARCH}"] == true }
+ }
+ steps {
+ script {
+ def os = get_os_info(env.DIST)
+ sh "./scripts/update_shaman.sh started ceph ${os.name} ${os.version_name} $ARCH"
+ env.AWS_ACCESS_KEY_ID = env.SCCACHE_BUCKET_CREDS_USR
+ env.AWS_SECRET_ACCESS_KEY = env.SCCACHE_BUCKET_CREDS_PSW
+ def ceph_builder_tag = "${env.SHA1[0..6]}.${env.BRANCH}.${env.DIST}.${ARCH}.${FLAVOR}"
+ def bwc_command_base = "python3 src/script/build-with-container.py --image-repo=${env.CEPH_BUILDER_IMAGE} --tag=${ceph_builder_tag} -d ${DIST} --image-variant=packages --ceph-version ${env.VERSION}"
+ def bwc_command = bwc_command_base
+ def bwc_cmd_sccache_flags = ""
+ if ( env.DWZ == "false" ) {
+ sh '''#!/bin/bash
+ echo "DWZ=$DWZ" >> .env
+ '''
+ bwc_cmd_sccache_flags = "--env-file=${env.WORKSPACE}/.env";
+ }
+ if ( env.SCCACHE == "true" ) {
+ sh '''#!/bin/bash
+ echo "SCCACHE=$SCCACHE" >> .env
+ echo "SCCACHE_CONF=/ceph/sccache.conf" >> .env
+ echo "SCCACHE_ERROR_LOG=/ceph/sccache_log.txt" >> .env
+ echo "SCCACHE_LOG=debug" >> .env
+ echo "AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID" >> .env
+ echo "AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY" >> .env
+ echo "CEPH_BUILD_NORMALIZE_PATHS=true" >> .env
+ '''
+ // TODO: un-hardcode this
+ writeFile(
+ file: "dist/ceph/sccache.conf",
+ text: """\
+ [cache.s3]
+ bucket = "ceph-sccache"
+ endpoint = "s3.us-south.cloud-object-storage.appdomain.cloud"
+ use_ssl = true
+ key_prefix = ""
+ server_side_encryption = false
+ no_credentials = false
+ region = "auto"
+ """
+ )
+ bwc_cmd_sccache_flags = "--env-file=${env.WORKSPACE}/.env";
+ }
+ def ceph_extra_cmake_args = "";
+ def deb_build_profiles = "";
+ switch (env.FLAVOR) {
+ case "default":
+ ceph_extra_cmake_args += " -DALLOCATOR=tcmalloc"
+ ceph_extra_cmake_args += " -DWITH_SYSTEM_BOOST=OFF -DWITH_BOOST_VALGRIND=ON"
+ if (os.version_name == "focal") {
+ ceph_extra_cmake_args += " -DWITH_STATIC_LIBSTDCXX=ON"
+ }
+ break
+ case "crimson-debug":
+ deb_build_profiles = "pkg.ceph.crimson"
+ ceph_extra_cmake_args += " -DCMAKE_BUILD_TYPE=Debug"
+ break
+ case "crimson-release":
+ deb_build_profiles = "pkg.ceph.crimson";
+ break
+ default:
+ println "FLAVOR=${env.FLAVOR} is invalid"
+ assert false
+ }
+ bwc_command = "${bwc_command} ${bwc_cmd_sccache_flags}"
+ if ( os.pkg_type == "deb" ) {
+ def sccache_flag = "-DWITH_SCCACHE=ON"
+ if ( env.SCCACHE == "true" && ! ceph_extra_cmake_args.contains(sccache_flag) ) {
+ ceph_extra_cmake_args += " ${sccache_flag}"
+ }
+ if ( deb_build_profiles ) {
+ sh """#!/bin/bash
+ echo "DEB_BUILD_PROFILES=${deb_build_profiles}" >> .env
+ """
+ }
+ bwc_command = "${bwc_command} -e debs"
+ } else if ( env.DIST =~ /^(centos|rhel|rocky|fedora).*/ ) {
+ def rpmbuild_args = ""
+ if ( env.SCCACHE == "true" ) rpmbuild_args += " -R--with=sccache"
+ if ( env.DWZ == "false" ) rpmbuild_args += " -R--without=dwz"
+ if ( env.FLAVOR == "default" ) rpmbuild_args += " -R--with=tcmalloc"
+ if ( env.FLAVOR.startsWith("crimson") ) rpmbuild_args += " -R--with=crimson"
+ bwc_command = "${bwc_command}${rpmbuild_args} -e rpm"
+ } else if ( env.DIST =~ /suse|sles/ ) {
+ throw new Exception("bwc not implemented for ${env.DIST}")
+ } else if ( env.DIST =~ /windows/ ) {
+ throw new Exception("bwc not implemented for ${env.DIST}")
+ } else {
+ throw new Exception("DIST '${env.DIST}' is invalid!")
+ }
+ sh """#!/bin/bash
+ echo "CEPH_EXTRA_CMAKE_ARGS=${ceph_extra_cmake_args}" >> .env
+ """
+ sh """#!/bin/bash -ex
+ cd dist/ceph
+ ln ../ceph-${env.VERSION}.tar.bz2 .
+ ${bwc_command}
+ """
+ if ( os.pkg_type == "deb" ) {
+ sh """#!/bin/bash -ex
+ cd dist/ceph
+ ${bwc_command_base} -e custom -- "dpkg-deb --fsys-tarfile /ceph/debs/*/pool/main/c/ceph/cephadm_${VERSION}*.deb | tar -x -f - --strip-components=3 ./usr/sbin/cephadm"
+ ln ./cephadm ../../
+ """
+ } else if ( env.DIST =~ /^(centos|rhel|rocky|fedora).*/ ) {
+ sh """#!/bin/bash -ex
+ cd dist/ceph
+ ${bwc_command_base} -e custom -- "rpm2cpio /ceph/rpmbuild/RPMS/noarch/cephadm-*.rpm | cpio -i --to-stdout *sbin/cephadm > cephadm"
+ ln ./cephadm ../../
+ """
+ }
+ }
+ }
+ post {
+ always {
+ script {
+ if (fileExists('dist/ceph/sccache_log.txt')) {
+ sh """
+ if [ -f "${env.WORKSPACE}/dist/ceph/sccache_log.txt" ]; then
+ ln dist/ceph/sccache_log.txt sccache_log_${env.DIST}_${env.ARCH}_${env.FLAVOR}.txt
+ fi
+ """
+ // The below is to work around an issue where Jenkins would recurse into
+ // dist/ceph/qa and get stuck trying to follow the .qa symlinks. This was
+ // discovered by seeing many messages about "too many levels of symbolic links"
+ // in the system journal.
+ sh "find ${env.WORKSPACE}/dist/ceph/ -name .qa -exec rm {} \\;"
+ archiveArtifacts(
+ artifacts: 'sccache_log*.txt',
+ allowEmptyArchive: true,
+ fingerprint: true,
+ )
+ }
+ }
+ }
+ unsuccessful {
+ script {
+ def os = get_os_info(env.DIST)
+ sh "./scripts/update_shaman.sh failed ceph ${os.name} ${os.version_name} $ARCH"
+ }
+ }
+ }
+ }
+ stage("upload packages") {
+ environment {
+ CHACRACTL_KEY = credentials('chacractl-key')
+ SHAMAN_API_KEY = credentials('shaman-api-key')
+ }
+ when {
+ expression { return build_matrix["${DIST}_${ARCH}"] == true }
+ }
+ steps {
+ script {
+ def chacra_url = sh(
+ script: '''grep url ~/.chacractl | cut -d'"' -f2''',
+ returnStdout: true,
+ ).trim()
+ def os = get_os_info(env.DIST)
+ if ( os.pkg_type == "rpm" ) {
+ sh """#!/bin/bash
+ set -ex
+ cd ./dist/ceph
+ mkdir -p ./rpmbuild/SRPMS/
+ ln ceph-*.src.rpm ./rpmbuild/SRPMS/
+ """
+ // Push packages to chacra.ceph.com under the 'test' ref if ceph-release-pipeline's TEST=true
+ env.SHA1 = env.TEST?.toBoolean() ? 'test' : env.SHA1
+ def spec_text = get_ceph_release_spec_text("${chacra_url}r/ceph/${env.BRANCH}/${env.SHA1}/${os.name}/${os.version_name}/flavors/${env.FLAVOR}/")
+ writeFile(
+ file: "dist/ceph/rpmbuild/SPECS/ceph-release.spec",
+ text: spec_text,
+ )
+ def repo_text = get_ceph_release_repo_text("${chacra_url}r/ceph/${env.BRANCH}/${env.SHA1}/${os.name}/${os.version_name}/flavors/${env.FLAVOR}")
+ writeFile(
+ file: "dist/ceph/rpmbuild/SOURCES/ceph.repo",
+ text: repo_text,
+ )
+ def ceph_builder_tag = "${env.SHA1[0..6]}.${env.BRANCH}.${env.DIST}.${ARCH}.${FLAVOR}"
+ def bwc_command_base = "python3 src/script/build-with-container.py --image-repo=${env.CEPH_BUILDER_IMAGE} --tag=${ceph_builder_tag} -d ${DIST} --image-variant=packages --ceph-version ${env.VERSION}"
+ def bwc_command = "${bwc_command_base} -e custom -- rpmbuild -bb --define \\'_topdir /ceph/rpmbuild\\' /ceph/rpmbuild/SPECS/ceph-release.spec"
+ sh """#!/bin/bash
+ set -ex
+ cd $WORKSPACE/dist/ceph
+ ${bwc_command}
+ """
+ }
+ sh """#!/bin/bash
+ export CHACRA_URL="${chacra_url}"
+ export OS_NAME="${os.name}"
+ export OS_VERSION="${os.version}"
+ export OS_VERSION_NAME="${os.version_name}"
+ export OS_PKG_TYPE="${os.pkg_type}"
+ if [ "$THROWAWAY" != "true" ]; then ./scripts/chacra_upload.sh; fi
+ """
+ }
+ }
+ post {
+ success {
+ script {
+ def os = get_os_info(env.DIST)
+ sh "./scripts/update_shaman.sh completed ceph ${os.name} ${os.version_name} $ARCH"
+ }
+ }
+ unsuccessful {
+ script {
+ def os = get_os_info(env.DIST)
+ sh "./scripts/update_shaman.sh failed ceph ${os.name} ${os.version_name} $ARCH"
+ }
+ }
+ }
+ }
+ stage("container") {
+ when {
+ expression { env.CI_CONTAINER == 'true' && container_distros.contains(env.DIST) }
+ }
+ environment {
+ CONTAINER_REPO_CREDS = credentials('quay-ceph-io-ceph-ci')
+ }
+ steps {
+ script {
+ env.CONTAINER_REPO_USERNAME = env.CONTAINER_REPO_CREDS_USR
+ env.CONTAINER_REPO_PASSWORD = env.CONTAINER_REPO_CREDS_PSW
+ def os = get_os_info(env.DIST)
+ cephver = env.VERSION.trim()
+ sh """#!/bin/bash
+ export DISTRO=${os.name}
+ export RELEASE=${os.version}
+ export cephver=${cephver}
+ ./scripts/build_container
+ """
+ }
+ }
+ }
+ }
+ post {
+ success {
+ sh 'echo success: $DIST $ARCH $FLAVOR'
+ }
+ unsuccessful {
+ sh 'echo failure: $DIST $ARCH $FLAVOR'
+ }
+ always {
+ script {
+ sh 'hostname'
+ sh 'test -r ${WORKSPACE}/cursor && sudo journalctl -k -c $(cat ${WORKSPACE}/cursor)'
+ sh 'podman unshare chown -R 0:0 ${WORKSPACE}/'
+ }
+ }
+ }
+ }
+ }
+ }
+}
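The `readProperties` loop near the top of this Jenkinsfile turns each KEY=VALUE line of `dist/other_envvars` into an environment variable. A minimal shell sketch of the same idea (the file contents below are hypothetical):

```shell
# Parse a KEY=VALUE properties file into exported environment variables,
# mirroring the Jenkins readProperties loop above (values are made up).
cat > other_envvars <<'EOF'
CEPH_REPO=https://github.com/ceph/ceph-ci
chacra_url=https://chacra.example.com/
EOF

while IFS='=' read -r key value; do
    # skip blank lines and comments
    case "$key" in ''|'#'*) continue;; esac
    export "$key=$value"
done < other_envvars

echo "$CEPH_REPO"
```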
--- /dev/null
+- job:
+ name: ceph-dev-pipeline
+ description: ceph-dev-pipeline
+ project-type: pipeline
+ quiet-period: 1
+ concurrent: true
+ pipeline-scm:
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-build
+ branches:
+ - ${{CEPH_BUILD_BRANCH}}
+ shallow-clone: true
+ submodule:
+ disable: true
+ wipe-workspace: true
+ script-path: ceph-dev-pipeline/build/Jenkinsfile
+ lightweight-checkout: true
+ do-not-fetch-tags: true
+
+ parameters:
+ - string:
+ name: BRANCH
+ description: "The git branch (or tag) to build"
+ default: main
+
+ - string:
+ name: SHA1
+ description: "The specific commit to build"
+
+ - choice:
+ name: CEPH_REPO
+ choices:
+ - https://github.com/ceph/ceph-ci
+ - https://github.com/ceph/ceph
+
+ - string:
+ name: DISTROS
+ description: "A list of distros to build for. Available options are: centos9, centos8, noble, jammy, focal, and windows"
+ default: "centos9 noble jammy"
+
+ - string:
+ name: ARCHS
+ description: "A list of architectures to build for. Available options are: x86_64 and arm64"
+ default: "x86_64 arm64"
+
+ - string:
+ name: FLAVORS
+ description: "A list of flavors to build. Available options are: default, crimson-release, crimson-debug"
+ default: "default"
+
+ - bool:
+ name: CI_COMPILE
+ description: "Whether to compile and build packages"
+ default: true
+
+ - bool:
+ name: THROWAWAY
+ description: "Whether to push any binaries to Chacra"
+ default: false
+
+ - bool:
+ name: FORCE
+ description: "Whether to push new binaries to Chacra if some are already present"
+ default: false
+
+ - choice:
+ name: FLAVOR
+ choices:
+ - default
+ - crimson-debug
+ - crimson-release
+ default: "default"
+ description: "Type of Ceph build, choices are: crimson-debug, crimson-release, default. Defaults to: 'default'"
+
+ - bool:
+ name: CI_CONTAINER
+ description: "Whether to build and push container images"
+ default: true
+
+ - string:
+ name: CONTAINER_REPO_HOSTNAME
+ description: "FQDN of container repo server (e.g. 'quay.io')"
+ default: "quay-quay-quay.apps.os.sepia.ceph.com"
+
+ - string:
+ name: CONTAINER_REPO_ORGANIZATION
+ description: "Name of container repo organization (e.g. 'ceph-ci')"
+ default: "ceph-ci"
+
+ - bool:
+ name: DWZ
+ description: "Use dwz to make debuginfo packages smaller"
+ default: false
+
+ - bool:
+ name: SCCACHE
+ description: "Use sccache to speed up compilation"
+ default: true
+
+ - string:
+ name: SETUP_BUILD_ID
+ description: "Use the source distribution from this ceph-dev-new-setup build instead of creating a new one"
+ default: ""
+
+ - choice:
+ name: SETUP_JOB
+ choices:
+ - ceph-source-dist
+ - ceph-dev-new-setup
+
+ - string:
+ name: CEPH_BUILD_BRANCH
+ description: "Use the Jenkinsfile from this ceph-build branch"
+ default: main
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - text:
+ credential-id: shaman-api-key
+ variable: SHAMAN_API_KEY
--- /dev/null
+#!/bin/bash -ex
+
+# branch_slash_filter splits on '/' to get just 'wip-mybranch' when the
+# input is like: origin/wip-mybranch
+BRANCH=`branch_slash_filter $BRANCH`
+SHA1=${GIT_COMMIT}
+
+HOST=$(hostname --short)
+echo "Building on ${HOST}"
+echo " DIST=${DIST}"
+echo " BPTAG=${BPTAG}"
+echo " WS=$WORKSPACE"
+echo " PWD=$(pwd)"
+echo " BRANCH=$BRANCH"
+echo " SHA1=$GIT_COMMIT"
+
+if [ -z "$BRANCH" ] ; then
+ echo "No git branch was supplied"
+ exit 1
+fi
+
+echo "Building version $(git describe --abbrev=8) Branch $BRANCH"
+
+rm -rf dist
+rm -rf release
+
+# fix version/release. Hack needed only for the spec
+# file for rc candidates.
+#export force=force
+#sed -i 's/^Version:.*/Version: 0.80/' ceph.spec.in
+#sed -i 's/^Release:.*/Release: rc1%{?dist}/' ceph.spec.in
+#sed -i 's/^Source0:.*/Source0: http:\/\/ceph.com\/download\/%{name}-%{version}-rc1.tar.bz2/' ceph.spec.in
+#sed -i 's/^%setup.*/%setup -q -n %{name}-%{version}-rc1/' ceph.spec.in
+
+# run submodule updates regardless
+echo "Running submodule update ..."
+git submodule update --init --quiet
+
+# export args for building optional packages
+ceph_build_args_from_flavor ${FLAVOR}
+
+# When using autotools/autoconf it is possible to see output from `git diff`
+# since some macros can be copied over to the ceph source, triggering this
+# check. This is why this check now is done just before running autogen.sh
+# which calls `aclocal -I m4 --install` that copies a system version of
+# ltsugar.m4 that can be different from the one included in the ceph source
+# tree.
+if git diff --quiet ; then
+ echo repository is clean
+else
+ echo
+ echo "**** REPOSITORY IS DIRTY ****"
+ echo
+ git diff
+ if [ "$force" != "force" ]; then
+ echo "add 'force' argument if you really want to continue."
+ exit 1
+ fi
+ echo "forcing."
+fi
+
+mkdir -p release
+
+# Contents below used to come from /srv/release_tarball.sh and
+# was called like::
+#
+# $bindir/release_tarball.sh release release/version
+
+releasedir='release'
+versionfile='release/version'
+
+cephver=`git describe --abbrev=8 --match "v*" | sed s/^v//`
+echo current version $cephver
+
+srcdir=`pwd`
+
+if [ -d "$releasedir/$cephver" ]; then
+ echo "$releasedir/$cephver already exists; reuse that release tarball"
+else
+ dch -v $cephver-1 'autobuilder'
+
+ echo building tarball
+ rm ceph-*.tar.gz || true
+ rm ceph-*.tar.bz2 || true
+
+ ./make-dist $cephver
+ vers=`ls ceph-*.tar.bz2 | cut -c 6- | sed 's/.tar.bz2//'`
+ extension="tar.bz2"
+ extract_flags="jxf"
+ compress_flags="jcf"
+
+ echo tarball vers $vers
+
+ echo extracting
+ mkdir -p $releasedir/$cephver/rpm
+ cp rpm/*.patch $releasedir/$cephver/rpm || true
+ cd $releasedir/$cephver
+
+ tar $extract_flags $srcdir/ceph-$vers.$extension
+
+ [ "$vers" != "$cephver" ] && mv ceph-$vers ceph-$cephver
+
+ tar zcf ceph_$cephver.orig.tar.gz ceph-$cephver
+ cp -a ceph_$cephver.orig.tar.gz ceph-$cephver.tar.gz
+
+ tar jcf ceph-$cephver.tar.bz2 ceph-$cephver
+
+ # copy debian dir, too. Prevent errors with `true` when using cmake
+ cp -a $srcdir/debian debian || true
+ cd $srcdir
+
+ # copy in spec file, too. If using cmake, the spec file
+ # will already exist.
+ cp ceph.spec $releasedir/$cephver || true
+fi
+
+
+if [ -n "$versionfile" ]; then
+ echo $cephver > $versionfile
+ echo "wrote $cephver to $versionfile"
+fi
+
+vers=`cat release/version`
+
+
+(
+ cd release/$vers
+ mkdir -p ceph-$vers/debian
+ cp -r debian/* ceph-$vers/debian/
+ dpkg-source -b ceph-$vers
+)
+
+mkdir -p dist
+# Debian Source Files
+mv release/$vers/*.dsc dist/.
+mv release/$vers/*.diff.gz dist/. || true
+mv release/$vers/*.orig.tar.gz dist/.
+# RPM Source Files
+mkdir -p dist/rpm/
+mv release/$vers/rpm/*.patch dist/rpm/ || true
+mv release/$vers/ceph.spec dist/.
+mv release/$vers/*.tar.* dist/.
+# Parameters
+mv release/version dist/.
+
+write_dist_files
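The `vers=` pipeline above strips the `ceph-` prefix and `.tar.bz2` suffix from the tarball name with `cut` and `sed`. The same extraction can be sketched with pure parameter expansion (the tarball name here is hypothetical):

```shell
# Extract the version from a tarball name, equivalent to:
#   ls ceph-*.tar.bz2 | cut -c 6- | sed 's/.tar.bz2//'
tarball="ceph-19.3.0-123-g1a2b3c4.tar.bz2"   # hypothetical name
vers=${tarball#ceph-}      # drop leading "ceph-"
vers=${vers%.tar.bz2}      # drop trailing ".tar.bz2"
echo "$vers"
```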
--- /dev/null
+#!/bin/bash -ex
+
+# Update shaman with the failed build status. At this point there is no
+# architecture or distro information, so we just report this with the current
+# (ceph-dev-setup) build information, which includes the log and build URLs.
+BRANCH=`branch_slash_filter $BRANCH`
+SHA1=${GIT_COMMIT}
+
+failed_build_status "ceph"
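Both this script and the setup build call `branch_slash_filter` from `scripts/build_utils.sh` to turn refs like `origin/wip-mybranch` into plain branch names. A stand-in sketch of that behavior (the real helper lives in build_utils.sh and may differ):

```shell
# Hypothetical stand-in for branch_slash_filter: keep only the text after
# the last '/', so "origin/wip-mybranch" becomes "wip-mybranch".
branch_slash_filter() {
    echo "${1##*/}"
}

branch_slash_filter origin/wip-mybranch
branch_slash_filter main   # names without a slash pass through unchanged
```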
--- /dev/null
+- job:
+ name: ceph-dev-setup
+ description: "This job step checks out the branch and builds the tarballs, diffs, and dsc that are passed to the ceph-dev-build step.\r\n\r\nNotes:\r\nThe job needs to run on a relatively recent Debian system. The Restrict where run feature is used to specify an appropriate label.\r\nThe clear workspace before checkout box for the git plugin is used."
+ node: huge && bionic && !arm64
+ display-name: 'ceph-dev-setup'
+ block-downstream: false
+ block-upstream: false
+ concurrent: true
+ properties:
+ - build-discarder:
+ num-to-keep: 100
+ artifact-num-to-keep: 50
+ - github:
+ url: https://github.com/ceph/ceph
+ - copyartifact:
+ projects: ceph-dev-build,ceph-dev,preserve-adarsha-test
+
+ parameters:
+ - string:
+ name: BRANCH
+ description: "The git branch (or tag) to build"
+
+ scm:
+ - git:
+ url: git@github.com:ceph/ceph.git
+ # Use the SSH key attached to the ceph-jenkins GitHub account.
+ credentials-id: 'jenkins-build'
+ branches:
+ - $BRANCH
+ skip-tag: true
+ wipe-workspace: true
+
+ builders:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/build
+
+ publishers:
+ - archive:
+ artifacts: 'dist/**'
+ allow-empty: false
+ latest-only: false
+
+ - postbuildscript:
+ builders:
+ - role: SLAVE
+ build-on:
+ - FAILURE
+ - ABORTED
+ build-steps:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/failure
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - text:
+ credential-id: shaman-api-key
+ variable: SHAMAN_API_KEY
--- /dev/null
+- job:
+ name: ceph-dev
+ description: 'This job builds branches from https://github.com/ceph/ceph for testing purposes.'
+ node: built-in
+ project-type: multijob
+ defaults: global
+ concurrent: true
+ display-name: 'ceph-dev'
+ block-downstream: false
+ block-upstream: false
+ properties:
+ - build-discarder:
+ days-to-keep: 14
+ artifact-days-to-keep: 14
+ - github:
+ url: https://github.com/ceph/ceph
+
+ parameters:
+ - string:
+ name: BRANCH
+ description: "The git branch (or tag) to build"
+ default: main
+
+ - string:
+ name: DISTROS
+ description: "A list of distros to build for. Available options are: centos9, centos8, centos7, centos6, noble, jammy, focal, bionic, xenial, trusty, precise, wheezy, jessie, and windows"
+ default: "noble jammy focal bionic centos7 centos8 centos9"
+
+ - string:
+ name: ARCHS
+ description: "A list of architectures to build for. Available options are: x86_64 and arm64"
+ default: "x86_64 arm64"
+
+ - bool:
+ name: THROWAWAY
+ description: "Default: False. When True it will not POST binaries to chacra. Artifacts will not be around for long. Useful to test builds."
+ default: false
+
+ - bool:
+ name: FORCE
+ description: "If this is unchecked (the default), nothing is built or pushed if the binaries already exist in chacra. If this is checked, the binaries will be built and pushed to chacra even if they already exist in chacra."
+
+ - string:
+ name: CEPH_BUILD_VIRTUALENV
+ description: "Base parent path for virtualenv locations, set to avoid issues with extremely long paths that are incompatible with tools like pip. Defaults to '/tmp/' (note the trailing slash, which is required)."
+ default: "/tmp/"
+
+ - choice:
+ name: FLAVOR
+ choices:
+ - default
+ - crimson-debug
+ - crimson-release
+ default: "default"
+ description: "Type of Ceph build, choices are: crimson-debug, crimson-release, default. Defaults to: 'default'"
+
+ - bool:
+ name: CI_CONTAINER
+ description: 'Build container with development release of Ceph?'
+ default: true
+
+ - string:
+ name: CONTAINER_REPO_HOSTNAME
+ description: "For CI_CONTAINER: Name of container repo server (i.e. 'quay.io')"
+ default: "quay.ceph.io"
+
+ - string:
+ name: CONTAINER_REPO_ORGANIZATION
+ description: "For CI_CONTAINER: Name of container repo organization (i.e. 'ceph-ci')"
+ default: "ceph-ci"
+
+ builders:
+ - multijob:
+ name: 'ceph dev setup phase'
+ condition: SUCCESSFUL
+ projects:
+ - name: ceph-dev-setup
+ current-parameters: true
+ exposed-scm: false
+ - copyartifact:
+ project: ceph-dev-setup
+ filter: dist/sha1
+ which-build: multijob-build
+ - inject:
+ properties-file: ${{WORKSPACE}}/dist/sha1
+ - copyartifact:
+ project: ceph-dev-setup
+ filter: dist/branch
+ which-build: multijob-build
+ - inject:
+ properties-file: ${{WORKSPACE}}/dist/branch
+ - multijob:
+ name: 'ceph dev build phase'
+ condition: SUCCESSFUL
+ projects:
+ - name: ceph-dev-build
+ current-parameters: true
+ exposed-scm: false
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - build-name:
+ name: "#${{BUILD_NUMBER}} ${{BRANCH}}, ${{SHA1}}, ${{DISTROS}}, ${{FLAVOR}}"
--- /dev/null
+- job:
+ name: ceph-devstack
+ description: Run ceph-devstack
+ project-type: pipeline
+ concurrent: false
+ pipeline-scm:
+ scm:
+ - git:
+ url: https://github.com/zmc/ceph-devstack
+ branches:
+ - origin/${{CEPH_DEVSTACK_BRANCH}}
+ parameters:
+ - string:
+ name: CEPH_DEVSTACK_BRANCH
+ default: "main"
+ - string:
+ name: TEUTHOLOGY_BRANCH
+ default: "main"
+ - string:
+ name: TEUTHOLOGY_CEPH_BRANCH
+ default: "main"
+ - string:
+ name: TEUTHOLOGY_CEPH_REPO
+ default: "https://github.com/ceph/ceph.git"
+ - string:
+ name: TEUTHOLOGY_SUITE
+ default: "teuthology:no-ceph"
+ - string:
+ name: TEUTHOLOGY_SUITE_BRANCH
+ default: "main"
+ - string:
+ name: TEUTHOLOGY_SUITE_REPO
+ default: "https://github.com/ceph/ceph.git"
+
+ triggers:
+ - github-pull-request:
+ admin-list:
+ - zmc
+ - dmick
+ - kamoltat
+ - amathuria
+ org-list:
+ - ceph
+ trigger-phrase: 'jenkins test.*|jenkins retest.*'
+ only-trigger-phrase: false
+ github-hooks: true
+ permit-all: false
+ auto-close-on-fail: false
--- /dev/null
+#!/bin/bash
+# vim: ts=4 sw=4 expandtab
+
+set -ex
+
+# did any doc/ files change?
+# If $GIT_COMMIT is a merge commit (it is usually, because that's how
+# we manage ceph, with PRs that include merges): noting the three dots,
+# this git diff invocation compares the common ancestor between first parent
+# (original state of the branch) to the second parent (the branch being merged
+# in) i.e., same as git diff $(git merge-base p1 p2) p2 and outputs filenames
+# only (--name-only).
+#
+# Skip this optimization for non-merge commits
+
+if git rev-parse --verify --quiet ${GIT_COMMIT}^2 >/dev/null; then
+ # enclosing doublequotes prevent newlines from disappearing, for grep below
+ files="$(git diff --name-only ${GIT_COMMIT}^1...${GIT_COMMIT}^2)"
+ echo -e "changed files:\n$files"
+ if ! (echo "$files" | grep -sq '^doc/'); then
+ echo "No doc files changed, skipping build"
+ exit 0
+ fi
+fi
+
+./admin/build-doc
+
+REV="$(git rev-parse HEAD)"
+OUTDIR="docs.raw/sha1/$REV"
+mkdir -p $OUTDIR
+
+cp -a build-doc/output/html/* $OUTDIR
+
+# Log this $OUTDIR's sha1
+printf '%s\n' "$REV" >"$OUTDIR/sha1"
+
+# Symlink the branch name
+BRANCH=${GIT_BRANCH#*/}
+mkdir -p docs.raw/ref/
+ln -s ../sha1/$REV "docs.raw/ref/$BRANCH"
+
+# Publish this sha1's contents first:
+rsync -a -v docs.raw/sha1/$REV /var/docs.raw/sha1/
+# Now point the ref symlink at the newly-uploaded sha1.
+rsync -a -v docs.raw/ref/$BRANCH /var/docs.raw/ref/
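The comment above notes that `git diff A...B` is the same as diffing B against `git merge-base A B`. That equivalence can be checked in a throwaway repository (all names and file contents below are made up):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email ci@example.com
git config user.name ci

echo base > file
git add file
git commit -qm base
main=$(git symbolic-ref --short HEAD)

# topic branch touches only doc/
git checkout -qb topic
mkdir doc
echo changed > doc/index.rst
git add doc
git commit -qm "doc change"

# main moves on, then merges the topic branch
git checkout -q "$main"
echo other > file2
git add file2
git commit -qm "unrelated change"
git merge -q --no-ff --no-edit topic

# HEAD is now a merge commit: ^1 is the old main tip, ^2 is the topic tip
three_dot=$(git diff --name-only 'HEAD^1...HEAD^2')
merge_base=$(git diff --name-only "$(git merge-base HEAD^1 HEAD^2)" HEAD^2)
[ "$three_dot" = "$merge_base" ] && echo "equivalent: $three_dot"
```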
--- /dev/null
+- job:
+ name: ceph-docs
+ node: docs
+ project-type: freestyle
+ defaults: global
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ properties:
+ - build-discarder:
+ days-to-keep: 1
+ num-to-keep: 10
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ - github:
+ url: https://github.com/ceph/ceph
+ discard-old-builds: true
+
+ triggers:
+ - github
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph
+ browser: auto
+ # The default is to build and publish every branch.
+ # Uncomment this for testing:
+ #branches:
+ # - 'origin/main'
+ skip-tag: true
+ timeout: 20
+ wipe-workspace: true
+
+ builders:
+ - shell:
+ !include-raw-verbatim: ../../build/build
--- /dev/null
+#!/bin/bash -ex
+
+CONTAINER_VERSION=${CONTAINER_VERSION:-9.4.12}
+CONTAINER=ceph/ceph-grafana:${CONTAINER_VERSION}
+sudo dnf install -y podman
+sudo podman login quay.io -u ${CONTAINER_REPO_USERNAME} -p ${CONTAINER_REPO_PASSWORD}
+
+for repohost in quay.io; do
+ sudo podman rmi -f grafana:${CONTAINER_VERSION}-combined ${repohost}/${CONTAINER}-x86_64 ${repohost}/${CONTAINER}-aarch64 || true
+
+ sudo podman pull ${repohost}/${CONTAINER}-x86_64
+ sudo podman pull ${repohost}/${CONTAINER}-aarch64
+ sudo podman manifest create grafana:${CONTAINER_VERSION}-combined
+ sudo podman manifest add grafana:${CONTAINER_VERSION}-combined ${repohost}/${CONTAINER}-x86_64
+ sudo podman manifest add grafana:${CONTAINER_VERSION}-combined ${repohost}/${CONTAINER}-aarch64
+
+ sudo podman manifest push grafana:${CONTAINER_VERSION}-combined ${repohost}/${CONTAINER}
+
+ sudo podman rmi -f grafana:${CONTAINER_VERSION}-combined ${repohost}/${CONTAINER}-x86_64 ${repohost}/${CONTAINER}-aarch64 || true
+done
--- /dev/null
+- job:
+ name: ceph-grafana-trigger
+ description: 'Build a combined container image "manifest list" for ceph-grafana that includes both architectures'
+ node: centos8
+ project-type: freestyle
+ defaults: global
+ concurrent: true
+ display-name: 'ceph-grafana-trigger'
+ block-downstream: true
+ properties:
+ - build-discarder:
+ days-to-keep: -1
+ num-to-keep: 10
+ artifact-num-to-keep: 10
+ - github:
+ url: https://github.com/ceph/ceph
+
+ parameters:
+ - string:
+ name: BRANCH
+ description: "The git branch (or tag) to build"
+ default: main
+ - string:
+ name: CONTAINER_VERSION
+ description: "The version tag for the containers; will have -ARCH added to it. By convention, the version of Grafana"
+ default: "9.4.12"
+ axes:
+ - axis:
+ type: label-expression
+ name: ARCH
+ values:
+ - x86_64
+ - arm64
+ builders:
+ - trigger-builds:
+ - project: ceph-grafana
+ predefined-parameters: ARCH=x86_64
+ current-parameters: true
+ block: true
+ - trigger-builds:
+ - project: ceph-grafana
+ predefined-parameters: ARCH=arm64
+ current-parameters: true
+ block: true
+ - shell:
+ !include-raw-verbatim: ../../build/build
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - username-password-separated:
+ credential-id: ceph-container-quay-io
+ username: CONTAINER_REPO_USERNAME
+ password: CONTAINER_REPO_PASSWORD
+ - build-name:
+ name: "#${{BUILD_NUMBER}} ${{BRANCH}}"
--- /dev/null
+sudo yum install -y buildah
+cd monitoring/grafana/build
+make build
+make push
+make clean
--- /dev/null
+- job:
+ name: ceph-grafana
+ description: 'Builds the ceph-grafana container.'
+ project-type: freestyle
+ concurrent: true
+ display-name: 'ceph-grafana'
+ properties:
+ - groovy-label:
+ script: return ARCH + '&&centos8'
+ - build-discarder:
+ days-to-keep: -1
+ num-to-keep: 25
+ artifact-days-to-keep: 25
+ artifact-num-to-keep: 25
+ - github:
+ url: https://github.com/ceph/ceph
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph
+ branches:
+ - $BRANCH
+ wipe-workspace: true
+
+ parameters:
+ - string:
+ name: BRANCH
+ description: "The git branch (or tag) to build"
+ default: main
+ - string:
+ name: ARCH
+ description: "Architecture to build for. Available options are: x86_64, arm64"
+ default: "x86_64"
+
+ builders:
+ - shell:
+ !include-raw-verbatim:
+ ../../build/build
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - username-password-separated:
+ credential-id: ceph-container-quay-io
+ username: CONTAINER_REPO_USERNAME
+ password: CONTAINER_REPO_PASSWORD
+ - build-name:
+ name: "#${{BUILD_NUMBER}} ${{BRANCH}}, ${{ARCH}}"
--- /dev/null
+#!/bin/bash
+
+# the following two methods exist in scripts/build_utils.sh
+pkgs=( "tox" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+cd "$WORKSPACE/ceph-iscsi-cli"
+
+$VENV/tox -rv -e flake8
--- /dev/null
+- scm:
+ name: ceph-iscsi-cli
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-iscsi-cli.git
+ branches:
+ - ${{sha1}}
+ refspec: +refs/pull/*:refs/remotes/origin/pr/*
+ browser: auto
+ timeout: 20
+ skip-tag: true
+ wipe-workspace: true
+ basedir: "ceph-iscsi-cli"
+
+- job:
+ name: ceph-iscsi-cli-flake8
+ description: Runs Flake8 tests for ceph-iscsi-cli on each GitHub PR
+ project-type: freestyle
+ node: python3
+ block-downstream: false
+ block-upstream: false
+ defaults: global
+ display-name: 'ceph-iscsi-cli: Flake8'
+ quiet-period: 5
+ retry-count: 3
+
+ properties:
+ - build-discarder:
+ days-to-keep: 15
+ num-to-keep: 30
+ artifact-days-to-keep: 15
+ artifact-num-to-keep: 15
+ - github:
+ url: https://github.com/ceph/ceph-iscsi-cli/
+
+ parameters:
+ - string:
+ name: sha1
+ description: "A pull request ID, like 'origin/pr/72/head'"
+
+ triggers:
+ - github-pull-request:
+ admin-list:
+ - dillaman
+ org-list:
+ - ceph
+ trigger-phrase: 'jenkins flake8'
+ only-trigger-phrase: false
+ github-hooks: true
+ permit-all: true
+ auto-close-on-fail: false
+ status-context: "Flake8"
+
+ scm:
+ - ceph-iscsi-cli
+
+ builders:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/build
--- /dev/null
+- job:
+ name: ceph-iscsi-cli-trigger
+ node: built-in
+ project-type: freestyle
+ defaults: global
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ properties:
+ - build-discarder:
+ days-to-keep: 1
+ num-to-keep: 10
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ - github:
+ url: https://github.com/ceph/ceph-iscsi-cli
+ discard-old-builds: true
+
+ triggers:
+ - github
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-iscsi-cli.git
+ branches:
+ - 'origin/main*'
+ - 'origin/wip*'
+ skip-tag: true
+ timeout: 20
+ wipe-workspace: true
+
+ builders:
+ - trigger-builds:
+ - project: 'ceph-iscsi-cli'
+ predefined-parameters: |
+ BRANCH=${{GIT_BRANCH}}
+ FORCE=True
--- /dev/null
+#! /usr/bin/bash
+set -ex
+
+# Only do actual work when we are a DEB distro
+if test "$DISTRO" != "debian" -a "$DISTRO" != "ubuntu"; then
+ exit 0
+fi
+
+exit 1
--- /dev/null
+#! /usr/bin/bash
+set -ex
+
+PROJECT=ceph-iscsi-cli
+BRANCH=`branch_slash_filter $BRANCH`
+
+# Only do actual work when we are an RPM distro
+if test "$DISTRO" != "fedora" -a "$DISTRO" != "centos" -a "$DISTRO" != "rhel"; then
+ exit 0
+fi
+
+# Install the dependencies
+sudo yum install -y mock
+
+## Get some basic information about the system and the repository
+# Get version
+get_rpm_dist
+VERSION="$(git describe --abbrev=0 --tags HEAD)"
+REVISION="$(git describe --tags HEAD | cut -d - -f 2- | sed 's/-/./')"
+if [ "$VERSION" = "$REVISION" ]; then
+ REVISION="1"
+fi
+
+# Create the dist tarball (gzip-compressed to match the .tar.gz name)
+tar czf dist/${PROJECT}-${VERSION}.tar.gz \
+ --exclude .git --exclude dist \
+ --transform "s,^,${PROJECT}-${VERSION}/," *
+tar tfv dist/${PROJECT}-${VERSION}.tar.gz
+
+# Update spec version
+sed -i "s/^Version:.*$/Version:\t${VERSION}/g" $WORKSPACE/${PROJECT}.spec
+sed -i "s/^Release:.*$/Release:\t${REVISION}%{?dist}/g" $WORKSPACE/${PROJECT}.spec
+# for debugging
+cat $WORKSPACE/${PROJECT}.spec
+
+# Update setup.py version
+sed -i "s/version=\"[^\"]*\"/version=\"${VERSION}\"/g" $WORKSPACE/setup.py
+# for debugging
+cat $WORKSPACE/setup.py
+
+## Create the source rpm
+echo "Building SRPM"
+rpmbuild \
+ --define "_sourcedir $WORKSPACE/dist" \
+ --define "_specdir $WORKSPACE/dist" \
+ --define "_builddir $WORKSPACE/dist" \
+ --define "_srcrpmdir $WORKSPACE/dist/SRPMS" \
+ --define "_rpmdir $WORKSPACE/dist/RPMS" \
+ --nodeps -bs $WORKSPACE/${PROJECT}.spec
+SRPM=$(readlink -f $WORKSPACE/dist/SRPMS/*.src.rpm)
+
+## Build the binaries with mock
+echo "Building RPMs"
+sudo mock --verbose -r ${MOCK_TARGET}-${RELEASE}-${ARCH} --scrub=all
+sudo mock --verbose -r ${MOCK_TARGET}-${RELEASE}-${ARCH} --resultdir=$WORKSPACE/dist/RPMS/ ${SRPM} || ( tail -n +1 $WORKSPACE/dist/RPMS/{root,build}.log && exit 1 )
+
+## Upload the created RPMs to chacra
+chacra_endpoint="${PROJECT}/${BRANCH}/${GIT_COMMIT}/${DISTRO}/${RELEASE}"
+
+[ "$FORCE" = true ] && chacra_flags="--force" || chacra_flags=""
+
+# push binaries to chacra
+find $WORKSPACE/dist/RPMS/ | egrep "\.noarch\.rpm" | $VENV/chacractl binary ${chacra_flags} create ${chacra_endpoint}/noarch/
+PACKAGE_MANAGER_VERSION=$(rpm --queryformat '%{VERSION}-%{RELEASE}\n' -qp $(find $WORKSPACE/dist/RPMS/ | egrep "\.noarch\.rpm" | head -1))
+
+# write json file with build info
+cat > $WORKSPACE/repo-extra.json << EOF
+{
+ "version":"$VERSION",
+ "package_manager_version":"$PACKAGE_MANAGER_VERSION",
+ "build_url":"$BUILD_URL",
+ "root_build_cause":"$ROOT_BUILD_CAUSE",
+ "node_name":"$NODE_NAME",
+ "job_name":"$JOB_NAME"
+}
+EOF
+# POST the build info json to chacra's repo-extra endpoint
+curl -X POST -H "Content-Type:application/json" --data "@$WORKSPACE/repo-extra.json" -u $CHACRACTL_USER:$CHACRACTL_KEY ${chacra_url}repos/${chacra_endpoint}/extra/
+
+# start repo creation
+$VENV/chacractl repo update ${chacra_endpoint}
+
+echo Check the status of the repo at: https://shaman.ceph.com/api/repos/${chacra_endpoint}
--- /dev/null
+#! /usr/bin/bash
+#
+# Ceph distributed storage system
+#
+# Copyright (C) 2016 Red Hat <contact@redhat.com>
+#
+# Author: Boris Ranto <branto@redhat.com>
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+set -ex
+
+# Make sure we execute at the top level directory before we do anything
+cd $WORKSPACE
+
+# This will set the DISTRO and MOCK_TARGET variables
+get_distro_and_target
+
+# Perform a clean-up
+git clean -fxd
+
+# Make sure the dist directory is clean
+rm -rf dist
+mkdir -p dist
+
+# Print some basic system info
+HOST=$(hostname --short)
+echo "Building on $(hostname) with the following env"
+echo "*****"
+env
+echo "*****"
+
+export LC_ALL=C # the following is vulnerable to i18n
+
+pkgs=( "chacractl>=0.0.21" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+# ask shaman which chacra instance to use
+chacra_url=`curl -f -u $SHAMAN_API_USER:$SHAMAN_API_KEY https://shaman.ceph.com/api/nodes/next/`
+# create the .chacractl config file using global variables
+make_chacractl_config $chacra_url
--- /dev/null
+#!/bin/bash
+set -ex
+
+# Only do actual work when we are a DEB distro
+if test -f /etc/redhat-release ; then
+ exit 0
+fi
--- /dev/null
+#!/bin/bash
+set -ex
+
+# only do work if we are an RPM distro
+if [[ ! -f /etc/redhat-release && ! -f /usr/bin/zypper ]] ; then
+ exit 0
+fi
--- /dev/null
+- job:
+ name: ceph-iscsi-cli
+ project-type: matrix
+ defaults: global
+ display-name: 'ceph-iscsi-cli'
+ block-downstream: false
+ block-upstream: false
+ concurrent: true
+ parameters:
+ - string:
+ name: BRANCH
+ description: "The git branch (or tag) to build"
+ default: "main"
+
+ - string:
+ name: DISTROS
+ description: "A list of distros to build for. Available options are: centos7, xenial, and bionic"
+ default: "centos7 xenial bionic"
+
+ - string:
+ name: ARCHS
+ description: "A list of architectures to build for. Available options are: x86_64"
+ default: "x86_64"
+
+ - bool:
+ name: FORCE
+ description: "
+If this is unchecked, then nothing is built or pushed if they already exist in chacra. This is the default.
+
+If this is checked, then the binaries will be built and pushed to chacra even if they already exist in chacra."
+
+ - string:
+ name: BUILD_VIRTUALENV
+ description: "Base parent path for virtualenv locations, set to avoid issues with extremely long paths that are incompatible with tools like pip. Defaults to '/tmp/' (note the trailing slash, which is required)."
+ default: "/tmp/"
+
+ execution-strategy:
+ combination-filter: DIST==AVAILABLE_DIST && ARCH==AVAILABLE_ARCH && (ARCH=="x86_64" || (ARCH == "arm64" && (DIST == "xenial" || DIST == "centos7")))
+ axes:
+ - axis:
+ type: label-expression
+ name: MACHINE_SIZE
+ values:
+ - small
+ - axis:
+ type: label-expression
+ name: AVAILABLE_ARCH
+ values:
+ - x86_64
+ - axis:
+ type: label-expression
+ name: AVAILABLE_DIST
+ values:
+ - centos7
+ - xenial
+ - bionic
+ - axis:
+ type: dynamic
+ name: DIST
+ values:
+ - DISTROS
+ - axis:
+ type: dynamic
+ name: ARCH
+ values:
+ - ARCHS
+
+ scm:
+ - git:
+ url: git@github.com:ceph/ceph-iscsi-cli.git
+ # Use the SSH key attached to the ceph-jenkins GitHub account.
+ credentials-id: 'jenkins-build'
+ branches:
+ - $BRANCH
+ skip-tag: true
+ wipe-workspace: true
+
+ builders:
+ - shell: |
+ echo "Cleaning up top-level workarea (shared among workspaces)"
+ rm -rf dist
+ rm -rf venv
+ rm -rf release
+ # debian build scripts
+ - shell:
+ !include-raw-verbatim:
+ - ../../build/validate_deb
+ - ../../../scripts/build_utils.sh
+ - ../../build/setup
+ - ../../build/build_deb
+ # rpm build scripts
+ - shell:
+ !include-raw-verbatim:
+ - ../../build/validate_rpm
+ - ../../../scripts/build_utils.sh
+ - ../../build/setup
+ - ../../build/build_rpm
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - text:
+ credential-id: chacractl-key
+ variable: CHACRACTL_KEY
+ - credentials-binding:
+ - text:
+ credential-id: shaman-api-key
+ variable: SHAMAN_API_KEY
--- /dev/null
+#!/bin/bash
+
+# the following two methods exist in scripts/build_utils.sh
+pkgs=( "tox" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+cd "$WORKSPACE/ceph-iscsi-config"
+
+$VENV/tox -rv -e flake8
--- /dev/null
+- scm:
+ name: ceph-iscsi-config
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-iscsi-config.git
+ branches:
+ - ${{sha1}}
+ refspec: +refs/pull/*:refs/remotes/origin/pr/*
+ browser: auto
+ timeout: 20
+ skip-tag: true
+ wipe-workspace: true
+ basedir: "ceph-iscsi-config"
+
+- job:
+ name: ceph-iscsi-config-flake8
+ description: Runs Flake8 tests for ceph-iscsi-config on each GitHub PR
+ project-type: freestyle
+ node: python3
+ block-downstream: false
+ block-upstream: false
+ defaults: global
+ display-name: 'ceph-iscsi-config: Flake8'
+ quiet-period: 5
+ retry-count: 3
+
+ properties:
+ - build-discarder:
+ days-to-keep: 15
+ num-to-keep: 30
+ artifact-days-to-keep: 15
+ artifact-num-to-keep: 15
+ - github:
+ url: https://github.com/ceph/ceph-iscsi-config/
+
+ parameters:
+ - string:
+ name: sha1
+ description: "A pull request ID, like 'origin/pr/72/head'"
+
+ triggers:
+ - github-pull-request:
+ admin-list:
+ - dillaman
+ org-list:
+ - ceph
+ trigger-phrase: 'jenkins flake8'
+ only-trigger-phrase: false
+ github-hooks: true
+ permit-all: true
+ auto-close-on-fail: false
+ status-context: "Flake8"
+
+ scm:
+ - ceph-iscsi-config
+
+ builders:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/build
--- /dev/null
+- job:
+ name: ceph-iscsi-config-trigger
+ node: built-in
+ project-type: freestyle
+ defaults: global
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ properties:
+ - build-discarder:
+ days-to-keep: 1
+ num-to-keep: 10
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ - github:
+ url: https://github.com/ceph/ceph-iscsi-config
+ discard-old-builds: true
+
+ triggers:
+ - github
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-iscsi-config.git
+ branches:
+ - 'origin/main*'
+ - 'origin/wip*'
+ skip-tag: true
+ timeout: 20
+ wipe-workspace: true
+
+ builders:
+ - trigger-builds:
+ - project: 'ceph-iscsi-config'
+ predefined-parameters: |
+ BRANCH=${{GIT_BRANCH}}
+ FORCE=True
--- /dev/null
+#! /usr/bin/bash
+set -ex
+
+# Only do actual work when we are a DEB distro
+if test "$DISTRO" != "debian" -a "$DISTRO" != "ubuntu"; then
+ exit 0
+fi
+
+# DEB builds are not supported for this project, so fail explicitly on DEB
+exit 1
--- /dev/null
+#! /usr/bin/bash
+set -ex
+
+PROJECT=ceph-iscsi-config
+BRANCH=`branch_slash_filter $BRANCH`
+
+# Only do actual work when we are an RPM distro
+if test "$DISTRO" != "fedora" -a "$DISTRO" != "centos" -a "$DISTRO" != "rhel"; then
+ exit 0
+fi
+
+# Install the dependencies
+sudo yum install -y mock
+
+## Get some basic information about the system and the repository
+# Get version
+get_rpm_dist
+VERSION="$(git describe --abbrev=0 --tags HEAD)"
+REVISION="$(git describe --tags HEAD | cut -d - -f 2- | sed 's/-/./')"
+if [ "$VERSION" = "$REVISION" ]; then
+ REVISION="1"
+fi
+
+# Create the dist tarball (gzip-compressed to match the .tar.gz name)
+tar czf dist/${PROJECT}-${VERSION}.tar.gz \
+ --exclude .git --exclude dist \
+ --transform "s,^,${PROJECT}-${VERSION}/," *
+tar tfv dist/${PROJECT}-${VERSION}.tar.gz
+
+# Update spec version
+sed -i "s/^Version:.*$/Version:\t${VERSION}/g" $WORKSPACE/${PROJECT}.spec
+sed -i "s/^Release:.*$/Release:\t${REVISION}%{?dist}/g" $WORKSPACE/${PROJECT}.spec
+# for debugging
+cat $WORKSPACE/${PROJECT}.spec
+
+# Update setup.py version
+sed -i "s/version=\"[^\"]*\"/version=\"${VERSION}\"/g" $WORKSPACE/setup.py
+# for debugging
+cat $WORKSPACE/setup.py
+
+## Create the source rpm
+echo "Building SRPM"
+rpmbuild \
+ --define "_sourcedir $WORKSPACE/dist" \
+ --define "_specdir $WORKSPACE/dist" \
+ --define "_builddir $WORKSPACE/dist" \
+ --define "_srcrpmdir $WORKSPACE/dist/SRPMS" \
+ --define "_rpmdir $WORKSPACE/dist/RPMS" \
+ --nodeps -bs $WORKSPACE/${PROJECT}.spec
+SRPM=$(readlink -f $WORKSPACE/dist/SRPMS/*.src.rpm)
+
+## Build the binaries with mock
+echo "Building RPMs"
+sudo mock --verbose -r ${MOCK_TARGET}-${RELEASE}-${ARCH} --scrub=all
+sudo mock --verbose -r ${MOCK_TARGET}-${RELEASE}-${ARCH} --resultdir=$WORKSPACE/dist/RPMS/ ${SRPM} || ( tail -n +1 $WORKSPACE/dist/RPMS/{root,build}.log && exit 1 )
+
+## Upload the created RPMs to chacra
+chacra_endpoint="${PROJECT}/${BRANCH}/${GIT_COMMIT}/${DISTRO}/${RELEASE}"
+
+[ "$FORCE" = true ] && chacra_flags="--force" || chacra_flags=""
+
+# push binaries to chacra
+find $WORKSPACE/dist/RPMS/ | egrep "\.noarch\.rpm" | $VENV/chacractl binary ${chacra_flags} create ${chacra_endpoint}/noarch/
+PACKAGE_MANAGER_VERSION=$(rpm --queryformat '%{VERSION}-%{RELEASE}\n' -qp $(find $WORKSPACE/dist/RPMS/ | egrep "\.noarch\.rpm" | head -1))
+
+# write json file with build info
+cat > $WORKSPACE/repo-extra.json << EOF
+{
+ "version":"$VERSION",
+ "package_manager_version":"$PACKAGE_MANAGER_VERSION",
+ "build_url":"$BUILD_URL",
+ "root_build_cause":"$ROOT_BUILD_CAUSE",
+ "node_name":"$NODE_NAME",
+ "job_name":"$JOB_NAME"
+}
+EOF
+# POST the build info json to chacra's repo-extra endpoint
+curl -X POST -H "Content-Type:application/json" --data "@$WORKSPACE/repo-extra.json" -u $CHACRACTL_USER:$CHACRACTL_KEY ${chacra_url}repos/${chacra_endpoint}/extra/
+
+# start repo creation
+$VENV/chacractl repo update ${chacra_endpoint}
+
+echo Check the status of the repo at: https://shaman.ceph.com/api/repos/${chacra_endpoint}
--- /dev/null
+#! /usr/bin/bash
+#
+# Ceph distributed storage system
+#
+# Copyright (C) 2016 Red Hat <contact@redhat.com>
+#
+# Author: Boris Ranto <branto@redhat.com>
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+set -ex
+
+# Make sure we execute at the top level directory before we do anything
+cd $WORKSPACE
+
+# This will set the DISTRO and MOCK_TARGET variables
+get_distro_and_target
+
+# Perform a clean-up
+git clean -fxd
+
+# Make sure the dist directory is clean
+rm -rf dist
+mkdir -p dist
+
+# Print some basic system info
+HOST=$(hostname --short)
+echo "Building on $(hostname) with the following env"
+echo "*****"
+env
+echo "*****"
+
+export LC_ALL=C # the following is vulnerable to i18n
+
+pkgs=( "chacractl>=0.0.21" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+# ask shaman which chacra instance to use
+chacra_url=`curl -f -u $SHAMAN_API_USER:$SHAMAN_API_KEY https://shaman.ceph.com/api/nodes/next/`
+# create the .chacractl config file using global variables
+make_chacractl_config $chacra_url
--- /dev/null
+#!/bin/bash
+set -ex
+
+# Only do actual work when we are a DEB distro
+if test -f /etc/redhat-release ; then
+ exit 0
+fi
--- /dev/null
+#!/bin/bash
+set -ex
+
+# only do work if we are an RPM distro
+if [[ ! -f /etc/redhat-release && ! -f /usr/bin/zypper ]] ; then
+ exit 0
+fi
--- /dev/null
+- job:
+ name: ceph-iscsi-config
+ project-type: matrix
+ defaults: global
+ display-name: 'ceph-iscsi-config'
+ block-downstream: false
+ block-upstream: false
+ concurrent: true
+ parameters:
+ - string:
+ name: BRANCH
+ description: "The git branch (or tag) to build"
+ default: "main"
+
+ - string:
+ name: DISTROS
+ description: "A list of distros to build for. Available options are: centos7, xenial, and bionic"
+ default: "centos7 xenial bionic"
+
+ - string:
+ name: ARCHS
+ description: "A list of architectures to build for. Available options are: x86_64"
+ default: "x86_64"
+
+ - bool:
+ name: FORCE
+ description: "
+If this is unchecked, then nothing is built or pushed if they already exist in chacra. This is the default.
+
+If this is checked, then the binaries will be built and pushed to chacra even if they already exist in chacra."
+
+ - string:
+ name: BUILD_VIRTUALENV
+ description: "Base parent path for virtualenv locations, set to avoid issues with extremely long paths that are incompatible with tools like pip. Defaults to '/tmp/' (note the trailing slash, which is required)."
+ default: "/tmp/"
+
+ execution-strategy:
+ combination-filter: DIST==AVAILABLE_DIST && ARCH==AVAILABLE_ARCH && (ARCH=="x86_64" || (ARCH == "arm64" && (DIST == "xenial" || DIST == "centos7")))
+ axes:
+ - axis:
+ type: label-expression
+ name: MACHINE_SIZE
+ values:
+ - small
+ - axis:
+ type: label-expression
+ name: AVAILABLE_ARCH
+ values:
+ - x86_64
+ - axis:
+ type: label-expression
+ name: AVAILABLE_DIST
+ values:
+ - centos7
+ - xenial
+ - bionic
+ - axis:
+ type: dynamic
+ name: DIST
+ values:
+ - DISTROS
+ - axis:
+ type: dynamic
+ name: ARCH
+ values:
+ - ARCHS
+
+ scm:
+ - git:
+ url: git@github.com:ceph/ceph-iscsi-config.git
+ # Use the SSH key attached to the ceph-jenkins GitHub account.
+ credentials-id: 'jenkins-build'
+ branches:
+ - $BRANCH
+ skip-tag: true
+ wipe-workspace: true
+
+ builders:
+ - shell: |
+ echo "Cleaning up top-level workarea (shared among workspaces)"
+ rm -rf dist
+ rm -rf venv
+ rm -rf release
+ # debian build scripts
+ - shell:
+ !include-raw-verbatim:
+ - ../../build/validate_deb
+ - ../../../scripts/build_utils.sh
+ - ../../build/setup
+ - ../../build/build_deb
+ # rpm build scripts
+ - shell:
+ !include-raw-verbatim:
+ - ../../build/validate_rpm
+ - ../../../scripts/build_utils.sh
+ - ../../build/setup
+ - ../../build/build_rpm
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - text:
+ credential-id: chacractl-key
+ variable: CHACRACTL_KEY
+ - credentials-binding:
+ - text:
+ credential-id: shaman-api-key
+ variable: SHAMAN_API_KEY
--- /dev/null
+ceph-iscsi-stable
+=================
+This job is used to build and push RPMs to chacra.ceph.com so they can be synced, signed, then pushed to download.ceph.com.
+
+There are scripts in ``~/ceph-iscsi/bin`` on the signer box for pulling, signing, and pushing the RPMs.
+
+.. code::
+
+ # Example
+ cd /home/ubuntu/ceph-iscsi/bin
+ ./sync-pull 2 0784eb00a859501f90f2b1c92354ae7242d5be3d
+ ./sign-rpms
+ ./sync-push 2
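+
+For reference, the build job publishes to a chacra endpoint of the form
+``ceph-iscsi/<major-version>/<git-sha1>/<distro>/<release>``, which is where
+the arguments in the example come from. The same sequence, annotated (argument
+meanings are inferred from the build script, not from the signer-box scripts
+themselves):
+
+.. code::
+
+ cd /home/ubuntu/ceph-iscsi/bin
+ # pull the unsigned RPMs for major version 2 at the given build sha1
+ ./sync-pull 2 0784eb00a859501f90f2b1c92354ae7242d5be3d
+ # GPG-sign the pulled RPMs
+ ./sign-rpms
+ # push the signed RPMs for major version 2 to download.ceph.com
+ ./sync-push 2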
--- /dev/null
+#! /usr/bin/bash
+set -ex
+
+# Install the dependencies
+sudo yum install -y mock
+
+# Loop through the projects and build RPMs
+# Some of this might not need to be repeated for every project
+REPO_MAJOR_VERSION=0
+for project in $(ls -h | grep -v dist); do
+
+ PROJECT=$project
+ cd $WORKSPACE/$PROJECT
+
+ # Get some basic information about the system and the repository
+ get_rpm_dist
+ VERSION="$(git describe --abbrev=0 --tags HEAD)" # for ceph-iscsi, this will return the major version number (e.g., 2)
+ MAJOR_VERSION=$(echo $VERSION | cut -d '.' -f1)
+ if [ $MAJOR_VERSION -gt $REPO_MAJOR_VERSION ] ; then
+ REPO_MAJOR_VERSION=$MAJOR_VERSION
+ fi
+ REVISION="$(git describe --tags HEAD | cut -d - -f 2- | sed 's/-/./')"
+ if [ "$VERSION" = "$REVISION" ]; then
+ REVISION="1"
+ fi
+
+ # Create the dist tarball (gzip-compressed to match the .tar.gz name)
+ tar czf ../dist/${PROJECT}-${VERSION}.tar.gz \
+ --exclude .git --exclude dist \
+ --transform "s,^,${PROJECT}-${VERSION}/," *
+ tar tfv ../dist/${PROJECT}-${VERSION}.tar.gz
+
+ # Update spec version
+ sed -i "s/^Version:.*$/Version:\t${VERSION}/g" $WORKSPACE/$PROJECT/${PROJECT}.spec
+ sed -i "s/^Release:.*$/Release:\t${REVISION}%{?dist}/g" $WORKSPACE/$PROJECT/${PROJECT}.spec
+ # for debugging
+ cat $WORKSPACE/$PROJECT/${PROJECT}.spec
+
+ # Update setup.py version
+ sed -i "s/version=\"[^\"]*\"/version=\"${VERSION}\"/g" $WORKSPACE/$PROJECT/setup.py
+ # for debugging
+ cat $WORKSPACE/$PROJECT/setup.py
+
+ # Create the source rpm
+ echo "Building SRPM"
+ rpmbuild \
+ --define "_sourcedir $WORKSPACE/dist" \
+ --define "_specdir $WORKSPACE/dist" \
+ --define "_builddir $WORKSPACE/dist" \
+ --define "_srcrpmdir $WORKSPACE/dist/SRPMS" \
+ --define "_rpmdir $WORKSPACE/dist/RPMS" \
+ --nodeps -bs $WORKSPACE/$PROJECT/${PROJECT}.spec
+ SRPM=$(readlink -f $WORKSPACE/dist/SRPMS/*.src.rpm)
+
+ # Build the binaries with mock
+ echo "Building RPMs"
+ sudo mock --verbose -r ${MOCK_TARGET}-${RELEASE}-${ARCH} --scrub=all
+ sudo mock --verbose -r ${MOCK_TARGET}-${RELEASE}-${ARCH} --resultdir=$WORKSPACE/dist/RPMS/ ${SRPM} || ( tail -n +1 $WORKSPACE/dist/RPMS/{root,build}.log && exit 1 )
+done
+
+cd $WORKSPACE
+
+# The exact REPO_MAJOR_VERSION and GIT_COMMIT aren't important here; we feed the endpoint the last project's info so the CI works.
+chacra_endpoint="ceph-iscsi/${REPO_MAJOR_VERSION}/${GIT_COMMIT}/${DISTRO}/${RELEASE}"
+chacra_repo_endpoint="${chacra_endpoint}/flavors/default"
+
+# check to make sure the ceph-iscsi package built
+if [ ! -f $WORKSPACE/dist/RPMS/ceph-iscsi-${CEPH_ISCSI_BRANCH}-1.el${RELEASE}.noarch.rpm ]; then
+ echo "ceph-iscsi rpm not built!"
+ exit 1
+fi
+
+# check to make sure ceph-iscsi-tools package built
+if [ ! -f $WORKSPACE/dist/RPMS/ceph-iscsi-tools-${CEPH_ISCSI_TOOLS_BRANCH}-1.el${RELEASE}.noarch.rpm ]; then
+ echo "ceph-iscsi-tools rpm not built!"
+ exit 1
+fi
+
+[ "$FORCE" = true ] && chacra_flags="--force" || chacra_flags=""
+
+if [ "$THROWAWAY" = false ] ; then
+ # push binaries to chacra
+ find $WORKSPACE/dist/SRPMS | grep rpm | $VENV/chacractl binary ${chacra_flags} create ${chacra_endpoint}/source
+ find $WORKSPACE/dist/RPMS/ | grep rpm | $VENV/chacractl binary ${chacra_flags} create ${chacra_endpoint}/noarch/
+ # start repo creation
+ $VENV/chacractl repo update ${chacra_repo_endpoint}
+fi
+
+sudo rm -rf $WORKSPACE/dist
--- /dev/null
+#!/bin/bash -ex
+
+rm -f ~/.chacractl
+rm -rf $WORKSPACE/*
--- /dev/null
+#! /usr/bin/bash
+#
+# Ceph distributed storage system
+#
+# Copyright (C) 2016 Red Hat <contact@redhat.com>
+#
+# Author: Boris Ranto <branto@redhat.com>
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+
+set -ex
+
+# Make sure we execute at the top level directory before we do anything
+cd $WORKSPACE
+
+# This will set the DISTRO and MOCK_TARGET variables.
+get_distro_and_target
+echo "DISTRO: $DISTRO MOCK_TARGET: $MOCK_TARGET"
+
+# Make sure the dist directory is clean
+rm -rf dist
+mkdir -p dist
+
+# Perform a clean-up
+for dir in $(ls -h | grep -v dist); do
+ cd $WORKSPACE/$dir
+ git clean -fxd
+done
+
+cd $WORKSPACE
+
+# Print some basic system info
+HOST=$(hostname --short)
+echo "Building on $(hostname) with the following env"
+echo "*****"
+env
+echo "*****"
+
+export LC_ALL=C # the following is vulnerable to i18n
+
+pkgs=( "chacractl>=0.0.21" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+chacra_url="https://chacra.ceph.com/"
+# create the .chacractl config file using global variables
+make_chacractl_config $chacra_url
--- /dev/null
+#!/bin/bash
+set -ex
+
+# only do work if we are an RPM distro
+if [[ ! -f /etc/redhat-release && ! -f /usr/bin/zypper ]] ; then
+ exit 0
+fi
--- /dev/null
+- scm:
+ name: ceph-iscsi
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-iscsi.git
+ branches:
+ - $CEPH_ISCSI_BRANCH
+ skip-tag: true
+ wipe-workspace: true
+ basedir: "ceph-iscsi"
+
+- scm:
+ name: ceph-iscsi-tools
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-iscsi-tools.git
+ branches:
+ - $CEPH_ISCSI_TOOLS_BRANCH
+ skip-tag: true
+ wipe-workspace: true
+ basedir: "ceph-iscsi-tools"
+
+- job:
+ name: ceph-iscsi-stable
+ project-type: matrix
+ defaults: global
+ display-name: 'ceph-iscsi-stable'
+ concurrent: true
+
+ parameters:
+ - string:
+ name: CEPH_ISCSI_BRANCH
+ description: "The git branch (or tag) to build"
+ default: "3.6"
+
+ - string:
+ name: CEPH_ISCSI_TOOLS_BRANCH
+ description: "The git branch (or tag) to build"
+ default: "2.2"
+
+ - string:
+ name: DISTROS
+ description: "A list of distros to build for. Available options are: centos8 centos9"
+ default: "centos8 centos9"
+
+ - string:
+ name: ARCHS
+ description: "A list of architectures to build for. Available options are: x86_64"
+ default: "x86_64"
+
+ - bool:
+ name: THROWAWAY
+ description: "
+Default: False. When True it will not POST binaries to chacra. Artifacts will not be around for long. Useful to test builds."
+ default: false
+
+ - bool:
+ name: FORCE
+ description: "
+If this is unchecked, then nothing is built or pushed if they already exist in chacra. This is the default.
+
+If this is checked, then the binaries will be built and pushed to chacra even if they already exist in chacra."
+ default: true
+
+ - string:
+ name: BUILD_VIRTUALENV
+ description: "Base parent path for virtualenv locations, set to avoid issues with extremely long paths that are incompatible with tools like pip. Defaults to '/tmp/' (note the trailing slash, which is required)."
+ default: "/tmp/"
+
+ execution-strategy:
+ combination-filter: |
+ DIST == AVAILABLE_DIST && ARCH == AVAILABLE_ARCH &&
+ (ARCH == "x86_64" || (ARCH == "arm64" && ["centos8"].contains(DIST)))
+ axes:
+ - axis:
+ type: label-expression
+ name: MACHINE_SIZE
+ values:
+ - huge
+ - axis:
+ type: label-expression
+ name: AVAILABLE_ARCH
+ values:
+ - x86_64
+ - axis:
+ type: label-expression
+ name: AVAILABLE_DIST
+ values:
+ - centos8
+ - centos9
+ - axis:
+ type: dynamic
+ name: DIST
+ values:
+ - DISTROS
+ - axis:
+ type: dynamic
+ name: ARCH
+ values:
+ - ARCHS
+
+ scm:
+ - ceph-iscsi
+ - ceph-iscsi-tools
+
+ builders:
+ - shell: |
+ echo "Cleaning up top-level workarea (shared among workspaces)"
+ rm -rf dist
+ rm -rf venv
+ rm -rf release
+ # rpm build scripts
+ - shell:
+ !include-raw-verbatim:
+ - ../../build/validate_rpm
+ - ../../../scripts/build_utils.sh
+ - ../../build/setup
+ - ../../build/build_rpm
+
+ publishers:
+ - postbuildscript:
+ builders:
+ - role: SLAVE
+ build-on:
+ - FAILURE
+ - ABORTED
+ build-steps:
+ - inject:
+ properties-file: ${{WORKSPACE}}/build_info
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/failure
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - text:
+ credential-id: chacractl-key
+ variable: CHACRACTL_KEY
+ - credentials-binding:
+ - text:
+ credential-id: shaman-api-key
+ variable: SHAMAN_API_KEY
--- /dev/null
+- job:
+ name: ceph-iscsi-tools-trigger
+ node: built-in
+ project-type: freestyle
+ defaults: global
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ properties:
+ - build-discarder:
+ days-to-keep: 1
+ num-to-keep: 10
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ - github:
+ url: https://github.com/ceph/ceph-iscsi-tools
+ discard-old-builds: true
+
+ triggers:
+ - github
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-iscsi-tools.git
+ branches:
+ - 'origin/main*'
+ - 'origin/wip*'
+ skip-tag: true
+ timeout: 20
+ wipe-workspace: true
+
+ builders:
+ - trigger-builds:
+ - project: 'ceph-iscsi-tools'
+ predefined-parameters: |
+ BRANCH=${{GIT_BRANCH}}
+ FORCE=True
--- /dev/null
+#! /usr/bin/bash
+set -ex
+
+# Only do actual work when we are a DEB distro
+if test "$DISTRO" != "debian" -a "$DISTRO" != "ubuntu"; then
+ exit 0
+fi
+
+# DEB builds are not supported for this project, so fail explicitly on DEB
+exit 1
--- /dev/null
+#! /usr/bin/bash
+set -ex
+
+PROJECT=ceph-iscsi-tools
+BRANCH=`branch_slash_filter $BRANCH`
+
+# Only do actual work when we are an RPM distro
+if test "$DISTRO" != "fedora" -a "$DISTRO" != "centos" -a "$DISTRO" != "rhel"; then
+ exit 0
+fi
+
+# Install the dependencies
+sudo yum install -y mock
+
+## Get some basic information about the system and the repository
+# Get version
+get_rpm_dist
+VERSION="$(git describe --abbrev=0 --tags HEAD)"
+REVISION="$(git describe --tags HEAD | cut -d - -f 2- | sed 's/-/./')"
+if [ "$VERSION" = "$REVISION" ]; then
+ REVISION="1"
+fi
+
+# Create the dist tarball (gzip-compressed to match the .tar.gz name)
+tar czf dist/${PROJECT}-${VERSION}.tar.gz \
+ --exclude .git --exclude dist \
+ --transform "s,^,${PROJECT}-${VERSION}/," *
+tar tfv dist/${PROJECT}-${VERSION}.tar.gz
+
+# Update spec version
+sed -i "s/^Version:.*$/Version:\t${VERSION}/g" $WORKSPACE/${PROJECT}.spec
+sed -i "s/^Release:.*$/Release:\t${REVISION}%{?dist}/g" $WORKSPACE/${PROJECT}.spec
+# for debugging
+cat $WORKSPACE/${PROJECT}.spec
+
+# Update setup.py version
+sed -i "s/version=\"[^\"]*\"/version=\"${VERSION}\"/g" $WORKSPACE/setup.py
+# for debugging
+cat $WORKSPACE/setup.py
+
+## Create the source rpm
+echo "Building SRPM"
+rpmbuild \
+ --define "_sourcedir $WORKSPACE/dist" \
+ --define "_specdir $WORKSPACE/dist" \
+ --define "_builddir $WORKSPACE/dist" \
+ --define "_srcrpmdir $WORKSPACE/dist/SRPMS" \
+ --define "_rpmdir $WORKSPACE/dist/RPMS" \
+ --nodeps -bs $WORKSPACE/${PROJECT}.spec
+SRPM=$(readlink -f $WORKSPACE/dist/SRPMS/*.src.rpm)
+
+## Build the binaries with mock
+echo "Building RPMs"
+sudo mock --verbose -r ${MOCK_TARGET}-${RELEASE}-${ARCH} --scrub=all
+sudo mock --verbose -r ${MOCK_TARGET}-${RELEASE}-${ARCH} --resultdir=$WORKSPACE/dist/RPMS/ ${SRPM} || ( tail -n +1 $WORKSPACE/dist/RPMS/{root,build}.log && exit 1 )
+
+## Upload the created RPMs to chacra
+chacra_endpoint="${PROJECT}/${BRANCH}/${GIT_COMMIT}/${DISTRO}/${RELEASE}"
+
+[ "$FORCE" = true ] && chacra_flags="--force" || chacra_flags=""
+
+# push binaries to chacra
+find $WORKSPACE/dist/RPMS/ | egrep "\.noarch\.rpm" | $VENV/chacractl binary ${chacra_flags} create ${chacra_endpoint}/noarch/
+PACKAGE_MANAGER_VERSION=$(rpm --queryformat '%{VERSION}-%{RELEASE}\n' -qp $(find $WORKSPACE/dist/RPMS/ | egrep "\.noarch\.rpm" | head -1))
+
+# write json file with build info
+cat > $WORKSPACE/repo-extra.json << EOF
+{
+ "version":"$VERSION",
+ "package_manager_version":"$PACKAGE_MANAGER_VERSION",
+ "build_url":"$BUILD_URL",
+ "root_build_cause":"$ROOT_BUILD_CAUSE",
+ "node_name":"$NODE_NAME",
+ "job_name":"$JOB_NAME"
+}
+EOF
+# POST the build info json to chacra's repo-extra endpoint
+curl -X POST -H "Content-Type:application/json" --data "@$WORKSPACE/repo-extra.json" -u $CHACRACTL_USER:$CHACRACTL_KEY ${chacra_url}repos/${chacra_endpoint}/extra/
+
+# start repo creation
+$VENV/chacractl repo update ${chacra_endpoint}
+
+echo Check the status of the repo at: https://shaman.ceph.com/api/repos/${chacra_endpoint}
--- /dev/null
+#! /usr/bin/bash
+#
+# Ceph distributed storage system
+#
+# Copyright (C) 2016 Red Hat <contact@redhat.com>
+#
+# Author: Boris Ranto <branto@redhat.com>
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+set -ex
+
+# Make sure we execute at the top level directory before we do anything
+cd $WORKSPACE
+
+# This will set the DISTRO and MOCK_TARGET variables
+get_distro_and_target
+
+# Perform a clean-up
+git clean -fxd
+
+# Make sure the dist directory is clean
+rm -rf dist
+mkdir -p dist
+
+# Print some basic system info
+HOST=$(hostname --short)
+echo "Building on $(hostname) with the following env"
+echo "*****"
+env
+echo "*****"
+
+export LC_ALL=C # the following is vulnerable to i18n
+
+pkgs=( "chacractl>=0.0.21" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+# ask shaman which chacra instance to use
+chacra_url=`curl -f -u $SHAMAN_API_USER:$SHAMAN_API_KEY https://shaman.ceph.com/api/nodes/next/`
+# create the .chacractl config file using global variables
+make_chacractl_config $chacra_url
--- /dev/null
+#!/bin/bash
+set -ex
+
+# Only do actual work when we are a DEB distro
+if test -f /etc/redhat-release ; then
+ exit 0
+fi
--- /dev/null
+#!/bin/bash
+set -ex
+
+# only do work if we are an RPM distro
+if [[ ! -f /etc/redhat-release && ! -f /usr/bin/zypper ]] ; then
+ exit 0
+fi
--- /dev/null
+- job:
+ name: ceph-iscsi-tools
+ project-type: matrix
+ defaults: global
+ display-name: 'ceph-iscsi-tools'
+ block-downstream: false
+ block-upstream: false
+ concurrent: true
+ parameters:
+ - string:
+ name: BRANCH
+ description: "The git branch (or tag) to build"
+ default: "main"
+
+ - string:
+ name: DISTROS
+ description: "A list of distros to build for. Available options are: xenial, centos7, centos8"
+ default: "centos7 centos8"
+
+ - string:
+ name: ARCHS
+ description: "A list of architectures to build for. Available options are: x86_64"
+ default: "x86_64"
+
+ - bool:
+ name: FORCE
+ description: "
+If this is unchecked, then nothing is built or pushed if they already exist in chacra. This is the default.
+
+If this is checked, then the binaries will be built and pushed to chacra even if they already exist in chacra."
+
+ - string:
+ name: BUILD_VIRTUALENV
+ description: "Base parent path for virtualenv locations, set to avoid issues with extremely long paths that are incompatible with tools like pip. Defaults to '/tmp/' (note the trailing slash, which is required)."
+ default: "/tmp/"
+
+ execution-strategy:
+ combination-filter: DIST==AVAILABLE_DIST && ARCH==AVAILABLE_ARCH && (ARCH=="x86_64" || (ARCH == "arm64" && (DIST == "xenial" || DIST == "centos7")))
+ axes:
+ - axis:
+ type: label-expression
+ name: MACHINE_SIZE
+ values:
+ - small
+ - axis:
+ type: label-expression
+ name: AVAILABLE_ARCH
+ values:
+ - x86_64
+ - axis:
+ type: label-expression
+ name: AVAILABLE_DIST
+ values:
+ - centos7
+ - centos8
+ - xenial
+ - axis:
+ type: dynamic
+ name: DIST
+ values:
+ - DISTROS
+ - axis:
+ type: dynamic
+ name: ARCH
+ values:
+ - ARCHS
+
+ scm:
+ - git:
+ url: git@github.com:ceph/ceph-iscsi-tools.git
+ # Use the SSH key attached to the ceph-jenkins GitHub account.
+ credentials-id: 'jenkins-build'
+ branches:
+ - $BRANCH
+ skip-tag: true
+ wipe-workspace: true
+
+ builders:
+ - shell: |
+ echo "Cleaning up top-level workarea (shared among workspaces)"
+ rm -rf dist
+ rm -rf venv
+ rm -rf release
+ # debian build scripts
+ - shell:
+ !include-raw-verbatim:
+ - ../../build/validate_deb
+ - ../../../scripts/build_utils.sh
+ - ../../build/setup
+ - ../../build/build_deb
+ # rpm build scripts
+ - shell:
+ !include-raw-verbatim:
+ - ../../build/validate_rpm
+ - ../../../scripts/build_utils.sh
+ - ../../build/setup
+ - ../../build/build_rpm
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - text:
+ credential-id: chacractl-key
+ variable: CHACRACTL_KEY
+ - credentials-binding:
+ - text:
+ credential-id: shaman-api-key
+ variable: SHAMAN_API_KEY
--- /dev/null
+#!/bin/bash
+
+# the following two methods exist in scripts/build_utils.sh
+pkgs=( "tox" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+cd "$WORKSPACE/ceph-iscsi"
+
+$VENV/tox -rv
--- /dev/null
+- scm:
+ name: ceph-iscsi
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-iscsi.git
+ branches:
+ - ${{sha1}}
+ refspec: +refs/pull/*:refs/remotes/origin/pr/*
+ browser: auto
+ timeout: 20
+ skip-tag: true
+ wipe-workspace: true
+ basedir: "ceph-iscsi"
+
+- job:
+ name: ceph-iscsi-tox
+ description: Runs tox tests for ceph-iscsi on each GitHub PR
+ project-type: freestyle
+ node: focal && x86_64
+ block-downstream: false
+ block-upstream: false
+ defaults: global
+ display-name: 'ceph-iscsi: tox'
+ quiet-period: 5
+ retry-count: 3
+
+ properties:
+ - build-discarder:
+ days-to-keep: 15
+ num-to-keep: 30
+ artifact-days-to-keep: 15
+ artifact-num-to-keep: 15
+ - github:
+ url: https://github.com/ceph/ceph-iscsi/
+
+ parameters:
+ - string:
+ name: sha1
+ description: "A pull request ID, like 'origin/pr/72/head'"
+
+ triggers:
+ - github-pull-request:
+ admin-list:
+ - dillaman
+ org-list:
+ - ceph
+ trigger-phrase: 'jenkins tox'
+ only-trigger-phrase: false
+ github-hooks: true
+ permit-all: true
+ auto-close-on-fail: false
+ status-context: "tox"
+
+ scm:
+ - ceph-iscsi
+
+ builders:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/build
--- /dev/null
+- job:
+ name: ceph-iscsi-trigger
+ node: built-in
+ project-type: freestyle
+ defaults: global
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ properties:
+ - build-discarder:
+ days-to-keep: 1
+ num-to-keep: 10
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ - github:
+ url: https://github.com/ceph/ceph-iscsi
+ discard-old-builds: true
+
+ triggers:
+ - github
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-iscsi.git
+ branches:
+ - 'origin/main*'
+ - 'origin/wip*'
+ skip-tag: true
+ timeout: 20
+ wipe-workspace: true
+
+ builders:
+ - trigger-builds:
+ - project: 'ceph-iscsi'
+ predefined-parameters: |
+ BRANCH=${{GIT_BRANCH}}
+ FORCE=True
--- /dev/null
+#! /usr/bin/bash
+set -ex
+
+PROJECT=ceph-iscsi
+BRANCH=`branch_slash_filter $BRANCH`
+
+# Only do actual work when we are an RPM distro
+if test "$DISTRO" != "fedora" -a "$DISTRO" != "centos" -a "$DISTRO" != "rhel"; then
+ exit 0
+fi
+
+# Install the dependencies
+sudo yum install -y mock
+
+## Get some basic information about the system and the repository
+# Get version
+get_rpm_dist
+VERSION="$(git describe --abbrev=0 --tags HEAD)"
+REVISION="$(git describe --tags HEAD | cut -d - -f 2- | sed 's/-/./')"
+if [ "$VERSION" = "$REVISION" ]; then
+ REVISION="1"
+fi
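+
+# Worked example of the VERSION/REVISION derivation above (tag and
+# commit names are hypothetical). For a HEAD that is 3 commits past
+# tag 0.5:
+#   git describe --abbrev=0 --tags HEAD  -> 0.5            (VERSION)
+#   git describe --tags HEAD             -> 0.5-3-g1a2b3c4
+#   cut -d - -f 2- | sed 's/-/./'        -> 3.g1a2b3c4     (REVISION)
+# When HEAD sits exactly on the tag, both describe calls print 0.5, so
+# REVISION falls back to "1".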
+
+# Create the source dist tarball (gzipped to match the .tar.gz name)
+tar czf dist/${PROJECT}-${VERSION}.tar.gz \
+ --exclude .git --exclude dist \
+ --transform "s,^,${PROJECT}-${VERSION}/," *
+tar tvf dist/${PROJECT}-${VERSION}.tar.gz
+
+# Update spec version
+sed -i "s/^Version:.*$/Version:\t${VERSION}/g" $WORKSPACE/${PROJECT}.spec
+sed -i "s/^Release:.*$/Release:\t${REVISION}%{?dist}/g" $WORKSPACE/${PROJECT}.spec
+# for debugging
+cat $WORKSPACE/${PROJECT}.spec
+
+# Update setup.py version
+sed -i "s/version=\"[^\"]*\"/version=\"${VERSION}\"/g" $WORKSPACE/setup.py
+# for debugging
+cat $WORKSPACE/setup.py
+
+## Create the source rpm
+echo "Building SRPM"
+rpmbuild \
+ --define "_sourcedir $WORKSPACE/dist" \
+ --define "_specdir $WORKSPACE/dist" \
+ --define "_builddir $WORKSPACE/dist" \
+ --define "_srcrpmdir $WORKSPACE/dist/SRPMS" \
+ --define "_rpmdir $WORKSPACE/dist/RPMS" \
+ --nodeps -bs $WORKSPACE/${PROJECT}.spec
+SRPM=$(readlink -f $WORKSPACE/dist/SRPMS/*.src.rpm)
+
+## Build the binaries with mock
+echo "Building RPMs"
+sudo mock --verbose -r ${MOCK_TARGET}-${RELEASE}-${ARCH} --scrub=all
+sudo mock --verbose -r ${MOCK_TARGET}-${RELEASE}-${ARCH} --resultdir=$WORKSPACE/dist/RPMS/ ${SRPM} || ( tail -n +1 $WORKSPACE/dist/RPMS/{root,build}.log && exit 1 )
+
+## Upload the created RPMs to chacra
+chacra_endpoint="${PROJECT}/${BRANCH}/${GIT_COMMIT}/${DISTRO}/${RELEASE}"
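+# The endpoint encodes project/branch/sha1/distro/release, e.g.
+# (illustrative values): ceph-iscsi/main/1a2b3c4/centos/8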
+
+[ "$FORCE" = true ] && chacra_flags="--force" || chacra_flags=""
+
+# push binaries to chacra
+find $WORKSPACE/dist/RPMS/ | egrep "\.noarch\.rpm" | $VENV/chacractl binary ${chacra_flags} create ${chacra_endpoint}/noarch/
+PACKAGE_MANAGER_VERSION=$(rpm --queryformat '%{VERSION}-%{RELEASE}\n' -qp $(find $WORKSPACE/dist/RPMS/ | egrep "\.noarch\.rpm" | head -1))
+
+# write json file with build info
+cat > $WORKSPACE/repo-extra.json << EOF
+{
+ "version":"$VERSION",
+ "package_manager_version":"$PACKAGE_MANAGER_VERSION",
+ "build_url":"$BUILD_URL",
+ "root_build_cause":"$ROOT_BUILD_CAUSE",
+ "node_name":"$NODE_NAME",
+ "job_name":"$JOB_NAME"
+}
+EOF
+# post the repo-extra json to chacra
+curl -X POST -H "Content-Type:application/json" --data "@$WORKSPACE/repo-extra.json" -u $CHACRACTL_USER:$CHACRACTL_KEY ${chacra_url}repos/${chacra_endpoint}/extra/
+
+# start repo creation
+$VENV/chacractl repo update ${chacra_endpoint}
+
+echo Check the status of the repo at: https://shaman.ceph.com/api/repos/${chacra_endpoint}
--- /dev/null
+#! /usr/bin/bash
+#
+# Ceph distributed storage system
+#
+# Copyright (C) 2016 Red Hat <contact@redhat.com>
+#
+# Author: Boris Ranto <branto@redhat.com>
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+set -ex
+
+# Make sure we execute at the top level directory before we do anything
+cd $WORKSPACE
+
+# This will set the DISTRO and MOCK_TARGET variables
+get_distro_and_target
+
+# Perform a clean-up
+git clean -fxd
+
+# Make sure the dist directory is clean
+rm -rf dist
+mkdir -p dist
+
+# Print some basic system info
+HOST=$(hostname --short)
+echo "Building on $(hostname) with the following env"
+echo "*****"
+env
+echo "*****"
+
+export LC_ALL=C # the following commands are locale-sensitive
+
+pkgs=( "chacractl>=0.0.21" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+# ask shaman which chacra instance to use
+chacra_url=`curl -f -u $SHAMAN_API_USER:$SHAMAN_API_KEY https://shaman.ceph.com/api/nodes/next/`
+# create the .chacractl config file using global variables
+make_chacractl_config $chacra_url
--- /dev/null
+#!/bin/bash
+set -ex
+
+# Only do actual work when we are an RPM distro
+if [[ ! -f /etc/redhat-release && ! -f /usr/bin/zypper ]] ; then
+ exit 0
+fi
--- /dev/null
+- job:
+ name: ceph-iscsi
+ project-type: matrix
+ defaults: global
+ display-name: 'ceph-iscsi'
+ block-downstream: false
+ block-upstream: false
+ concurrent: true
+ parameters:
+ - string:
+ name: BRANCH
+ description: "The git branch (or tag) to build"
+ default: "main"
+
+ - string:
+ name: DISTROS
+ description: "A list of distros to build for. Available options are: centos7, centos8, centos9"
+ default: "centos7 centos8 centos9"
+
+ - string:
+ name: ARCHS
+ description: "A list of architectures to build for. Available options are: x86_64"
+ default: "x86_64"
+
+ - bool:
+ name: FORCE
+ description: "
+If this is unchecked, the binaries are not built or pushed if they already exist in chacra. This is the default.
+
+If this is checked, then the binaries will be built and pushed to chacra even if they already exist in chacra."
+
+ - string:
+ name: BUILD_VIRTUALENV
+ description: "Base parent path for virtualenv locations, set to avoid issues with extremely long paths that are incompatible with tools like pip. Defaults to '/tmp/' (note the trailing slash, which is required)."
+ default: "/tmp/"
+
+ execution-strategy:
+ combination-filter: |
+ DIST == AVAILABLE_DIST && ARCH == AVAILABLE_ARCH &&
+ (ARCH == "x86_64" || (ARCH == "arm64" && ["centos7", "centos8", "centos9"].contains(DIST)))
+ axes:
+ - axis:
+ type: label-expression
+ name: MACHINE_SIZE
+ values:
+ - huge
+ - axis:
+ type: label-expression
+ name: AVAILABLE_ARCH
+ values:
+ - x86_64
+ - axis:
+ type: label-expression
+ name: AVAILABLE_DIST
+ values:
+ - centos7
+ - centos8
+ - centos9
+ - axis:
+ type: dynamic
+ name: DIST
+ values:
+ - DISTROS
+ - axis:
+ type: dynamic
+ name: ARCH
+ values:
+ - ARCHS
+
+ scm:
+ - git:
+ url: git@github.com:ceph/ceph-iscsi.git
+ # Use the SSH key attached to the ceph-jenkins GitHub account.
+ credentials-id: 'jenkins-build'
+ branches:
+ - $BRANCH
+ skip-tag: true
+ wipe-workspace: true
+
+ builders:
+ - shell: |
+ echo "Cleaning up top-level workarea (shared among workspaces)"
+ rm -rf dist
+ rm -rf venv
+ rm -rf release
+ # rpm build scripts
+ - shell:
+ !include-raw-verbatim:
+ - ../../build/validate_rpm
+ - ../../../scripts/build_utils.sh
+ - ../../build/setup
+ - ../../build/build_rpm
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - text:
+ credential-id: chacractl-key
+ variable: CHACRACTL_KEY
+ - credentials-binding:
+ - text:
+ credential-id: shaman-api-key
+ variable: SHAMAN_API_KEY
--- /dev/null
+---
+- job:
+ name: ceph-multibranch-pipeline
+ project-type: multibranch
+ number-to-keep: 300
+ days-to-keep: 30
+ scm:
+ - github:
+ repo: ceph
+ repo-owner: ceph
+ ssh-checkout:
+ credentials: 'jenkins-build'
+ credentials-id: 8cffdeb4-283c-4d96-a190-05d5645bcc2f
+ clean:
+ before: true
+ shallow-clone: true
+ do-not-fetch-tags: true
+ discover-pr-forks-trust: contributors
--- /dev/null
+# macros
+
+- scm:
+ name: ceph-main
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph.git
+ branches:
+ - origin/main
+ skip-tag: true
+ timeout: 20
+ basedir: "ceph-main"
+ shallow-clone: true
+ wipe-workspace: true
+
+- scm:
+ name: ceph-pr
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph.git
+ branches:
+ - origin/pr/${{ghprbPullId}}/merge
+ refspec: +refs/pull/${{ghprbPullId}}/*:refs/remotes/origin/pr/${{ghprbPullId}}/*
+ timeout: 20
+ basedir: "ceph-pr"
+ shallow-clone: true
+ wipe-workspace: true
+
+- scm:
+ name: cbt
+ scm:
+ - git:
+ url: https://github.com/ceph/cbt.git
+ refspec: +refs/heads/main:refs/remotes/origin/main
+ do-not-fetch-tags: true
+ honor-refspec: true
+ name: origin
+ branches:
+ - refs/heads/main
+ timeout: 20
+ wipe-workspace: false
+ basedir: "cbt"
+ skip-tag: true
+ shallow-clone: true
+ clean:
+ after: true
+
+- builder:
+ name: run-cbt
+ builders:
+ - shell: |
+ cd {src-dir}
+ archive_dir={archive-basedir}/$(git rev-parse --short HEAD)
+ if test -d $archive_dir ; then
+ exit 0
+ fi
+ export NPROC=$(nproc)
+ export FOR_MAKE_CHECK=true
+ cxx_compiler=g++
+ c_compiler=gcc
+ for i in $(seq 15 -1 10); do
+ if type -t clang-$i > /dev/null; then
+ cxx_compiler="clang++-$i"
+ c_compiler="clang-$i"
+ break
+ fi
+ done
+ if test {osd-flavor} = "crimson-release" ; then
+ export WITH_CRIMSON=true
+ # TODO use clang-10 on ubuntu/focal
+ timeout 7200 src/script/run-make.sh \
+ --cmake-args "-DCMAKE_CXX_COMPILER=$cxx_compiler -DCMAKE_C_COMPILER=$c_compiler -DCMAKE_BUILD_TYPE=Release -DWITH_CRIMSON=ON -DWITH_TESTS=OFF" \
+ vstart-base crimson-osd
+ src/script/run-cbt.sh --build-dir $PWD/build --source-dir $PWD --cbt ${{WORKSPACE}}/cbt -a $archive_dir src/test/crimson/cbt/radosbench_4K_read.yaml
+ else
+ timeout 7200 src/script/run-make.sh --cmake-args "-DCMAKE_BUILD_TYPE=Release -DWITH_TESTS=OFF" vstart-base
+ src/script/run-cbt.sh --build-dir $PWD/build --source-dir $PWD --cbt ${{WORKSPACE}}/cbt -a $archive_dir src/test/crimson/cbt/radosbench_4K_read.yaml --classical
+ fi
+
+- builder:
+ name: compare-cbt-results
+ builders:
+ - shell: |
+ cd ${{WORKSPACE}}/{src-dir-main}
+ archive_dir_main={archive-main}/$(git rev-parse --short HEAD)
+ cd ${{WORKSPACE}}/{src-dir-pr}
+ archive_dir_pr={archive-pr}/$(git rev-parse --short HEAD)
+ . ${{WORKSPACE}}/gh-venv/bin/activate
+ ${{WORKSPACE}}/cbt/compare.py -v \
+ -a $archive_dir_pr \
+ -b $archive_dir_main \
+ --output report.md && result=success || result=failure
+ github-check \
+ --owner {check-repo-owner} \
+ --repo {check-repo-name} \
+ --pkey-file ${{GITHUB_CHECK_PKEY_PEM}} \
+ --app-id {check-app-id} \
+ --install-id {check-install-id} \
+ --name {check-name} \
+ --sha ${{ghprbActualCommit}} \
+ --external-id ${{BUILD_ID}} \
+ --details-url ${{BUILD_URL}} \
+ --status completed --conclusion ${{result}} \
+ --title perf-test \
+ --summary ${{result}} \
+ --text report.md
+
+- job-template:
+ name: 'ceph-perf-{osd-flavor}'
+ project-type: freestyle
+ defaults: global
+ concurrent: true
+ # use the latest RHEL and Ubuntu for crimson, which needs a clang build
+ node: performance
+ display-name: 'ceph: {osd-flavor} perf test'
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ retry-count: 3
+ check-app-id: "62865"
+ check-install-id: "8465036"
+ check-name: "perf-test"
+ check-repo-owner: "ceph"
+ check-repo-name: "ceph"
+
+ properties:
+ - build-discarder:
+ days-to-keep: 15
+ num-to-keep: 300
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ - github:
+ url: https://github.com/ceph/ceph/
+ - rebuild:
+ auto-rebuild: true
+
+ parameters:
+ - string:
+ name: ghprbPullId
+ description: "the GitHub pull id, like '72' in 'ceph/pull/72'"
+
+ triggers:
+ - github-pull-request:
+ allow-whitelist-orgs-as-admins: true
+ org-list:
+ - ceph
+ trigger-phrase: 'jenkins test {osd-flavor} perf'
+ skip-build-phrase: '^jenkins do not test.*'
+ only-trigger-phrase: false
+ white-list-labels:
+ - performance
+ - crimson
+ github-hooks: true
+ permit-all: true
+ auto-close-on-fail: false
+ cancel-builds-on-update: true
+
+ scm:
+ - ceph-main
+ - ceph-pr
+ - cbt
+
+ builders:
+ - shell: |
+ cd ${{WORKSPACE}}/cbt
+ . /etc/os-release || ID=ubuntu
+ case $ID in
+ debian|ubuntu)
+ sudo env DEBIAN_FRONTEND=noninteractive apt-get install -y python3-yaml python3-lxml python3-prettytable clang-12
+ ;;
+ centos|rhel)
+ sudo dnf copr remove tchaikov/llvm-toolset-10 || true
+ sudo dnf module enable -y llvm-toolset
+ sudo dnf install -y llvm-toolset
+ sudo yum install -y python3-pyyaml python3-lxml python3-prettytable
+ sudo yum update -y libarchive
+ gcc_toolset_ver=9
+ # so clang is able to find gcc-toolset-${{gcc_toolset_ver}} which is listed as a
+ # BuildRequires in ceph.spec.in, and it is installed by `run-make.sh`.
+ # clang searches for GCC in a bunch of well known places:
+ # see https://github.com/llvm-mirror/clang/blob/main/lib/Driver/ToolChains/Gnu.cpp
+ sudo ln -sf /opt/rh/gcc-toolset-${{gcc_toolset_ver}}/root/lib/gcc/x86_64-redhat-linux/${{gcc_toolset_ver}} \
+ /usr/lib/gcc/x86_64-redhat-linux/${{gcc_toolset_ver}}
+ ;;
+ fedora)
+ sudo yum install -y python3-pyyaml python3-lxml python3-prettytable clang
+ ;;
+ *)
+ echo "unknown distro: $ID"
+ exit 1
+ ;;
+ esac
+ virtualenv -q --python python3 ${{WORKSPACE}}/gh-venv
+ . ${{WORKSPACE}}/gh-venv/bin/activate
+ pip install git+https://github.com/ceph/githubcheck.git
+ echo "please hold tight..." | github-check \
+ --owner {check-repo-owner} \
+ --repo {check-repo-name} \
+ --pkey-file ${{GITHUB_CHECK_PKEY_PEM}} \
+ --app-id {check-app-id} \
+ --install-id {check-install-id} \
+ --name {check-name} \
+ --sha ${{ghprbActualCommit}} \
+ --external-id ${{BUILD_ID}} \
+ --details-url ${{BUILD_URL}} \
+ --status in_progress \
+ --title perf-test \
+ --summary running
+
+ - run-cbt:
+ src-dir: "ceph-main"
+ osd-flavor: '{osd-flavor}'
+ # ideally cbt-results should be persisted across jobs, so the test results can be reused
+ archive-basedir: "$WORKSPACE/cbt-results"
+ - run-cbt:
+ src-dir: "ceph-pr"
+ osd-flavor: '{osd-flavor}'
+ # use the basedir of git checkout, so it can be wiped
+ archive-basedir: "$WORKSPACE/ceph-pr"
+ - compare-cbt-results:
+ src-dir-main: "ceph-main"
+ archive-main: "$WORKSPACE/cbt-results"
+ src-dir-pr: "ceph-pr"
+ archive-pr: "$WORKSPACE/ceph-pr"
+ check-app-id: '{check-app-id}'
+ check-install-id: '{check-install-id}'
+ check-name: '{check-name}'
+ check-repo-owner: '{check-repo-owner}'
+ check-repo-name: '{check-repo-name}'
+
+ publishers:
+ - postbuildscript:
+ builders:
+ - role: SLAVE
+ build-on:
+ - FAILURE
+ - ABORTED
+ build-steps:
+ - shell: "sudo reboot"
+
+ wrappers:
+ - credentials-binding:
+ - file:
+ credential-id: cephacheck.2020-04-29.private-key.pem
+ variable: GITHUB_CHECK_PKEY_PEM
+- project:
+ name: ceph-perf
+ osd-flavor:
+ - crimson-debug
+ - crimson-release
+ - classic
+ jobs:
+ - ceph-perf-{osd-flavor}
--- /dev/null
+#!/bin/bash -e
+cd src/pybind/mgr/dashboard
+timeout 7200 ./run-backend-api-tests.sh
--- /dev/null
+#!/bin/bash -e
+
+docs_pr_only
+container_pr_only
+gha_pr_only
+qa_pr_only
+if [[ "$DOCS_ONLY" == true || "$CONTAINER_ONLY" == true || "$GHA_ONLY" == true || "$QA_ONLY" == true ]]; then
+ echo "Only the doc/, container/, qa/, or .github/ dirs changed. No need to run make check or API tests."
+ mkdir -p $WORKSPACE/build/out
+ echo "File created to keep Jenkins' Artifact Archiving plugin from hanging" > $WORKSPACE/build/out/mgr.foo.log
+ exit 0
+fi
+
+n_build_jobs=$(get_nr_build_jobs)
+n_test_jobs=$(($(nproc) / 4))
+export CHECK_MAKEOPTS="-j${n_test_jobs} -N -Q"
+export BUILD_MAKEOPTS="-j${n_build_jobs}"
+export FOR_MAKE_CHECK=1
+timeout 2h ./src/script/run-make.sh \
+ --cmake-args '-DWITH_TESTS=OFF -DENABLE_GIT_VERSION=OFF'
+sleep 5
+ps -ef | grep ceph || true
--- /dev/null
+- job:
+ name: ceph-api
+ project-type: freestyle
+ defaults: global
+ concurrent: true
+ node: huge && bionic && x86_64 && !smithi
+ display-name: 'ceph: API'
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ retry-count: 3
+ properties:
+ - build-discarder:
+ days-to-keep: 15
+ num-to-keep: 300
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ - github:
+ url: https://github.com/ceph/ceph/
+ - rebuild:
+ auto-rebuild: true
+ - inject:
+ properties-content: |
+ TERM=xterm
+
+ parameters:
+ - string:
+ name: sha1
+ description: "commit id or a refname, like 'origin/pr/72/head'"
+
+ triggers:
+ - github-pull-request:
+ cancel-builds-on-update: true
+ allow-whitelist-orgs-as-admins: true
+ org-list:
+ - ceph
+ white-list-target-branches:
+ - main
+ - tentacle
+ - squid
+ - reef
+ - "feature-.*"
+ trigger-phrase: 'jenkins test api'
+ skip-build-phrase: '^jenkins do not test.*'
+ only-trigger-phrase: false
+ github-hooks: true
+ permit-all: true
+ auto-close-on-fail: false
+ status-context: "ceph API tests"
+ started-status: "running API tests"
+ success-status: "ceph API tests succeeded"
+ failure-status: "ceph API tests failed"
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph.git
+ branches:
+ - origin/pr/${{ghprbPullId}}/merge
+ refspec: +refs/pull/${{ghprbPullId}}/*:refs/remotes/origin/pr/${{ghprbPullId}}/*
+ browser: auto
+ timeout: 20
+ skip-tag: true
+ shallow-clone: true
+ wipe-workspace: true
+
+ builders:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/build
+ - ../../../scripts/dashboard/install-backend-api-test-deps.sh
+ - ../../build/api
+
+ # This job seems to get aborted more often than others. Multiple times in the past week,
+ # it's gotten aborted during an apt transaction which leaves a dirty dpkg DB.
+ # This will make sure that gets cleaned up before the next job runs (and would inevitably fail).
+ publishers:
+ - postbuildscript:
+ builders:
+ - role: SLAVE
+ build-on:
+ - ABORTED
+ build-steps:
+ - shell: "sudo dpkg --configure -a"
+
+ - archive:
+ artifacts: 'build/out/*.log'
+ allow-empty: true
+ latest-only: false
+
+ wrappers:
+ - ansicolor
+ - credentials-binding:
+ - username-password-separated:
+ credential-id: github-readonly-token
+ username: GITHUB_USER
+ password: GITHUB_PASS
--- /dev/null
+#!/bin/bash
+
+# Don't require signed commits if only docs changed.
+# I tried using the excluded-regions parameter for the ghprb plugin but since
+# this job/check is required, it hung with 'Expected - Waiting for status to be reported'
+docs_pr_only
+if [ "$DOCS_ONLY" = false ]; then
+ echo "Not a docs-only change. Proceeding with the signed-commit check."
+ pytest_mark="code_test"
+elif [ "$DOCS_ONLY" = true ]; then
+ echo "Only the doc/ dir changed. No need to check for signed commits."
+ pytest_mark="doc_test"
+else
+ echo "Could not determine if this is a docs only change. Failing job."
+ exit 1
+fi
+
+# the following two methods exist in scripts/build_utils.sh
+pkgs=( "pytest" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+cd "$WORKSPACE"
+timeout 600 $VENV/py.test -m "${pytest_mark}" -vs --junitxml="$WORKSPACE/report.xml" "$WORKSPACE/ceph-build/ceph-pr-commits/build/test_commits.py"
--- /dev/null
+import pytest
+
+
+def pytest_configure(config):
+ config.addinivalue_line(
+ "markers", "code_test: mark test to run against code related changes"
+ )
+ config.addinivalue_line(
+ "markers", "doc_test: mark test to run against doc only changes"
+ )
+
+
+def pytest_addoption(parser):
+ parser.addoption("--skip-code-test", action="store_true",
+ help="skip code tests")
+
+
+def pytest_runtest_setup(item):
+ if "code_test" in item.keywords and item.config.getoption("--skip-code-test"):
+ pytest.skip("skipping due to --skip-code-test")
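+
+# Example invocations (file paths are illustrative):
+#   py.test -m doc_test test_commits.py       # doc-title checks only
+#   py.test -m code_test test_commits.py      # signed-off-by checks only
+#   py.test --skip-code-test test_commits.py  # skip all code_test items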
--- /dev/null
+from __future__ import print_function
+from subprocess import check_output
+import os
+import shlex
+import re
+from os.path import dirname
+try:
+ from itertools import filterfalse
+except ImportError:
+ from itertools import ifilterfalse as filterfalse
+
+import pytest
+
+
+class TestCommits(object):
+ """
+ This class contains all checks required for commits.
+ """
+ target_branch = os.getenv('ghprbTargetBranch', 'main')
+ source_branch = 'HEAD'
+
+ workspace = os.getenv('WORKSPACE') or dirname(
+ dirname(dirname(dirname(os.path.abspath(__file__)))))
+ ceph_checkout = os.path.join(workspace, 'ceph')
+
+ @classmethod
+ def command(cls, command):
+ print("Running command:", command)
+ args = shlex.split(command)
+ output = check_output(args, cwd=cls.ceph_checkout)
+ return output.decode(encoding='utf-8', errors='ignore')
+
+ @classmethod
+ def setup_class(cls):
+ # ensure that we have the latest commits from main
+ cls.command(
+ 'git fetch origin +refs/heads/{target_branch}:refs/remotes/origin/{target_branch}'.format(
+ target_branch=cls.target_branch))
+
+ @pytest.mark.doc_test
+ def test_doc_title(self):
+ doc_regex = '\nDate:[^\n]+\n\n doc'
+ all_commits = 'git log -z --no-merges origin/%s..%s' % (
+ self.target_branch, self.source_branch)
+ wrong_commits = list(filterfalse(
+ re.compile(doc_regex).search,
+ self.command(all_commits).split('\0')))
+ if wrong_commits:
+ raise AssertionError("\n".join([
+ "The titles of the following commits do not start with 'doc', although they only touch files under 'doc/'. Please make sure the commit titles",
+ "start with 'doc'. See the 'Submitting Patches' guide:",
+ "https://github.com/ceph/ceph/blob/main/SubmittingPatches.rst#commit-title",
+ ""] +
+ wrong_commits
+ ))
+
+ @pytest.mark.code_test
+ def test_signed_off_by(self):
+ signed_off_regex = r'Signed-off-by: \S.* <[^@]+@[^@]+\.[^@]+>'
+ # '-z' puts a '\0' between commits, see later split('\0')
+ check_signed_off_commits = 'git log -z --no-merges origin/%s..%s' % (
+ self.target_branch, self.source_branch)
+ wrong_commits = list(filterfalse(
+ re.compile(signed_off_regex).search,
+ self.command(check_signed_off_commits).split('\0')))
+ if wrong_commits:
+ raise AssertionError("\n".join([
+ "The following commits are not signed. Please make sure all commits",
+ "are signed, following the 'Submitting Patches' guide:",
+ "https://github.com/ceph/ceph/blob/main/SubmittingPatches.rst#1-sign-your-work",
+ ""] +
+ wrong_commits
+ ))
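+
+# Illustration of the Signed-off-by pattern (sample message is
+# hypothetical):
+#   >>> import re
+#   >>> msg = 'fix a bug\n\nSigned-off-by: Jane Doe <jane@example.com>'
+#   >>> bool(re.search(r'Signed-off-by: \S.* <[^@]+@[^@]+\.[^@]+>', msg))
+#   True
+# A commit message without such a trailer fails the search and is
+# reported by test_signed_off_by.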
--- /dev/null
+- scm:
+ name: ceph
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph.git
+ branches:
+ - ${{sha1}}
+ refspec: +refs/pull/${{ghprbPullId}}/*:refs/remotes/origin/pr/${{ghprbPullId}}/*
+ browser: auto
+ timeout: 20
+ skip-tag: true
+ wipe-workspace: true
+ basedir: "ceph"
+
+- scm:
+ name: ceph-build
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-build.git
+ branches:
+ - origin/main
+ browser-url: https://github.com/ceph/ceph-build
+ timeout: 20
+ skip-tag: true
+ wipe-workspace: false
+ basedir: "ceph-build"
+
+
+- job:
+ name: ceph-pr-commits
+ node: small
+ project-type: freestyle
+ defaults: global
+ display-name: 'ceph: Pull Request commits'
+ concurrent: true
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ retry-count: 3
+ properties:
+ - build-discarder:
+ days-to-keep: 15
+ artifact-days-to-keep: 15
+ - github:
+ url: https://github.com/ceph/ceph/
+
+ parameters:
+ - string:
+ name: sha1
+ description: "commit id or a refname, like 'origin/pr/72/head'"
+
+ triggers:
+ - github-pull-request:
+ allow-whitelist-orgs-as-admins: true
+ org-list:
+ - ceph
+ trigger-phrase: 'jenkins test signed'
+ only-trigger-phrase: false
+ github-hooks: true
+ permit-all: true
+ auto-close-on-fail: false
+ status-context: "Signed-off-by"
+ started-status: "checking if commits are signed"
+ success-status: "all commits in this PR are signed"
+ failure-status: "one or more commits in this PR are not signed"
+
+ scm:
+ - ceph
+ - ceph-build
+
+
+ builders:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/build
+
+ publishers:
+ - junit:
+ results: report.xml
+ allow-empty-results: true
--- /dev/null
+#!/bin/bash
+
+set -xo pipefail
+
+# Make sure any shaman list files are removed. Once all nodes are clean,
+# this will no longer be needed.
+sudo rm -f /etc/apt/sources.list.d/shaman*
+sudo rm -f /etc/apt/sources.list.d/ubuntu-toolchain-r*
+sudo rm -f /etc/apt/sources.list.d/ceph-boost*
+
+# Ceph doc build deps, Ubuntu only because ditaa is not packaged for CentOS
+sudo apt-get update -o Acquire::Languages=none -o Acquire::Translation=none
+sudo apt-get install -y gcc python3-dev python3-pip python3-virtualenv libxml2-dev libxslt-dev doxygen graphviz ant ditaa cython3
+
+virtualenv -q --python python3 venv
+. venv/bin/activate
+pip install tox
+pip install git+https://github.com/ceph/githubcheck.git
+sha1=$(git rev-parse refs/remotes/origin/pr/${ghprbPullId}/head)
+
+output=$(mktemp $PWD/build-doc-XXX.out)
+
+if timeout 3600 ./admin/build-doc 2>&1 | tee ${output}; then
+ succeed=true
+else
+ succeed=false
+fi
+
+if ! $succeed; then
+ cat ${output} | github-check \
+ --sphinx \
+ --sphinx-root=. \
+ --owner "ceph" \
+ --repo "ceph" \
+ --pkey-file $GITHUB_CHECK_PKEY_PEM \
+ --app-id "62865" \
+ --install-id "8465036" \
+ --name "ceph-pr-docs" \
+ --sha $sha1 \
+ --external-id $BUILD_ID \
+ --details-url $BUILD_URL \
+ --title sphinx-build
+fi
+
+$succeed
--- /dev/null
+- job:
+ name: ceph-pr-docs
+ display-name: 'ceph: Pull Requests Docs Check'
+ concurrent: true
+ node: bionic && x86_64
+ project-type: freestyle
+ defaults: global
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ properties:
+ - github:
+ url: https://github.com/ceph/ceph
+ - build-discarder:
+ days-to-keep: 14
+
+ discard-old-builds: true
+
+ triggers:
+ - github-pull-request:
+ allow-whitelist-orgs-as-admins: true
+ org-list:
+ - ceph
+ cancel-builds-on-update: true
+ only-trigger-phrase: false
+ trigger-phrase: 'jenkins test docs.*'
+ github-hooks: true
+ permit-all: true
+ auto-close-on-fail: false
+ status-context: "Docs: build check"
+ started-status: "Docs: building"
+ success-status: "OK - docs built"
+ failure-status: "Docs: failed with errors"
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph
+ browser: auto
+ branches:
+ - ${{sha1}}
+ refspec: +refs/pull/${{ghprbPullId}}/*:refs/remotes/origin/pr/${{ghprbPullId}}/*
+ skip-tag: true
+ shallow-clone: true
+ honor-refspec: true
+ timeout: 20
+ wipe-workspace: true
+
+ builders:
+ - shell:
+ !include-raw-verbatim: ../../build/build
+
+ wrappers:
+ - credentials-binding:
+ - file:
+ credential-id: cephacheck.2020-04-29.private-key.pem
+ variable: GITHUB_CHECK_PKEY_PEM
--- /dev/null
+#!/bin/bash
+
+set -ex
+
+cd "$WORKSPACE"
+
+function has_modified_submodules() {
+ local target_branch=$1
+ shift
+ local actual_commit=$1
+ shift
+ # Ensure that our clone has the very latest target branch.
+ # The Jenkins Git plugin may have not updated this particular ref.
+ git fetch origin ${target_branch}:refs/remotes/origin/${target_branch}
+
+ echo "Comparing the following target branch:"
+ git rev-parse origin/${target_branch}
+
+ # show diffs between $ghprbTargetBranch (where the merge is going) and
+ # $ghprbActualCommit (the tip of the branch that's merging) with '...',
+ # which is equivalent to diff $(git merge-base TB AC) AC, or "show
+ # diff from common ancestor of the target branch and this branch with the
+ # tip of this branch". With --submodule, also show detail of diff in submodules.
+ modified_submodules="$(git diff --submodule=log origin/${target_branch}...${actual_commit} | grep ^Submodule || true)"
+ if test -n "${modified_submodules}"; then
+ modified_submodules=$(echo $modified_submodules | awk '{print $2}')
+ return 0
+ else
+ return 1
+ fi
+}
+
+function is_planned() {
+ local target_branch=$1
+ shift
+ local magic_word=$1
+ shift
+
+ IFS=$'\n'
+ for line in $(git log --no-merges origin/${target_branch}..HEAD); do
+ echo "${line}" | grep -q "${magic_word}" && return 0
+ done
+ # no lines match the magic word
+ return 1
+}
+
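+# Example flow (names are illustrative): a PR that bumps the "spawn"
+# submodule passes only if some commit message in the PR contains the
+# magic word "spawn submodule"; otherwise the job fails with a hint to
+# add it.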
+if has_modified_submodules "${ghprbTargetBranch}" "${ghprbActualCommit}"; then
+ echo "Project has modified submodules: $modified_submodules !"
+ magic_word="$(basename $modified_submodules) submodule"
+ if is_planned "${ghprbTargetBranch}" "${magic_word}"; then
+ # ahh, it's planned
+ exit 0
+ else
+ echo "Please include '${magic_word}' in your commit message if this change is intentional."
+ exit 1
+ fi
+fi
+
+exit 0
--- /dev/null
+- job:
+ name: ceph-pr-submodules
+ node: small
+ project-type: freestyle
+ defaults: global
+ display-name: 'ceph: Pull Request modified submodules'
+ concurrent: true
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ retry-count: 3
+ properties:
+ - build-discarder:
+ days-to-keep: 15
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ - github:
+ url: https://github.com/ceph/ceph/
+
+ parameters:
+ - string:
+ name: sha1
+ description: "commit id or a refname, like 'origin/pr/72/head'"
+
+ triggers:
+ - github-pull-request:
+ allow-whitelist-orgs-as-admins: true
+ org-list:
+ - ceph
+ trigger-phrase: 'jenkins test submodules'
+ only-trigger-phrase: false
+ github-hooks: true
+ permit-all: true
+ auto-close-on-fail: false
+ status-context: "Unmodified Submodules"
+ started-status: "checking if PR has modified submodules"
+ success-status: "submodules for project are unmodified"
+ failure-status: "Approval needed: modified submodules found"
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph.git
+ branches:
+ - ${{sha1}}
+ refspec: +refs/pull/${{ghprbPullId}}/*:refs/remotes/origin/pr/${{ghprbPullId}}/*
+ browser: auto
+ timeout: 20
+ skip-tag: true
+ wipe-workspace: true
+
+ builders:
+ - shell:
+ !include-raw-verbatim:
+ - ../../build/build
--- /dev/null
+#!/bin/bash -ex
+
+docs_pr_only
+container_pr_only
+gha_pr_only
+qa_pr_only
+if [[ "$DOCS_ONLY" == true || "$CONTAINER_ONLY" == true || "$GHA_ONLY" == true || "$QA_ONLY" == true ]]; then
+ echo "Only the doc/, container/, qa/ or .github/ dir changed. No need to run make check."
+ exit 0
+fi
+
+NPROC=$(nproc)
+NPMCACHE=${HOME}/npmcache
+cat >.env <<EOF
+NPROC=${NPROC}
+MAX_PARALLEL_JOBS=${NPROC}
+WITH_CRIMSON=true
+WITH_RBD_RWL=true
+JENKINS_HOME=${JENKINS_HOME}
+REWRITE_COVERAGE_ROOTDIR=${PWD}/src/pybind/mgr/dashboard/frontend
+EOF
+# TODO: enable (read-only?) sccache support
+npm_cache_info() {
+ echo '===== npm cache info ======='
+ du -sh "${NPMCACHE}" || echo "${NPMCACHE} not present"
+ echo '============================'
+}
+bwc() {
+ # specify timeout in hours for $1
+ local timeout=$(($1*60*60))
+ shift
+ timeout "${timeout}" ./src/script/build-with-container.py \
+ -d "${DISTRO_BASE:-jammy}" \
+ --env-file="${PWD}/.env" \
+ --current-branch="${GIT_BRANCH:-main}" \
+ -t+arm64 \
+ --npm-cache-path="${NPMCACHE}" \
+ "${@}"
+}
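+# For illustration: `bwc 1 -e configure` expands (with DISTRO_BASE and
+# GIT_BRANCH unset) to roughly:
+#   timeout 3600 ./src/script/build-with-container.py -d jammy \
+#       --env-file=$PWD/.env --current-branch=main -t+arm64 \
+#       --npm-cache-path=$HOME/npmcache -e configure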
+
+podman login -u "${DOCKER_HUB_USERNAME}" -p "${DOCKER_HUB_PASSWORD}" docker.io
+
+npm_cache_info
+bwc 1 -e configure
+# try to pre-load the npm cache so that it doesn't fail
+# during the normal build step
+for i in {0..5}; do
+ bwc 1 -e custom -- \
+ cmake --build build -t mgr-dashboard-frontend-deps && break
+ echo "Warning: Attempt $((i+1)) to cache npm packages failed."
+ sleep $((10 + 30 * i))
+done
+npm_cache_info
+bwc 4 -e tests
+npm_cache_info
+sleep 5
+ps -ef | grep -v jnlp | grep ceph || true
--- /dev/null
+#!/bin/bash -ex
+
+# kill all descendant processes of ctest
+
+# ceph-pull-requests-arm64/build/build is killed by jenkins when the ceph-pull-requests-arm64 job is
+# aborted or canceled, see https://www.jenkins.io/doc/book/using/aborting-a-build/ . but build/build
+# does not wait until all of its child processes have quit. after ctest is killed by SIGTERM, there
+# is a chance that some tests are still running, as ctest does not get a chance to kill them before
+# it terminates. (had those tests timed out, ctest would have killed them with SIGKILL.) so we need
+# to kill them manually after the job is aborted.
+
+# if ctest is still running, get its pid, otherwise we are done.
+ctest_pid=$(pgrep ctest) || exit 0
+# the parent process of ctest should have been terminated, but this might not be true when
+# it comes to some of its descendant processes, for instance, unittest-seastar-messenger
+ctest_pgid=$(ps --no-headers --format 'pgid:1' --pid "$ctest_pid")
+# a negative PID argument makes kill signal the whole process group, i.e.
+# ctest and every test process it spawned
+kill -SIGTERM -- -"$ctest_pgid"
+# try harder
+for seconds in 0 1 1 2 3; do
+ sleep $seconds
+ if pgrep --pgroup $ctest_pgid > /dev/null; then
+ # kill only if we've waited for a while
+ if test $seconds != 0; then
+ pgrep --pgroup $ctest_pgid
+ echo 'try harder'
+ kill -SIGKILL -- -"$ctest_pgid"
+ fi
+ else
+ echo 'killed'
+ break
+ fi
+done
--- /dev/null
+- job:
+ block-downstream: false
+ block-upstream: false
+ builders:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../../scripts/setup_container_runtime.sh
+ - ../../build/build
+ concurrent: true
+ disabled: false
+ name: ceph-pull-requests-arm64
+ node: 'arm64 && (installed-os-noble || centos9)'
+ parameters:
+ - string:
+ name: ghprbPullId
+ description: "the GitHub pull id, like '72' in 'ceph/pull/72'"
+ default: origin/main
+ project-type: freestyle
+ properties:
+ - build-discarder:
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ days-to-keep: 15
+ num-to-keep: 300
+ - raw:
+ xml: |
+ <com.sonyericsson.jenkins.plugins.bfa.model.ScannerJobProperty plugin="build-failure-analyzer@1.18.1">
+ <doNotScan>false</doNotScan>
+ </com.sonyericsson.jenkins.plugins.bfa.model.ScannerJobProperty>
+ - github:
+ url: https://github.com/ceph/ceph/
+ - raw:
+ xml: |
+ <com.sonyericsson.rebuild.RebuildSettings plugin="rebuild@1.25">
+ <autoRebuild>false</autoRebuild>
+ <rebuildDisabled>false</rebuildDisabled>
+ </com.sonyericsson.rebuild.RebuildSettings>
+ quiet-period: '5'
+ retry-count: '3'
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph.git
+ name: origin
+ branches:
+ - origin/pr/${{ghprbPullId}}/merge
+ refspec: +refs/pull/${{ghprbPullId}}/*:refs/remotes/origin/pr/${{ghprbPullId}}/*
+ skip-tag: true
+ shallow-clone: true
+ honor-refspec: true
+ timeout: 20
+ wipe-workspace: true
+ triggers:
+ - github-pull-request:
+ cancel-builds-on-update: true
+ allow-whitelist-orgs-as-admins: true
+ org-list:
+ - ceph
+ trigger-phrase: 'jenkins test make check arm64'
+ skip-build-phrase: '^jenkins do not test.*'
+ only-trigger-phrase: false
+ github-hooks: true
+ permit-all: true
+ auto-close-on-fail: false
+ status-context: "make check (arm64)"
+ started-status: "running make check"
+ success-status: "make check succeeded"
+ failure-status: "make check failed"
+ white-list-target-branches:
+ - main
+ publishers:
+ - cobertura:
+ report-file: "src/pybind/mgr/dashboard/frontend/coverage/cobertura-coverage.xml"
+ only-stable: "true"
+ health-auto-update: "false"
+ stability-auto-update: "false"
+ zoom-coverage-chart: "true"
+ source-encoding: "Big5"
+ targets:
+ - files:
+ healthy: 10
+ unhealthy: 20
+ failing: 30
+ - method:
+ healthy: 10
+ unhealthy: 20
+ failing: 30
+ - postbuildscript:
+ builders:
+ - role: SLAVE
+ build-on:
+ - ABORTED
+ build-steps:
+ - shell:
+ !include-raw-verbatim:
+ - ../../build/kill-tests
+ - xunit:
+ thresholds:
+ - failed:
+ unstable: 0
+ unstablenew: 0
+ failure: 0
+ failurenew: 0
+ types:
+ - ctest:
+ pattern: "build/Testing/**/Test.xml"
+ skip-if-no-test-files: true
+ wrappers:
+ - credentials-binding:
+ - username-password-separated:
+ credential-id: github-readonly-token
+ username: GITHUB_USER
+ password: GITHUB_PASS
+ - username-password-separated:
+ credential-id: dgalloway-docker-hub
+ username: DOCKER_HUB_USERNAME
+ password: DOCKER_HUB_PASSWORD
--- /dev/null
+#!/bin/bash -ex
+
+docs_pr_only
+container_pr_only
+gha_pr_only
+qa_pr_only
+if [[ "$DOCS_ONLY" == true || "$CONTAINER_ONLY" == true || "$GHA_ONLY" == true || "$QA_ONLY" == true ]]; then
+ echo "Only the doc/, container/, qa/ or .github/ dir changed. No need to run make check."
+ exit 0
+fi
+
+NPROC=$(nproc)
+NPMCACHE=${HOME}/npmcache
+cat >.env <<EOF
+NPROC=${NPROC}
+MAX_PARALLEL_JOBS=${NPROC}
+WITH_CRIMSON=true
+WITH_RBD_RWL=true
+JENKINS_HOME=${JENKINS_HOME}
+REWRITE_COVERAGE_ROOTDIR=${PWD}/src/pybind/mgr/dashboard/frontend
+EOF
+# TODO: enable (read-only?) sccache support
+npm_cache_info() {
+ echo '===== npm cache info ======='
+ du -sh "${NPMCACHE}" || echo "${NPMCACHE} not present"
+ echo '============================'
+}
+bwc() {
+ # specify timeout in hours for $1
+ local timeout=$(($1*60*60))
+ shift
+ timeout "${timeout}" ./src/script/build-with-container.py \
+ -d "${DISTRO_BASE:-jammy}" \
+ --env-file="${PWD}/.env" \
+ --current-branch="${GIT_BRANCH:-main}" \
+ -t+amd64 \
+ --npm-cache-path="${NPMCACHE}" \
+ "${@}"
+}
+
+podman login -u "${DOCKER_HUB_USERNAME}" -p "${DOCKER_HUB_PASSWORD}" docker.io
+
+npm_cache_info
+bwc 1 -e configure
+# try to pre-load the npm cache so that it doesn't fail during the normal build
+# step
+for i in {0..5}; do
+ bwc 1 -e custom -- \
+ cmake --build build -t mgr-dashboard-frontend-deps && break
+ echo "Warning: Attempt $((i+1)) to cache npm packages failed."
+ sleep $((10 + 30 * i))
+done
+npm_cache_info
+bwc 4 -e tests
+npm_cache_info
+sleep 5
+ps -ef | grep -v jnlp | grep ceph || true
--- /dev/null
+#!/bin/bash -x
+
+podman unshare chown -R 0:0 "${WORKSPACE}"
--- /dev/null
+#!/bin/bash -ex
+
+# kill all descendant processes of ctest
+
+# ceph-pull-requests/build/build is killed by jenkins when the ceph-pull-requests job is aborted or
+# canceled, see https://www.jenkins.io/doc/book/using/aborting-a-build/ . but build/build does not
+# wait until all of its child processes have quit. after ctest is killed by SIGTERM, there is
+# a chance that some tests are still running, as ctest does not get a chance to kill them before it
+# terminates. (had those tests timed out, ctest would have killed them with SIGKILL.) so we need to
+# kill them manually after the job is aborted.
+
+# if ctest is still running, get its pid, otherwise we are done.
+ctest_pid=$(pgrep ctest) || exit 0
+# the parent process of ctest should have been terminated, but this might not be true when
+# it comes to some of its descendant processes, for instance, unittest-seastar-messenger
+ctest_pgid=$(ps --no-headers --format 'pgid:1' --pid "$ctest_pid")
+# a negative PID argument makes kill signal the whole process group, i.e.
+# ctest and every test process it spawned
+kill -SIGTERM -- -"$ctest_pgid"
+# try harder
+for seconds in 0 1 1 2 3; do
+ sleep $seconds
+ if pgrep --pgroup $ctest_pgid > /dev/null; then
+ # kill only if we've waited for a while
+ if test $seconds != 0; then
+ pgrep --pgroup $ctest_pgid
+ echo 'try harder'
+ kill -SIGKILL -- -"$ctest_pgid"
+ fi
+ else
+ echo 'killed'
+ break
+ fi
+done
--- /dev/null
+- job:
+ name: ceph-pull-requests
+ project-type: freestyle
+ defaults: global
+ concurrent: true
+ node: 'x86_64 && (installed-os-noble || centos9)'
+ display-name: 'ceph: Pull Requests'
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ retry-count: 3
+ properties:
+ - build-discarder:
+ days-to-keep: 15
+ num-to-keep: 300
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ - github:
+ url: https://github.com/ceph/ceph/
+ - rebuild:
+ auto-rebuild: true
+ - inject:
+ properties-content: |
+ TERM=xterm
+
+ parameters:
+ - string:
+ name: ghprbPullId
+ description: "the GitHub pull id, like '72' in 'ceph/pull/72'"
+
+ triggers:
+ - github-pull-request:
+ cancel-builds-on-update: true
+ allow-whitelist-orgs-as-admins: true
+ org-list:
+ - ceph
+ trigger-phrase: 'jenkins test make check'
+ skip-build-phrase: '^jenkins do not test.*'
+ only-trigger-phrase: false
+ github-hooks: true
+ permit-all: true
+ auto-close-on-fail: false
+ status-context: "make check"
+ started-status: "running make check"
+ success-status: "make check succeeded"
+ failure-status: "make check failed"
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph.git
+ branches:
+ - origin/pr/${{ghprbPullId}}/merge
+ refspec: +refs/pull/${{ghprbPullId}}/*:refs/remotes/origin/pr/${{ghprbPullId}}/*
+ browser: auto
+ timeout: 20
+ skip-tag: true
+ shallow-clone: true
+ honor-refspec: true
+ wipe-workspace: true
+
+ builders:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../../scripts/setup_container_runtime.sh
+ - ../../build/build
+
+ publishers:
+ - cobertura:
+ report-file: "src/pybind/mgr/dashboard/frontend/coverage/cobertura-coverage.xml"
+ only-stable: "true"
+ health-auto-update: "false"
+ stability-auto-update: "false"
+ zoom-coverage-chart: "true"
+ source-encoding: "Big5"
+ targets:
+ - files:
+ healthy: 10
+ unhealthy: 20
+ failing: 30
+ - method:
+ healthy: 10
+ unhealthy: 20
+ failing: 30
+ - postbuildscript:
+ builders:
+ - role: SLAVE
+ build-on:
+ - ABORTED
+ build-steps:
+ - shell:
+ !include-raw-verbatim:
+ - ../../build/kill-tests
+ - postbuildscript:
+ builders:
+ - role: SLAVE
+ build-on:
+ - SUCCESS
+ - UNSTABLE
+ - FAILURE
+ - ABORTED
+ build-steps:
+ - shell:
+ !include-raw-verbatim:
+ - ../../build/cleanup
+
+ - xunit:
+ thresholds:
+ - failed:
+ unstable: 0
+ unstablenew: 0
+ failure: 0
+ failurenew: 0
+ types:
+ - ctest:
+ pattern: "build/Testing/**/Test.xml"
+ skip-if-no-test-files: true
+ wrappers:
+ - ansicolor
+ - credentials-binding:
+ - username-password-separated:
+ credential-id: github-readonly-token
+ username: GITHUB_USER
+ password: GITHUB_PASS
+ - username-password-separated:
+ credential-id: dgalloway-docker-hub
+ username: DOCKER_HUB_USERNAME
+ password: DOCKER_HUB_PASSWORD
--- /dev/null
+#!/bin/bash
+
+set -ex
+
+# the following two functions are defined in scripts/build_utils.sh
+pkgs=( "tox" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+
+# run tox by recreating the environment and in verbose mode
+# by default this will run all environments defined, although currently
+# it is just flake8
+$VENV/tox -rv
--- /dev/null
+- job:
+ name: ceph-qa-suite-pull-requests
+ node: trusty
+ project-type: freestyle
+ defaults: global
+ concurrent: true
+ display-name: 'ceph-qa-suite: Pull Requests'
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ retry-count: 3
+ properties:
+ - build-discarder:
+ days-to-keep: 15
+ num-to-keep: 30
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ - github:
+ url: https://github.com/ceph/ceph-qa-suite/
+
+ parameters:
+ - string:
+ name: sha1
+ description: "A pull request ref, like 'origin/pr/72/head'"
+
+ triggers:
+ - github-pull-request:
+ org-list:
+ - ceph
+ trigger-phrase: ''
+ only-trigger-phrase: false
+ github-hooks: true
+ permit-all: false
+ auto-close-on-fail: false
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-qa-suite.git
+ branches:
+ - ${{sha1}}
+ refspec: +refs/pull/*:refs/remotes/origin/pr/*
+ browser: auto
+ timeout: 20
+ skip-tag: true
+ wipe-workspace: false
+
+ builders:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/build
--- /dev/null
+// vim: ts=2 sw=2 expandtab
+ceph_repo = "https://github.com/ceph/ceph.git"
+
+pipeline {
+ agent any
+ stages {
+ stage("build arch-specific images") {
+ matrix {
+ axes {
+ axis {
+ name 'ARCH'
+ values 'x86_64', 'arm64'
+ }
+ }
+ stages {
+ stage("arch-specific image") {
+ agent {
+ label "centos9&&${ARCH}"
+ }
+ environment {
+ // set up env the way build.sh expects it
+ // BRANCH, VERSION, REMOVE_LOCAL_IMAGES, NO_PUSH already set
+ FLAVOR = 'default'
+ CEPH_SHA1 = "${SHA1}"
+ ARCH = "${ARCH}"
+ CONTAINER_REPO_HOSTNAME = 'quay.ceph.io'
+ CONTAINER_REPO_ORGANIZATION = 'ceph'
+ CONTAINER_REPO_CREDS = credentials('quay.ceph.io-ceph-prerelease')
+ DOWNLOAD_PRERELEASE_CREDS = credentials('download.ceph.com-prerelease')
+ // keep all the podman/skopeo auths in the same place
+ REGISTRY_AUTH_FILE = '/home/jenkins-build/manifest.auth.json'
+ // the one variant value. If I try to do this with conditional code in the steps,
+ // manipulating 'env.CONTAINER_REPO', it appears as if the env instance is somehow
+ // shared between the two executors, *even across different builders*. Yes, I know
+ // it sounds nuts. I don't know how it does it, but I've got a test case that shows
+ // one builder setting it, another builder setting it, and the first builder getting
+ // the second builder's value. *I KNOW*.
+ //
+ // this, however, seems to set it privately to the builder.
+ CONTAINER_REPO = "${(env.ARCH == 'x86_64') ? 'prerelease-amd64' : 'prerelease-arm64'}"
+ }
+ steps {
+ sh './scripts/setup_container_runtime.sh'
+ sh 'echo "Building on $(hostname)"'
+ buildDescription "${env.CONTAINER_REPO} image build"
+ dir('ceph') {
+ checkout scmGit(
+ branches: [[ name: env.SHA1 ]],
+ userRemoteConfigs: [[ url: ceph_repo ]],
+ sparseCheckout: [[ path: 'container' ]],
+ extensions: [
+ [ $class: 'WipeWorkspace' ],
+ ],
+ )
+ }
+ dir('ceph') {
+ script {
+ // translate to the names build.sh wants
+ env.PRERELEASE_USERNAME = env.DOWNLOAD_PRERELEASE_CREDS_USR
+ env.PRERELEASE_PASSWORD = env.DOWNLOAD_PRERELEASE_CREDS_PSW
+ env.CONTAINER_REPO_USERNAME = env.CONTAINER_REPO_CREDS_USR
+ env.CONTAINER_REPO_PASSWORD = env.CONTAINER_REPO_CREDS_PSW
+ sh '''#!/bin/bash -ex
+ podman login -u ${CONTAINER_REPO_CREDS_USR} -p ${CONTAINER_REPO_CREDS_PSW} ${CONTAINER_REPO_HOSTNAME}/${CONTAINER_REPO_ORGANIZATION}
+ cd container;
+ ./build.sh
+ '''
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ stage("make manifest-list image") {
+ agent {
+ label "centos9"
+ }
+ environment {
+ CONTAINER_REPO_HOSTNAME = 'quay.ceph.io'
+ CONTAINER_REPO_ORGANIZATION = 'ceph'
+ CONTAINER_REPO_CREDS = credentials('quay.ceph.io-ceph-prerelease')
+ REGISTRY_AUTH_FILE = '/home/jenkins-build/manifest.auth.json'
+ }
+ steps {
+ dir('ceph') {
+ checkout scmGit(
+ branches: [[ name: env.SHA1 ]],
+ userRemoteConfigs: [[ url: ceph_repo ]],
+ sparseCheckout: [[ path: 'container' ]],
+ extensions: [
+ [ $class: 'WipeWorkspace' ],
+ ],
+ )
+ script {
+ sh '''#!/bin/bash -ex
+ podman login -u ${CONTAINER_REPO_CREDS_USR} -p ${CONTAINER_REPO_CREDS_PSW} ${CONTAINER_REPO_HOSTNAME}/${CONTAINER_REPO_ORGANIZATION}
+ skopeo login -u ${CONTAINER_REPO_CREDS_USR} -p ${CONTAINER_REPO_CREDS_PSW} ${CONTAINER_REPO_HOSTNAME}/${CONTAINER_REPO_ORGANIZATION}
+ cd container;
+ ./make-manifest-list.py
+ '''
+ }
+ }
+ }
+ }
+ }
+}
--- /dev/null
+- job:
+ name: ceph-release-containers
+ description: Build ceph release containers from download.ceph.com and push to quay.ceph.io/prerelease*
+ project-type: pipeline
+ quiet-period: 1
+ concurrent: true
+ pipeline-scm:
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-build
+ branches:
+ - main
+ shallow-clone: true
+ submodule:
+ disable: true
+ wipe-workspace: true
+ script-path: ceph-release-containers/build/Jenkinsfile
+ lightweight-checkout: true
+ do-not-fetch-tags: true
+
+ parameters:
+ - string:
+ name: BRANCH
+ description: "The git branch (or tag) to build"
+ default: main
+
+ - string:
+ name: SHA1
+ description: "SHA1 of the commit to build"
+
+ - string:
+ name: VERSION
+ description: "Ceph version string (e.g. 19.2.0)"
+
+ - string:
+ name: NO_PUSH
+ description: "Set to non-empty if you want to skip pushing images to container repo"
+ default:
+
+ - string:
+ name: REMOVE_LOCAL_IMAGES
+ description: "Set to false if you want to keep local container images on the build host (for debug)"
+ default: true
+
+ - string:
+ name: CONTAINER_REPO_HOSTNAME
+ description: "FQDN of prerelease container repo server"
+ default: "quay.ceph.io"
+
+ - string:
+ name: CONTAINER_REPO_ORGANIZATION
+ description: "Name of container repo organization (e.g. 'ceph-ci')"
+ default: "ceph"
+
+ - string:
+ name: CEPH_BUILD_BRANCH
+ description: "Use the Jenkinsfile from this ceph-build branch"
+ default: main
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - username-password-separated:
+ credential-id: quay.ceph.io-ceph-prerelease
+ username: PRERELEASE_CONTAINER_REPO_USERNAME
+ password: PRERELEASE_CONTAINER_REPO_PASSWORD
+ - username-password-separated:
+ credential-id: download.ceph.com-prerelease
+ username: PRERELEASE_DOWNLOAD_CEPH_COM_USERNAME
+ password: PRERELEASE_DOWNLOAD_CEPH_COM_PASSWORD
--- /dev/null
+// Global variables to use across stages
+def CEPH_TAG_CREATE_JOB_ID, CEPH_TAG_CREATE_JOB_URL
+def PACKAGE_BUILD_ID, PACKAGE_BUILD_URL
+def CEPH_TAG_PUSH_JOB_ID, CEPH_TAG_PUSH_JOB_URL
+
+pipeline {
+ agent any
+ stages {
+ stage('create ceph release tag') {
+ steps {
+ script {
+ def createTagJob = build(
+ job: 'ceph-tag',
+ parameters: [
+ string(name: 'VERSION', value: env.VERSION),
+ string(name: 'BRANCH', value: env.BRANCH),
+ booleanParam(name: 'FORCE', value: env.FORCE?.toBoolean()),
+ booleanParam(name: 'FORCE_VERSION', value: env.FORCE_VERSION?.toBoolean()),
+ string(name: 'RELEASE_TYPE', value: env.RELEASE_TYPE),
+ booleanParam(name: 'RELEASE_BUILD', value: true),
+ booleanParam(name: 'TAG', value: env.TAG?.toBoolean()),
+ booleanParam(name: "THROWAWAY", value: env.THROWAWAY?.toBoolean()),
+ string(name: "CEPH_BUILD_BRANCH", value: env.CEPH_BUILD_BRANCH),
+ string(name: 'TAG_PHASE', value: 'create')
+ ],
+ )
+ CEPH_TAG_CREATE_JOB_ID = createTagJob.getNumber()
+ CEPH_TAG_CREATE_JOB_URL = new URI([env.JENKINS_URL, "job", "ceph-tag", CEPH_TAG_CREATE_JOB_ID].join("/")).normalize()
+
+ copyArtifacts projectName: 'ceph-tag',
+ selector: specific("${CEPH_TAG_CREATE_JOB_ID}"),
+ filter: 'ceph-build/ansible/ceph/dist/sha1',
+ target: '.',
+ flatten: true
+
+ def sha1 = readFile('sha1').trim()
+ env.SHA1 = sha1
+ echo "Downstream returned: ${env.SHA1}"
+ def build_description = """\
+ BRANCH=${env.BRANCH}<br />
+ VERSION=${env.VERSION}<br />
+ SHA1=${env.SHA1}<br />
+ """.stripIndent()
+ // Note: This requires the 'build-name-setter' plugin
+ buildDescription build_description
+ }
+ }
+ }
+ stage("package build") {
+ steps {
+ script {
+ def package_build = build(
+ job: 'ceph-dev-pipeline',
+ parameters: [
+ // These are in order of the parameters ceph-dev-pipeline takes
+ string(name: "BRANCH", value: env.BRANCH),
+ string(name: "SHA1", value: env.SHA1),
+ string(name: "DISTROS", value: env.DISTROS),
+ string(name: "ARCHS", value: env.ARCHS),
+ string(name: "FLAVORS", value: 'default'),
+ booleanParam(name: "CI_COMPILE", value: true),
+ booleanParam(name: "THROWAWAY", value: env.THROWAWAY?.toBoolean()),
+ booleanParam(name: "FORCE", value: env.FORCE?.toBoolean()),
+ string(name: 'FLAVOR', value: 'default'),
+ // Release containers are built manually from signed packages so we don't need to build them here
+ booleanParam(name: 'CI_CONTAINER', value: false),
+ booleanParam(name: 'DWZ', value: true),
+ booleanParam(name: 'SCCACHE', value: false),
+ string(name: "CEPH_BUILD_BRANCH", value: env.CEPH_BUILD_BRANCH),
+ string(name: 'RELEASE_TYPE', value: env.RELEASE_TYPE),
+ booleanParam(name: 'RELEASE_BUILD', value: true),
+ string(name: 'VERSION', value: env.VERSION)
+ ]
+ )
+ PACKAGE_BUILD_ID = package_build.getNumber()
+ PACKAGE_BUILD_URL = new URI([env.JENKINS_URL, "job", "ceph-dev-pipeline", PACKAGE_BUILD_ID].join("/")).normalize()
+ }
+ }
+ }
+ stage('push ceph release tag') {
+ steps {
+ script {
+ def pushTagJob = build(
+ job: 'ceph-tag',
+ parameters: [
+ string(name: 'VERSION', value: env.VERSION ?: ''),
+ string(name: 'BRANCH', value: env.BRANCH ?: ''),
+ booleanParam(name: 'FORCE_VERSION', value: env.FORCE_VERSION?.toBoolean()),
+ string(name: 'RELEASE_TYPE', value: env.RELEASE_TYPE ?: ''),
+ booleanParam(name: 'RELEASE_BUILD', value: true),
+ booleanParam(name: 'TAG', value: env.TAG?.toBoolean()),
+ string(name: 'TAG_PHASE', value: 'push')
+ ],
+ )
+ CEPH_TAG_PUSH_JOB_ID = pushTagJob.getNumber()
+ CEPH_TAG_PUSH_JOB_URL = new URI([env.JENKINS_URL, "job", "ceph-tag", CEPH_TAG_PUSH_JOB_ID].join("/")).normalize()
+ }
+ }
+ }
+ }
+}
--- /dev/null
+- job:
+ name: ceph-release-pipeline
+ description: 'This Jenkins pipeline creates upstream release tags, and packages.'
+ project-type: pipeline
+ quiet-period: 1
+ concurrent: true
+ pipeline-scm:
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-build
+ branches:
+ - ${{CEPH_BUILD_BRANCH}}
+ shallow-clone: true
+ submodule:
+ disable: true
+ wipe-workspace: true
+ script-path: ceph-release-pipeline/build/Jenkinsfile
+ lightweight-checkout: true
+ do-not-fetch-tags: true
+
+ parameters:
+ - string:
+ name: BRANCH
+ description: "The release branch to build (e.g., pacific)"
+ default: main
+
+ - string:
+ name: VERSION
+ description: "The version for release, e.g. 0.94.4"
+
+ - bool:
+ name: TEST
+ description: |
+ If this is unchecked, then the builds will be pushed to chacra with the correct ref. This is the default.
+
+ If this is checked, then the builds will be pushed to chacra under the 'test' ref.
+
+ - bool:
+ name: TAG
+ description: |
+ When this is checked, Jenkins will remove the previous private tag and recreate it again, changing the control files and committing again. When this is unchecked, Jenkins will not do any commit or tag operations. If you've already created the private tag separately or are re-running a build, leave this unchecked.
+ Defaults to checked.
+ default: true
+
+ - bool:
+ name: THROWAWAY
+ description: |
+ Default: False. When True it will not POST binaries to chacra. Artifacts will not be around for long. Useful to test builds.
+ default: false
+
+ - bool:
+ name: FORCE_VERSION
+ description: |
+ Default: False. When True it will force the Debian version (useful when releasing older
+ versions after newer ones have been released; mostly for DEBs, to append the `-b` flag for dch).
+ default: false
+
+ - bool:
+ name: FORCE
+ description: |
+ If this is unchecked, then nothing is built or pushed if they already exist in chacra. This is the default.
+
+ If this is checked, then the binaries will be built and pushed to chacra even if they already exist in chacra.
+
+ - choice:
+ name: RELEASE_TYPE
+ description: |
+ STABLE: A normal release. Builds from BRANCH branch and pushed to BRANCH-release branch.
+ RELEASE_CANDIDATE: A normal release except the binaries will be pushed to chacra using the $BRANCH-rc name
+ HOTFIX: Builds from BRANCH-release branch. BRANCH-release will be git merged back into BRANCH.
+ SECURITY: Builds from BRANCH-release branch in ceph-private.git (private repo).
+ choices:
+ - STABLE
+ - RELEASE_CANDIDATE
+ - HOTFIX
+ - SECURITY
+
+ - string:
+ name: DISTROS
+ description: "A list of distros to build for. Available options are: centos9, noble, jammy, focal, bionic, xenial, trusty, precise, wheezy, jessie, buster, bullseye, bookworm"
+ default: "noble jammy centos8 centos9 bookworm"
+
+ - string:
+ name: ARCHS
+ description: "A list of architectures to build for. Available options are: x86_64 and arm64"
+ default: "x86_64 arm64"
+
+ - string:
+ name: CEPH_BUILD_BRANCH
+ description: "Use the Jenkinsfile from this ceph-build.git branch"
+ default: main
+
+
+ wrappers:
+ - build-name:
+ name: "#${{BUILD_NUMBER}} ${{BRANCH}}, ${{VERSION}}"
+
--- /dev/null
+#!/bin/sh -e
+
+HOST=$(hostname --short)
+echo "Building on ${HOST}"
+echo " DIST=${DIST}"
+echo " BPTAG=${BPTAG}"
+echo " WS=$WORKSPACE"
+echo " PWD=$(pwd)"
+
+# remove any previous builds
+rm -rf dist
+rm -rf RPMBUILD
+
+pkgs=( "chacractl>=0.0.21" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+# create the .chacractl config file using global variables
+make_chacractl_config
+
+# What are we building ?
+
+[ "$TEST" = true ] && chacra_ref="test" || chacra_ref="${RELEASE}"
+
+target=$DIST
+if [ "$target" = "centos7" ] ; then
+ target=el7
+ chacra_baseurl="ceph-release/${chacra_ref}/HEAD/centos/7"
+fi
+if [ "$target" = "centos8" ] ; then
+ target=el8
+ chacra_baseurl="ceph-release/${chacra_ref}/HEAD/centos/8"
+fi
+if [ "$target" = "centos9" ] ; then
+ target=el9
+ chacra_baseurl="ceph-release/${chacra_ref}/HEAD/centos/9"
+fi
+if [ "$target" = "sles11sp2" ] ; then
+ target=sles11
+ chacra_baseurl="ceph-release/${chacra_ref}/HEAD/sles/11"
+fi
+echo "Target directory is: $target"
+
+check_binary_existence $VENV $chacra_baseurl/noarch
+
+# setup rpm build area
+mkdir -p RPMBUILD/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
+BUILDAREA=$WORKSPACE/RPMBUILD
+
+ceph_release="$RELEASE"
+build_ceph_release_rpm ${BUILDAREA} false
+
+[ "$FORCE" = true ] && chacra_flags="--force" || chacra_flags=""
+# note: with `sh -e` the script has already exited if build_ceph_release_rpm
+# failed, so the binaries can be pushed unconditionally here.
+# we actually do noarch stuff here
+find $BUILDAREA/RPMS/* | grep noarch | grep rpm | $VENV/chacractl binary ${chacra_flags} create ${chacra_baseurl}/noarch
+find $BUILDAREA/SRPMS | grep rpm | $VENV/chacractl binary ${chacra_flags} create ${chacra_baseurl}/source
--- /dev/null
+- job:
+ name: ceph-release-rpm
+ project-type: matrix
+ defaults: global
+ description: Builds the repository configuration package for ceph-release. RPMS Only
+ block-downstream: false
+ block-upstream: false
+
+ parameters:
+ - string:
+ name: RELEASE
+ default: pacific
+
+ - bool:
+ name: TEST
+ description: |
+ If this is unchecked, then the builds will be pushed to chacra with the correct ref. This is the default.
+
+ If this is checked, then the builds will be pushed to chacra under the 'test' ref.
+
+ - bool:
+ name: FORCE
+ description: |
+ If this is unchecked, then nothing is built or pushed if they already exist in chacra. This is the default.
+
+ If this is checked, then the binaries will be built and pushed to chacra even if they already exist in chacra.
+
+ axes:
+ - axis:
+ type: label-expression
+ name: ARCH
+ values:
+ - x86_64
+
+ - axis:
+ type: label-expression
+ name: DIST
+ values:
+ - centos7
+ - centos8
+ - centos9
+
+ builders:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/build
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - text:
+ credential-id: chacractl-key
+ variable: CHACRACTL_KEY
--- /dev/null
+#!/usr/bin/env bash
+set +x
+echo "Starting cleanup..."
+docker container prune -f
+minikube stop
+minikube delete
+echo "Cleanup completed."
--- /dev/null
+- job:
+ name: ceph-orchestrator-rook-e2e
+ project-type: freestyle
+ defaults: global
+ concurrent: true
+ node: 'huge && jammy && x86_64'
+ display-name: 'ceph: Rook Orchestrator E2E'
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ retry-count: 3
+ properties:
+ - build-discarder:
+ days-to-keep: 15
+ num-to-keep: 300
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ - github:
+ url: https://github.com/ceph/ceph/
+ - rebuild:
+ auto-rebuild: true
+ - inject:
+ properties-content: |
+ TERM=xterm
+ parameters:
+ - string:
+ name: sha1
+ description: "commit id or a refname, like 'origin/pr/72/head'"
+
+ triggers:
+ - github-pull-request:
+ cancel-builds-on-update: true
+ allow-whitelist-orgs-as-admins: true
+ org-list:
+ - ceph
+ white-list-labels:
+ - orchestrator
+ - rook
+ black-list-target-branches:
+ - luminous
+ - mimic
+ - nautilus
+ - pacific
+ - quincy
+ - octopus
+ trigger-phrase: 'jenkins test rook e2e'
+ skip-build-phrase: '^jenkins do not test.*'
+ only-trigger-phrase: false
+ github-hooks: true
+ permit-all: true
+ auto-close-on-fail: false
+ status-context: "ceph rook orchestrator e2e tests"
+ started-status: "running ceph rook orchestrator e2e tests"
+ success-status: "ceph rook orchestrator e2e tests succeeded"
+ failure-status: "ceph rook orchestrator e2e tests failed"
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph.git
+ branches:
+ - origin/pr/${{ghprbPullId}}/merge
+ refspec: +refs/pull/${{ghprbPullId}}/*:refs/remotes/origin/pr/${{ghprbPullId}}/*
+ browser: auto
+ timeout: 20
+ skip-tag: true
+ shallow-clone: true
+ wipe-workspace: true
+
+ - git:
+ url: https://github.com/ceph/ceph-build.git
+ branches:
+ - main
+ basedir: ceph-build
+
+ builders:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/rook-orch/install-rook-e2e-deps.sh
+ - shell: |
+ export COMMIT_INFO_MESSAGE="$ghprbPullTitle"
+ timeout 3600 ./src/pybind/mgr/rook/ci/run-rook-e2e-tests.sh
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - ansicolor
+
+ publishers:
+
+ - postbuildscript:
+ builders:
+ - role: SLAVE
+ build-on:
+ - SUCCESS
+ - UNSTABLE
+ - FAILURE
+ - ABORTED
+ build-steps:
+ - shell: "${{WORKSPACE}}/ceph-build/ceph-rook-e2e/build/cleanup"
--- /dev/null
+#!/bin/bash -ex
+
+cd $WORKSPACE/ceph-build/ansible/ceph
+
+HOST=$(hostname --short)
+echo "Building on ${HOST}"
+echo " DIST=${DIST}"
+echo " BPTAG=${BPTAG}"
+echo " WS=$WORKSPACE"
+echo " PWD=$(pwd)"
+echo " BRANCH=$BRANCH"
+echo " SHA1=$(git rev-parse HEAD)"
+
+if [ -z "$BRANCH" ] ; then
+ echo "No git branch was supplied"
+ exit 1
+fi
+
+echo "Building version $(git describe --abbrev=8) Branch $BRANCH"
+
+rm -rf dist
+rm -rf release
+
+# fix version/release. Hack needed only for the spec
+# file for rc candidates.
+#export force=force
+#sed -i 's/^Version:.*/Version: 0.80/' ceph.spec.in
+#sed -i 's/^Release:.*/Release: rc1%{?dist}/' ceph.spec.in
+#sed -i 's/^Source0:.*/Source0: http:\/\/ceph.com\/download\/%{name}-%{version}-rc1.tar.bz2/' ceph.spec.in
+#sed -i 's/^%setup.*/%setup -q -n %{name}-%{version}-rc1/' ceph.spec.in
+
+# run submodule updates regardless
+echo "Running submodule update ..."
+git submodule update --init --quiet
+
+CEPH_EXTRA_RPMBUILD_ARGS="--with tcmalloc"
+CEPH_EXTRA_CMAKE_ARGS="$CEPH_EXTRA_CMAKE_ARGS -DALLOCATOR=tcmalloc"
+
+
+# When using autotools/autoconf it is possible to see output from `git diff`
+# since some macros can be copied over to the ceph source, triggering this
+# check. This is why this check now is done just before running autogen.sh
+# which calls `aclocal -I m4 --install` that copies a system version of
+# ltsugar.m4 that can be different from the one included in the ceph source
+# tree.
+if git diff --quiet; then
+ echo repository is clean
+else
+ echo
+ echo "**** REPOSITORY IS DIRTY ****"
+ echo
+ git diff
+ if [ "$force" != "force" ]; then
+ echo "add 'force' argument if you really want to continue."
+ exit 1
+ fi
+ echo "forcing."
+fi
+
+mkdir -p release
+
+# Contents below used to come from /srv/release_tarball.sh and
+# was called like::
+#
+# $bindir/release_tarball.sh release release/version
+
+releasedir='release'
+versionfile='release/version'
+
+cephver=`git describe --abbrev=8 --match "v*" | sed s/^v//`
+echo current version $cephver
+
+srcdir=`pwd`
+
+if [ -d "$releasedir/$cephver" ]; then
+ echo "$releasedir/$cephver already exists; reuse that release tarball"
+else
+ echo building tarball
+ rm ceph-*.tar.gz || true
+ rm ceph-*.tar.bz2 || true
+
+ ./make-dist $cephver
+ vers=`ls ceph-*.tar.bz2 | cut -c 6- | sed 's/.tar.bz2//'`
+ extension="tar.bz2"
+ extract_flags="jxf"
+ compress_flags="jcf"
+
+ echo tarball vers $vers
+
+ echo extracting
+ mkdir -p $releasedir/$cephver/rpm
+ cp rpm/*.patch $releasedir/$cephver/rpm || true
+ cd $releasedir/$cephver
+
+ tar $extract_flags $srcdir/ceph-$vers.$extension
+
+ [ "$vers" != "$cephver" ] && mv ceph-$vers ceph-$cephver
+
+ tar zcf ceph_$cephver.orig.tar.gz ceph-$cephver
+ cp -a ceph_$cephver.orig.tar.gz ceph-$cephver.tar.gz
+
+ tar jcf ceph-$cephver.tar.bz2 ceph-$cephver
+
+ # copy debian dir, too. Prevent errors with `true` when using cmake
+ cp -a $srcdir/debian debian || true
+ cd $srcdir
+
+ # copy in spec file, too. If using cmake, the spec file
+ # will already exist.
+ cp ceph.spec $releasedir/$cephver || true
+fi
+
+if [ -n "$versionfile" ]; then
+ echo $cephver > $versionfile
+ echo "wrote $cephver to $versionfile"
+fi
+
+vers=`cat release/version`
+
+(
+ cd release/$vers
+ mkdir -p ceph-$vers/debian
+ cp -r debian/* ceph-$vers/debian/
+ dpkg-source -b ceph-$vers
+)
+
+mkdir -p dist
+# Debian Source Files
+mv release/$vers/*.dsc dist/.
+mv release/$vers/*.diff.gz dist/.
+mv release/$vers/*.orig.tar.gz dist/.
+# RPM Source Files
+mkdir -p dist/rpm/
+mv release/$vers/rpm/*.patch dist/rpm/ || true
+mv release/$vers/ceph.spec dist/.
+mv release/$vers/*.tar.* dist/.
+# Parameters
+mv release/version dist/.
+
+cat > dist/sha1 << EOF
+SHA1=$(git rev-parse HEAD)
+EOF
+
+# CEPH_EXTRA_{CONFIGURE,RPMBUILD}_ARGS are consumed by ceph-build before
+# the switch to cmake; CEPH_EXTRA_CMAKE_ARGS is for after cmake
+cat > dist/other_envvars << EOF
+CEPH_EXTRA_RPMBUILD_ARGS=${CEPH_EXTRA_RPMBUILD_ARGS}
+CEPH_EXTRA_CMAKE_ARGS=${CEPH_EXTRA_CMAKE_ARGS}
+EOF
--- /dev/null
+#!/bin/bash
+
+set -ex
+
+# the following two methods exist in scripts/build_utils.sh
+pkgs=( "ansible" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+# remove "-release" from $BRANCH variable in case it was accidentally passed in the Jenkins UI
+BRANCH=${BRANCH//-release/}
+
+# run ansible to do all the tagging and release specifying
+# a local connection and 'localhost' as the host where to execute
+cd "$WORKSPACE/ceph-build/ansible/"
+$VENV/ansible-playbook -i "localhost," -c local release.yml -vvv --extra-vars="stage=create version=$VERSION branch=$BRANCH force_version=$FORCE_VERSION release=$RELEASE_TYPE tag=$TAG throwaway=$THROWAWAY project=ceph"
--- /dev/null
+#!/bin/bash -ex
+
+# update shaman with the failed build status. At this point there is no
+# architecture or distro information, so we just report this with the current
+# (ceph-dev-setup) build information, which includes the log and build URLs
+BRANCH=`branch_slash_filter $BRANCH`
+SHA1=${GIT_COMMIT}
+
+failed_build_status "ceph"
--- /dev/null
+- job:
+ name: ceph-setup
+ description: "This job:\r\n- Creates the version commit\r\n- Checks out the branch and builds the tarballs, diffs, and dsc that are passed to the ceph-build step.\r\n\r\nNotes:\r\nJob needs to run on a relatively recent debian system. The Restrict where run feature is used to specify an appropriate label.\r\nThe clear workspace before checkout box for the git plugin is used."
+ node: huge && bionic && !arm64
+ display-name: 'ceph-setup'
+ block-downstream: false
+ block-upstream: false
+ concurrent: true
+ properties:
+ - build-discarder:
+ days-to-keep: -1
+ num-to-keep: 25
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ - github:
+ url: https://github.com/ceph/ceph
+ - copyartifact:
+ projects: ceph-build,ceph-tag,ceph
+
+ parameters:
+ - string:
+ name: BRANCH
+ description: "The git branch (or tag) to build (e.g., pacific). DO NOT INCLUDE '-release'"
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-build.git
+ credentials-id: 'jenkins-build'
+ browser: auto
+ timeout: 20
+ skip-tag: true
+ wipe-workspace: true
+ basedir: "ceph-build"
+ branches:
+ - origin/main
+
+ builders:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/create_tag
+ - ../../build/build
+ publishers:
+ - archive:
+ artifacts: 'ceph-build/ansible/ceph/dist/**'
+ allow-empty: false
+ latest-only: false
+
+ - postbuildscript:
+ builders:
+ - role: SLAVE
+ build-on:
+ - FAILURE
+ - ABORTED
+ build-steps:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/failure
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - text:
+ credential-id: shaman-api-key
+ variable: SHAMAN_API_KEY
+ - ssh-agent-credentials:
+ # "jenkins-build" SSH key, needed so we can push/pull to/from private repos
+ user: 'jenkins-build'
--- /dev/null
+import groovy.transform.Field
+
+@Field def checkout_ref = ""
+
+pipeline {
+ agent {
+ label "gigantic&&x86_64"
+ }
+ stages {
+ stage("repository") {
+ steps {
+ dir("ceph") {
+ script {
+ if ( env.SHA1 ) {
+ checkout_ref = env.SHA1
+ } else {
+ checkout_ref = env.BRANCH
+ }
+
+ // Rewrite repo + ref if RELEASE_BUILD=true.
+ // RELEASE_BUILD is intentionally undefinable as a ceph-source-dist parameter but instead
+ // defined by ceph-release-pipeline so that only that job may clone from ceph-releases.git.
+ def repoUrl = params.RELEASE_BUILD ? 'git@github.com:ceph/ceph-releases.git' : env.CEPH_REPO
+ checkout_ref = params.RELEASE_BUILD ? "v${params.VERSION}" : checkout_ref
+ env.CEPH_REPO = repoUrl
+
+ checkout scmGit(
+ branches: [[name: checkout_ref]],
+ userRemoteConfigs: [[
+ url: env.CEPH_REPO,
+ credentialsId: 'jenkins-build'
+ ]],
+ extensions: [
+ [$class: 'CleanBeforeCheckout'],
+ [
+ $class: 'CloneOption',
+ shallow: true,
+ depth: 100,
+ timeout: 90
+ ]
+ ]
+ )
+
+ // No need to fetch tags if this is a release build
+ if (!params.RELEASE_BUILD?.toBoolean()) {
+ sh 'git fetch --tags https://github.com/ceph/ceph.git'
+ }
+ }
+ }
+ }
+ }
+ stage("tarball") {
+ steps {
+ script {
+ dir("ceph") {
+ sh '''#!/bin/bash
+ while [ -z "$(git describe --abbrev=8 --match 'v*' | sed s/^v//)" ]; do
+ git fetch --deepen 50
+ done
+ '''
+ def ceph_version_git = sh(
+ script: "git describe --abbrev=8 --match 'v*' | sed s/^v//",
+ returnStdout: true,
+ ).trim()
+ sh """
+ mkdir dist
+ echo ${ceph_version_git} > dist/version
+ rm -f ceph-*.tar.*
+ """
+ sh """#!/bin/bash
+ ./make-dist ${ceph_version_git}
+ """
+ sh '''#!/bin/bash -ex
+ declare -A compression=( ["bz2"]="j" ["gz"]="z" ["xz"]="J" )
+ for cmp in "${!compression[@]}"; do
+ extension="tar.$cmp"
+ ceph_version_tarball=$(ls ceph-*.$extension | cut -c 6- | sed "s/.$extension//" || true)
+ flag="${compression[$cmp]}"
+ extract_flags="${flag}xf"
+ compress_flags="${flag}cf"
+ if [ "$ceph_version_tarball" != "" ]; then break; fi
+ done
+ echo tarball vers $ceph_version_tarball
+
+ ln ceph.spec dist/
+ ln ceph-$ceph_version_tarball.$extension dist/
+
+ echo "SHA1=$(git rev-parse HEAD)" > dist/sha1
+
+ if [ "${RELEASE_BUILD:-}" = "true" ]; then
+ # For security, the following vars are written to dist/other_envvars to be passed
+ # to ceph-dev-pipeline instead of via parameters.
+ # ceph-dev-pipeline does not offer ceph-releases.git as an option for CEPH_REPO,
+ # and we don't want RELEASE_BUILD to be settable by the user, to avoid being
+ # able to clone from ceph-releases.git.
+ if [ "${RELEASE_TYPE}" = "PRIVATE" ]; then
+ echo "CEPH_REPO=https://github.com/ceph/ceph-private" > dist/other_envvars
+ else
+ echo "CEPH_REPO=https://github.com/ceph/ceph-releases" > dist/other_envvars
+ fi
+ echo "RELEASE_BUILD=true" >> dist/other_envvars
+ echo "chacra_url=https://chacra.ceph.com/" >> dist/other_envvars
+ echo "BRANCH=${BRANCH}-release" > dist/branch
+
+ echo "Creating tar.gz for upstream release"
+ pushd dist
+ bunzip2 ceph-$ceph_version_tarball.$extension
+ gzip ceph-$ceph_version_tarball.tar
+ popd
+ else
+ echo "BRANCH=${BRANCH}" > dist/branch
+ fi
+
+ mv dist ..
+ '''
+ }
+ }
+ }
+ }
+ }
+ post {
+ always {
+ archiveArtifacts artifacts: 'dist/**', fingerprint: true
+ }
+ }
+}
--- /dev/null
+- job:
+ name: ceph-source-dist
+ project-type: pipeline
+ concurrent: true
+ pipeline-scm:
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-build
+ branches:
+ - ${{CEPH_BUILD_BRANCH}}
+ shallow-clone: true
+ submodule:
+ disable: true
+ wipe-workspace: true
+ script-path: ceph-source-dist/build/Jenkinsfile
+ lightweight-checkout: true
+ do-not-fetch-tags: true
+ properties:
+ - build-discarder:
+ days-to-keep: -1
+ num-to-keep: 100
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: 50
+ - copyartifact:
+ projects: ceph-dev-pipeline,ceph-dev,ceph-dev-build,ceph-dev-new,ceph-dev-new-build
+
+ parameters:
+ - choice:
+ name: CEPH_REPO
+ choices:
+ - git@github.com:ceph/ceph-ci.git
+ - git@github.com:ceph/ceph.git
+ - https://github.com/ceph/ceph-ci
+ - https://github.com/ceph/ceph
+
+ - string:
+ name: BRANCH
+ description: "The Ceph branch to build"
+
+ - string:
+ name: SHA1
+ description: "The specific commit to build"
+
+ - string:
+ name: CEPH_BUILD_BRANCH
+ description: "Use the Jenkinsfile from this ceph-build branch"
+ default: main
+
+ scm:
+ - git:
+ url: ${{CEPH_REPO}}
+ # Use the SSH key attached to the ceph-jenkins GitHub account.
+ credentials-id: "jenkins-build"
+ branches:
+ - $BRANCH
+ timeout: 20
+ skip-tag: true
+ wipe-workspace: true
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - text:
+ credential-id: shaman-api-key
+ variable: SHAMAN_API_KEY
--- /dev/null
+#!/bin/bash
+
+set -ex
+
+# the following two methods exist in scripts/build_utils.sh
+pkgs=( "ansible" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+# remove "-release" from $BRANCH variable in case it was accidentally passed in the Jenkins UI
+BRANCH=${BRANCH//-release/}
+
+# run ansible to do all the tagging and release specifying
+# a local connection and 'localhost' as the host where to execute
+cd "$WORKSPACE/ceph-build/ansible/"
+$VENV/ansible-playbook -i "localhost," -c local release.yml --extra-vars="stage=$TAG_PHASE version=$VERSION branch=$BRANCH force_version=$FORCE_VERSION release=$RELEASE_TYPE tag=$TAG project=ceph token=$GITHUB_TOKEN"
--- /dev/null
+- job:
+ name: ceph-tag
+ node: bionic
+ description: "This job checks out the version commit previously pushed to ceph-releases.git and pushes it to ceph.git."
+ display-name: 'ceph-tag'
+ block-downstream: false
+ block-upstream: false
+ properties:
+ - build-discarder:
+ days-to-keep: -1
+ num-to-keep: 25
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ - github:
+ url: https://github.com/ceph/ceph
+ - copyartifact:
+ projects: ceph-dev-pipeline,ceph-release-pipeline
+
+ parameters:
+ - string:
+ name: BRANCH
+ description: "The git BRANCH to build (e.g., pacific)"
+ default: main
+
+ - bool:
+ name: TAG
+ description: "When this is checked, Jenkins will remove the previous private tag and recreate it, changing the control files and committing again. When this is unchecked, Jenkins will not do any commit or tag operations. If you've already created the private tag separately, then leave this unchecked.
+Defaults to checked."
+ default: true
+
+ - choice:
+ name: TAG_PHASE
+ description: "
+create: When TAG=true, pull BRANCH-release and create the tag/version commit on top of it
+push: When TAG=true, push the aforementioned tag to ceph.git, and create the PR to merge BRANCH-release back into BRANCH."
+ default: push
+ choices:
+ - push
+ - create
+
+ - bool:
+ name: THROWAWAY
+ description: "
+Default: False. When True it will not POST binaries to chacra. Artifacts will not be around for long. Useful to test builds."
+ default: false
+
+ - string:
+ name: VERSION
+ description: "The version for release, e.g. 0.94.4"
+
+ - choice:
+ name: RELEASE_TYPE
+ description: "
+STABLE: A normal release. Builds from BRANCH branch and pushed to BRANCH-release branch.
+RELEASE_CANDIDATE: A normal release except the binaries will be pushed to chacra using the $BRANCH-rc name
+HOTFIX: Builds from BRANCH-release branch. BRANCH-release will be git merged back into BRANCH.
+SECURITY: Builds from BRANCH-release branch in ceph-private.git (private repo)."
+ choices:
+ - STABLE
+ - RELEASE_CANDIDATE
+ - HOTFIX
+ - SECURITY
+
+ - string:
+ name: CEPH_BUILD_BRANCH
+ description: "The ceph-build.git branch to use. Useful for testing changes to the ceph-tag job"
+ default: main
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-build.git
+ browser: auto
+ timeout: 20
+ skip-tag: true
+ wipe-workspace: true
+ basedir: "ceph-build"
+ branches:
+ - ${{CEPH_BUILD_BRANCH}}
+
+ builders:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/build
+
+ publishers:
+ - archive:
+ artifacts: 'ceph-build/ansible/ceph/dist/sha1'
+ allow-empty: false
+ latest-only: false
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - ssh-agent-credentials:
+ # "jenkins-build" SSH key, needed so we can push/pull to/from private repos
+ user: 'jenkins-build'
+ - credentials-binding:
+ - username-password-separated:
+ credential-id: 8cffdeb4-283c-4d96-a190-05d5645bcc2f
+ username: GITHUB_USER
+ password: GITHUB_TOKEN
--- /dev/null
+
+# ceph-trigger-build
+
+This pipeline's role is to:
+
+1. Be triggered by a git push event to [ceph-ci.git](https://github.com/ceph/ceph-ci)
+2. Determine which Jenkins job or pipeline to use for the source creation, compilation, and packaging.
+3. Trigger the appropriate Jenkins job/pipeline using default parameters based on the branch name, defined parameters via [git trailers](https://git-scm.com/docs/git-interpret-trailers), or a combination of both.
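The trigger only consumes a handful of fields from GitHub's push payload; the job's `generic-webhook-trigger` configuration extracts them via JSONPath (`$.ref`, `$.deleted`, `$.pusher.name`, `$.head_commit.id`, `$.head_commit.message`). A minimal sketch of the relevant subset (values are illustrative, not a full GitHub payload):

```json
{
  "ref": "refs/heads/wip-myfix-centos9-only",
  "deleted": false,
  "pusher": { "name": "somedev" },
  "head_commit": {
    "id": "<sha1 of the head commit>",
    "message": "wip: my fix\n\nDISTROS: centos9"
  }
}
```

Pushes with `deleted: true` (branch deletions) are filtered out by the job's regex filter, since those events lack the commit fields.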
+
+
+### Git Trailer Parameters
+
+- All parameters are optional. For example, if you only specify `ARCHS: x86_64` for a branch targeted at `tentacle`, tentacle's default distros (`jammy centos9 windows`) still apply, so a pipeline would be triggered to build x86_64 packages for each of those distros.
+- The use of git trailers is only supported in combination with CEPH-BUILD-JOB: ceph-dev-pipeline.
+- Only the head commit's trailers will be evaluated.
+
+|Parameter|Description|Available Options|Default|
+|--|--|--|--|
+|CEPH-BUILD-JOB|Which Jenkins job to trigger. Only ceph-dev-pipeline supports the options below.|ceph-dev-pipeline, ceph-dev-new|`ceph-dev-pipeline`|
+|DISTROS|Space-separated list of Linux distributions to build for|focal, jammy, noble, centos9, windows|Depends on keywords in branch name|
+|ARCHS|Space-separated list of architectures to build on|x86_64, arm64|`x86_64 arm64`|
+|FLAVORS|Crimson or non-Crimson|default, crimson-debug, crimson-release|`default`|
+|CI-COMPILE|Compile binaries and packages[^1]|Boolean|`true`|
+|CI-CONTAINER|Build a dev container using the packages built|Boolean|`true`|
+|DWZ|Use [DWZ](https://sourceware.org/dwz/) to make debuginfo packages smaller|Boolean|`true` when using ceph-dev-new<br>`false` when using ceph-dev-pipeline[^2]|
+|SCCACHE|Use [sccache](https://github.com/mozilla/sccache) to reduce compilation time|Boolean|`false` when using ceph-dev-new<br>`true` when using ceph-dev-pipeline[^3]|
+|CEPH-BUILD-BRANCH|Which ceph-build.git branch to use. Useful for testing.|N/A|`main`|
+
+
+[^1]: You might set this to `false` if you know packages already exist and you only want to build a container using them.
+[^2]: DWZ adds a lot of time to builds for a small decrease in disk usage. The default behavior is changing with the switch to the ceph-dev-pipeline job.
+[^3]: This is new functionality provided in the ceph-dev-pipeline job.
+
+### Git Trailer Examples
+"I only want to build x86 packages for Ubuntu 22.04. I don't care about containers."
+
+ CEPH-BUILD-JOB: ceph-dev-pipeline
+ DISTROS: jammy
+ ARCHS: x86_64
+ CI-CONTAINER: false
+
+"I only want to build packages and a container for CentOS 9."
+
+ CEPH-BUILD-JOB: ceph-dev-pipeline
+ DISTROS: centos9
+
+"My container build failed but I know the package build succeeded. Let's try again."
+
+ CEPH-BUILD-JOB: ceph-dev-pipeline
+ DISTROS: centos9
+ CI-COMPILE: false
+
+"I don't trust sccache."
+
+ CEPH-BUILD-JOB: ceph-dev-pipeline
+ SCCACHE: false
+
--- /dev/null
+import groovy.json.JsonBuilder
+import groovy.transform.Field
+
+def pretty(obj) {
+ return new JsonBuilder(obj).toPrettyString()
+}
+
+// These parameters are able to be parsed from git trailers
+@Field def gitTrailerParameterNames = [
+ "ARCHS",
+ "CEPH_BUILD_BRANCH",
+ "CEPH_BUILD_JOB",
+ "CI_COMPILE",
+ "CI_CONTAINER",
+ "DISTROS",
+ "DWZ",
+ "FLAVORS",
+ "SCCACHE",
+]
+// These are the default parameter values for the pipeline
+@Field def defaults = [
+ 'CEPH_BUILD_JOB': 'ceph-dev-pipeline',
+ 'DISTROS': 'centos9 jammy noble windows',
+ 'ARCHS': 'x86_64',
+ 'FLAVOR': 'default',
+]
+// This will later hold the initial set of parameters, before any branch-based
+// values are inserted.
+@Field def initialParams = [:]
+// this will later hold parameters parsed from git trailers
+@Field def trailerParams = [:]
+// This will later hold one or more parameter sets. Each parameter set will
+// result in a triggered job.
+@Field def paramMaps = []
+// This will later hold the build's description; we need to store it so that
+// we can append to it later, as there is no way to read it.
+@Field def description = "";
+
+// This encodes the same logic as the ceph-dev-new-trigger job.
+// It returns a list of one or more parameter sets.
+// For ceph-dev-pipeline, only one set is returned.
+def params_from_branch(initialParams) {
+ def singleSet = ( initialParams['CEPH_BUILD_JOB'].contains('ceph-dev-pipeline') )
+ def params = [initialParams.clone()]
+ switch (initialParams.BRANCH) {
+ case "main":
+ params[0]['ARCHS'] += ' arm64'
+ break
+ case ~/.*reef.*/:
+ params[-1]['DISTROS'] = 'centos9 jammy focal windows'
+ params[-1]['ARCHS'] += ' arm64'
+ break
+ case ~/.*squid.*/:
+ params[-1]['ARCHS'] += ' arm64'
+ break
+ case ~/.*tentacle.*/:
+ if ( !singleSet ) {
+ params << params[0].clone()
+ params[-1]['ARCHS'] = 'x86_64'
+ params[-1]['DISTROS'] = 'centos9'
+ params[-1]['FLAVOR'] = 'crimson-debug'
+ } else {
+ params[0]['ARCHS'] += ' arm64'
+ params[0]['FLAVOR'] += ' crimson-debug'
+ }
+ break
+ case ~/.*centos9-only.*/:
+ params[0]['DISTROS'] = 'centos9'
+ break
+ case ~/.*crimson-only.*/:
+ params[0]['ARCHS'] = 'x86_64'
+ params[0]['DISTROS'] = 'centos9'
+ if ( !singleSet ) {
+ params << params[0].clone()
+ params[0]['FLAVOR'] = 'crimson-debug'
+ params[1]['FLAVOR'] = 'crimson-release'
+ } else {
+ params[0]['FLAVOR'] = 'crimson-debug crimson-release'
+ }
+ break
+ default:
+ if ( !singleSet ) {
+ params << params[0].clone()
+ params[-1]['ARCHS'] = 'x86_64'
+ params[-1]['DISTROS'] = 'centos9'
+ params[-1]['FLAVOR'] = 'crimson-debug'
+ } else {
+ params[0]['FLAVOR'] += ' crimson-debug'
+ }
+ }
+ if ( singleSet ) {
+ params[0]['FLAVORS'] = params[0]['FLAVOR']
+ params[0].remove('FLAVOR')
+ }
+ return params
+}
+
+pipeline {
+ agent any
+ stages {
+ stage("Prepare parameters") {
+ steps {
+ script {
+ initialParams.BRANCH = env.ref.replace("refs/heads/", "")
+ initialParams.SHA1 = env.head_commit_id
+ initialParams.putAll(defaults)
+ println("BRANCH=${initialParams.BRANCH}")
+ }
+ script {
+ println("SHA1=${env.head_commit_id}")
+ }
+ script {
+ println("pusher=${env.pusher}")
+ }
+ script {
+ println("Looking for git trailer parameters: ${gitTrailerParameterNames}")
+ writeFile(
+ file: "head_commit_message.txt",
+ text: env.head_commit_message,
+ )
+ def trailer = sh(
+ script: "git interpret-trailers --parse head_commit_message.txt",
+ returnStdout: true,
+ )
+ println("trailer: ${trailer}")
+ for (item in trailer.split("\n")) {
+ def matcher = item =~ /(.+): (.+)/
+ if (matcher.matches()) {
+ def key = matcher[0][1].replace("-", "_").toUpperCase()
+ def value = matcher[0][2]
+ if ( key in gitTrailerParameterNames && value ) {
+ trailerParams[key] = value
+ }
+ }
+ }
+ }
+ script {
+ if ( trailerParams.containsKey('CEPH_BUILD_JOB') ) {
+ initialParams['CEPH_BUILD_JOB'] = trailerParams['CEPH_BUILD_JOB']
+ }
+ paramMaps = params_from_branch(initialParams)
+ if ( initialParams['CEPH_BUILD_JOB'].contains('ceph-dev-pipeline') ) {
+ paramMaps[0].putAll(trailerParams)
+ }
+ }
+ script {
+ println("Final parameters: ${pretty(paramMaps)}")
+ }
+ script {
+ paramMaps.each { paramMap ->
+ paramMap.each { key, value -> description += "${key}=${value}\n<br />" }
+ description += "---\n<br />"
+ }
+ buildDescription description.trim()
+ }
+ }
+ }
+ stage("Trigger job") {
+ steps {
+ script {
+ for (paramsMap in paramMaps) {
+ // Before we trigger, we need to transform the parameter sets from
+ // the base Groovy types into the types expected by Jenkins
+ def paramsList = []
+ paramsMap.each {
+ entry -> paramsList.push(string(name: entry.key, value: entry.value))
+ }
+ def job = paramsMap.CEPH_BUILD_JOB
+ def buildId = "_ID_"
+ if ( job.contains("ceph-dev-pipeline") ) {
+ def triggeredBuild = build(
+ job: job,
+ parameters: paramsList,
+ wait: false,
+ waitForStart: true,
+ )
+ buildId = triggeredBuild.getId()
+ println("triggered pipeline: ${pretty(paramsMap)}")
+ } else {
+ def legacy_trigger_enabled = Jenkins.instance.getItem("ceph-dev-new-trigger").isBuildable();
+ if ( legacy_trigger_enabled ) {
+ println("skipped triggering since legacy trigger is enabled: ${pretty(paramsMap)}")
+ } else {
+ def triggeredBuild = build(
+ job: job,
+ parameters: paramsList,
+ wait: false,
+ waitForStart: true,
+ )
+ buildId = triggeredBuild.getId()
+ println("triggered legacy: ${pretty(paramsMap)}")
+ }
+ }
+ def buildUrl = new URI([env.JENKINS_URL, "job", job, buildId].join("/")).normalize()
+ description = """\
+ ${description}<a href="${buildUrl}">${job} ${buildId}</a>
+ """.trim()
+ buildDescription(description)
+ }
+ }
+ }
+ }
+ }
+}
--- /dev/null
+- job:
+ name: ceph-trigger-build
+ description: "This is a proof-of-concept and will not actually trigger builds."
+ node: built-in
+ project-type: pipeline
+ defaults: global
+ concurrent: true
+ quiet-period: 0
+ block-downstream: false
+ block-upstream: false
+ pipeline-scm:
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-build
+ branches:
+ - main
+ shallow-clone: true
+ submodule:
+ disable: true
+ wipe-workspace: true
+ script-path: ceph-trigger-build/build/Jenkinsfile
+ lightweight-checkout: true
+ do-not-fetch-tags: true
+ properties:
+ - build-discarder:
+ num-to-keep: 500
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ - github:
+ url: https://github.com/ceph/ceph-ci
+
+ triggers:
+ - generic-webhook-trigger:
+ token: ceph-trigger-build
+ token-credential-id: ceph-trigger-build-token
+ print-contrib-var: true
+ header-params:
+ - key: X-GitHub-Event
+ - key: X-GitHub-Hook-ID
+ - key: X-GitHub-Delivery
+ post-content-params:
+ - type: JSONPath
+ key: head_commit_message
+ value: $.head_commit.message
+ - type: JSONPath
+ key: head_commit_id
+ value: $.head_commit.id
+ - type: JSONPath
+ key: ref
+ value: $.ref
+ - type: JSONPath
+ key: pusher
+ value: $.pusher.name
+ # github sends push events for branch deletion, and those events
+ # are missing commit-related fields, so we must special-case
+ # them to prevent failures
+ - type: JSONPath
+ key: is_delete
+ value: $.deleted
+ regex-filter-text: $is_delete $ref
+ regex-filter-expression: "(?i)false refs/heads/.*"
+ cause: "Push to $ref by $pusher"
--- /dev/null
+#!/bin/bash
+set -ex
+env
+WORKDIR=$(mktemp -td tox.XXXXXXXXXX)
+
+# set up variables needed for
+# githubstatus to report back to the github PR
+# if this project was started manually
+github_status_setup
+
+# the following two methods exist in scripts/build_utils.sh
+pkgs=( "tox" "github-status>0.0.3" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+set_centos_python3_version "python3.9"
+install_python_packages $TEMPVENV "pkgs[@]" "pip==22.0.4"
+
+GITHUB_STATUS_STATE="pending" $VENV/github-status create
+
+prune_stale_vagrant_vms $WORKSPACE/../**/tests
+delete_libvirt_vms
+clear_libvirt_networks
+restart_libvirt_services
+update_vagrant_boxes
+
+cd src/ceph-volume/ceph_volume/tests/functional/${DISTRO}/${OBJECTSTORE}/${METHOD}/${SCENARIO}
+
+CEPH_DEV_BRANCH=$ghprbTargetBranch $VENV/tox --workdir=$WORKDIR -vre ${DISTRO}-${OBJECTSTORE}-${METHOD}-${SCENARIO} -- --provider=libvirt
+
+GITHUB_STATUS_STATE="success" $VENV/github-status create
--- /dev/null
+#!/bin/bash
+# There has to be a better way to do this than this script which just looks
+# for every Vagrantfile in scenarios and then just destroys whatever is left.
+
+# the following two methods exist in scripts/build_utils.sh
+pkgs=( "tox" "github-status>0.0.3" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+GITHUB_STATUS_STATE="failure" $VENV/github-status create
+
+cd $WORKSPACE/src/ceph-volume/ceph_volume/tests/functional
+
+# the method exists in scripts/build_utils.sh
+teardown_vagrant_tests $VENV
--- /dev/null
+
+- project:
+ name: ceph-volume-cephadm-prs
+ distro:
+ - centos
+ objectstore:
+ - bluestore
+ method:
+ - lvm
+ - raw
+ scenario:
+ - unencrypted
+ - dmcrypt
+
+ jobs:
+ - 'ceph-volume-prs-{distro}-{objectstore}-{method}-{scenario}'
+
+- job-template:
+ name: 'ceph-volume-prs-{distro}-{objectstore}-{method}-{scenario}'
+ display-name: 'ceph-volume {method}: Pull Request [{distro}-{objectstore}-{scenario}]'
+ node: vagrant&&libvirt&¢os9
+ concurrent: true
+ project-type: freestyle
+ defaults: global
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ properties:
+ - github:
+ url: https://github.com/ceph/ceph
+ - build-discarder:
+ days-to-keep: 30
+ discard-old-builds: true
+
+ parameters:
+ - string:
+ name: sha1
+ description: "A pull request ID, like 'origin/pr/72/head'"
+
+ # this is injected by the ghprb plugin, and is fully optional but may help in manually triggering
+ # a job that can end up updating a PR
+ - string:
+ name: ghprbSourceBranch
+ description: "When manually triggered and the remote PR isn't a branch in the ceph.git repo, this can be specified to determine the actual branch."
+ - string:
+ name: ghprbTargetBranch
+ description: 'Required when manually triggered, the targeted branch needs to be set (e.g. "luminous" or "main")'
+ - string:
+ name: GITHUB_SHA
+ description: "The tip (last commit) in the PR, a sha1 like 7d787849556788961155534039886aedfcdb2a88 (if set, will report status to Github)"
+ - password:
+ name: GITHUB_OAUTH_TOKEN
+ description: "Secret API Token to set status. Only needed when manually triggering a PR test"
+
+
+ triggers:
+ - github-pull-request:
+ cancel-builds-on-update: true
+ allow-whitelist-orgs-as-admins: true
+ org-list:
+ - ceph
+ only-trigger-phrase: true
+ trigger-phrase: '^jenkins test ceph-volume {distro} {objectstore}-{method}-{scenario}|jenkins test ceph-volume all.*|jenkins test ceph-volume {distro} all.*'
+ github-hooks: true
+ permit-all: true
+ auto-close-on-fail: false
+ status-context: "ceph-volume {method} testing {distro}-{objectstore}-{scenario}"
+ started-status: "ceph-volume {method} running {distro}-{objectstore}-{scenario}"
+ success-status: "ceph-volume {method} {distro}-{objectstore}-{scenario} OK"
+ failure-status: "ceph-volume {method} {distro}-{objectstore}-{scenario} failed"
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph.git
+ branches:
+ - ${{sha1}}
+ refspec: +refs/pull/*:refs/remotes/origin/pr/*
+ browser: auto
+ timeout: 20
+ skip-tag: true
+ wipe-workspace: true
+
+ builders:
+ - inject:
+ properties-content: |
+ DISTRO={distro}
+ OBJECTSTORE={objectstore}
+ METHOD={method}
+ SCENARIO={scenario}
+ GITHUB_REPOSITORY="ceph/ceph"
+ GITHUB_STATUS_CONTEXT="ceph-volume testing {distro}-{objectstore}-{method}-{scenario}"
+ GITHUB_STATUS_STARTED="running"
+ GITHUB_STATUS_SUCCESS="OK"
+ GITHUB_STATUS_FAILURE="failed"
+ GITHUB_STATUS_ERROR="completed with errors"
+ - shell:
+ !include-raw-escape:
+ - ../../../scripts/build_utils.sh
+ - ../../build/build
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+
+ publishers:
+ - postbuildscript:
+ builders:
+ - role: SLAVE
+ build-on:
+ - FAILURE
+ - ABORTED
+ build-steps:
+ - shell:
+ !include-raw-escape:
+ - ../../../scripts/build_utils.sh
+ - ../../build/teardown
+
+ - archive:
+ artifacts: 'logs/**'
+ allow-empty: true
+ latest-only: false
--- /dev/null
+#!/bin/bash
+set -ex
+WORKDIR=$(mktemp -td tox.XXXXXXXXXX)
+
+# the following two methods exist in scripts/build_utils.sh
+pkgs=( "tox" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+prune_stale_vagrant_vms $WORKSPACE/../**/tests
+delete_libvirt_vms
+clear_libvirt_networks
+restart_libvirt_services
+update_vagrant_boxes
+
+cd src/ceph-volume/ceph_volume/tests/functional/$SUBCOMMAND
+
+if [[ "$CEPH_BRANCH" == "reef" ]]; then
+ CEPH_ANSIBLE_BRANCH="stable-8.0"
+else
+ CEPH_ANSIBLE_BRANCH="main"
+fi
+
+
+VAGRANT_RELOAD_FLAGS="--debug --no-provision" CEPH_ANSIBLE_BRANCH=$CEPH_ANSIBLE_BRANCH CEPH_DEV_BRANCH=$CEPH_BRANCH $VENV/tox --workdir=$WORKDIR -vre $DISTRO-$OBJECTSTORE-$SCENARIO -- --provider=libvirt
--- /dev/null
+#!/bin/bash
+# There has to be a better way to do this than this script which just looks
+# for every Vagrantfile in scenarios and then just destroys whatever is left.
+
+
+cd $WORKSPACE/src/ceph-volume/ceph_volume/tests/functional
+
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+# the method exists in scripts/build_utils.sh
+teardown_vagrant_tests $VENV
--- /dev/null
+- project:
+ name: ceph-volume-nightly-lvm
+ distro:
+ - centos
+ objectstore:
+ - bluestore
+ method:
+ - lvm
+ - raw
+ scenario:
+ - unencrypted
+ - dmcrypt
+ ceph_branch:
+ - main
+ - tentacle
+ - squid
+ - reef
+
+ jobs:
+ - 'ceph-volume-nightly-{ceph_branch}-{distro}-{objectstore}-{method}-{scenario}'
+
+- job-template:
+ name: 'ceph-volume-nightly-{ceph_branch}-{distro}-{objectstore}-{method}-{scenario}'
+ display-name: 'ceph-volume {ceph_branch}: [{distro}-{objectstore}-{method}-{scenario}]'
+ node: vagrant&&libvirt&¢os9
+ concurrent: true
+ project-type: freestyle
+ defaults: global
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ properties:
+ - github:
+ url: https://github.com/ceph/ceph
+ - build-discarder:
+ days-to-keep: 30
+ discard-old-builds: true
+
+ triggers:
+ - timed: '@daily'
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph.git
+ branches:
+ - '{ceph_branch}'
+ browser: auto
+ timeout: 20
+ skip-tag: true
+ wipe-workspace: true
+
+ builders:
+ - inject:
+ properties-content: |
+ DISTRO={distro}
+ OBJECTSTORE={objectstore}
+ METHOD={method}
+ SCENARIO={scenario}
+ CEPH_BRANCH={ceph_branch}
+ - shell:
+ !include-raw-escape:
+ - ../../../scripts/build_utils.sh
+ - ../../build/build
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+
+ publishers:
+ - postbuildscript:
+ builders:
+ - role: SLAVE
+ build-on:
+ - FAILURE
+ - ABORTED
+ build-steps:
+ - shell:
+ !include-raw-escape:
+ - ../../../scripts/build_utils.sh
+ - ../../build/teardown
+
+ - archive:
+ artifacts: 'logs/**'
+ allow-empty: true
+ latest-only: false
+
+ - email:
+ recipients: gabrioux@ibm.com
--- /dev/null
+#!/bin/bash
+set -ex
+WORKDIR=$(mktemp -td tox.XXXXXXXXXX)
+
+# the following two methods exist in scripts/build_utils.sh
+pkgs=( "tox==4.2.8" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+
+delete_libvirt_vms
+clear_libvirt_networks
+restart_libvirt_services
+update_vagrant_boxes
+
+cd src/ceph-volume/ceph_volume/tests/functional/${DISTRO}/${OBJECTSTORE}/${METHOD}/${SCENARIO}
+
+CEPH_DEV_BRANCH=$CEPH_BRANCH CEPH_DEV_SHA1=$CEPH_SHA1 $VENV/tox --workdir=$WORKDIR -vre ${DISTRO}-${OBJECTSTORE}-${METHOD}-${SCENARIO} -- --provider=libvirt
--- /dev/null
+#!/bin/bash
+
+cd $WORKSPACE/src/ceph-volume/ceph_volume/tests/functional
+
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+# the method exists in scripts/build_utils.sh
+teardown_vagrant_tests $VENV
--- /dev/null
+
+- job:
+ name: 'ceph-volume-scenario'
+ node: vagrant&&libvirt&&centos9
+ concurrent: true
+ defaults: global
+ display-name: 'ceph-volume: individual scenario testing'
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ retry-count: 3
+ properties:
+ - build-discarder:
+ days-to-keep: 15
+ artifact-days-to-keep: 15
+ - github:
+ url: https://github.com/ceph/ceph
+
+ parameters:
+ - string:
+ name: DISTRO
+ description: "The host OS to use."
+ default: "centos"
+ - string:
+ name: METHOD
+ description: "The subcommand in ceph-volume we are testing. (lvm or raw)"
+ default: "lvm"
+ - string:
+ name: SCENARIO
+ description: "unencrypted or dmcrypt OSDs"
+ default: "unencrypted"
+ - string:
+ name: CEPH_BRANCH
+ description: "The ceph branch to test against"
+ default: "main"
+ - string:
+ name: CEPH_SHA1
+ description: "The ceph sha1 to test against"
+ default: "latest"
+ - string:
+ name: CEPH_REPO_URL
+ description: "The full https url to clone from"
+ default: "https://github.com/ceph/ceph.git"
+
+ scm:
+ - git:
+ url: $CEPH_REPO_URL
+ branches:
+ - $CEPH_BRANCH
+ refspec: +refs/pull/*:refs/remotes/origin/pr/*
+ browser: auto
+ timeout: 20
+ skip-tag: true
+ wipe-workspace: true
+
+ builders:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/build
+
+ publishers:
+ - postbuildscript:
+ builders:
+ - role: SLAVE
+ build-on:
+ - FAILURE
+ - ABORTED
+ build-steps:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/teardown
+
+ - archive:
+ artifacts: 'logs/**'
+ allow-empty: true
+ latest-only: false
--- /dev/null
+- job:
+ name: "ceph-volume-test"
+ description: 'This job will trigger all ceph-volume functional tests given a branch and sha1.'
+ project-type: multijob
+ defaults: global
+ display-name: 'ceph-volume functional test suite'
+ block-downstream: false
+ block-upstream: false
+ concurrent: true
+ properties:
+ - build-discarder:
+ days-to-keep: 20
+ artifact-num-to-keep: 20
+ - github:
+ url: https://github.com/ceph/ceph
+
+ parameters:
+ - string:
+ name: CEPH_BRANCH
+ description: "The ceph branch to test against"
+ default: "main"
+ - string:
+ name: CEPH_SHA1
+ description: "The ceph sha1 to test against"
+ default: "latest"
+ - string:
+ name: CEPH_REPO_URL
+ description: "The full https url to clone from"
+ default: "https://github.com/ceph/ceph.git"
+
+ builders:
+ - multijob:
+ name: 'testing ceph-volume'
+ condition: SUCCESSFUL
+ projects:
+ - name: ceph-volume-scenario
+ current-parameters: true
+ predefined-parameters: |
+ DISTRO=centos
+ OBJECTSTORE=bluestore
+ METHOD=lvm
+ SCENARIO=unencrypted
+ - name: ceph-volume-scenario
+ current-parameters: true
+ predefined-parameters: |
+ DISTRO=centos
+ OBJECTSTORE=bluestore
+ METHOD=lvm
+ SCENARIO=dmcrypt
+ - name: ceph-volume-scenario
+ current-parameters: true
+ predefined-parameters: |
+ DISTRO=centos
+ OBJECTSTORE=bluestore
+ METHOD=raw
+ SCENARIO=unencrypted
+ - name: ceph-volume-scenario
+ current-parameters: true
+ predefined-parameters: |
+ DISTRO=centos
+ OBJECTSTORE=bluestore
+ METHOD=raw
+ SCENARIO=dmcrypt
--- /dev/null
+#!/bin/bash
+set -ex
+
+# set up variables needed for
+# githubstatus to report back to the github PR
+# if this project was started manually
+github_status_setup
+
+# the following two methods exist in scripts/build_utils.sh
+pkgs=( "tox" "github-status>0.0.3")
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+cd src/ceph-volume
+
+GITHUB_STATUS_STATE="pending" $VENV/github-status create
+
+$VENV/tox -vr
+
+GITHUB_STATUS_STATE="success" $VENV/github-status create
--- /dev/null
+#!/bin/bash
+# There has to be a better way to do this: the script just looks for every
+# Vagrantfile in the scenarios and destroys whatever is left.
+
+# the following two methods exist in scripts/build_utils.sh
+pkgs=( "github-status>0.0.3" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+GITHUB_STATUS_STATE="failure" $VENV/github-status create
--- /dev/null
+- job:
+ name: ceph-volume-unit-tests
+ display-name: 'ceph-volume: Pull Request unit tests'
+ node: small && centos9
+ project-type: freestyle
+ defaults: global
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ properties:
+ - github:
+ url: https://github.com/ceph/ceph
+ - build-discarder:
+ days-to-keep: 14
+ discard-old-builds: true
+
+ parameters:
+ - string:
+ name: sha1
+ description: "A pull request ID, like 'origin/pr/72/head'"
+
+ # this is injected by the ghprb plugin, and is fully optional but may help in manually triggering
+ # a job that can end up updating a PR
+ - string:
+ name: ghprbSourceBranch
+ description: "When manually triggered, and the remote PR isn't a branch in the ceph.git repo, this can be specified to determine the actual branch."
+ - string:
+ name: ghprbTargetBranch
+ description: 'Required when manually triggered, the targeted branch needs to be set (e.g. "luminous" or "main")'
+ - string:
+ name: GITHUB_SHA
+ description: "The tip (last commit) in the PR, a sha1 like 7d787849556788961155534039886aedfcdb2a88 (if set, will report status to GitHub)"
+ - password:
+ name: GITHUB_OAUTH_TOKEN
+ description: "Secret API Token to set status. Only needed when manually triggering a PR test"
+
+ triggers:
+ - github-pull-request:
+ cancel-builds-on-update: true
+ only-trigger-phrase: true
+ trigger-phrase: 'jenkins ceph-volume unit tests'
+ github-hooks: true
+ permit-all: true
+ auto-close-on-fail: false
+ status-context: "ceph-volume tox testing"
+ started-status: "ceph-volume tox running"
+ success-status: "ceph-volume tox OK"
+ failure-status: "ceph-volume tox failed"
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph
+ browser: auto
+ branches:
+ - ${{sha1}}
+ refspec: +refs/pull/*:refs/remotes/origin/pr/*
+ skip-tag: true
+ timeout: 20
+ wipe-workspace: true
+
+ builders:
+ - inject:
+ properties-content: |
+ GITHUB_REPOSITORY="ceph/ceph"
+ GITHUB_STATUS_CONTEXT="ceph-volume unit tests"
+ GITHUB_STATUS_STARTED="running"
+ GITHUB_STATUS_SUCCESS="OK"
+ GITHUB_STATUS_FAILURE="failed"
+ GITHUB_STATUS_ERROR="completed with errors"
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/build
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+
+ publishers:
+ - postbuildscript:
+ builders:
+ - role: SLAVE
+ build-on:
+ - FAILURE
+ - ABORTED
+ build-steps:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/teardown
--- /dev/null
+#!/bin/bash
+set -ex
+
+env
+
+BRANCH=$(echo $GIT_BRANCH | sed 's:.*/::')
+
+set +e
+export NVM_DIR="$HOME/.nvm"
+[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
+[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion"
+set -e
+
+echo "Using node version $(node -v)"
+
+# https://docs.npmjs.com/cli/v7/commands/npm-ci
+npm ci
+
+npm run build:development
+
+if [ "$BRANCH" = "main" ]; then
+ echo "branch must not be named 'main', exiting"
+ exit 1
+fi
+
+if [ ! -d /opt/www/${BRANCH} ]; then
+ mkdir -p /opt/www/${BRANCH}
+fi
+
+rsync -av --delete-after dist/ /opt/www/${BRANCH}/
+
+echo "===== Begin pruning old builds ====="
+old_builds=$(find /opt/www/ -maxdepth 1 -not -path "/opt/www/main" -type d -mtime +90 | sed 's:.*/::')
+for old_build in $old_builds; do
+ echo $old_build
+ if [ ! -z "$old_build" ]; then # So we don't accidentally wipe out /opt/www somehow
+ rm -rf "/opt/www/$old_build"
+ fi
+done
+echo "===== Done pruning old builds ====="
+
+# This just makes the last `echo` line not repeat
+{ set +x; } 2>/dev/null
+
+echo "Success! This site is available at https://${BRANCH}.ceph.io."
--- /dev/null
+- job:
+ name: ceph-website-prs
+ description: This job builds PRs from github.com/ceph/ceph.io and serves them at $branch.ceph.io.
+ node: www
+ project-type: freestyle
+ defaults: global
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ properties:
+ - build-discarder:
+ days-to-keep: -1
+ num-to-keep: 20
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ - github:
+ url: https://github.com/ceph/ceph.io
+
+ parameters:
+ - string:
+ name: sha1
+ description: "A pull request ID or branch, like 'origin/pr/72/head' or wip-blogpost1"
+
+ triggers:
+ - github-pull-request:
+ org-list:
+ - ceph
+ cancel-builds-on-update: true
+ trigger-phrase: 'jenkins test.*|jenkins retest.*'
+ only-trigger-phrase: false
+ github-hooks: true
+ permit-all: false
+ auto-close-on-fail: false
+ status-context: "Compiling site"
+ started-status: "Compiling site"
+ success-status: "Site compiled successfully!"
+ failure-status: "Site compilation failed"
+ success-comment: "Site built/updated successfully! https://${{GIT_BRANCH}}.ceph.io"
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph.io
+ branches:
+ - origin/pr/${{ghprbPullId}}/merge
+ refspec: +refs/pull/${{ghprbPullId}}/*:refs/remotes/origin/pr/${{ghprbPullId}}/*
+ browser: auto
+ skip-tag: true
+ timeout: 20
+ wipe-workspace: true
+
+ builders:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/ceph-website/install-deps.sh
+ - ../../build/build
--- /dev/null
+#!/bin/bash
+set -ex
+
+env
+
+set +e
+export NVM_DIR="$HOME/.nvm"
+[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
+[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion"
+set -e
+
+echo "Using node version $(node -v)"
+
+npm install
+
+npm run build:production
+
+if [ ! -d /opt/www/main ]; then
+ mkdir -p /opt/www/main
+fi
+
+rsync -av --delete-after dist/ /opt/www/main/
--- /dev/null
+- job:
+ name: ceph-website
+ description: This job builds the main branch of https://github.com/ceph/ceph.io and keeps the website up to date
+ node: www
+ project-type: freestyle
+ defaults: global
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ properties:
+ - build-discarder:
+ days-to-keep: -1
+ num-to-keep: 20
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ - github:
+ url: https://github.com/ceph/ceph.io
+
+ triggers:
+ - github
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph.io
+ branches:
+ - main
+ browser: auto
+ skip-tag: true
+ timeout: 20
+ wipe-workspace: true
+
+ builders:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/ceph-website/install-deps.sh
+ - ../../build/build
--- /dev/null
+# Ceph Windows Image Build
+
+The Windows image can be generated, fully unattended, via the build script:
+
+```bash
+./build/build
+```
+
+It accepts the following environment variables:
+
+* `SSH_PRIVATE_KEY` (required) - The SSH private key path that will be authorized to access the VMs using the new image.
+* `WINDOWS_SERVER_2019_ISO_URL` (optional) - URL to the Windows Server 2019 ISO image. It defaults to the official Microsoft evaluation ISO.
+* `VIRTIO_WIN_ISO_URL` (optional) - URL to the virtio-win guest tools ISO image. It defaults to the stable ISO from the [official docs](https://github.com/virtio-win/virtio-win-pkg-scripts/blob/master/README.md#downloads).
+
+The build script assumes that the host is a KVM enabled machine, and it will do the following:
+
+1. Download the ISOs from the URLs specified in the environment variables (or the defaults if not given).
+
+2. Start a libvirt virtual machine and install Windows Server 2019 from the ISO.
+
+ * The process is fully unattended, via the `autounattend.xml` file, which provides the input needed to install the operating system.
+
+ * The virtio drivers and the guest tools are installed from the ISO.
+
+ * SSH is configured and the given SSH private key is authorized.
+
+3. Install the latest Windows updates.
+
+4. Run the `setup.ps1` script to prepare the CI environment.
+
+5. Generalize the VM image via `Sysprep`.
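+
+As a rough sketch (not code from this repo), the optional ISO variables fall
+back to their defaults via standard `${VAR:-default}` shell expansion, so they
+can simply be left unset; the placeholder URL below is illustrative only:
+
+```bash
+# Resolve an optional input, falling back to a default when unset
+# (example.com is a placeholder, not the real Microsoft download URL).
+ISO_URL="${WINDOWS_SERVER_2019_ISO_URL:-https://example.com/ws2019.iso}"
+echo "Using ISO: $ISO_URL"
+```
+
+A full run on a KVM-enabled host would then be along the lines of
+`SSH_PRIVATE_KEY=~/.ssh/id_rsa ./build/build`.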
--- /dev/null
+<?xml version="1.0" encoding="utf-8"?>
+<unattend xmlns="urn:schemas-microsoft-com:unattend">
+
+ <settings pass="windowsPE">
+
+ <component name="Microsoft-Windows-International-Core-WinPE" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS" xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
+ <SetupUILanguage>
+ <UILanguage>en-US</UILanguage>
+ </SetupUILanguage>
+ <SystemLocale>en-US</SystemLocale>
+ <UILanguage>en-US</UILanguage>
+ <UserLocale>en-US</UserLocale>
+ </component>
+
+ <component name="Microsoft-Windows-Setup" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS" xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
+
+ <DiskConfiguration>
+ <WillShowUI>OnError</WillShowUI>
+ <Disk wcm:action="add">
+ <CreatePartitions>
+ <CreatePartition wcm:action="add">
+ <Order>1</Order>
+ <Size>100</Size>
+ <Type>Primary</Type>
+ </CreatePartition>
+ <CreatePartition wcm:action="add">
+ <Order>2</Order>
+ <Extend>true</Extend>
+ <Type>Primary</Type>
+ </CreatePartition>
+ </CreatePartitions>
+ <ModifyPartitions>
+ <ModifyPartition wcm:action="add">
+ <Active>true</Active>
+ <Label>Boot</Label>
+ <Format>NTFS</Format>
+ <Order>1</Order>
+ <PartitionID>1</PartitionID>
+ </ModifyPartition>
+ <ModifyPartition wcm:action="add">
+ <Format>NTFS</Format>
+ <Order>2</Order>
+ <PartitionID>2</PartitionID>
+ <Label>System</Label>
+ </ModifyPartition>
+ </ModifyPartitions>
+ <DiskID>0</DiskID>
+ <WillWipeDisk>true</WillWipeDisk>
+ </Disk>
+ </DiskConfiguration>
+
+ <ImageInstall>
+ <OSImage>
+ <InstallTo>
+ <PartitionID>2</PartitionID>
+ <DiskID>0</DiskID>
+ </InstallTo>
+ <InstallToAvailablePartition>false</InstallToAvailablePartition>
+ <WillShowUI>OnError</WillShowUI>
+ <InstallFrom>
+ <MetaData wcm:action="add">
+ <Key>/IMAGE/NAME</Key>
+ <Value>Windows Server 2019 SERVERSTANDARDCORE</Value>
+ </MetaData>
+ </InstallFrom>
+ </OSImage>
+ </ImageInstall>
+
+ <UserData>
+ <!-- Product Key from http://technet.microsoft.com/en-us/library/jj612867.aspx -->
+ <ProductKey>
+ <!-- Do not uncomment the Key element if you are using trial ISOs -->
+ <!-- You must uncomment the Key element (and optionally insert your own key) if you are using retail or volume license ISOs -->
+ <!-- <Key></Key> -->
+ <WillShowUI>OnError</WillShowUI>
+ </ProductKey>
+ <AcceptEula>true</AcceptEula>
+ </UserData>
+
+ </component>
+
+ <component name="Microsoft-Windows-PnpCustomizationsWinPE" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS" xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
+ <DriverPaths>
+ <PathAndCredentials wcm:action="add" wcm:keyValue="1">
+ <Path>E:\NetKVM\2k19\amd64\</Path>
+ </PathAndCredentials>
+ <PathAndCredentials wcm:action="add" wcm:keyValue="2">
+ <Path>E:\viostor\2k19\amd64\</Path>
+ </PathAndCredentials>
+ <PathAndCredentials wcm:action="add" wcm:keyValue="3">
+ <Path>E:\vioserial\2k19\amd64\</Path>
+ </PathAndCredentials>
+ </DriverPaths>
+ </component>
+
+ </settings>
+
+ <settings pass="oobeSystem">
+ <component name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS" xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
+
+ <VisualEffects>
+ <FontSmoothing>ClearType</FontSmoothing>
+ </VisualEffects>
+
+ <UserAccounts>
+ <!--
+ Password to be used only during initial provisioning.
+ Must be reset with final Sysprep.
+ -->
+ <AdministratorPassword>
+ <Value>Passw0rd</Value>
+ <PlainText>true</PlainText>
+ </AdministratorPassword>
+ </UserAccounts>
+
+ <AutoLogon>
+ <Password>
+ <Value>Passw0rd</Value>
+ <PlainText>true</PlainText>
+ </Password>
+ <Enabled>true</Enabled>
+ <Username>Administrator</Username>
+ </AutoLogon>
+
+ <ComputerName>*</ComputerName>
+
+ <OOBE>
+ <NetworkLocation>Work</NetworkLocation>
+ <HideEULAPage>true</HideEULAPage>
+ <ProtectYourPC>3</ProtectYourPC>
+ <SkipMachineOOBE>true</SkipMachineOOBE>
+ <SkipUserOOBE>true</SkipUserOOBE>
+ </OOBE>
+
+ <FirstLogonCommands>
+
+ <SynchronousCommand wcm:action="add">
+ <CommandLine>%SystemRoot%\System32\WindowsPowerShell\v1.0\powershell.exe -NoLogo -NonInteractive -ExecutionPolicy RemoteSigned -File A:\first-logon.ps1</CommandLine>
+ <Order>1</Order>
+ </SynchronousCommand>
+
+ </FirstLogonCommands>
+
+ </component>
+
+ </settings>
+
+ <settings pass="specialize">
+
+ <component name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS" xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
+ <TimeZone>UTC</TimeZone>
+ <ComputerName>*</ComputerName>
+ </component>
+
+ </settings>
+
+</unattend>
--- /dev/null
+#!/usr/bin/env bash
+set -o errexit
+set -o pipefail
+
+if [[ -z $SSH_PRIVATE_KEY ]]; then
+ echo "ERROR: The SSH private key secret is not set"
+ exit 1
+fi
+
+WINDOWS_SERVER_2019_ISO_URL=${WINDOWS_SERVER_2019_ISO_URL:-"https://software-download.microsoft.com/download/pr/17763.737.190906-2324.rs5_release_svc_refresh_SERVER_EVAL_x64FRE_en-us_1.iso"}
+VIRTIO_WIN_ISO_URL=${VIRTIO_WIN_ISO_URL:-"https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso"}
+
+BUILD_DIR="$(cd $(dirname "${BASH_SOURCE[0]}") && pwd)"
+
+source ${BUILD_DIR}/../../scripts/build_utils.sh
+
+
+function restart_windows_vm() {
+ echo "Restarting Windows VM"
+ ssh_exec "shutdown.exe /r /t 0 & sc.exe stop sshd"
+ SECONDS=0
+ TIMEOUT=${1:-600}
+ while true; do
+ if [[ $SECONDS -gt $TIMEOUT ]]; then
+ echo "Timeout waiting for the VM to start"
+ exit 1
+ fi
+ ssh_exec hostname || {
+ echo "Cannot execute SSH commands yet"
+ sleep 10
+ continue
+ }
+ break
+ done
+ echo "Windows VM restarted"
+}
+
+
+if ! which virt-install >/dev/null; then
+ sudo apt-get update -o Acquire::Languages=none -o Acquire::Translation=none || true
+ sudo apt-get install -y virtinst
+fi
+if ! which xmllint >/dev/null; then
+ sudo apt-get update -o Acquire::Languages=none -o Acquire::Translation=none || true
+ sudo apt-get install -y libxml2-utils
+fi
+if ! which jq >/dev/null; then
+ sudo apt-get update -o Acquire::Languages=none -o Acquire::Translation=none || true
+ sudo apt-get install -y jq
+fi
+
+if ! sudo virsh net-info default &>/dev/null; then
+ cat << EOF > $WORKSPACE/default-net.xml
+<network>
+ <name>default</name>
+ <bridge name="virbr0"/>
+ <forward mode="nat"/>
+ <ip address="192.168.122.1" netmask="255.255.255.0">
+ <dhcp>
+ <range start="192.168.122.2" end="192.168.122.254"/>
+ </dhcp>
+ </ip>
+</network>
+EOF
+ sudo virsh net-define $WORKSPACE/default-net.xml
+ sudo virsh net-start default
+ sudo virsh net-autostart default
+ rm $WORKSPACE/default-net.xml
+fi
+
+echo "Downloading virtio-win ISO"
+retrycmd_if_failure 5 0 30m curl -C - -L $VIRTIO_WIN_ISO_URL -o ${BUILD_DIR}/virtio-win.iso
+
+echo "Downloading Windows Server 2019 ISO"
+retrycmd_if_failure 5 0 60m curl -C - -L $WINDOWS_SERVER_2019_ISO_URL -o ${BUILD_DIR}/windows-server-2019.iso
+
+echo "Creating floppy image"
+qemu-img create -f raw ${BUILD_DIR}/floppy.img 1440k
+mkfs.msdos -s 1 ${BUILD_DIR}/floppy.img
+mkdir ${BUILD_DIR}/floppy
+sudo mount ${BUILD_DIR}/floppy.img ${BUILD_DIR}/floppy
+ssh-keygen -y -f $SSH_PRIVATE_KEY > ${BUILD_DIR}/id_rsa.pub
+sudo cp \
+ ${BUILD_DIR}/autounattend.xml \
+ ${BUILD_DIR}/first-logon.ps1 \
+ ${BUILD_DIR}/utils.ps1 \
+ ${BUILD_DIR}/id_rsa.pub \
+ ${BUILD_DIR}/floppy/
+sudo umount ${BUILD_DIR}/floppy
+rmdir ${BUILD_DIR}/floppy
+
+echo "Starting libvirt VM"
+qemu-img create -f qcow2 ${BUILD_DIR}/ceph-win-ltsc2019-ci-image.qcow2 50G
+VM_NAME="ceph-win-ltsc2019"
+sudo virt-install \
+ --name $VM_NAME \
+ --os-variant win2k19 \
+ --boot hd,cdrom \
+ --virt-type kvm \
+ --graphics spice \
+ --cpu host \
+ --vcpus 4 \
+ --memory 4096 \
+ --disk ${BUILD_DIR}/floppy.img,device=floppy \
+ --disk ${BUILD_DIR}/ceph-win-ltsc2019-ci-image.qcow2,bus=virtio \
+ --disk ${BUILD_DIR}/windows-server-2019.iso,device=cdrom \
+ --disk ${BUILD_DIR}/virtio-win.iso,device=cdrom \
+ --network network=default,model=virtio \
+ --controller type=virtio-serial \
+ --channel unix,target_type=virtio,name=org.qemu.guest_agent.0 \
+ --noautoconsole
+
+# Find the VM NIC MAC address
+sudo virsh dumpxml $VM_NAME > $WORKSPACE/libvirt_vm.xml
+VM_NIC_MAC_ADDRESS=$(xmllint --xpath 'string(/domain/devices/interface/mac/@address)' $WORKSPACE/libvirt_vm.xml)
+rm $WORKSPACE/libvirt_vm.xml
+
+export SSH_USER="administrator"
+export SSH_KNOWN_HOSTS_FILE="${BUILD_DIR}/known_hosts"
+export SSH_KEY="$SSH_PRIVATE_KEY"
+
+SECONDS=0
+TIMEOUT=1200
+SLEEP_SECS=30
+while true; do
+ if [[ $SECONDS -gt $TIMEOUT ]]; then
+ echo "Timeout waiting for the VM to start"
+ exit 1
+ fi
+ # Get the VM NIC IP address from the "default" virsh network
+ VM_IP=$(sudo virsh qemu-agent-command $VM_NAME '{"execute":"guest-network-get-interfaces"}' | jq -r ".return[] | select(.\"hardware-address\"==\"${VM_NIC_MAC_ADDRESS}\") | .\"ip-addresses\"[] | select(.\"ip-address\" | startswith(\"192.168.122.\")) | .\"ip-address\"") || {
+ echo "Retrying in $SLEEP_SECS seconds"
+ sleep $SLEEP_SECS
+ continue
+ }
+ if [[ -z $VM_IP ]]; then
+ echo "Cannot find the VM IP address. Retrying in $SLEEP_SECS seconds"
+ sleep $SLEEP_SECS
+ continue
+ fi
+ ssh-keyscan -H $VM_IP &> $SSH_KNOWN_HOSTS_FILE || {
+ echo "SSH is not reachable yet"
+ sleep $SLEEP_SECS
+ continue
+ }
+ SSH_ADDRESS=$VM_IP ssh_exec hostname || {
+ echo "Cannot execute SSH commands yet"
+ sleep $SLEEP_SECS
+ continue
+ }
+ break
+done
+export SSH_ADDRESS=$VM_IP
+
+scp_upload ${BUILD_DIR}/install-windows-updates.ps1 /install-windows-updates.ps1
+SSH_TIMEOUT=1h ssh_exec powershell.exe -File /install-windows-updates.ps1
+ssh_exec powershell.exe Remove-Item -Force /install-windows-updates.ps1
+
+restart_windows_vm 1800
+
+scp_upload ${BUILD_DIR}/utils.ps1 /utils.ps1
+scp_upload ${BUILD_DIR}/setup.ps1 /setup.ps1
+SSH_TIMEOUT=1h ssh_exec powershell.exe -File /setup.ps1
+ssh_exec powershell.exe Remove-Item -Force /setup.ps1, /utils.ps1
+
+restart_windows_vm
+
+sudo virsh qemu-agent-command $VM_NAME \
+ '{"execute":"guest-exec", "arguments":{"path":"ipconfig.exe", "arg":["/release"]}}'
+sudo virsh qemu-agent-command $VM_NAME \
+ '{"execute":"guest-exec", "arguments":{"path":"C:\\Windows\\System32\\Sysprep\\Sysprep.exe", "arg":["/generalize", "/oobe", "/shutdown", "/quiet"]}}'
+
+SECONDS=0
+TIMEOUT=600
+while true; do
+ if [[ $SECONDS -gt $TIMEOUT ]]; then
+ echo "Timeout waiting for the sysprep to shut down the VM"
+ exit 1
+ fi
+ if sudo virsh list | grep -q " $VM_NAME "; then
+ echo "Sysprep is still running"
+ sleep 10
+ continue
+ fi
+ echo "Sysprep finished"
+ break
+done
+
+sudo mv "${BUILD_DIR}/ceph-win-ltsc2019-ci-image.qcow2" ${WORKSPACE}/
+
+${BUILD_DIR}/cleanup
+
+echo "Image successfully built at: ${WORKSPACE}/ceph-win-ltsc2019-ci-image.qcow2"
+
+ssh-keyscan -H drop.front.sepia.ceph.com &>> $SSH_KNOWN_HOSTS_FILE
+
+export SSH_USER="${FILEDUMP_USER}"
+export SSH_KEY="${FILEDUMP_SSH_KEY}"
+export SSH_ADDRESS="drop.front.sepia.ceph.com"
+for FILE in ceph-win-ltsc2019-ci-image.qcow2 ceph-win-ltsc2019-ci-image.qcow2.sha256; do
+ FILE_PATH="/ceph/filedump.ceph.com/windows/${FILE}"
+ if ssh_exec test -f ${FILE_PATH}; then
+ echo "Backing up existing image file https://filedump.ceph.com/windows/${FILE}"
+ ssh_exec sudo mv ${FILE_PATH} ${FILE_PATH}.old
+ fi
+done
+
+echo "Uploading the image to https://filedump.ceph.com/windows/"
+cd ${WORKSPACE}
+sha256sum ceph-win-ltsc2019-ci-image.qcow2 > ceph-win-ltsc2019-ci-image.qcow2.sha256
+rsync -azvP ceph-win-ltsc2019-ci-image.qcow2 ceph-win-ltsc2019-ci-image.qcow2.sha256 --rsync-path="sudo rsync" -e "ssh -i ${FILEDUMP_SSH_KEY} -o UserKnownHostsFile=${SSH_KNOWN_HOSTS_FILE}" ${FILEDUMP_USER}@drop.front.sepia.ceph.com:/ceph/filedump.ceph.com/windows/
--- /dev/null
+#!/usr/bin/env bash
+set -o errexit
+set -o pipefail
+
+BUILD_DIR="$(cd $(dirname "${BASH_SOURCE[0]}") && pwd)"
+
+source ${BUILD_DIR}/../../scripts/build_utils.sh
+
+if mountpoint -q -- "${BUILD_DIR}/floppy"; then
+ sudo umount ${BUILD_DIR}/floppy
+fi
+
+delete_libvirt_vms
+clear_libvirt_networks
+
+sudo rm -rf "${BUILD_DIR}/virtio-win.iso" "${BUILD_DIR}/windows-server-2019.iso" \
+ "${BUILD_DIR}/floppy" "${BUILD_DIR}/floppy.img" "${BUILD_DIR}/ceph-win-ltsc2019-ci-image.qcow2" \
+ "${BUILD_DIR}/known_hosts" "${BUILD_DIR}/id_rsa.pub"
--- /dev/null
+$ErrorActionPreference = "Stop"
+
+. "${PSScriptRoot}\utils.ps1"
+
+$VIRTIO_WIN_PATH = "E:\"
+
+# Install QEMU guest agent
+Write-Output "Installing QEMU guest agent"
+$p = Start-Process -FilePath "msiexec.exe" -ArgumentList @("/i", "${VIRTIO_WIN_PATH}\guest-agent\qemu-ga-x86_64.msi", "/qn") -NoNewWindow -PassThru -Wait
+if($p.ExitCode) {
+ Throw "The QEMU guest agent installation failed. Exit code: $($p.ExitCode)"
+}
+Write-Output "Successfully installed QEMU guest agent"
+
+# Install OpenSSH server
+Start-ExecuteWithRetry {
+ Get-WindowsCapability -Online -Name OpenSSH* | Add-WindowsCapability -Online
+}
+Set-Service -Name "sshd" -StartupType Automatic
+Stop-Service -Name "sshd"
+
+# Create SSH firewall rule
+New-NetFirewallRule -Name "sshd" -DisplayName 'OpenSSH Server (sshd)' -Enabled True -Direction Inbound -Protocol TCP -Action Allow -LocalPort 22
+
+# Authorize the SSH key
+$authorizedKeysFile = Join-Path $env:ProgramData "ssh\administrators_authorized_keys"
+New-Item -ItemType File -Path $authorizedKeysFile -Force
+Set-Content -Path $authorizedKeysFile -Value (Get-Content "${PSScriptRoot}\id_rsa.pub") -Encoding ascii
+$acl = Get-Acl $authorizedKeysFile
+$acl.SetAccessRuleProtection($true, $false)
+$administratorsRule = New-Object system.security.accesscontrol.filesystemaccessrule("Administrators", "FullControl", "Allow")
+$systemRule = New-Object system.security.accesscontrol.filesystemaccessrule("SYSTEM", "FullControl", "Allow")
+$acl.SetAccessRule($administratorsRule)
+$acl.SetAccessRule($systemRule)
+$acl | Set-Acl
+
+# Reboot the machine to complete first logon process
+Restart-Computer -Force -Confirm:$false
--- /dev/null
+$ErrorActionPreference = "Stop"
+$ProgressPreference = "SilentlyContinue"
+
+Write-Output "Installing PSWindowsUpdate PowerShell module"
+Install-PackageProvider -Name "NuGet" -Force -Confirm:$false
+Set-PSRepository -Name "PSGallery" -InstallationPolicy Trusted
+Install-Module -Name "PSWindowsUpdate" -Force -Confirm:$false
+
+Write-Output "Installing latest Windows updates"
+$updateScript = {
+ Import-Module "PSWindowsUpdate"
+ Install-WindowsUpdate -AcceptAll -IgnoreReboot | Out-File -FilePath "${env:SystemDrive}\PSWindowsUpdate.log" -Encoding ascii
+}
+Invoke-WUJob -Script $updateScript -Confirm:$false -RunNow
+while($true) {
+ $task = Get-ScheduledTask -TaskName "PSWindowsUpdate"
+ if($task.State -eq "Ready") {
+ break
+ }
+ Start-Sleep -Seconds 10
+}
+Get-Content "${env:SystemDrive}\PSWindowsUpdate.log"
+Remove-Item -Force -Path "${env:SystemDrive}\PSWindowsUpdate.log"
+Unregister-ScheduledTask -TaskName "PSWindowsUpdate" -Confirm:$false
+Write-Output "Windows updates successfully installed"
--- /dev/null
+$ErrorActionPreference = "Stop"
+$ProgressPreference = "SilentlyContinue"
+
+. "${PSScriptRoot}\utils.ps1"
+
+$VS_2019_BUILD_TOOLS_URL = "https://aka.ms/vs/16/release/vs_buildtools.exe"
+$WDK_URL = "https://download.microsoft.com/download/7/d/6/7d602355-8ae9-414c-ae36-109ece2aade6/wdk/wdksetup.exe" # Windows 11 WDK (22000.1). It can be used to develop drivers for previous OS releases.
+$PYTHON3_URL = "https://www.python.org/ftp/python/3.11.0/python-3.11.0-amd64.exe"
+$FIO_URL = "https://bsdio.com/fio/releases/fio-3.27-x64.msi"
+$DOKANY_URL = "https://github.com/dokan-dev/dokany/releases/download/v2.0.6.1000/Dokan_x64.msi"
+
+$WNBD_GIT_REPO = "https://github.com/ceph/wnbd.git"
+$WNBD_GIT_BRANCH = "main"
+
+
+function Get-WindowsBuildInfo {
+ $p = Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion"
+ $table = New-Object System.Data.DataTable
+ $table.Columns.AddRange(@("Release", "Version", "Build"))
+ $table.Rows.Add($p.ProductName, $p.ReleaseId, "$($p.CurrentBuild).$($p.UBR)") | Out-Null
+ return $table
+}
+
+function Get-GitHubReleaseAssets {
+ Param(
+ [Parameter(Mandatory=$true)]
+ [String]$Repository,
+ [String]$Version="latest"
+ )
+ $releasesUrl = "https://api.github.com/repos/${Repository}/releases"
+ if($Version -eq "latest") {
+ $release = Invoke-CommandLine "curl.exe" "-s ${releasesUrl}/latest" | ConvertFrom-Json
+ } else {
+ $releases = Invoke-CommandLine "curl.exe" "-s ${releasesUrl}" | ConvertFrom-Json
+ $release = $releases | Where-Object { $_.tag_name -eq $Version }
+ if(!$release) {
+ Throw "Cannot find '${Repository}' release '${Version}'."
+ }
+ }
+ return $release.assets
+}
+
+function Set-VCVars {
+ Param(
+ [String]$Version="2019",
+ [String]$Platform="x86_amd64"
+ )
+ Push-Location "${env:ProgramFiles(x86)}\Microsoft Visual Studio\${Version}\BuildTools\VC\Auxiliary\Build"
+ try {
+ cmd.exe /c "vcvarsall.bat ${Platform} & set" | ForEach-Object {
+ if ($_ -match "=") {
+ $v = $_.split("=")
+ Set-Item -Force -Path "ENV:\$($v[0])" -Value "$($v[1])"
+ }
+ }
+ }
+ finally {
+ Pop-Location
+ }
+}
+
+function Install-Requirements {
+ # Create the needed directories
+ New-Item `
+ -ItemType Directory -Force `
+ -Path @(
+ "${env:SystemDrive}\tmp",
+ "${env:SystemDrive}\ceph",
+ "${env:SystemDrive}\wnbd",
+ "${env:SystemDrive}\workspace",
+ "${env:SystemDrive}\workspace\repos"
+ )
+ Add-ToPathEnvVar -Path @("${env:SystemDrive}\ceph", "${env:SystemDrive}\wnbd")
+ # Set UTC time zone
+ Set-TimeZone -Id "UTC"
+ # Allow ping requests
+ Get-NetFirewallRule -Name @("FPS-ICMP4-ERQ-In", "FPS-ICMP6-ERQ-In") | Enable-NetFirewallRule
+ # Allow test signed drivers
+ Invoke-CommandLine "bcdedit.exe" "/set testsigning yes"
+}
+
+function Install-VisualStudio2019BuildTools {
+ Write-Output "Installing Visual Studio 2019 Build Tools"
+ $params = @(
+ "--quiet", "--wait", "--norestart", "--nocache",
+ "--add", "Microsoft.VisualStudio.Workload.VCTools",
+ "--add", "Microsoft.VisualStudio.Workload.MSBuildTools",
+ "--add", "Microsoft.VisualStudio.Component.VC.Runtimes.x86.x64.Spectre",
+ "--add", "Microsoft.VisualStudio.Component.VC.Tools.x86.x64",
+ "--add", "Microsoft.VisualStudio.Component.Windows11SDK.22000"
+ )
+ Install-Tool -URL $VS_2019_BUILD_TOOLS_URL -Params $params -AllowedExitCodes @(0, 3010)
+ Write-Output "Successfully installed Visual Studio 2019 Build Tools"
+}
+
+function Install-WDK {
+ # Install WDK excluding WDK.vsix
+ Write-Output "Installing Windows Development Kit (WDK)"
+ Install-Tool -URL $WDK_URL -Params @("/q")
+ # Install WDK.vsix in manual manner
+ Copy-Item -Path "${env:ProgramFiles(x86)}\Windows Kits\10\Vsix\VS2019\WDK.vsix" -Destination "${env:TEMP}\wdkvsix.zip"
+ Expand-Archive "${env:TEMP}\wdkvsix.zip" -DestinationPath "${env:TEMP}\wdkvsix"
+ $src = "${env:TEMP}\wdkvsix\`$MSBuild\Microsoft\VC\v160"
+ $dst = "${env:ProgramFiles(x86)}\Microsoft Visual Studio\2019\BuildTools\MSBuild\Microsoft\VC\v160"
+ New-Item -ItemType Directory -Force -Path $dst | Out-Null
+ Push-Location $src
+ Get-ChildItem -Recurse | Resolve-Path -Relative | ForEach-Object {
+ $item = Get-Item -Path $_
+ if($item.PSIsContainer) {
+ New-Item -ItemType Directory -Force -Path "${dst}\$_" | Out-Null
+ } else {
+ Copy-Item -Force -Path $item.FullName -Destination "${dst}\$_" | Out-Null
+ }
+ }
+ Pop-Location
+ Remove-Item -Recurse -Force -Path @("${env:TEMP}\wdkvsix.zip", "${env:TEMP}\wdkvsix")
+ Write-Output "Successfully installed Windows Driver Kit (WDK)"
+}
+
+function Get-GitDownloadUrl {
+ Param(
+ [String]$Version="latest"
+ )
+ $asset = Get-GitHubReleaseAssets -Repository "git-for-windows/git" -Version $Version | Where-Object {
+ $_.content_type -eq "application/executable" -and `
+ $_.name.StartsWith("Git-") -and `
+ $_.name.EndsWith("-64-bit.exe") }
+ if(!$asset) {
+ Throw "Cannot find a Git for Windows release asset with the 64-bit installer."
+ }
+ if($asset.Count -gt 1) {
+ Throw "Found multiple Git for Windows release assets with the 64-bit installer."
+ }
+ return $asset.browser_download_url
+}
+
+function Install-Git {
+ Write-Output "Installing Git"
+ Install-Tool -URL (Get-GitDownloadUrl) -Params @("/VERYSILENT", "/NORESTART")
+ Add-ToPathEnvVar -Path @("${env:ProgramFiles}\Git\cmd", "${env:ProgramFiles}\Git\usr\bin")
+ Write-Output "Successfully installed Git"
+}
+
+function Install-WnbdDriver {
+ $outDir = Join-Path $env:SystemDrive "wnbd"
+ $gitDir = Join-Path $env:TEMP "wnbd"
+ Invoke-CommandLine "git.exe" "clone ${WNBD_GIT_REPO} --branch ${WNBD_GIT_BRANCH} ${gitDir}"
+ Push-Location $gitDir
+ try {
+ Set-VCVars
+ Invoke-CommandLine "MSBuild.exe" "vstudio\wnbd.sln /p:Configuration=Release"
+ Copy-Item -Force -Path "vstudio\x64\Release\driver\*" -Destination "${outDir}\"
+ Copy-Item -Force -Path "vstudio\x64\Release\libwnbd.dll" -Destination "${outDir}\"
+ Copy-Item -Force -Path "vstudio\x64\Release\wnbd-client.exe" -Destination "${outDir}\"
+ Copy-Item -Force -Path "vstudio\x64\Release\wnbd.cer" -Destination "${outDir}\"
+ Copy-Item -Force -Path "vstudio\x64\Release\pdb\driver\*" -Destination "${outDir}\"
+ Copy-Item -Force -Path "vstudio\x64\Release\pdb\libwnbd\*" -Destination "${outDir}\"
+ Copy-Item -Force -Path "vstudio\x64\Release\pdb\wnbd-client\*" -Destination "${outDir}\"
+ Copy-Item -Force -Path "vstudio\wnbdevents.xml" -Destination "${outDir}\"
+ Copy-Item -Force -Path "vstudio\reinstall.ps1" -Destination "${outDir}\"
+ } finally {
+ Pop-Location
+ Remove-Item -Recurse -Force -Path $gitDir
+ }
+ Import-Certificate -FilePath "${outDir}\wnbd.cer" -Cert Cert:\LocalMachine\Root
+ Import-Certificate -FilePath "${outDir}\wnbd.cer" -Cert Cert:\LocalMachine\TrustedPublisher
+ Invoke-CommandLine "wnbd-client.exe" "install-driver ${outDir}\wnbd.inf"
+ Invoke-CommandLine "wevtutil.exe" "im ${outDir}\wnbdevents.xml"
+}
+
+function Install-Python3 {
+ Write-Output "Installing Python3"
+ Install-Tool -URL $PYTHON3_URL -Params @("/quiet", "InstallAllUsers=1", "PrependPath=1")
+ Add-ToPathEnvVar -Path @("${env:ProgramFiles}\Python311\", "${env:ProgramFiles}\Python311\Scripts\")
+ Write-Output "Installing pip dependencies"
+ Start-ExecuteWithRetry {
+ # Needed to run the Ceph unit tests via https://github.com/ceph/ceph-win32-tests scripts
+ Invoke-CommandLine "pip3.exe" "install os-testr python-dateutil requests prettytable"
+ }
+ Write-Output "Successfully installed Python3"
+}
+
+function Get-Wix3ToolsetDownloadUrl {
+ Param(
+ [String]$Version="latest"
+ )
+ $asset = Get-GitHubReleaseAssets -Repository "wixtoolset/wix3" -Version $Version | Where-Object {
+ $_.content_type -eq "application/x-msdownload" }
+ if(!$asset) {
+ Throw "Cannot find Wix3 toolset release asset."
+ }
+ if($asset.Count -gt 1) {
+ Throw "Found multiple Wix3 toolset release assets."
+ }
+ return $asset.browser_download_url
+}
+
+function Install-Wix3Toolset {
+ Write-Output "Installing .NET Framework 3.5"
+ Install-WindowsFeature -Name "NET-Framework-Features" -ErrorAction Stop
+ Write-Output "Installing Wix3 toolset"
+ Install-Tool -URL (Get-Wix3ToolsetDownloadUrl) -Params @("/install", "/quiet", "/norestart")
+ Write-Output "Successfully installed Wix3 toolset"
+}
+
+function Install-FIO {
+ Write-Output "Installing FIO"
+ Install-Tool -URL $FIO_URL -Params @("/qn", "/l*v", "$env:TEMP\fio-install.log", "/norestart")
+ Write-Output "Successfully installed FIO"
+}
+
+function Install-Dokany {
+ Write-Output "Installing Dokany"
+ Install-Tool -URL $DOKANY_URL -Params @("/quiet", "/norestart")
+ Write-Output "Successfully installed Dokany"
+}
+
+Get-WindowsBuildInfo
+Install-Requirements
+Install-VisualStudio2019BuildTools
+Install-WDK
+Install-Git
+Install-WnbdDriver
+Install-Dokany
+Install-Python3
+Install-Wix3Toolset
+Install-FIO
+
+Write-Output "Successfully installed the CI environment. Please reboot the system before sysprep."
--- /dev/null
+function Invoke-CommandLine {
+ Param(
+ [Parameter(Mandatory=$true)]
+ [String]$Command,
+ [String]$Arguments,
+ [Int[]]$AllowedExitCodes=@(0)
+ )
+ $argsList = @()
+ if($Arguments) {
+ $argsList = $Arguments.Split(" ")
+ }
+ & $Command $argsList
+ if($LASTEXITCODE -notin $AllowedExitCodes) {
+ Throw "'${Command} ${Arguments}' returned disallowed exit code: ${LASTEXITCODE}."
+ }
+}
+
+function Start-ExecuteWithRetry {
+ Param(
+ [Parameter(Mandatory=$true)]
+ [ScriptBlock]$ScriptBlock,
+ [Int]$MaxRetryCount=10,
+ [Int]$RetryInterval=3,
+ [String]$RetryMessage,
+ [Array]$ArgumentList=@()
+ )
+ $currentErrorActionPreference = $ErrorActionPreference
+ $ErrorActionPreference = "Continue"
+ $retryCount = 0
+ while ($true) {
+ try {
+ $res = Invoke-Command -ScriptBlock $ScriptBlock -ArgumentList $ArgumentList
+ $ErrorActionPreference = $currentErrorActionPreference
+ return $res
+ } catch [System.Exception] {
+ $retryCount++
+ if ($retryCount -gt $MaxRetryCount) {
+ $ErrorActionPreference = $currentErrorActionPreference
+ Throw $_
+ } else {
+ $prefixMsg = "Retry(${retryCount}/${MaxRetryCount})"
+ if($RetryMessage) {
+ Write-Host "${prefixMsg} - $RetryMessage"
+ } elseif($_) {
+ Write-Host "${prefixMsg} - $($_.ToString())"
+ }
+ Start-Sleep $RetryInterval
+ }
+ }
+ }
+}
+
+function Start-FileDownload {
+ Param(
+ [Parameter(Mandatory=$true)]
+ [String]$URL,
+ [Parameter(Mandatory=$true)]
+ [String]$Destination,
+ [Int]$RetryCount=10
+ )
+ Write-Output "Downloading $URL to $Destination"
+ Start-ExecuteWithRetry `
+ -ScriptBlock { Invoke-CommandLine -Command "curl.exe" -Arguments "-L -s -o $Destination $URL" } `
+ -MaxRetryCount $RetryCount `
+ -RetryMessage "Failed to download '${URL}'. Retrying"
+ Write-Output "Successfully downloaded."
+}
+
+function Add-ToPathEnvVar {
+ Param(
+ [Parameter(Mandatory=$true)]
+ [String[]]$Path,
+ [Parameter(Mandatory=$false)]
+ [ValidateSet([System.EnvironmentVariableTarget]::User, [System.EnvironmentVariableTarget]::Machine)]
+ [System.EnvironmentVariableTarget]$Target=[System.EnvironmentVariableTarget]::Machine
+ )
+ $pathEnvVar = [Environment]::GetEnvironmentVariable("PATH", $Target).Split(';')
+ $currentSessionPath = $env:PATH.Split(';')
+ foreach($p in $Path) {
+ if($p -notin $pathEnvVar) {
+ $pathEnvVar += $p
+ }
+ if($p -notin $currentSessionPath) {
+ $currentSessionPath += $p
+ }
+ }
+ $env:PATH = $currentSessionPath -join ';'
+ $newPathEnvVar = $pathEnvVar -join ';'
+ [Environment]::SetEnvironmentVariable("PATH", $newPathEnvVar, $Target)
+}
+
+function Install-Tool {
+ [CmdletBinding(DefaultParameterSetName = "URL")]
+ Param(
+ [Parameter(Mandatory=$true, ParameterSetName = "URL")]
+ [String]$URL,
+ [Parameter(Mandatory=$true, ParameterSetName = "LocalPath")]
+ [String]$LocalPath,
+ [Parameter(ParameterSetName = "URL")]
+ [Parameter(ParameterSetName = "LocalPath")]
+ [String[]]$Params=@(),
+ [Parameter(ParameterSetName = "URL")]
+ [Parameter(ParameterSetName = "LocalPath")]
+ [Int[]]$AllowedExitCodes=@(0)
+ )
+ PROCESS {
+ $installerPath = $LocalPath
+ if($PSCmdlet.ParameterSetName -eq "URL") {
+ $installerPath = Join-Path $env:TEMP $URL.Split('/')[-1]
+ Start-FileDownload -URL $URL -Destination $installerPath
+ }
+ Write-Output "Installing ${installerPath}"
+ $kwargs = @{
+ "FilePath" = $installerPath
+ "ArgumentList" = $Params
+ "NoNewWindow" = $true
+ "PassThru" = $true
+ "Wait" = $true
+ }
+ if((Get-ChildItem $installerPath).Extension -eq '.msi') {
+ $kwargs["FilePath"] = "msiexec.exe"
+ $kwargs["ArgumentList"] = @("/i", $installerPath) + $Params
+ }
+ $p = Start-Process @kwargs
+ if($p.ExitCode -notin $AllowedExitCodes) {
+ Throw "Installation failed. Exit code: $($p.ExitCode)"
+ }
+ if($PSCmdlet.ParameterSetName -eq "URL") {
+ Start-ExecuteWithRetry `
+ -ScriptBlock { Remove-Item -Force -Path $installerPath -ErrorAction Stop } `
+ -RetryMessage "Failed to remove ${installerPath}. Retrying"
+ }
+ }
+}
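+
+# Illustrative usage only (the URL and parameters below are hypothetical), kept
+# as comments so dot-sourcing this helpers file stays free of side effects:
+#
+#   Install-Tool -URL "https://example.com/tool.msi" -Params @("/qn") -AllowedExitCodes @(0, 3010)
+#   Start-ExecuteWithRetry -ScriptBlock { Invoke-CommandLine "ping.exe" "127.0.0.1" } -MaxRetryCount 5
+#   Add-ToPathEnvVar -Path @("C:\example\bin")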
--- /dev/null
+- job:
+ name: ceph-windows-image-build
+ description: 'Builds the Ceph Windows VM image used in the CI.'
+ node: amd64 && focal && libvirt
+ project-type: freestyle
+ defaults: global
+ concurrent: false
+ display-name: 'ceph-windows-image-build'
+ properties:
+ - build-discarder:
+ days-to-keep: 30
+ num-to-keep: 30
+ artifact-days-to-keep: 30
+ artifact-num-to-keep: 30
+
+ parameters:
+ - string:
+ name: WINDOWS_SERVER_2019_ISO_URL
+ description: "The Windows Server 2019 ISO URL."
+ default: https://software-download.microsoft.com/download/pr/17763.737.190906-2324.rs5_release_svc_refresh_SERVER_EVAL_x64FRE_en-us_1.iso
+
+ - string:
+ name: VIRTIO_WIN_ISO_URL
+ description: "The virtio-win guest tools ISO URL."
+ default: https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-build.git
+ branches:
+ - main
+ basedir: ceph-build
+
+ builders:
+ - shell: "${{WORKSPACE}}/ceph-build/ceph-windows-image-build/build/build"
+
+ wrappers:
+ - credentials-binding:
+ - file:
+ credential-id: ceph_win_ci_private_key
+ variable: SSH_PRIVATE_KEY
+ - ssh-user-private-key:
+ credential-id: CEPH_WINDOWS_FILEDUMP_SSH_KEY
+ key-file-variable: FILEDUMP_SSH_KEY
+ username-variable: FILEDUMP_USER
+
+ publishers:
+ - postbuildscript:
+ builders:
+ - role: SLAVE
+ build-on:
+ - UNSTABLE
+ - FAILURE
+ - ABORTED
+ build-steps:
+ - shell: "${{WORKSPACE}}/ceph-build/ceph-windows-image-build/build/cleanup"
--- /dev/null
+#!/usr/bin/env bash
+set -o errexit
+set -o pipefail
+
+if [[ -z $WINDOWS_SSH_USER ]]; then echo "ERROR: The WINDOWS_SSH_USER env variable is not set"; exit 1; fi
+if [[ -z $WINDOWS_VM_IP ]]; then echo "ERROR: The WINDOWS_VM_IP env variable is not set"; exit 1; fi
+
+export SSH_USER=$WINDOWS_SSH_USER
+export SSH_ADDRESS=$WINDOWS_VM_IP
+
+BUILD_CONFIGURATION=${BUILD_CONFIGURATION:-"Release"}
+
+
+#
+# Setup the installer requirements
+#
+mkdir -p $WORKSPACE/ceph-windows-installer/Driver
+mkdir -p $WORKSPACE/ceph-windows-installer/Binaries
+mkdir -p $WORKSPACE/ceph-windows-installer/Symbols
+
+mv $WORKSPACE/build/wnbd/driver/* $WORKSPACE/ceph-windows-installer/Driver/
+mv $WORKSPACE/build/wnbd/binaries/* $WORKSPACE/ceph-windows-installer/Binaries/
+mv $WORKSPACE/build/wnbd/symbols/* $WORKSPACE/ceph-windows-installer/Symbols/
+
+pushd $WORKSPACE/build/ceph
+mkdir -p $WORKSPACE/ceph-windows-installer/Binaries/.debug
+for FILE in *.dll ceph-conf.exe rados.exe rbd.exe rbd-wnbd.exe ceph-dokan.exe; do
+ cp $FILE $WORKSPACE/ceph-windows-installer/Binaries/
+ cp .debug/${FILE}.debug $WORKSPACE/ceph-windows-installer/Binaries/.debug/
+done
+popd
+
+#
+# Upload ceph-windows-installer repo to the Windows VM
+#
+scp_upload $WORKSPACE/ceph-windows-installer /workspace/ceph-windows-installer
+
+#
+# Build the Visual Studio project
+#
+BUILD_CMD="MSBuild.exe %SystemDrive%\\workspace\\ceph-windows-installer\\ceph-windows-installer.sln /p:Platform=x64 /p:Configuration=${BUILD_CONFIGURATION}"
+SSH_TIMEOUT=30m ssh_exec "\"%ProgramFiles(x86)%\\Microsoft Visual Studio\\2019\\BuildTools\\VC\\Auxiliary\\Build\\vcvarsall.bat\" x86_amd64 & ${BUILD_CMD}"
+
+#
+# Download the installer
+#
+scp_download /workspace/ceph-windows-installer/bin/${BUILD_CONFIGURATION}/Ceph.msi $WORKSPACE/
+cp $WORKSPACE/ceph-windows-installer/Driver/wnbd.cer $WORKSPACE/wnbd_code_signing.cer
+
+#
+# Upload the installer to Chacra
+#
+if [ "$THROWAWAY" = false ]; then
+ # push binaries to chacra
+ chacra_binary="$VENV/chacractl binary --force"
+
+ ls $WORKSPACE/Ceph.msi $WORKSPACE/wnbd_code_signing.cer | $chacra_binary create ${chacra_binary_endpoint}
+
+ ceph_version=$(ssh_exec /workspace/ceph-windows-installer/Binaries/rbd.exe -v | awk '{print $3}' | tr -d '[:space:]')
+ wnbd_version=$(ssh_exec /workspace/ceph-windows-installer/Binaries/wnbd-client.exe -v | grep wnbd-client.exe | cut -d ':' -f2 | tr -d '[:space:]')
+ vers="ceph-${ceph_version}|wnbd-${wnbd_version}"
+
+ # write json file with build info
+ cat > $WORKSPACE/repo-extra.json << EOF
+{
+ "version":"$vers",
+ "package_manager_version":"",
+ "build_url":"$BUILD_URL",
+ "root_build_cause":"$ROOT_BUILD_CAUSE",
+ "node_name":"$NODE_NAME",
+ "job_name":"$JOB_NAME"
+}
+EOF
+ # post the repo-extra.json file to chacra
+ curl --fail -X POST -H "Content-Type:application/json" --data "@$WORKSPACE/repo-extra.json" -u $CHACRACTL_USER:$CHACRACTL_KEY ${chacra_url}repos/${chacra_repo_endpoint}/extra/
+ # start repo creation
+ $VENV/chacractl repo update ${chacra_repo_endpoint}
+
+ echo "Check the status of the repo at: https://shaman.ceph.com/api/repos/${chacra_repo_endpoint}/"
+fi
+
+# update shaman with the completed build status
+update_build_status "completed" "ceph-windows-installer" $DISTRO $DISTRO_VERSION $ARCH
--- /dev/null
+#!/usr/bin/env bash
+set -o errexit
+set -o pipefail
+
+FLAVOR="default"
+
+BRANCH=$(branch_slash_filter $BRANCH)
+
+# update shaman with the failed build status
+failed_build_status "ceph-windows-installer" $NORMAL_DISTRO $NORMAL_DISTRO_VERSION $NORMAL_ARCH
--- /dev/null
+#!/usr/bin/env bash
+set -o errexit
+set -o pipefail
+
+CEPH_WINDOWS_BRANCH=${CEPH_WINDOWS_BRANCH:-"main"}
+CEPH_WINDOWS_SHA1=${CEPH_WINDOWS_SHA1:-"latest"}
+WNBD_BRANCH=${WNBD_BRANCH:-"main"}
+WNBD_SHA1=${WNBD_SHA1:-"latest"}
+
+GET_BIN_SCRIPT_URL="https://raw.githubusercontent.com/ceph/ceph-win32-tests/main/get-bin.py"
+
+DISTRO="windows"
+DISTRO_VERSION="1809"
+ARCH="x86_64"
+FLAVOR="default"
+
+BRANCH=$(branch_slash_filter $BRANCH)
+SHA1="$GIT_COMMIT"
+
+#
+# Setup Chacra and Shaman
+#
+pkgs=( "chacractl>=0.0.21" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+# ask shaman which chacra instance to use
+chacra_url=$(curl -u $SHAMAN_API_USER:$SHAMAN_API_KEY https://shaman.ceph.com/api/nodes/next/)
+# create the .chacractl config file using global variables
+make_chacractl_config $chacra_url
+
+chacra_endpoint="ceph-windows-installer/${BRANCH}/${SHA1}/${DISTRO}/${DISTRO_VERSION}"
+chacra_binary_endpoint="${chacra_endpoint}/${ARCH}/flavors/${FLAVOR}"
+chacra_repo_endpoint="${chacra_endpoint}/flavors/${FLAVOR}"
+chacra_check_url="${chacra_binary_endpoint}/Ceph.msi"
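+# For illustration (values are hypothetical): a main-branch build with SHA1
+# "abc123" would produce endpoints shaped like:
+#   chacra_endpoint:        ceph-windows-installer/main/abc123/windows/1809
+#   chacra_binary_endpoint: ceph-windows-installer/main/abc123/windows/1809/x86_64/flavors/default
+#   chacra_repo_endpoint:   ceph-windows-installer/main/abc123/windows/1809/flavors/default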
+
+# create build status in shaman
+update_build_status "started" "ceph-windows-installer" $DISTRO $DISTRO_VERSION $ARCH
+
+#
+# Install requirements (if needed)
+#
+if ! which unzip >/dev/null; then
+ sudo apt-get update -o Acquire::Languages=none -o Acquire::Translation=none || true
+ sudo apt-get install -y unzip
+fi
+
+#
+# Download the Ceph Windows build and the WNBD build from Chacra
+#
+rm -rf $WORKSPACE/build
+mkdir -p $WORKSPACE/build
+cd $WORKSPACE/build
+
+timeout 1m curl -L -o ./get-chacra-bin.py $GET_BIN_SCRIPT_URL
+chmod +x ./get-chacra-bin.py
+
+timeout 10m ./get-chacra-bin.py --project ceph --branchname $CEPH_WINDOWS_BRANCH --sha1 $CEPH_WINDOWS_SHA1 --filename ceph.zip
+unzip -q ceph.zip
+
+timeout 10m ./get-chacra-bin.py --project wnbd --branchname $WNBD_BRANCH --sha1 $WNBD_SHA1 --filename wnbd.zip
+unzip -q wnbd.zip
--- /dev/null
+- job:
+ name: ceph-windows-installer-build
+ description: 'Builds the Ceph Windows MSI installer.'
+ node: amd64 && focal && libvirt
+ project-type: freestyle
+ defaults: global
+ concurrent: true
+ display-name: 'ceph-windows-installer-build'
+ properties:
+ - build-discarder:
+ days-to-keep: 30
+ num-to-keep: 30
+ artifact-days-to-keep: 30
+ artifact-num-to-keep: 30
+
+ parameters:
+ - string:
+ name: BRANCH
+ description: "The git branch (or tag) to build"
+ default: main
+
+ - string:
+ name: CEPH_WINDOWS_BRANCH
+ description: "The branch name for the Ceph build."
+ default: main
+
+ - string:
+ name: CEPH_WINDOWS_SHA1
+ description: "The SHA1 for the Ceph build."
+ default: latest
+
+ - string:
+ name: WNBD_BRANCH
+ description: "The branch name for the WNBD build."
+ default: main
+
+ - string:
+ name: WNBD_SHA1
+ description: "The SHA1 for the WNBD build."
+ default: latest
+
+ - bool:
+ name: THROWAWAY
+ description: |
+ Default: False. When True it will not POST binaries to chacra. Artifacts will not be around for long. Useful to test builds.
+ default: false
+
+ triggers:
+ - timed: "H 1 * * *"
+
+ scm:
+ - git:
+ url: https://github.com/cloudbase/ceph-windows-installer.git
+ branches:
+ - $BRANCH
+ timeout: 20
+ wipe-workspace: true
+ basedir: ceph-windows-installer
+
+ builders:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/setup
+ - ../../../scripts/ceph-windows/setup_libvirt
+ - ../../../scripts/ceph-windows/setup_libvirt_windows_vm
+ - ../../build/build
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - file:
+ credential-id: ceph_win_ci_private_key
+ variable: CEPH_WIN_CI_KEY
+ - text:
+ credential-id: chacractl-key
+ variable: CHACRACTL_KEY
+ - text:
+ credential-id: shaman-api-key
+ variable: SHAMAN_API_KEY
+
+ publishers:
+ - postbuildscript:
+ builders:
+ - role: SLAVE
+ build-on:
+ - SUCCESS
+ - UNSTABLE
+ - FAILURE
+ - ABORTED
+ build-steps:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../../scripts/ceph-windows/cleanup
+
+ - postbuildscript:
+ builders:
+ - role: SLAVE
+ build-on:
+ - FAILURE
+ - ABORTED
+ build-steps:
+ - inject:
+ properties-file: ${{WORKSPACE}}/build_info
+
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/failure
--- /dev/null
+#!/usr/bin/env bash
+set -o errexit
+set -o pipefail
+
+docs_pr_only
+container_pr_only
+gha_pr_only
+qa_pr_only
+if [[ "$DOCS_ONLY" == true || "$CONTAINER_ONLY" == true || "$GHA_ONLY" == true || "$QA_ONLY" == true ]]; then
+ echo "Only the doc/, container/, qa/ or .github/ dir changed. No need to run Ceph Windows tests."
+ exit 0
+fi
--- /dev/null
+- job:
+ name: ceph-windows-pull-requests
+ project-type: freestyle
+ defaults: global
+ concurrent: true
+ node: amd64 && focal && libvirt && windows
+ display-name: 'ceph-windows: Pull Requests'
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ retry-count: 3
+ properties:
+ - build-discarder:
+ days-to-keep: 15
+ num-to-keep: 300
+ artifact-days-to-keep: 15
+ artifact-num-to-keep: 100
+ - github:
+ url: https://github.com/ceph/ceph/
+ - rebuild:
+ auto-rebuild: true
+ - inject:
+ properties-content: |
+ TERM=xterm
+
+ parameters:
+ - string:
+ name: ghprbPullId
+ description: "The GitHub pull request id, like '72' in 'ceph/pull/72'"
+
+ triggers:
+ - github-pull-request:
+ cancel-builds-on-update: true
+ allow-whitelist-orgs-as-admins: true
+ org-list:
+ - ceph
+ white-list-target-branches:
+ - main
+ - tentacle
+ - squid
+ - reef
+ trigger-phrase: 'jenkins test windows'
+ skip-build-phrase: '^jenkins do not test.*'
+ only-trigger-phrase: false
+ github-hooks: true
+ permit-all: true
+ auto-close-on-fail: false
+ status-context: "ceph windows tests"
+ started-status: "running ceph windows tests"
+ success-status: "ceph windows tests succeeded"
+ failure-status: "ceph windows tests failed"
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph.git
+ branches:
+ - origin/pr/${{ghprbPullId}}/merge
+ refspec: +refs/pull/${{ghprbPullId}}/*:refs/remotes/origin/pr/${{ghprbPullId}}/*
+ browser: auto
+ timeout: 20
+ do-not-fetch-tags: true
+ shallow-clone: true
+ honor-refspec: true
+ wipe-workspace: true
+ basedir: ceph
+
+ builders:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/check_docs_pr_only
+ - ../../../scripts/ceph-windows/setup_libvirt
+ - ../../../scripts/ceph-windows/setup_libvirt_ubuntu_vm
+ - ../../../scripts/ceph-windows/win32_build
+ - ../../../scripts/ceph-windows/cleanup_libvirt_ubuntu_vm
+ - ../../../scripts/ceph-windows/setup_libvirt_ubuntu_vm
+ - ../../../scripts/ceph-windows/setup_libvirt_windows_vm
+ - ../../../scripts/ceph-windows/setup_ceph_vstart
+ - ../../../scripts/ceph-windows/run_tests
+
+ publishers:
+ - archive:
+ artifacts: 'artifacts/**'
+ allow-empty: true
+ latest-only: false
+
+ - postbuildscript:
+ builders:
+ - role: SLAVE
+ build-on:
+ - SUCCESS
+ - UNSTABLE
+ - FAILURE
+ - ABORTED
+ build-steps:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../../scripts/ceph-windows/cleanup
+
+ wrappers:
+ - ansicolor
+ - credentials-binding:
+ - file:
+ credential-id: ceph_win_ci_private_key
+ variable: CEPH_WIN_CI_KEY
+ - username-password-separated:
+ credential-id: github-readonly-token
+ username: GITHUB_USER
+ password: GITHUB_PASS
--- /dev/null
+- job:
+ name: ceph-windows-test
+ description: 'Runs the unit tests from a Windows build uploaded to Chacra.'
+ node: amd64 && focal && libvirt
+ project-type: freestyle
+ defaults: global
+ concurrent: true
+ display-name: 'ceph-windows-test'
+ properties:
+ - build-discarder:
+ days-to-keep: 30
+ num-to-keep: 30
+ artifact-days-to-keep: 30
+ artifact-num-to-keep: 15
+
+ parameters:
+ - string:
+ name: CEPH_GIT_REPO
+ description: "The Ceph git repo."
+ default: https://github.com/ceph/ceph.git
+
+ - string:
+ name: CEPH_GIT_BRANCH
+ description: "The Ceph git branch name."
+ default: main
+
+ - string:
+ name: CEPH_WIN32_BUILD_FLAGS
+ description: |
+ Space-separated list of key=value items passed as environment variables to the Ceph './win32_build.sh' script.
+ For example: "ENABLE_SHARED=True NUM_WORKERS=4". If this is not set, the default build flags are used.
+
+ - bool:
+ name: INCLUDE_USERSPACE_CRASH_DUMPS
+ description: "Include Windows user-space crash dumps in the artifacts collected."
+
+ - bool:
+ name: INCLUDE_CEPH_ZIP
+ description: "Include Ceph Windows zip (useful for debugging with symbol files) in the artifacts collected."
+
+ scm:
+ - git:
+ url: $CEPH_GIT_REPO
+ branches:
+ - $CEPH_GIT_BRANCH
+ browser: auto
+ timeout: 20
+ do-not-fetch-tags: true
+ shallow-clone: true
+ honor-refspec: true
+ wipe-workspace: true
+ basedir: ceph
+
+ builders:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../../scripts/ceph-windows/setup_libvirt
+ - ../../../scripts/ceph-windows/setup_libvirt_ubuntu_vm
+ - ../../../scripts/ceph-windows/win32_build
+ - ../../../scripts/ceph-windows/cleanup_libvirt_ubuntu_vm
+ - ../../../scripts/ceph-windows/setup_libvirt_ubuntu_vm
+ - ../../../scripts/ceph-windows/setup_libvirt_windows_vm
+ - ../../../scripts/ceph-windows/setup_ceph_vstart
+ - ../../../scripts/ceph-windows/run_tests
+
+ wrappers:
+ - credentials-binding:
+ - file:
+ credential-id: ceph_win_ci_private_key
+ variable: CEPH_WIN_CI_KEY
+
+ publishers:
+ - archive:
+ artifacts: 'artifacts/**'
+ allow-empty: true
+ latest-only: false
+
+ - postbuildscript:
+ builders:
+ - role: SLAVE
+ build-on:
+ - SUCCESS
+ - UNSTABLE
+ - FAILURE
+ - ABORTED
+ build-steps:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../../scripts/ceph-windows/cleanup
--- /dev/null
+- job:
+ name: ceph
+ description: 'This is the main ceph build task which uses chacra.ceph.com.'
+ project-type: multijob
+ defaults: global
+ display-name: 'ceph'
+ block-downstream: false
+ block-upstream: false
+ concurrent: true
+ properties:
+ - build-discarder:
+ days-to-keep: -1
+ num-to-keep: 25
+ artifact-days-to-keep: 25
+ artifact-num-to-keep: 25
+ - github:
+ url: https://github.com/ceph/ceph
+
+ parameters:
+ - string:
+ name: BRANCH
+ description: "The git branch (or tag) to build (e.g., pacific) DO NOT INCLUDE '-release'"
+ default: main
+
+ - bool:
+ name: TEST
+ description: "
+If this is unchecked, then the builds will be pushed to chacra with the correct ref. This is the default.
+
+If this is checked, then the builds will be pushed to chacra under the 'test' ref."
+ - bool:
+ name: TAG
+ description: "When this is checked, Jenkins will remove the previous private tag and recreate it again, changing the control files and committing again. When this is unchecked, Jenkins will not do any commit or tag operations. If you've already created the private tag separately, then leave this unchecked.
+Defaults to checked."
+ default: true
+
+ - bool:
+ name: THROWAWAY
+ description: "
+Default: False. When True it will not POST binaries to chacra. Artifacts will not be around for long. Useful to test builds."
+ default: false
+
+ - bool:
+ name: FORCE_VERSION
+ description: "
+Default: False. When True it will force the Debian version (when wanting to release older versions after newer ones have been released).
+Mostly useful for DEBs to append the `-b` flag for dch."
+ default: false
+
+ - bool:
+ name: FORCE
+ description: "
+If this is unchecked, then nothing is built or pushed if the binaries already exist in chacra. This is the default.
+
+If this is checked, then the binaries will be built and pushed to chacra even if they already exist in chacra."
+
+ - string:
+ name: VERSION
+ description: "The version for release, e.g. 0.94.4"
+
+ - choice:
+ name: RELEASE_TYPE
+ description: "
+STABLE: A normal release. Builds from BRANCH branch and pushed to BRANCH-release branch.
+RELEASE_CANDIDATE: A normal release except the binaries will be pushed to chacra using the $BRANCH-rc name
+HOTFIX: Builds from BRANCH-release branch. BRANCH-release will be git merged back into BRANCH.
+SECURITY: Builds from BRANCH-release branch in ceph-private.git (private repo)."
+ choices:
+ - STABLE
+ - RELEASE_CANDIDATE
+ - HOTFIX
+ - SECURITY
+
+ - string:
+ name: CEPH_BUILD_VIRTUALENV
+ description: "Base parent path for virtualenv locations, set to avoid issues with extremely long paths that are incompatible with tools like pip. Defaults to '/tmp/' (note the trailing slash, which is required)."
+ default: "/tmp/"
+
+ - string:
+ name: DISTROS
+ description: "A list of distros to build for. Available options are: centos9, centos8, centos7, centos6, noble, jammy, focal, bionic, xenial, trusty, precise, wheezy, jessie, buster, bullseye, bookworm"
+ default: "noble jammy focal centos8 centos9 bookworm"
+
+ - string:
+ name: ARCHS
+ description: "A list of architectures to build for. Available options are: x86_64, and arm64"
+ default: "x86_64 arm64"
+
+ - string:
+ name: CONTAINER_REPO_HOSTNAME
+ description: "Name of (prerelease) container repo server (i.e. 'quay.ceph.io')"
+ default: "quay.ceph.io"
+
+ - string:
+ name: CONTAINER_REPO_ORGANIZATION
+ description: "Name of (prerelease) container repo organization (i.e. 'ceph'). Container build script will add prerelease-<arch>"
+ default: "ceph"
+
+ builders:
+ - multijob:
+ name: 'ceph setup phase'
+ condition: SUCCESSFUL
+ projects:
+ - name: ceph-setup
+ current-parameters: true
+ exposed-scm: false
+ - copyartifact:
+ project: ceph-setup
+ filter: ceph-build/ansible/ceph/dist/sha1
+ which-build: multijob-build
+ - inject:
+ properties-file: ${{WORKSPACE}}/ceph-build/ansible/ceph/dist/sha1
+ - multijob:
+ name: 'ceph build phase'
+ condition: SUCCESSFUL
+ projects:
+ - name: ceph-build
+ current-parameters: true
+ exposed-scm: false
+ - multijob:
+ name: 'ceph tag phase'
+ condition: SUCCESSFUL
+ projects:
+ - name: ceph-tag
+ current-parameters: true
+ exposed-scm: false
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - build-name:
+ name: "#${{BUILD_NUMBER}} ${{BRANCH}}, ${{SHA1}}"
--- /dev/null
+#!/bin/bash
+
+# the following helper methods exist in scripts/build_utils.sh
+pkgs=( "tox" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+set_centos_python3_version "python3.9"
+install_python_packages $TEMPVENV "pkgs[@]" "pip==22.0.4"
+
+# XXX this might not be needed
+source $VENV/activate
+
+WORKDIR=$(mktemp -td tox.XXXXXXXXXX)
+
+delete_libvirt_vms
+clear_libvirt_networks
+restart_libvirt_services
+update_vagrant_boxes
+
+rm -rf "${HOME}"/ansible/facts/*
+
+if [[ -n "$DISTRIBUTION" ]]; then
+ ENVIRONMENT="${DISTRIBUTION}"-"${SCENARIO}"
+else
+ ENVIRONMENT="${SCENARIO}"
+fi
+
+# Skip combinations that are not valid (pacific on el9); otherwise run tox:
+[[ "$ghprbTargetBranch" == pacific && "$DISTRIBUTION" == el9 ]] ||
+"${VENV}"/tox --workdir="${TEMPVENV}" -c tox.ini -e "${ENVIRONMENT}" -r -v -- --provider=libvirt
--- /dev/null
+#!/bin/bash
+
+cd "${WORKSPACE}"/tests
+
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+# the method exists in scripts/build_utils.sh
+teardown_vagrant_tests $VENV
+
+# clean fact cache
+rm -rf "${HOME}"/ansible/facts/*
--- /dev/null
+2.jenkins.ceph.com
--- /dev/null
+- project:
+ name: cephadm-ansible-prs-syntax
+ worker_labels: 'vagrant && libvirt && (braggi || adami)'
+ scenario:
+ - flake8
+ - mypy
+ - unittests
+ jobs:
+ - 'cephadm-ansible-prs-syntax'
+
+- project:
+ name: cephadm-ansible-prs-functional
+ worker_labels: 'vagrant && libvirt && (braggi || adami)'
+ distribution:
+ - el9
+ scenario:
+ - functional
+ jobs:
+ - 'cephadm-ansible-prs-functional'
+
+- job-template:
+ name: 'cephadm-ansible-prs-{scenario}'
+ id: 'cephadm-ansible-prs-syntax'
+ node: '{worker_labels}'
+ concurrent: true
+ defaults: global
+ display-name: 'cephadm-ansible: Pull Requests [{scenario}]'
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ retry-count: 3
+ properties:
+ - github:
+ url: https://github.com/ceph/cephadm-ansible
+ - build-discarder:
+ days-to-keep: 90
+ num-to-keep: -1
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+
+ parameters:
+ - string:
+ name: sha1
+ description: "A pull request ID, like 'origin/pr/72/head'"
+
+ triggers:
+ - github-pull-request:
+ cancel-builds-on-update: true
+ allow-whitelist-orgs-as-admins: true
+ org-list:
+ - ceph
+ skip-build-phrase: '^jenkins do not test.*|.*\[skip ci\].*'
+ trigger-phrase: '^jenkins test {scenario}|jenkins test all.*'
+ only-trigger-phrase: false
+ github-hooks: true
+ permit-all: true
+ auto-close-on-fail: false
+ status-context: "Testing: {scenario}"
+ started-status: "Running: {scenario}"
+ success-status: "OK - {scenario}"
+ failure-status: "FAIL - {scenario}"
+
+ scm:
+ - git:
+ url: https://github.com/ceph/cephadm-ansible.git
+ branches:
+ - ${{sha1}}
+ refspec: +refs/pull/*:refs/remotes/origin/pr/*
+ browser: auto
+ timeout: 20
+ skip-tag: true
+ wipe-workspace: false
+
+ builders:
+ - inject:
+ properties-content: |
+ SCENARIO={scenario}
+ - conditional-step:
+ condition-kind: shell
+ condition-command: |
+ #!/bin/bash
+ # Returns 1 if only .rst and README files were modified
+ echo "Checking if only rst and READMEs were modified"
+ git show HEAD | grep -qo ^Merge:
+ if [ $? -eq 0 ]; then
+ git diff --name-only $(git show HEAD | grep ^Merge: | cut -d ':' -f2) | grep -v '\.rst\|README'
+ if [ $? -eq 1 ]; then
+ echo "Only docs were modified. Skipping the rest of the job."
+ exit 1
+ fi
+ fi
+ on-evaluation-failure: dont-run
+ steps:
+ - shell:
+ !include-raw-escape:
+ - ../../../scripts/build_utils.sh
+ - ../../build/build
+
+ publishers:
+ - postbuildscript:
+ builders:
+ - role: SLAVE
+ build-on:
+ - FAILURE
+ - ABORTED
+ build-steps:
+ - shell:
+ !include-raw-escape:
+ - ../../../scripts/build_utils.sh
+ - ../../build/teardown
+
+ - archive:
+ artifacts: 'logs/**'
+ allow-empty: true
+ latest-only: false
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+
+- job-template:
+ name: 'cephadm-ansible-prs-{distribution}-{scenario}'
+ id: 'cephadm-ansible-prs-functional'
+ node: '{worker_labels}'
+ concurrent: true
+ defaults: global
+ display-name: 'cephadm-ansible: Pull Requests [{distribution}-{scenario}]'
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ retry-count: 3
+ properties:
+ - github:
+ url: https://github.com/ceph/cephadm-ansible
+ - build-discarder:
+ days-to-keep: 90
+ num-to-keep: -1
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+
+ parameters:
+ - string:
+ name: sha1
+ description: "A pull request ID, like 'origin/pr/72/head'"
+
+ triggers:
+ - github-pull-request:
+ cancel-builds-on-update: true
+ allow-whitelist-orgs-as-admins: true
+ org-list:
+ - ceph
+ skip-build-phrase: '^jenkins do not test.*|.*\[skip ci\].*'
+ trigger-phrase: '^jenkins test {distribution}-{scenario}|jenkins test all.*'
+ only-trigger-phrase: false
+ github-hooks: true
+ permit-all: true
+ auto-close-on-fail: false
+ status-context: "Testing: {distribution}-{scenario}"
+ started-status: "Running: {distribution}-{scenario}"
+ success-status: "OK - {distribution}-{scenario}"
+ failure-status: "FAIL - {distribution}-{scenario}"
+
+ scm:
+ - git:
+ url: https://github.com/ceph/cephadm-ansible.git
+ branches:
+ - ${{sha1}}
+ refspec: +refs/pull/*:refs/remotes/origin/pr/*
+ browser: auto
+ timeout: 20
+ skip-tag: true
+ wipe-workspace: false
+
+ builders:
+ - inject:
+ properties-content: |
+ DISTRIBUTION={distribution}
+ SCENARIO={scenario}
+ - conditional-step:
+ condition-kind: shell
+ condition-command: |
+ #!/bin/bash
+ # Returns 1 if only .rst and README files were modified
+ echo "Checking if only rst and READMEs were modified"
+ git show HEAD | grep -qo ^Merge:
+ if [ $? -eq 0 ]; then
+ git diff --name-only $(git show HEAD | grep ^Merge: | cut -d ':' -f2) | grep -v '\.rst\|README'
+ if [ $? -eq 1 ]; then
+ echo "Only docs were modified. Skipping the rest of the job."
+ exit 1
+ fi
+ fi
+ on-evaluation-failure: dont-run
+ steps:
+ - shell:
+ !include-raw-escape:
+ - ../../../scripts/build_utils.sh
+ - ../../build/build
+
+ publishers:
+ - postbuildscript:
+ builders:
+ - role: SLAVE
+ build-on:
+ - FAILURE
+ - ABORTED
+ build-steps:
+ - shell:
+ !include-raw-escape:
+ - ../../../scripts/build_utils.sh
+ - ../../build/teardown
+
+ - archive:
+ artifacts: 'logs/**'
+ allow-empty: true
+ latest-only: false
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - username-password-separated:
+ credential-id: cephadm-ansible-quay-io
+ username: QUAY_IO_USERNAME
+ password: QUAY_IO_PASSWORD
+ - username-password-separated:
+ credential-id: cephadm-ansible-quay-ceph-io
+ username: QUAY_CEPH_IO_USERNAME
+ password: QUAY_CEPH_IO_PASSWORD
\ No newline at end of file
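The docs-only skip in the conditional-step above hinges on `grep -v '\.rst\|README'` exiting non-zero when every changed path is documentation. A minimal standalone sketch of that filter, using a hypothetical file list in place of real `git diff --name-only` output:

```shell
# Hypothetical changed-file list, standing in for `git diff --name-only <merge-parents>`
changed="docs/install.rst
README.md"

# grep -v prints the paths that are NOT docs; it exits 1 when nothing survives
# the filter, which is the condition the job uses to skip the build
if echo "$changed" | grep -v '\.rst\|README' > /dev/null; then
    result="run-build"
else
    result="skip-build"
fi
echo "$result"
```

With a list that also contained, say, `src/main.c`, `grep -v` would exit 0 and the build would proceed.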
--- /dev/null
+#!/bin/bash
+set -ex
+# run tox, recreating the environment (-r) and in verbose mode (-v)
+# by default this will run all environments defined
+$VENV/tox -rv
--- /dev/null
+- job:
+ name: cephmetrics-pull-requests
+ project-type: freestyle
+ defaults: global
+ concurrent: true
+ display-name: 'Cephmetrics: Pull Requests'
+ node: small && (centos7 || trusty)
+ quiet-period: 0
+ block-downstream: false
+ block-upstream: false
+ retry-count: 3
+ properties:
+ - build-discarder:
+ days-to-keep: 15
+ num-to-keep: 30
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ - github:
+ url: https://github.com/ceph/cephmetrics/
+
+ parameters:
+ - string:
+ name: sha1
+ description: "A pull request ID, like 'origin/pr/72/head'"
+
+ triggers:
+ - github-pull-request:
+ admin-list:
+ - zmc
+ - pcuzner
+ - b-ranto
+ - GregMeno
+ org-list:
+ - ceph
+ only-trigger-phrase: false
+ github-hooks: true
+ permit-all: false
+ auto-close-on-fail: false
+
+ scm:
+ - git:
+ url: https://github.com/ceph/cephmetrics.git
+ branches:
+ - ${{sha1}}
+ refspec: +refs/pull/*:refs/remotes/origin/pr/*
+ browser: auto
+ timeout: 20
+ skip-tag: true
+ wipe-workspace: true
+
+ builders:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../setup/setup
+ - ../../build/build
--- /dev/null
+#!/bin/bash
+set -ex
+if test $(id -u) != 0 ; then
+ SUDO=sudo
+fi
+
+deb_deps="python-dev python-virtualenv"
+rpm_deps="python-devel python-virtualenv"
+if test -f /etc/redhat-release ; then
+ $SUDO yum install -y $rpm_deps
+elif test -f /etc/debian_version ; then
+ $SUDO apt install -y $deb_deps
+fi
+
+pkgs=( "tox" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
--- /dev/null
+#! /usr/bin/bash
+set -ex
+
+
+# Only do actual work when we are an RPM distro
+if test "$DISTRO" != "fedora" -a "$DISTRO" != "centos" -a "$DISTRO" != "rhel"; then
+ exit 0
+fi
+
+
+## Install any setup-time deps (to make dist package)
+sudo yum install -y mock git wget
+
+# Run the install-deps.sh upstream script if it exists
+if [ -x install-deps.sh ]; then
+ echo "Ensuring dependencies are installed"
+ sudo ./install-deps.sh
+fi
+
+
+## Get some basic information about the system and the repository
+get_rpm_dist
+VERSION="0.1"
+REVISION="$(git rev-list --count HEAD)-g$(git rev-parse --short HEAD)"
+RPM_RELEASE=$(echo $REVISION | tr '-' '_') # the '-' has a special meaning
+
+
+## Build the source tarball
+echo "Building source distribution"
+git archive --format=zip --prefix=cephmetrics-${VERSION}/ HEAD > dist/cephmetrics-${VERSION}.zip
+wget https://grafana.com/api/plugins/vonage-status-panel/versions/1.0.4/download -O dist/vonage-status-panel-1.0.4.zip
+wget https://grafana.com/api/plugins/grafana-piechart-panel/versions/1.1.5/download -O dist/grafana-piechart-panel-1.1.5.zip
+
+
+## Prepare the spec file for build
+sed -e "s/@VERSION@/${VERSION}/g" -e "s/@RELEASE@/${RPM_RELEASE}/g" < cephmetrics.spec.in > dist/cephmetrics.spec
+
+
+## Create the source rpm
+echo "Building SRPM"
+rpmbuild \
+ --define "_sourcedir ./dist" \
+ --define "_specdir ." \
+ --define "_builddir ." \
+ --define "_srcrpmdir ." \
+ --define "_rpmdir ." \
+ --define "dist .any" \
+ --define "fedora 21" \
+ --define "rhel 7" \
+ --nodeps -bs dist/cephmetrics.spec
+SRPM=$(readlink -f *.src.rpm)
+
+
+## Build the binaries with mock
+echo "Building RPMs"
+sudo mock -r ${MOCK_TARGET}-${RELEASE}-${ARCH} --resultdir=./dist/rpm/ ${SRPM}
+
+
+## Upload the created RPMs to chacra
+chacra_endpoint="cephmetrics/${BRANCH}/${GIT_COMMIT}/${DISTRO}/${RELEASE}"
+
+[ "$FORCE" = true ] && chacra_flags="--force" || chacra_flags=""
+
+# push binaries to chacra
+find ./dist/rpm/ | egrep '\.rpm$' | $VENV/chacractl binary ${chacra_flags} create ${chacra_endpoint}/$ARCH/
+
+# start repo creation
+$VENV/chacractl repo update ${chacra_endpoint}
+
+echo Check the status of the repo at: https://shaman.ceph.com/api/repos/${chacra_endpoint}
--- /dev/null
+#! /usr/bin/bash
+#
+# Ceph distributed storage system
+#
+# Copyright (C) 2016 Red Hat <contact@redhat.com>
+#
+# Author: Boris Ranto <branto@redhat.com>
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+set -ex
+
+# Make sure we are in the top-level directory before we do anything
+cd $WORKSPACE
+
+# This will set the DISTRO and MOCK_TARGET variables
+get_distro_and_target
+
+# Perform a clean-up
+sudo git clean -fxd
+
+# Make sure the dist directory is clean
+sudo rm -rf dist
+mkdir -p dist
+
+# Print some basic system info
+HOST=$(hostname --short)
+echo "Building on $(hostname) with the following env"
+echo "*****"
+env
+echo "*****"
+
+export LC_ALL=C # the following is vulnerable to i18n
+
+pkgs=( "chacractl>=0.0.21" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+# create the .chacractl config file using global variables
+make_chacractl_config
--- /dev/null
+- job:
+ name: cephmetrics-release
+ project-type: matrix
+ defaults: global
+ display-name: 'cephmetrics-release'
+ block-downstream: false
+ block-upstream: false
+ concurrent: true
+ properties:
+ - github:
+ url: https://github.com/ceph/cephmetrics
+ parameters:
+ - string:
+ name: BRANCH
+ description: "The git branch (or tag) to build"
+ default: "main"
+
+ - string:
+ name: DISTROS
+ description: "A list of distros to build for. Available options are: centos7, centos6"
+ default: "centos7"
+
+ - string:
+ name: ARCHS
+ description: "A list of architectures to build for. Available options are: x86_64 and arm64"
+ default: "x86_64"
+
+ - bool:
+ name: FORCE
+ description: "
+If this is unchecked, then nothing is built or pushed if the binaries already exist in chacra. This is the default.
+
+If this is checked, then the binaries will be built and pushed to chacra even if they already exist in chacra."
+
+ - string:
+ name: BUILD_VIRTUALENV
+ description: "Base parent path for virtualenv locations, set to avoid issues with extremely long paths that are incompatible with tools like pip. Defaults to '/tmp/' (note the trailing slash, which is required)."
+ default: "/tmp/"
+
+ execution-strategy:
+ combination-filter: DIST==AVAILABLE_DIST && ARCH==AVAILABLE_ARCH && (ARCH=="x86_64" || (ARCH == "arm64" && (DIST == "xenial" || DIST == "centos7")))
+ axes:
+ - axis:
+ type: label-expression
+ name: MACHINE_SIZE
+ values:
+ - small
+ - axis:
+ type: label-expression
+ name: AVAILABLE_ARCH
+ values:
+ - x86_64
+ - arm64
+ - axis:
+ type: label-expression
+ name: AVAILABLE_DIST
+ values:
+ - centos6
+ - centos7
+ - axis:
+ type: dynamic
+ name: DIST
+ values:
+ - DISTROS
+ - axis:
+ type: dynamic
+ name: ARCH
+ values:
+ - ARCHS
+
+ scm:
+ - git:
+ url: git@github.com:ceph/cephmetrics.git
+ # Use the SSH key attached to the ceph-jenkins GitHub account.
+ credentials-id: 'jenkins-build'
+ skip-tag: true
+ branches:
+ - $BRANCH
+ wipe-workspace: false
+
+ builders:
+ - shell: |
+ echo "Cleaning up top-level workarea (shared among workspaces)"
+ sudo rm -rf dist
+ sudo rm -rf venv
+ sudo rm -rf release
+ # rpm build scripts
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/setup
+ - ../../build/build_rpm
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
--- /dev/null
+#! /usr/bin/bash
+set -ex
+
+# Sanity-check:
+[ -z "$GIT_BRANCH" ] && echo Missing GIT_BRANCH variable && exit 1
+
+BRANCH=`branch_slash_filter $GIT_BRANCH`
+
+# Only do actual work when we are an RPM distro
+if test "$DISTRO" != "fedora" -a "$DISTRO" != "centos" -a "$DISTRO" != "rhel"; then
+ exit 0
+fi
+
+
+## Install any setup-time deps (to make dist package)
+sudo yum install -y mock git wget
+
+# Run the install-deps.sh upstream script if it exists
+if [ -x install-deps.sh ]; then
+ echo "Ensuring dependencies are installed"
+ sudo ./install-deps.sh
+fi
+
+
+## Get some basic information about the system and the repository
+get_rpm_dist
+DESCRIBE="$(git describe --tags 2>/dev/null | cut -b 2-)"
+test -z "$DESCRIBE" && DESCRIBE="0.1-$(git rev-list --count HEAD)-g$(git rev-parse --short HEAD)"
+VERSION="$(echo $DESCRIBE | cut -d - -f 1)"
+REVISION="$(echo $DESCRIBE | cut -s -d - -f 2-)"
+test -z "$REVISION" && REVISION=0
+RPM_RELEASE=$(echo $REVISION | tr '-' '_') # the '-' has a special meaning
+
+
+## Build the source tarball
+echo "Building source distribution"
+git archive --format=zip --prefix=cephmetrics-${VERSION}/ HEAD > dist/cephmetrics-${VERSION}.zip
+wget https://grafana.com/api/plugins/vonage-status-panel/versions/1.0.4/download -O dist/vonage-status-panel-1.0.4.zip
+wget https://grafana.com/api/plugins/grafana-piechart-panel/versions/1.1.5/download -O dist/grafana-piechart-panel-1.1.5.zip
+
+
+## Prepare the spec file for build
+sed -e "s/@VERSION@/${VERSION}/g" -e "s/@RELEASE@/${RPM_RELEASE}/g" < cephmetrics.spec.in > dist/cephmetrics.spec
+
+
+## Create the source rpm
+echo "Building SRPM"
+rpmbuild \
+ --define "_sourcedir ./dist" \
+ --define "_specdir ." \
+ --define "_builddir ." \
+ --define "_srcrpmdir ." \
+ --define "_rpmdir ." \
+ --define "dist .any" \
+ --define "fedora 21" \
+ --define "rhel 7" \
+ --nodeps -bs dist/cephmetrics.spec
+SRPM=$(readlink -f *.src.rpm)
+
+
+## Build the binaries with mock
+echo "Building RPMs"
+sudo mock -r ${MOCK_TARGET}-${RELEASE}-${ARCH} --resultdir=./dist/rpm/ ${SRPM}
+
+
+## Upload the created RPMs to chacra
+chacra_endpoint="cephmetrics/${BRANCH}/${GIT_COMMIT}/${DISTRO}/${RELEASE}"
+
+[ "$FORCE" = true ] && chacra_flags="--force" || chacra_flags=""
+
+# push binaries to chacra
+find ./dist/rpm/ | egrep '\.rpm$' | $VENV/chacractl binary ${chacra_flags} create ${chacra_endpoint}/$ARCH/
+
+# start repo creation
+$VENV/chacractl repo update ${chacra_endpoint}
+
+echo Check the status of the repo at: https://shaman.ceph.com/api/repos/${chacra_endpoint}
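The VERSION/REVISION split above can be exercised without a repository by feeding the same `cut`/`tr` pipeline a sample `git describe` string (the value below is made up for illustration):

```shell
# Sample `git describe --tags` output with the leading 'v' already stripped
DESCRIBE="0.1-37-g1a2b3c4"

VERSION="$(echo $DESCRIBE | cut -d - -f 1)"
REVISION="$(echo $DESCRIBE | cut -s -d - -f 2-)"
test -z "$REVISION" && REVISION=0
# '-' is not allowed in an RPM Release tag, so map it to '_'
RPM_RELEASE=$(echo $REVISION | tr '-' '_')
echo "${VERSION}-${RPM_RELEASE}"
```

For a commit sitting exactly on a tag, `cut -s` emits nothing (no delimiter in the input) and REVISION falls back to 0, matching the script's `test -z` guard.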
--- /dev/null
+#! /usr/bin/bash
+#
+# Ceph distributed storage system
+#
+# Copyright (C) 2016 Red Hat <contact@redhat.com>
+#
+# Author: Boris Ranto <branto@redhat.com>
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+set -ex
+
+# Make sure we are in the top-level directory before we do anything
+cd $WORKSPACE
+
+# This will set the DISTRO and MOCK_TARGET variables
+get_distro_and_target
+
+# Perform a clean-up
+sudo git clean -fxd
+
+# Make sure the dist directory is clean
+sudo rm -rf dist
+mkdir -p dist
+
+# Print some basic system info
+HOST=$(hostname --short)
+echo "Building on $(hostname) with the following env"
+echo "*****"
+env
+echo "*****"
+
+export LC_ALL=C # the following is vulnerable to i18n
+
+pkgs=( "chacractl>=0.0.21" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+# ask shaman which chacra instance to use
+chacra_url=`curl -f -u $SHAMAN_API_USER:$SHAMAN_API_KEY https://shaman.ceph.com/api/nodes/next/`
+# create the .chacractl config file using global variables
+make_chacractl_config $chacra_url
--- /dev/null
+- job:
+ name: cephmetrics
+ project-type: matrix
+ defaults: global
+ display-name: 'cephmetrics'
+ block-downstream: false
+ block-upstream: false
+ concurrent: true
+ properties:
+ - github:
+ url: https://github.com/ceph/cephmetrics
+ parameters:
+ - string:
+ name: BRANCH
+ description: "The git branch (or tag) to build"
+ default: "main"
+
+ - string:
+ name: DISTROS
+ description: "A list of distros to build for. Available options are: centos7, centos6"
+ default: "centos7"
+
+ - string:
+ name: ARCHS
+ description: "A list of architectures to build for. Available options are: x86_64 and arm64"
+ default: "x86_64"
+
+ - bool:
+ name: FORCE
+ description: "
+If this is unchecked, then nothing is built or pushed if the binaries already exist in chacra. This is the default.
+
+If this is checked, then the binaries will be built and pushed to chacra even if they already exist in chacra."
+
+ - string:
+ name: BUILD_VIRTUALENV
+ description: "Base parent path for virtualenv locations, set to avoid issues with extremely long paths that are incompatible with tools like pip. Defaults to '/tmp/' (note the trailing slash, which is required)."
+ default: "/tmp/"
+
+ execution-strategy:
+ combination-filter: DIST==AVAILABLE_DIST && ARCH==AVAILABLE_ARCH && (ARCH=="x86_64" || (ARCH == "arm64" && (DIST == "xenial" || DIST == "centos7")))
+ axes:
+ - axis:
+ type: label-expression
+ name: MACHINE_SIZE
+ values:
+ - small
+ - axis:
+ type: label-expression
+ name: AVAILABLE_ARCH
+ values:
+ - x86_64
+ - arm64
+ - axis:
+ type: label-expression
+ name: AVAILABLE_DIST
+ values:
+ - centos6
+ - centos7
+ - axis:
+ type: dynamic
+ name: DIST
+ values:
+ - DISTROS
+ - axis:
+ type: dynamic
+ name: ARCH
+ values:
+ - ARCHS
+
+ scm:
+ - git:
+ url: git@github.com:ceph/cephmetrics.git
+ # Use the SSH key attached to the ceph-jenkins GitHub account.
+ credentials-id: 'jenkins-build'
+ skip-tag: true
+ wipe-workspace: false
+
+ triggers:
+ - github
+
+ builders:
+ - shell: |
+ echo "Cleaning up top-level workarea (shared among workspaces)"
+ sudo rm -rf dist
+ sudo rm -rf venv
+ sudo rm -rf release
+ # rpm build scripts
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/setup
+ - ../../build/build_rpm
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - text:
+ credential-id: shaman-api-key
+ variable: SHAMAN_API_KEY
--- /dev/null
+#!/bin/bash
+
+# the following two helper functions are defined in scripts/build_utils.sh
+pkgs=( "ansible" "tox" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+# run ansible to get the current host to meet our requirements, using
+# a local connection with 'localhost' as the target host
+cd "$WORKSPACE/ceph-build/chacra-pull-requests/setup/playbooks"
+
+# make sure any shaman list file is removed. Once all nodes are clean,
+# this will no longer be needed.
+sudo rm -f /etc/apt/sources.list.d/shaman*
+
+$VENV/ansible-playbook -i "localhost," -c local setup.yml
+
+cd "$WORKSPACE/chacra"
+$VENV/tox -rv
--- /dev/null
+- scm:
+ name: chacra
+ scm:
+ - git:
+ url: https://github.com/ceph/chacra
+ branches:
+ - ${{sha1}}
+ refspec: +refs/pull/*:refs/remotes/origin/pr/*
+ browser: auto
+ timeout: 20
+ basedir: "chacra"
+ skip-tag: true
+ wipe-workspace: true
+
+- scm:
+ name: ceph-build
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-build.git
+ browser-url: https://github.com/ceph/ceph-build
+ timeout: 20
+ skip-tag: true
+ wipe-workspace: false
+ basedir: "ceph-build"
+ branches:
+ - origin/main
+
+
+- job:
+ name: chacra-pull-requests
+ description: Runs tox tests for chacra on each GitHub PR
+ project-type: freestyle
+ node: trusty && small
+ block-downstream: false
+ block-upstream: false
+ defaults: global
+ display-name: 'chacra: Pull Requests'
+ quiet-period: 5
+ retry-count: 3
+
+
+ properties:
+ - build-discarder:
+ days-to-keep: 15
+ num-to-keep: 30
+ artifact-days-to-keep: 15
+ artifact-num-to-keep: 15
+ - github:
+ url: https://github.com/ceph/chacra/
+
+ parameters:
+ - string:
+ name: sha1
+ description: "A pull request ID, like 'origin/pr/72/head'"
+
+ triggers:
+ - github-pull-request:
+ org-list:
+ - ceph
+ only-trigger-phrase: false
+ github-hooks: true
+ permit-all: false
+ auto-close-on-fail: false
+
+ scm:
+ - chacra
+ - ceph-build
+
+ builders:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/build
--- /dev/null
+[ssh_connection]
+pipelining=True
--- /dev/null
+---
+
+- hosts: localhost
+ user: jenkins-build
+ become: True
+
+ tasks:
+ - import_tasks: tasks/postgresql.yml
--- /dev/null
+---
+- name: update apt cache
+ apt:
+ update_cache: yes
+ become: yes
+
+# I don't see why this is important; why can't we
+# just install the latest version
+
+- name: set postgresql version on trusty
+ set_fact:
+ postgresql_version: "9.3"
+ when: ansible_distribution_release == "trusty"
+
+- name: set postgresql version on xenial
+ set_fact:
+ postgresql_version: "9.5"
+ when: ansible_distribution_release == "xenial"
+
+- name: set postgresql version on bionic
+ set_fact:
+ postgresql_version: "10"
+ python_version: "3"
+ when: ansible_distribution_release == "bionic"
+
+- name: set postgresql version on focal
+ set_fact:
+ postgresql_version: "12"
+ python_version: "3"
+ when: ansible_distribution_release == "focal"
+
+- name: set postgresql version on jammy
+ set_fact:
+ postgresql_version: "14"
+ python_version: "3"
+ when: ansible_distribution_release == "jammy"
+
+- name: install postgresql requirements
+ become: yes
+ apt:
+ name: "{{ item }}"
+ state: present
+ with_items:
+ - postgresql
+ - postgresql-common
+ - postgresql-contrib
+ - "postgresql-server-dev-{{ postgresql_version }}"
+ - "python{{ python_version|default('') }}-psycopg2"
+ tags:
+ - packages
+
+- name: ensure database service is up
+ service:
+ name: postgresql
+ state: started
+ enabled: yes
+ become: yes
+
+- name: allow users to connect locally
+ become: yes
+ lineinfile:
+ dest: "/etc/postgresql/{{ postgresql_version }}/main/pg_hba.conf"
+ # trust all ipv4 connections without a password
+ line: 'host all all 0.0.0.0/0 trust'
+ register: pg_hba_conf
+
+# This is the only task that actually cares about the python interpreter so we set it here
+- set_fact:
+ ansible_python_interpreter: "/usr/bin/python3"
+ when:
+ - python_version is defined
+ - python_version == "3"
+
+- name: make jenkins-build user
+ postgresql_user:
+ name: "jenkins-build"
+ password: "secret"
+ role_attr_flags: SUPERUSER
+ login_user: postgres
+ become_user: postgres
+ become: yes
+
+- service:
+ name: postgresql
+ state: restarted
+ become: true
--- /dev/null
+#! /usr/bin/bash
+set -ex
+
+BRANCH=`branch_slash_filter $BRANCH`
+
+# Only do actual work when we are an RPM distro
+if test "$DISTRO" != "fedora" -a "$DISTRO" != "centos" -a "$DISTRO" != "rhel"; then
+ exit 0
+fi
+
+## Install any setup-time deps
+# We need these for the build
+sudo yum install -y python-devel epydoc python-setuptools systemd-units
+
+# We use fpm to create the package
+sudo yum install -y rubygems ruby-devel
+sudo gem install fpm
+
+
+## Get some basic information about the system and the repository
+# Get version
+get_rpm_dist
+VERSION="$(git describe --abbrev=0 --tags HEAD | sed -e 's/v//1;')"
+REVISION="$(git describe --tags HEAD | sed -e 's/v//1;' | cut -d - -f 2- | sed 's/-/./')"
+if [ "$VERSION" = "$REVISION" ]; then
+ REVISION="1"
+fi
+
+## Create the package
+# Make sure there are no other packages, first
+rm -f *.rpm
+
+# Create the package
+fpm -s python -t rpm -n python-configshell -v ${VERSION} --iteration ${REVISION} -d python-kmod -d python-six -d python-pyudev setup.py
+
+
+## Upload the created RPMs to chacra
+chacra_endpoint="python-configshell/${BRANCH}/${GIT_COMMIT}/${DISTRO}/${RELEASE}"
+
+[ "$FORCE" = true ] && chacra_flags="--force" || chacra_flags=""
+
+# push binaries to chacra
+find *.rpm | $VENV/chacractl binary ${chacra_flags} create ${chacra_endpoint}/noarch/
+PACKAGE_MANAGER_VERSION=$(rpm --queryformat '%{VERSION}-%{RELEASE}\n' -qp $(find *.rpm | egrep "\.noarch\.rpm" | head -1))
+
+# write json file with build info
+cat > $WORKSPACE/repo-extra.json << EOF
+{
+ "version":"$VERSION",
+ "package_manager_version":"$PACKAGE_MANAGER_VERSION",
+ "build_url":"$BUILD_URL",
+ "root_build_cause":"$ROOT_BUILD_CAUSE",
+ "node_name":"$NODE_NAME",
+ "job_name":"$JOB_NAME"
+}
+EOF
+# post the json to repo-extra json to chacra
+curl -X POST -H "Content-Type:application/json" --data "@$WORKSPACE/repo-extra.json" -u $CHACRACTL_USER:$CHACRACTL_KEY ${chacra_url}repos/${chacra_endpoint}/extra/
+
+# start repo creation
+$VENV/chacractl repo update ${chacra_endpoint}
+
+echo Check the status of the repo at: https://shaman.ceph.com/api/repos/${chacra_endpoint}
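The fpm version/iteration values above come from two `git describe` calls. A standalone sketch of that parsing, with made-up tag and describe strings in place of a real repository:

```shell
# Made-up values standing in for the two git describe calls in the script
LATEST_TAG="v2.1.0"                # git describe --abbrev=0 --tags HEAD
FULL_DESCRIBE="v2.1.0-4-gabc1234"  # git describe --tags HEAD

VERSION="$(echo $LATEST_TAG | sed -e 's/v//1;')"
REVISION="$(echo $FULL_DESCRIBE | sed -e 's/v//1;' | cut -d - -f 2- | sed 's/-/./')"
# when HEAD sits exactly on the tag, both describes match and there is no iteration
if [ "$VERSION" = "$REVISION" ]; then
    REVISION="1"
fi
echo "${VERSION}-${REVISION}"
```

The second `sed 's/-/./'` replaces only the first '-', turning `4-gabc1234` into an RPM-safe iteration of `4.gabc1234`.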
--- /dev/null
+#! /usr/bin/bash
+#
+# Ceph distributed storage system
+#
+# Copyright (C) 2016 Red Hat <contact@redhat.com>
+#
+# Author: Boris Ranto <branto@redhat.com>
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+set -ex
+
+# Make sure we are in the top-level directory before we do anything
+cd $WORKSPACE
+
+# This will set the DISTRO and MOCK_TARGET variables
+get_distro_and_target
+
+# Perform a clean-up
+git clean -fxd
+
+# Make sure the dist directory is clean
+rm -rf dist
+mkdir -p dist
+
+# Print some basic system info
+HOST=$(hostname --short)
+echo "Building on $(hostname) with the following env"
+echo "*****"
+env
+echo "*****"
+
+export LC_ALL=C # the following is vulnerable to i18n
+
+pkgs=( "chacractl>=0.0.21" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+# ask shaman which chacra instance to use
+chacra_url=`curl -f -u $SHAMAN_API_USER:$SHAMAN_API_KEY https://shaman.ceph.com/api/nodes/next/`
+# create the .chacractl config file using global variables
+make_chacractl_config $chacra_url
--- /dev/null
+- job:
+ name: configshell-fb
+ project-type: matrix
+ defaults: global
+ display-name: 'configshell-fb'
+ block-downstream: false
+ block-upstream: false
+ concurrent: true
+ parameters:
+ - string:
+ name: BRANCH
+ description: "The git branch (or tag) to build"
+ default: main
+
+ - string:
+ name: DISTROS
+ description: "A list of distros to build for. Available options are: xenial, centos7, centos6, trusty-pbuilder, precise, wheezy, and jessie"
+ default: "centos7"
+
+ - string:
+ name: ARCHS
+ description: "A list of architectures to build for. Available options are: x86_64"
+ default: "x86_64"
+
+ - bool:
+ name: FORCE
+ description: "
+If this is unchecked, then nothing is built or pushed if the binaries already exist in chacra. This is the default.
+
+If this is checked, then the binaries will be built and pushed to chacra even if they already exist in chacra."
+
+ - string:
+ name: BUILD_VIRTUALENV
+ description: "Base parent path for virtualenv locations, set to avoid issues with extremely long paths that are incompatible with tools like pip. Defaults to '/tmp/' (note the trailing slash, which is required)."
+ default: "/tmp/"
+
+ execution-strategy:
+ combination-filter: DIST==AVAILABLE_DIST && ARCH==AVAILABLE_ARCH && (ARCH=="x86_64" || (ARCH == "arm64" && (DIST == "xenial" || DIST == "centos7")))
+ axes:
+ - axis:
+ type: label-expression
+ name: MACHINE_SIZE
+ values:
+ - small
+ - axis:
+ type: label-expression
+ name: AVAILABLE_ARCH
+ values:
+ - x86_64
+ - axis:
+ type: label-expression
+ name: AVAILABLE_DIST
+ values:
+ - centos7
+ - axis:
+ type: dynamic
+ name: DIST
+ values:
+ - DISTROS
+ - axis:
+ type: dynamic
+ name: ARCH
+ values:
+ - ARCHS
+
+ scm:
+ - git:
+ url: git@github.com:open-iscsi/configshell-fb.git
+ # Use the SSH key attached to the ceph-jenkins GitHub account.
+ credentials-id: 'jenkins-build'
+ branches:
+ - $BRANCH
+ skip-tag: true
+ wipe-workspace: true
+
+ builders:
+ - shell: |
+ echo "Cleaning up top-level workarea (shared among workspaces)"
+ rm -rf dist
+ rm -rf venv
+ rm -rf release
+ # rpm build scripts
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/setup
+ - ../../build/build_rpm
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - text:
+ credential-id: chacractl-key
+ variable: CHACRACTL_KEY
+ - credentials-binding:
+ - text:
+ credential-id: shaman-api-key
+ variable: SHAMAN_API_KEY
--- /dev/null
+#!/usr/bin/env bash
+
+#
+# This script uses Jenkins Job Builder to generate the configuration for its own
+# job so that it automatically configures all other jobs that have their YAML
+# definitions.
+#
+
+set -euxo pipefail
+
+# the jjb version is an unreleased patch from dmick that adds support for 'ancestry' build
+# choosers for ceph-dev-new-trigger, allowing it to be configured to only look at branches
+# up to N days old. This is arguably better in general, and is especially useful when
+# jenkins (or that plugin) decides it needs to build every branch.
+#
+# when jenkins-job-builder 6.5 or later is released, this requirement should change to
+# jenkins-job-builder>=6.5
+
+pkgs=( "dataclasses" "jenkins-job-builder>=6.4.3" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]" latest
+
+# Wipe out JJB's cache if $FORCE is set.
+[ "$FORCE" = true ] && rm -rf "$HOME/.cache/jenkins_jobs/"
+
+# Each jenkins controller will write its own config file when the
+# jenkins-job-builder gets run
+JENKINS_FQDN=$(echo $JENKINS_URL | awk -F/ '{print $3}')
+JJB_CONFIG="$HOME/.jenkins_jobs.$JENKINS_FQDN.ini"
+
+# write the programmatically computed JJB config using env vars from Jenkins
+cat > $JJB_CONFIG << EOF
+[jenkins]
+user=$JOB_BUILDER_USER
+password=$JOB_BUILDER_PASS
+url=$JENKINS_URL
+EOF
+
+# Make a temp dir to store job configs created using `jenkins-jobs test`
+TEMPDIR=$(mktemp -d)
+
+# Test every project directory in the repository and update its jobs if it provides
+# a config/definitions directory (every project should define one)
+for dir in `find . -maxdepth 1 -path ./.git -prune -o -type d -print`; do
+ definitions_dir="$dir/config/definitions"
+ if [ -d "$definitions_dir" ]; then
+ echo "found definitions directory: $definitions_dir"
+
+ # Set which JJB config file we should use based on whether an override
+ # file is present in the job's config dir
+ if [ -f "$dir/config/JENKINS_URL" ]; then
+ JENKINS_URL_OVERRIDE=$(cat $dir/config/JENKINS_URL)
+ echo "found JENKINS_URL override file. using $JENKINS_URL_OVERRIDE"
+ JJB_CONFIG="$HOME/.jenkins_jobs.$JENKINS_URL_OVERRIDE.ini"
+ else
+ JJB_CONFIG="$HOME/.jenkins_jobs.jenkins.ceph.com.ini"
+ fi
+
+ # Each jenkins-job-builder job should only update the controller
+ # that started it. This prevents collisions.
+ if [[ "$JJB_CONFIG" == "$HOME/.jenkins_jobs.$JENKINS_FQDN.ini" ]]; then
+ # Test the definitions first
+ $VENV/jenkins-jobs --log_level DEBUG --conf $JJB_CONFIG test $definitions_dir -o $TEMPDIR
+
+ # Update Jenkins with the output if they passed the test phase
+ # Note that this needs proper permissions with the right credentials to the
+ # correct Jenkins instance.
+ $VENV/jenkins-jobs --log_level DEBUG --conf $JJB_CONFIG update $definitions_dir
+ fi
+ fi
+done
+
+# Set JJB_CONFIG back to the one our controller wrote so the var can be used in the deletion task below
+JJB_CONFIG="$HOME/.jenkins_jobs.$JENKINS_FQDN.ini"
+
+# Delete any jobs on our controller that did not get job XML written during `jenkins-jobs test`.
+# The jenkins-job-builder job itself never gets a config written, so we filter it out with
+# `grep -v` to keep it from being deleted. Jobs whose names start with 'preserve-' are also
+# kept, on the assumption that they are jobs under development that will be deleted (or
+# renamed) once development is finished.
+for JOB in $(curl -s https://$JENKINS_FQDN/api/json | jq -r '.jobs[].name' | egrep -v 'jenkins-job-builder|^preserve-' | sort); do
+ if [ ! -f $TEMPDIR/$JOB ]; then
+ echo "Did not find job definition for $JOB. Deleting!"
+ $VENV/jenkins-jobs --log_level DEBUG --conf $JJB_CONFIG delete $JOB
+ fi
+done
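The per-controller config naming above depends on pulling the FQDN out of `$JENKINS_URL`. A standalone sketch of that awk extraction, with a sample URL in place of the Jenkins-injected variable:

```shell
# Sample of the JENKINS_URL environment variable Jenkins injects into builds
JENKINS_URL="https://jenkins.ceph.com/"

# With '/' as the field separator, field 3 of "https://host/..." is the host part
JENKINS_FQDN=$(echo $JENKINS_URL | awk -F/ '{print $3}')
JJB_CONFIG="$HOME/.jenkins_jobs.$JENKINS_FQDN.ini"
echo "$JENKINS_FQDN"
```

This is what lets several Jenkins controllers share the repo: each one only touches the `.jenkins_jobs.<its-own-fqdn>.ini` config it wrote.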
--- /dev/null
+- job:
+ name: jenkins-job-builder
+ node: small
+ project-type: freestyle
+ defaults: global
+ display-name: 'Jenkins Job Builder'
+ concurrent: true
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ retry-count: 3
+ properties:
+ - github:
+ url: https://github.com/ceph/ceph-build
+
+ parameters:
+ - bool:
+ name: FORCE
+ default: false
+ description: "
+If this is unchecked, then JJB will use its cache to update jobs. This makes this JJB job run faster, but it could cause JJB to fail to update some Jenkins jobs if the jobs have been changed outside of this JJB job's workflow. (This is the default.)
+
+If this is checked, JJB will wipe out its cache and force each job to align with the configurations in main."
+
+ triggers:
+ - github
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - username-password-separated:
+ credential-id: jenkins-api-token
+ username: JOB_BUILDER_USER
+ password: JOB_BUILDER_PASS
+
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-build
+ branches:
+ - main
+ browser: auto
+ timeout: 20
+
+ builders:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/build
+
+ publishers:
+ - postbuildscript:
+ builders:
+ - role: SLAVE
+ build-on:
+ - SUCCESS
+ - NOT_BUILT
+ - UNSTABLE
+ - FAILURE
+ - ABORTED
+ build-steps:
+ - shell: 'rm -f $HOME/.jenkins_jobs.*.ini'
--- /dev/null
+- job:
+ name: kernel-trigger
+ node: built-in
+ project-type: freestyle
+ defaults: global
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ properties:
+ - build-discarder:
+ days-to-keep: 1
+ num-to-keep: 10
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ - github:
+ url: https://github.com/ceph/ceph-client
+ discard-old-builds: true
+
+ triggers:
+ - github
+
+ scm:
+ - raw:
+ xml: |
+ <scm class="hudson.plugins.git.GitSCM">
+ <configVersion>2</configVersion>
+ <userRemoteConfigs>
+ <hudson.plugins.git.UserRemoteConfig>
+ <name>origin</name>
+ <refspec>+refs/heads/*:refs/remotes/origin/*</refspec>
+ <url>https://github.com/ceph/ceph-client.git</url>
+ </hudson.plugins.git.UserRemoteConfig>
+ </userRemoteConfigs>
+ <branches>
+ <hudson.plugins.git.BranchSpec>
+ <name>origin/testing</name>
+ </hudson.plugins.git.BranchSpec>
+ <hudson.plugins.git.BranchSpec>
+ <name>origin/master</name>
+ </hudson.plugins.git.BranchSpec>
+ <hudson.plugins.git.BranchSpec>
+ <name>origin/for-linus</name>
+ </hudson.plugins.git.BranchSpec>
+ <hudson.plugins.git.BranchSpec>
+ <name>*/wip*</name>
+ </hudson.plugins.git.BranchSpec>
+ <hudson.plugins.git.BranchSpec>
+ <name>origin/ceph-iscsi*</name>
+ </hudson.plugins.git.BranchSpec>
+ </branches>
+ <disableSubmodules>false</disableSubmodules>
+ <recursiveSubmodules>false</recursiveSubmodules>
+ <doGenerateSubmoduleConfigurations>false</doGenerateSubmoduleConfigurations>
+ <remotePoll>false</remotePoll>
+ <gitTool>Default</gitTool>
+ <submoduleCfg class="list"/>
+ <reference/>
+ <gitConfigName/>
+ <gitConfigEmail/>
+ <extensions>
+ <hudson.plugins.git.extensions.impl.CloneOption>
+ <shallow>false</shallow>
+ <noTags>true</noTags>
+ <timeout>120</timeout>
+ </hudson.plugins.git.extensions.impl.CloneOption>
+ <hudson.plugins.git.extensions.impl.CheckoutOption>
+ <timeout>20</timeout>
+ </hudson.plugins.git.extensions.impl.CheckoutOption>
+ <hudson.plugins.git.extensions.impl.WipeWorkspace/>
+ </extensions>
+ </scm>
+
+ builders:
+ - trigger-builds:
+ - project: 'kernel'
+ predefined-parameters: |
+ BRANCH=${{GIT_BRANCH}}
+ FORCE=True
--- /dev/null
+#!/bin/bash
+set -ex
+
+cd $WORKSPACE
+
+# append -rc to the ref if we are doing a release-candidate build
+chacra_ref="$BRANCH"
+[ "$RC" = true ] && chacra_ref="$BRANCH-rc"
+[ "$TEST" = true ] && chacra_ref="test"
+
+DEB_ARCH=$(dpkg-architecture -qDEB_BUILD_ARCH)
+DISTRO=""
+case $DIST in
+ jessie|wheezy)
+ DISTRO="debian"
+ ;;
+ *)
+ DISTRO="ubuntu"
+ ;;
+esac
+
+debian_version=${VERSION}-1
+
+BPVER=$(gen_debian_version "$debian_version" "$DIST")
+
+chacra_endpoint="kernel/${chacra_ref}/${GIT_COMMIT}/${DISTRO}/${DIST}"
+chacra_check_url="${chacra_endpoint}/kernel_${BPVER}_${DEB_ARCH}.deb"
+
+if [ "$THROWAWAY" = false ] ; then
+ # this exists in scripts/build_utils.sh
+ check_binary_existence "$VENV" "$chacra_check_url"
+fi
+
+HOST=$(hostname --short)
+echo "Building on $(hostname)"
+echo " DIST=${DIST}"
+echo " DEB_ARCH=${DEB_ARCH}"
+echo " WS=$WORKSPACE"
+echo " PWD=$(pwd)"
+echo "*****"
+env
+echo "*****"
+
+# Clean the upper level debs
+rm -f ../*.deb
+
+# Build the debs
+echo "Building DEBs"
+kernelrelease=${kernelrelease:-$(make -s kernelrelease)}
+KDEB_SOURCENAME="linux-$kernelrelease" make -j"$(getconf _NPROCESSORS_ONLN)" bindeb-pkg
+
+# Make sure we execute at the top level directory
+cd "$WORKSPACE"
+
+[ "$FORCE" = true ] && chacra_flags="--force" || chacra_flags=""
+
+if [ "$THROWAWAY" = false ] ; then
+ # push binaries to chacra
+ find ../*.deb | "$VENV/chacractl" binary ${chacra_flags} create "${chacra_endpoint}/${ARCH}/"
+ PACKAGE_MANAGER_VERSION=$(dpkg-deb -f "$(find ../*"$DEB_ARCH".deb | head -1)" Version)
+
+ # write json file with build info
+ cat > $WORKSPACE/repo-extra.json << EOF
+{
+ "version":"$kernelrelease",
+ "package_manager_version":"$PACKAGE_MANAGER_VERSION",
+ "build_url":"$BUILD_URL",
+ "root_build_cause":"$ROOT_BUILD_CAUSE",
+ "node_name":"$NODE_NAME",
+ "job_name":"$JOB_NAME"
+}
+EOF
+ # post the repo-extra json to chacra
+ curl -X POST -H "Content-Type:application/json" --data "@$WORKSPACE/repo-extra.json" -u "$CHACRACTL_USER:$CHACRACTL_KEY" "${chacra_url}repos/${chacra_endpoint}/extra/"
+
+ # start repo creation
+ "$VENV/chacractl" repo update "${chacra_endpoint}"
+
+ echo "Check the status of the repo at: https://shaman.ceph.com/api/repos/${chacra_endpoint}"
+fi
+
+# update shaman with the completed build status
+update_build_status "completed" "kernel" "$NORMAL_DISTRO" "$NORMAL_DISTRO_VERSION" "$ARCH"
--- /dev/null
+#!/bin/bash
+set -euo pipefail
+set -x
+
+cd "${WORKSPACE}"
+
+# -------- target distro detection (do this ONCE) ----------
+# If the pipeline told us rocky10, keep it and don't let get_rpm_dist overwrite it.
+TARGET_DIST="${DIST:-}"
+TARGET_ARCH="${ARCH:-x86_64}"
+
+if [[ "${TARGET_DIST}" == "rocky10" ]]; then
+ DIST="rocky10"
+ DISTRO="rocky"
+ RELEASE="10"
+ NORMAL_DISTRO="rocky"
+ NORMAL_DISTRO_VERSION="10"
+else
+ # Fall back to host-derived values
+ get_rpm_dist # sets DISTRO, DIST, RELEASE, NORMAL_DISTRO, NORMAL_DISTRO_VERSION
+fi
+
+echo "Building on $(hostname)"
+echo " DIST=${DIST}"
+echo " ARCH=${TARGET_ARCH}"
+echo " WS=${WORKSPACE}"
+echo " PWD=$(pwd)"
+echo "*****"; env; echo "*****"
+
+# ---------- chacra endpoint ----------
+chacra_endpoint="kernel/${BRANCH}/${GIT_COMMIT}/${DISTRO}/${RELEASE}"
+
+# If you know RPM_RELEASE externally, export it before this script.
+# Otherwise, skip the pre-build existence check (we'll compute later).
+if [[ "${THROWAWAY:-false}" = false && -n "${RPM_RELEASE:-}" ]]; then
+ chacra_check_url="${chacra_endpoint}/${TARGET_ARCH}/kernel-${VERSION}-${RPM_RELEASE}.${DIST}.${TARGET_ARCH}.rpm"
+ check_binary_existence "${VENV}" "${chacra_check_url}"
+else
+ echo "Skipping pre-build chacra existence check (RPM_RELEASE not set)."
+fi
+
+# ---------- helpers ----------
+have() { command -v "$1" >/dev/null 2>&1; }
+
+build_in_container() {
+ local image="${KERNEL_BUILDER_IMAGE:-rockylinux:10}"
+ local runner
+ if have podman; then runner=podman; elif have docker; then runner=docker; else
+ echo "ERROR: Need podman or docker for containerized build." >&2; exit 2
+ fi
+
+ local host_uid host_gid cpus
+ host_uid="$(id -u)"
+ host_gid="$(id -g)"
+ cpus="$(getconf _NPROCESSORS_ONLN 2>/dev/null || nproc || echo 4)"
+
+ "$runner" pull "$image" || true
+
+ "$runner" run --rm -t \
+ -v "${WORKSPACE}:${WORKSPACE}:z" \
+ -w "${WORKSPACE}" \
+ -e WORKSPACE="${WORKSPACE}" \
+ -e ARCH="${TARGET_ARCH}" \
+ -e CPUS="${cpus}" \
+ -e HOST_UID="${host_uid}" \
+ -e HOST_GID="${host_gid}" \
+ "$image" bash -lc '
+ set -euo pipefail
+
+ # 1) deps as root
+ if command -v dnf >/dev/null 2>&1; then
+ dnf -y install rpm-build make gcc bc bison flex elfutils-libelf-devel openssl-devel \
+ dwarves perl findutils git util-linux which tar xz
+ else
+ yum -y install rpm-build make gcc bc bison flex elfutils-libelf-devel openssl-devel \
+ dwarves perl findutils git util-linux which tar xz
+ fi
+
+ # 2) ensure workspace owned by host uid/gid
+ chown -R "${HOST_UID}:${HOST_GID}" "${WORKSPACE}"
+
+ # 3) write build script (avoid set -u heredoc pitfalls)
+ cat > "${WORKSPACE}/.kernel-build.sh" <<'\''EOS'\''
+set -euo pipefail
+cd "${WORKSPACE}"
+rm -rf ./rpmbuild/
+make olddefconfig
+make -j"${CPUS}" binrpm-pkg
+EOS
+ chmod +x "${WORKSPACE}/.kernel-build.sh"
+ chown "${HOST_UID}:${HOST_GID}" "${WORKSPACE}/.kernel-build.sh"
+
+ # 4) run as numeric uid:gid without requiring passwd entries or profiles
+ if command -v chroot >/dev/null 2>&1 && chroot --help 2>&1 | grep -q -- "--userspec"; then
+ chroot --userspec="${HOST_UID}:${HOST_GID}" / \
+ env -i HOME="${WORKSPACE}" PATH="/usr/sbin:/usr/bin:/sbin:/bin" \
+ WORKSPACE="${WORKSPACE}" ARCH="${ARCH}" CPUS="${CPUS}" \
+ bash --noprofile --norc "${WORKSPACE}/.kernel-build.sh"
+ elif command -v setpriv >/dev/null 2>&1; then
+ ( export HOME="${WORKSPACE}" PATH="/usr/sbin:/usr/bin:/sbin:/bin" \
+ WORKSPACE="${WORKSPACE}" ARCH="${ARCH}" CPUS="${CPUS}"
+ setpriv --reuid="${HOST_UID}" --regid="${HOST_GID}" --clear-groups \
+ bash --noprofile --norc "${WORKSPACE}/.kernel-build.sh"
+ )
+ else
+ # last resort: create a throwaway user
+ getent group "${HOST_GID}" >/dev/null || groupadd -g "${HOST_GID}" hostgrp
+ if ! getent passwd "${HOST_UID}" >/dev/null; then
+ useradd -u "${HOST_UID}" -g "${HOST_GID}" -M -s /bin/bash -d "${WORKSPACE}" hostuser
+ fi
+ su -s /bin/bash -l "$(getent passwd "${HOST_UID}" | cut -d: -f1)" -c \
+ "bash --noprofile --norc '${WORKSPACE}/.kernel-build.sh'"
+ fi
+ '
+}
+
+build_on_host() {
+ rm -rf ./rpmbuild/
+ make olddefconfig
+ make -j"$(getconf _NPROCESSORS_ONLN)" binrpm-pkg
+}
+
+# ---------- build ----------
+if [[ "${DIST}" == "rocky10" ]]; then
+ echo "Detected target rocky10 → building in container"
+ build_in_container
+else
+ echo "Target is ${DIST} → building on host"
+ build_on_host
+fi
+
+# ---------- publish ----------
+[ "${FORCE:-false}" = true ] && chacra_flags="--force" || chacra_flags=""
+
+if [[ "${THROWAWAY:-false}" = false ]]; then
+ # Push RPMs
+ find ./rpmbuild/ -type f -name '*.rpm' \
+ | "${VENV}/chacractl" binary ${chacra_flags} create "${chacra_endpoint}/${TARGET_ARCH}/"
+
+ # Derive PACKAGE_MANAGER_VERSION from a produced RPM
+ PACKAGE_MANAGER_VERSION="$(
+ rpm --queryformat '%{VERSION}-%{RELEASE}\n' -qp \
+ "$(find ./rpmbuild/ -type f -name "*.${TARGET_ARCH}.rpm" | head -1)"
+ )"
+
+ # If RPM_RELEASE was not set earlier, compute it now from the same RPM
+ if [[ -z "${RPM_RELEASE:-}" ]]; then
+ RPM_RELEASE="$(rpm -qp --qf '%{RELEASE}\n' \
+ "$(find ./rpmbuild/ -type f -name "*.${TARGET_ARCH}.rpm" | head -1)")"
+ fi
+
+ cat > "${WORKSPACE}/repo-extra.json" <<EOF
+{
+ "version":"${kernelrelease:-$(make -s kernelrelease)}",
+ "package_manager_version":"${PACKAGE_MANAGER_VERSION}",
+ "build_url":"${BUILD_URL}",
+ "root_build_cause":"${ROOT_BUILD_CAUSE}",
+ "node_name":"${NODE_NAME}",
+ "job_name":"${JOB_NAME}"
+}
+EOF
+
+ curl -X POST -H "Content-Type:application/json" \
+ --data "@${WORKSPACE}/repo-extra.json" \
+ -u "${CHACRACTL_USER}:${CHACRACTL_KEY}" \
+ "${chacra_url}repos/${chacra_endpoint}/extra/"
+
+ "${VENV}/chacractl" repo update "${chacra_endpoint}"
+ echo "Check repo status: https://shaman.ceph.com/api/repos/${chacra_endpoint}"
+fi
+
+# ---------- shaman status ----------
+update_build_status "completed" "kernel" "${NORMAL_DISTRO}" "${NORMAL_DISTRO_VERSION}" "${TARGET_ARCH}"
--- /dev/null
+#!/bin/bash -ex
+
+# note: the failed_build_status call relies on normalized variable names that
+# are inferred by the builds themselves. If the build fails before these are
+# set, they will be posted with empty values.
+BRANCH=$(branch_slash_filter "$BRANCH")
+
+# update shaman with the failed build status
+failed_build_status "kernel" "$NORMAL_DISTRO" "$NORMAL_DISTRO_VERSION" "$NORMAL_ARCH"
--- /dev/null
+#!/usr/bin/env bash
+
+cat > .config << EOF
+#
+# Automatically generated file; DO NOT EDIT.
+# Linux/x86 5.7.0-rc5 Kernel Configuration
+#
+
+#
+# Compiler: gcc (GCC) 8.3.1 20190311 (Red Hat 8.3.1-3)
+#
+CONFIG_CC_IS_GCC=y
+CONFIG_GCC_VERSION=80301
+CONFIG_LD_VERSION=230000000
+CONFIG_CLANG_VERSION=0
+CONFIG_CC_CAN_LINK=y
+CONFIG_CC_HAS_ASM_GOTO=y
+CONFIG_CC_HAS_ASM_INLINE=y
+CONFIG_IRQ_WORK=y
+CONFIG_BUILDTIME_TABLE_SORT=y
+CONFIG_THREAD_INFO_IN_TASK=y
+
+#
+# General setup
+#
+CONFIG_INIT_ENV_ARG_LIMIT=32
+# CONFIG_COMPILE_TEST is not set
+CONFIG_LOCALVERSION=""
+CONFIG_LOCALVERSION_AUTO=y
+CONFIG_BUILD_SALT=""
+CONFIG_HAVE_KERNEL_GZIP=y
+CONFIG_HAVE_KERNEL_BZIP2=y
+CONFIG_HAVE_KERNEL_LZMA=y
+CONFIG_HAVE_KERNEL_XZ=y
+CONFIG_HAVE_KERNEL_LZO=y
+CONFIG_HAVE_KERNEL_LZ4=y
+CONFIG_KERNEL_GZIP=y
+# CONFIG_KERNEL_BZIP2 is not set
+# CONFIG_KERNEL_LZMA is not set
+# CONFIG_KERNEL_XZ is not set
+# CONFIG_KERNEL_LZO is not set
+# CONFIG_KERNEL_LZ4 is not set
+CONFIG_DEFAULT_HOSTNAME="(none)"
+CONFIG_SWAP=y
+CONFIG_SYSVIPC=y
+CONFIG_SYSVIPC_SYSCTL=y
+CONFIG_POSIX_MQUEUE=y
+CONFIG_POSIX_MQUEUE_SYSCTL=y
+CONFIG_CROSS_MEMORY_ATTACH=y
+CONFIG_USELIB=y
+CONFIG_AUDIT=y
+CONFIG_HAVE_ARCH_AUDITSYSCALL=y
+CONFIG_AUDITSYSCALL=y
+
+#
+# IRQ subsystem
+#
+CONFIG_GENERIC_IRQ_PROBE=y
+CONFIG_GENERIC_IRQ_SHOW=y
+CONFIG_GENERIC_IRQ_EFFECTIVE_AFF_MASK=y
+CONFIG_GENERIC_PENDING_IRQ=y
+CONFIG_GENERIC_IRQ_MIGRATION=y
+CONFIG_HARDIRQS_SW_RESEND=y
+CONFIG_GENERIC_IRQ_CHIP=y
+CONFIG_IRQ_DOMAIN=y
+CONFIG_IRQ_DOMAIN_HIERARCHY=y
+CONFIG_GENERIC_MSI_IRQ=y
+CONFIG_GENERIC_MSI_IRQ_DOMAIN=y
+CONFIG_IRQ_MSI_IOMMU=y
+CONFIG_GENERIC_IRQ_MATRIX_ALLOCATOR=y
+CONFIG_GENERIC_IRQ_RESERVATION_MODE=y
+CONFIG_IRQ_FORCED_THREADING=y
+CONFIG_SPARSE_IRQ=y
+# CONFIG_GENERIC_IRQ_DEBUGFS is not set
+# end of IRQ subsystem
+
+CONFIG_CLOCKSOURCE_WATCHDOG=y
+CONFIG_ARCH_CLOCKSOURCE_INIT=y
+CONFIG_CLOCKSOURCE_VALIDATE_LAST_CYCLE=y
+CONFIG_GENERIC_TIME_VSYSCALL=y
+CONFIG_GENERIC_CLOCKEVENTS=y
+CONFIG_GENERIC_CLOCKEVENTS_BROADCAST=y
+CONFIG_GENERIC_CLOCKEVENTS_MIN_ADJUST=y
+CONFIG_GENERIC_CMOS_UPDATE=y
+
+#
+# Timers subsystem
+#
+CONFIG_TICK_ONESHOT=y
+CONFIG_NO_HZ_COMMON=y
+# CONFIG_HZ_PERIODIC is not set
+CONFIG_NO_HZ_IDLE=y
+# CONFIG_NO_HZ_FULL is not set
+CONFIG_NO_HZ=y
+CONFIG_HIGH_RES_TIMERS=y
+# end of Timers subsystem
+
+# CONFIG_PREEMPT_NONE is not set
+CONFIG_PREEMPT_VOLUNTARY=y
+# CONFIG_PREEMPT is not set
+CONFIG_PREEMPT_COUNT=y
+
+#
+# CPU/Task time and stats accounting
+#
+CONFIG_TICK_CPU_ACCOUNTING=y
+# CONFIG_VIRT_CPU_ACCOUNTING_GEN is not set
+# CONFIG_IRQ_TIME_ACCOUNTING is not set
+# CONFIG_SCHED_THERMAL_PRESSURE is not set
+CONFIG_BSD_PROCESS_ACCT=y
+CONFIG_BSD_PROCESS_ACCT_V3=y
+CONFIG_TASKSTATS=y
+CONFIG_TASK_DELAY_ACCT=y
+CONFIG_TASK_XACCT=y
+CONFIG_TASK_IO_ACCOUNTING=y
+# CONFIG_PSI is not set
+# end of CPU/Task time and stats accounting
+
+CONFIG_CPU_ISOLATION=y
+
+#
+# RCU Subsystem
+#
+CONFIG_TREE_RCU=y
+# CONFIG_RCU_EXPERT is not set
+CONFIG_SRCU=y
+CONFIG_TREE_SRCU=y
+CONFIG_RCU_STALL_COMMON=y
+CONFIG_RCU_NEED_SEGCBLIST=y
+# end of RCU Subsystem
+
+CONFIG_BUILD_BIN2C=y
+# CONFIG_IKCONFIG is not set
+# CONFIG_IKHEADERS is not set
+CONFIG_LOG_BUF_SHIFT=18
+CONFIG_LOG_CPU_MAX_BUF_SHIFT=12
+CONFIG_PRINTK_SAFE_LOG_BUF_SHIFT=13
+CONFIG_HAVE_UNSTABLE_SCHED_CLOCK=y
+
+#
+# Scheduler features
+#
+# CONFIG_UCLAMP_TASK is not set
+# end of Scheduler features
+
+CONFIG_ARCH_SUPPORTS_NUMA_BALANCING=y
+CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH=y
+CONFIG_CC_HAS_INT128=y
+CONFIG_ARCH_SUPPORTS_INT128=y
+CONFIG_NUMA_BALANCING=y
+CONFIG_NUMA_BALANCING_DEFAULT_ENABLED=y
+CONFIG_CGROUPS=y
+CONFIG_PAGE_COUNTER=y
+CONFIG_MEMCG=y
+CONFIG_MEMCG_SWAP=y
+# CONFIG_MEMCG_SWAP_ENABLED is not set
+CONFIG_MEMCG_KMEM=y
+CONFIG_BLK_CGROUP=y
+CONFIG_CGROUP_WRITEBACK=y
+CONFIG_CGROUP_SCHED=y
+CONFIG_FAIR_GROUP_SCHED=y
+CONFIG_CFS_BANDWIDTH=y
+# CONFIG_RT_GROUP_SCHED is not set
+CONFIG_CGROUP_PIDS=y
+# CONFIG_CGROUP_RDMA is not set
+CONFIG_CGROUP_FREEZER=y
+CONFIG_CGROUP_HUGETLB=y
+CONFIG_CPUSETS=y
+CONFIG_PROC_PID_CPUSET=y
+CONFIG_CGROUP_DEVICE=y
+CONFIG_CGROUP_CPUACCT=y
+CONFIG_CGROUP_PERF=y
+CONFIG_CGROUP_BPF=y
+# CONFIG_CGROUP_DEBUG is not set
+CONFIG_SOCK_CGROUP_DATA=y
+CONFIG_NAMESPACES=y
+CONFIG_UTS_NS=y
+CONFIG_TIME_NS=y
+CONFIG_IPC_NS=y
+CONFIG_USER_NS=y
+CONFIG_PID_NS=y
+CONFIG_NET_NS=y
+CONFIG_CHECKPOINT_RESTORE=y
+CONFIG_SCHED_AUTOGROUP=y
+# CONFIG_SYSFS_DEPRECATED is not set
+CONFIG_RELAY=y
+CONFIG_BLK_DEV_INITRD=y
+CONFIG_INITRAMFS_SOURCE=""
+CONFIG_RD_GZIP=y
+CONFIG_RD_BZIP2=y
+CONFIG_RD_LZMA=y
+CONFIG_RD_XZ=y
+CONFIG_RD_LZO=y
+CONFIG_RD_LZ4=y
+# CONFIG_BOOT_CONFIG is not set
+CONFIG_CC_OPTIMIZE_FOR_PERFORMANCE=y
+# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
+CONFIG_SYSCTL=y
+CONFIG_HAVE_UID16=y
+CONFIG_SYSCTL_EXCEPTION_TRACE=y
+CONFIG_HAVE_PCSPKR_PLATFORM=y
+CONFIG_BPF=y
+CONFIG_EXPERT=y
+CONFIG_UID16=y
+CONFIG_MULTIUSER=y
+CONFIG_SGETMASK_SYSCALL=y
+CONFIG_SYSFS_SYSCALL=y
+CONFIG_FHANDLE=y
+CONFIG_POSIX_TIMERS=y
+CONFIG_PRINTK=y
+CONFIG_PRINTK_NMI=y
+CONFIG_BUG=y
+CONFIG_ELF_CORE=y
+CONFIG_PCSPKR_PLATFORM=y
+CONFIG_BASE_FULL=y
+CONFIG_FUTEX=y
+CONFIG_FUTEX_PI=y
+CONFIG_EPOLL=y
+CONFIG_SIGNALFD=y
+CONFIG_TIMERFD=y
+CONFIG_EVENTFD=y
+CONFIG_SHMEM=y
+CONFIG_AIO=y
+CONFIG_IO_URING=y
+CONFIG_ADVISE_SYSCALLS=y
+CONFIG_MEMBARRIER=y
+CONFIG_KALLSYMS=y
+CONFIG_KALLSYMS_ALL=y
+CONFIG_KALLSYMS_ABSOLUTE_PERCPU=y
+CONFIG_KALLSYMS_BASE_RELATIVE=y
+# CONFIG_BPF_LSM is not set
+CONFIG_BPF_SYSCALL=y
+CONFIG_ARCH_WANT_DEFAULT_BPF_JIT=y
+# CONFIG_BPF_JIT_ALWAYS_ON is not set
+CONFIG_BPF_JIT_DEFAULT_ON=y
+# CONFIG_USERFAULTFD is not set
+CONFIG_ARCH_HAS_MEMBARRIER_SYNC_CORE=y
+CONFIG_RSEQ=y
+# CONFIG_DEBUG_RSEQ is not set
+# CONFIG_EMBEDDED is not set
+CONFIG_HAVE_PERF_EVENTS=y
+# CONFIG_PC104 is not set
+
+#
+# Kernel Performance Events And Counters
+#
+CONFIG_PERF_EVENTS=y
+# CONFIG_DEBUG_PERF_USE_VMALLOC is not set
+# end of Kernel Performance Events And Counters
+
+CONFIG_VM_EVENT_COUNTERS=y
+CONFIG_SLUB_DEBUG=y
+# CONFIG_SLUB_MEMCG_SYSFS_ON is not set
+# CONFIG_COMPAT_BRK is not set
+# CONFIG_SLAB is not set
+CONFIG_SLUB=y
+# CONFIG_SLOB is not set
+CONFIG_SLAB_MERGE_DEFAULT=y
+# CONFIG_SLAB_FREELIST_RANDOM is not set
+# CONFIG_SLAB_FREELIST_HARDENED is not set
+# CONFIG_SHUFFLE_PAGE_ALLOCATOR is not set
+CONFIG_SLUB_CPU_PARTIAL=y
+CONFIG_SYSTEM_DATA_VERIFICATION=y
+CONFIG_PROFILING=y
+CONFIG_TRACEPOINTS=y
+# end of General setup
+
+CONFIG_64BIT=y
+CONFIG_X86_64=y
+CONFIG_X86=y
+CONFIG_INSTRUCTION_DECODER=y
+CONFIG_OUTPUT_FORMAT="elf64-x86-64"
+CONFIG_LOCKDEP_SUPPORT=y
+CONFIG_STACKTRACE_SUPPORT=y
+CONFIG_MMU=y
+CONFIG_ARCH_MMAP_RND_BITS_MIN=28
+CONFIG_ARCH_MMAP_RND_BITS_MAX=32
+CONFIG_ARCH_MMAP_RND_COMPAT_BITS_MIN=8
+CONFIG_ARCH_MMAP_RND_COMPAT_BITS_MAX=16
+CONFIG_GENERIC_ISA_DMA=y
+CONFIG_GENERIC_BUG=y
+CONFIG_GENERIC_BUG_RELATIVE_POINTERS=y
+CONFIG_ARCH_MAY_HAVE_PC_FDC=y
+CONFIG_GENERIC_CALIBRATE_DELAY=y
+CONFIG_ARCH_HAS_CPU_RELAX=y
+CONFIG_ARCH_HAS_CACHE_LINE_SIZE=y
+CONFIG_ARCH_HAS_FILTER_PGPROT=y
+CONFIG_HAVE_SETUP_PER_CPU_AREA=y
+CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK=y
+CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK=y
+CONFIG_ARCH_HIBERNATION_POSSIBLE=y
+CONFIG_ARCH_SUSPEND_POSSIBLE=y
+CONFIG_ARCH_WANT_GENERAL_HUGETLB=y
+CONFIG_ZONE_DMA32=y
+CONFIG_AUDIT_ARCH=y
+CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC=y
+CONFIG_HAVE_INTEL_TXT=y
+CONFIG_X86_64_SMP=y
+CONFIG_ARCH_SUPPORTS_UPROBES=y
+CONFIG_FIX_EARLYCON_MEM=y
+CONFIG_PGTABLE_LEVELS=5
+CONFIG_CC_HAS_SANE_STACKPROTECTOR=y
+
+#
+# Processor type and features
+#
+CONFIG_ZONE_DMA=y
+CONFIG_SMP=y
+CONFIG_X86_FEATURE_NAMES=y
+CONFIG_X86_X2APIC=y
+CONFIG_X86_MPPARSE=y
+# CONFIG_GOLDFISH is not set
+# CONFIG_RETPOLINE is not set
+# CONFIG_X86_CPU_RESCTRL is not set
+CONFIG_X86_EXTENDED_PLATFORM=y
+CONFIG_X86_NUMACHIP=y
+# CONFIG_X86_VSMP is not set
+# CONFIG_X86_UV is not set
+# CONFIG_X86_GOLDFISH is not set
+# CONFIG_X86_INTEL_MID is not set
+CONFIG_X86_INTEL_LPSS=y
+# CONFIG_X86_AMD_PLATFORM_DEVICE is not set
+CONFIG_IOSF_MBI=y
+CONFIG_IOSF_MBI_DEBUG=y
+CONFIG_X86_SUPPORTS_MEMORY_FAILURE=y
+CONFIG_SCHED_OMIT_FRAME_POINTER=y
+CONFIG_HYPERVISOR_GUEST=y
+CONFIG_PARAVIRT=y
+CONFIG_PARAVIRT_XXL=y
+# CONFIG_PARAVIRT_DEBUG is not set
+CONFIG_PARAVIRT_SPINLOCKS=y
+CONFIG_X86_HV_CALLBACK_VECTOR=y
+CONFIG_XEN=y
+CONFIG_XEN_PV=y
+CONFIG_XEN_PV_SMP=y
+CONFIG_XEN_DOM0=y
+CONFIG_XEN_PVHVM=y
+CONFIG_XEN_PVHVM_SMP=y
+CONFIG_XEN_512GB=y
+CONFIG_XEN_SAVE_RESTORE=y
+# CONFIG_XEN_DEBUG_FS is not set
+CONFIG_XEN_PVH=y
+CONFIG_KVM_GUEST=y
+CONFIG_ARCH_CPUIDLE_HALTPOLL=y
+CONFIG_PVH=y
+CONFIG_KVM_DEBUG_FS=y
+# CONFIG_PARAVIRT_TIME_ACCOUNTING is not set
+CONFIG_PARAVIRT_CLOCK=y
+# CONFIG_JAILHOUSE_GUEST is not set
+# CONFIG_ACRN_GUEST is not set
+# CONFIG_MK8 is not set
+# CONFIG_MPSC is not set
+# CONFIG_MCORE2 is not set
+# CONFIG_MATOM is not set
+CONFIG_GENERIC_CPU=y
+CONFIG_X86_INTERNODE_CACHE_SHIFT=6
+CONFIG_X86_L1_CACHE_SHIFT=6
+CONFIG_X86_TSC=y
+CONFIG_X86_CMPXCHG64=y
+CONFIG_X86_CMOV=y
+CONFIG_X86_MINIMUM_CPU_FAMILY=64
+CONFIG_X86_DEBUGCTLMSR=y
+CONFIG_IA32_FEAT_CTL=y
+CONFIG_X86_VMX_FEATURE_NAMES=y
+CONFIG_PROCESSOR_SELECT=y
+CONFIG_CPU_SUP_INTEL=y
+CONFIG_CPU_SUP_AMD=y
+CONFIG_CPU_SUP_HYGON=y
+CONFIG_CPU_SUP_CENTAUR=y
+CONFIG_CPU_SUP_ZHAOXIN=y
+CONFIG_HPET_TIMER=y
+CONFIG_HPET_EMULATE_RTC=y
+CONFIG_DMI=y
+CONFIG_GART_IOMMU=y
+# CONFIG_MAXSMP is not set
+CONFIG_NR_CPUS_RANGE_BEGIN=2
+CONFIG_NR_CPUS_RANGE_END=512
+CONFIG_NR_CPUS_DEFAULT=64
+CONFIG_NR_CPUS=256
+CONFIG_SCHED_SMT=y
+CONFIG_SCHED_MC=y
+CONFIG_SCHED_MC_PRIO=y
+CONFIG_X86_LOCAL_APIC=y
+CONFIG_X86_IO_APIC=y
+CONFIG_X86_REROUTE_FOR_BROKEN_BOOT_IRQS=y
+CONFIG_X86_MCE=y
+# CONFIG_X86_MCELOG_LEGACY is not set
+CONFIG_X86_MCE_INTEL=y
+CONFIG_X86_MCE_AMD=y
+CONFIG_X86_MCE_THRESHOLD=y
+CONFIG_X86_MCE_INJECT=m
+CONFIG_X86_THERMAL_VECTOR=y
+
+#
+# Performance monitoring
+#
+CONFIG_PERF_EVENTS_INTEL_UNCORE=y
+CONFIG_PERF_EVENTS_INTEL_RAPL=y
+CONFIG_PERF_EVENTS_INTEL_CSTATE=y
+# CONFIG_PERF_EVENTS_AMD_POWER is not set
+# end of Performance monitoring
+
+CONFIG_X86_16BIT=y
+CONFIG_X86_ESPFIX64=y
+CONFIG_X86_VSYSCALL_EMULATION=y
+CONFIG_X86_IOPL_IOPERM=y
+CONFIG_I8K=m
+CONFIG_MICROCODE=y
+CONFIG_MICROCODE_INTEL=y
+CONFIG_MICROCODE_AMD=y
+CONFIG_MICROCODE_OLD_INTERFACE=y
+CONFIG_X86_MSR=m
+CONFIG_X86_CPUID=m
+CONFIG_X86_5LEVEL=y
+CONFIG_X86_DIRECT_GBPAGES=y
+# CONFIG_X86_CPA_STATISTICS is not set
+# CONFIG_AMD_MEM_ENCRYPT is not set
+CONFIG_NUMA=y
+CONFIG_AMD_NUMA=y
+CONFIG_X86_64_ACPI_NUMA=y
+CONFIG_NODES_SPAN_OTHER_NODES=y
+# CONFIG_NUMA_EMU is not set
+CONFIG_NODES_SHIFT=6
+CONFIG_ARCH_SPARSEMEM_ENABLE=y
+CONFIG_ARCH_SPARSEMEM_DEFAULT=y
+CONFIG_ARCH_SELECT_MEMORY_MODEL=y
+CONFIG_ARCH_MEMORY_PROBE=y
+CONFIG_ARCH_PROC_KCORE_TEXT=y
+CONFIG_ILLEGAL_POINTER_VALUE=0xdead000000000000
+# CONFIG_X86_PMEM_LEGACY is not set
+CONFIG_X86_CHECK_BIOS_CORRUPTION=y
+CONFIG_X86_BOOTPARAM_MEMORY_CORRUPTION_CHECK=y
+CONFIG_X86_RESERVE_LOW=64
+CONFIG_MTRR=y
+CONFIG_MTRR_SANITIZER=y
+CONFIG_MTRR_SANITIZER_ENABLE_DEFAULT=1
+CONFIG_MTRR_SANITIZER_SPARE_REG_NR_DEFAULT=1
+CONFIG_X86_PAT=y
+CONFIG_ARCH_USES_PG_UNCACHED=y
+CONFIG_ARCH_RANDOM=y
+CONFIG_X86_SMAP=y
+CONFIG_X86_UMIP=y
+CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS=y
+CONFIG_X86_INTEL_TSX_MODE_OFF=y
+# CONFIG_X86_INTEL_TSX_MODE_ON is not set
+# CONFIG_X86_INTEL_TSX_MODE_AUTO is not set
+CONFIG_EFI=y
+CONFIG_EFI_STUB=y
+CONFIG_EFI_MIXED=y
+CONFIG_SECCOMP=y
+# CONFIG_HZ_100 is not set
+CONFIG_HZ_250=y
+# CONFIG_HZ_300 is not set
+# CONFIG_HZ_1000 is not set
+CONFIG_HZ=250
+CONFIG_SCHED_HRTICK=y
+CONFIG_KEXEC=y
+CONFIG_KEXEC_FILE=y
+CONFIG_ARCH_HAS_KEXEC_PURGATORY=y
+# CONFIG_KEXEC_SIG is not set
+CONFIG_CRASH_DUMP=y
+CONFIG_KEXEC_JUMP=y
+CONFIG_PHYSICAL_START=0x1000000
+CONFIG_RELOCATABLE=y
+CONFIG_RANDOMIZE_BASE=y
+CONFIG_X86_NEED_RELOCS=y
+CONFIG_PHYSICAL_ALIGN=0x1000000
+CONFIG_DYNAMIC_MEMORY_LAYOUT=y
+CONFIG_RANDOMIZE_MEMORY=y
+CONFIG_RANDOMIZE_MEMORY_PHYSICAL_PADDING=0xa
+CONFIG_HOTPLUG_CPU=y
+# CONFIG_BOOTPARAM_HOTPLUG_CPU0 is not set
+# CONFIG_DEBUG_HOTPLUG_CPU0 is not set
+# CONFIG_COMPAT_VDSO is not set
+# CONFIG_LEGACY_VSYSCALL_EMULATE is not set
+CONFIG_LEGACY_VSYSCALL_XONLY=y
+# CONFIG_LEGACY_VSYSCALL_NONE is not set
+# CONFIG_CMDLINE_BOOL is not set
+CONFIG_MODIFY_LDT_SYSCALL=y
+CONFIG_HAVE_LIVEPATCH=y
+# CONFIG_LIVEPATCH is not set
+# end of Processor type and features
+
+CONFIG_ARCH_HAS_ADD_PAGES=y
+CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y
+CONFIG_ARCH_ENABLE_MEMORY_HOTREMOVE=y
+CONFIG_USE_PERCPU_NUMA_NODE_ID=y
+CONFIG_ARCH_ENABLE_SPLIT_PMD_PTLOCK=y
+CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION=y
+CONFIG_ARCH_ENABLE_THP_MIGRATION=y
+
+#
+# Power management and ACPI options
+#
+CONFIG_ARCH_HIBERNATION_HEADER=y
+CONFIG_SUSPEND=y
+CONFIG_SUSPEND_FREEZER=y
+# CONFIG_SUSPEND_SKIP_SYNC is not set
+CONFIG_HIBERNATE_CALLBACKS=y
+CONFIG_HIBERNATION=y
+CONFIG_PM_STD_PARTITION=""
+CONFIG_PM_SLEEP=y
+CONFIG_PM_SLEEP_SMP=y
+# CONFIG_PM_AUTOSLEEP is not set
+CONFIG_PM_WAKELOCKS=y
+CONFIG_PM_WAKELOCKS_LIMIT=100
+CONFIG_PM_WAKELOCKS_GC=y
+CONFIG_PM=y
+CONFIG_PM_DEBUG=y
+CONFIG_PM_ADVANCED_DEBUG=y
+# CONFIG_PM_TEST_SUSPEND is not set
+CONFIG_PM_SLEEP_DEBUG=y
+# CONFIG_DPM_WATCHDOG is not set
+CONFIG_PM_TRACE=y
+CONFIG_PM_TRACE_RTC=y
+CONFIG_PM_CLK=y
+CONFIG_WQ_POWER_EFFICIENT_DEFAULT=y
+# CONFIG_ENERGY_MODEL is not set
+CONFIG_ARCH_SUPPORTS_ACPI=y
+CONFIG_ACPI=y
+CONFIG_ACPI_LEGACY_TABLES_LOOKUP=y
+CONFIG_ARCH_MIGHT_HAVE_ACPI_PDC=y
+CONFIG_ACPI_SYSTEM_POWER_STATES_SUPPORT=y
+# CONFIG_ACPI_DEBUGGER is not set
+CONFIG_ACPI_SPCR_TABLE=y
+CONFIG_ACPI_LPIT=y
+CONFIG_ACPI_SLEEP=y
+# CONFIG_ACPI_PROCFS_POWER is not set
+CONFIG_ACPI_REV_OVERRIDE_POSSIBLE=y
+CONFIG_ACPI_EC_DEBUGFS=m
+CONFIG_ACPI_AC=y
+CONFIG_ACPI_BATTERY=y
+CONFIG_ACPI_BUTTON=y
+CONFIG_ACPI_VIDEO=m
+CONFIG_ACPI_FAN=y
+# CONFIG_ACPI_TAD is not set
+CONFIG_ACPI_DOCK=y
+CONFIG_ACPI_CPU_FREQ_PSS=y
+CONFIG_ACPI_PROCESSOR_CSTATE=y
+CONFIG_ACPI_PROCESSOR_IDLE=y
+CONFIG_ACPI_CPPC_LIB=y
+CONFIG_ACPI_PROCESSOR=y
+CONFIG_ACPI_IPMI=m
+CONFIG_ACPI_HOTPLUG_CPU=y
+CONFIG_ACPI_PROCESSOR_AGGREGATOR=m
+CONFIG_ACPI_THERMAL=y
+CONFIG_ACPI_CUSTOM_DSDT_FILE=""
+CONFIG_ARCH_HAS_ACPI_TABLE_UPGRADE=y
+CONFIG_ACPI_TABLE_UPGRADE=y
+# CONFIG_ACPI_DEBUG is not set
+CONFIG_ACPI_PCI_SLOT=y
+CONFIG_ACPI_CONTAINER=y
+CONFIG_ACPI_HOTPLUG_MEMORY=y
+CONFIG_ACPI_HOTPLUG_IOAPIC=y
+CONFIG_ACPI_SBS=m
+CONFIG_ACPI_HED=y
+# CONFIG_ACPI_CUSTOM_METHOD is not set
+CONFIG_ACPI_BGRT=y
+# CONFIG_ACPI_REDUCED_HARDWARE_ONLY is not set
+# CONFIG_ACPI_NFIT is not set
+CONFIG_ACPI_NUMA=y
+# CONFIG_ACPI_HMAT is not set
+CONFIG_HAVE_ACPI_APEI=y
+CONFIG_HAVE_ACPI_APEI_NMI=y
+CONFIG_ACPI_APEI=y
+CONFIG_ACPI_APEI_GHES=y
+CONFIG_ACPI_APEI_PCIEAER=y
+CONFIG_ACPI_APEI_MEMORY_FAILURE=y
+CONFIG_ACPI_APEI_EINJ=m
+# CONFIG_ACPI_APEI_ERST_DEBUG is not set
+# CONFIG_DPTF_POWER is not set
+CONFIG_ACPI_EXTLOG=m
+# CONFIG_PMIC_OPREGION is not set
+# CONFIG_ACPI_CONFIGFS is not set
+CONFIG_X86_PM_TIMER=y
+CONFIG_SFI=y
+
+#
+# CPU Frequency scaling
+#
+CONFIG_CPU_FREQ=y
+CONFIG_CPU_FREQ_GOV_ATTR_SET=y
+CONFIG_CPU_FREQ_GOV_COMMON=y
+CONFIG_CPU_FREQ_STAT=y
+CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE=y
+# CONFIG_CPU_FREQ_DEFAULT_GOV_POWERSAVE is not set
+# CONFIG_CPU_FREQ_DEFAULT_GOV_USERSPACE is not set
+# CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND is not set
+# CONFIG_CPU_FREQ_DEFAULT_GOV_CONSERVATIVE is not set
+# CONFIG_CPU_FREQ_DEFAULT_GOV_SCHEDUTIL is not set
+CONFIG_CPU_FREQ_GOV_PERFORMANCE=y
+CONFIG_CPU_FREQ_GOV_POWERSAVE=y
+CONFIG_CPU_FREQ_GOV_USERSPACE=y
+CONFIG_CPU_FREQ_GOV_ONDEMAND=y
+CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y
+CONFIG_CPU_FREQ_GOV_SCHEDUTIL=y
+
+#
+# CPU frequency scaling drivers
+#
+CONFIG_X86_INTEL_PSTATE=y
+CONFIG_X86_PCC_CPUFREQ=y
+CONFIG_X86_ACPI_CPUFREQ=y
+CONFIG_X86_ACPI_CPUFREQ_CPB=y
+CONFIG_X86_POWERNOW_K8=y
+CONFIG_X86_AMD_FREQ_SENSITIVITY=m
+CONFIG_X86_SPEEDSTEP_CENTRINO=y
+CONFIG_X86_P4_CLOCKMOD=m
+
+#
+# shared options
+#
+CONFIG_X86_SPEEDSTEP_LIB=m
+# end of CPU Frequency scaling
+
+#
+# CPU Idle
+#
+CONFIG_CPU_IDLE=y
+CONFIG_CPU_IDLE_GOV_LADDER=y
+CONFIG_CPU_IDLE_GOV_MENU=y
+# CONFIG_CPU_IDLE_GOV_TEO is not set
+# CONFIG_CPU_IDLE_GOV_HALTPOLL is not set
+CONFIG_HALTPOLL_CPUIDLE=y
+# end of CPU Idle
+
+CONFIG_INTEL_IDLE=y
+# end of Power management and ACPI options
+
+#
+# Bus options (PCI etc.)
+#
+CONFIG_PCI_DIRECT=y
+CONFIG_PCI_MMCONFIG=y
+CONFIG_PCI_XEN=y
+CONFIG_MMCONF_FAM10H=y
+# CONFIG_PCI_CNB20LE_QUIRK is not set
+# CONFIG_ISA_BUS is not set
+CONFIG_ISA_DMA_API=y
+CONFIG_AMD_NB=y
+# CONFIG_X86_SYSFB is not set
+# end of Bus options (PCI etc.)
+
+#
+# Binary Emulations
+#
+CONFIG_IA32_EMULATION=y
+CONFIG_X86_X32=y
+CONFIG_COMPAT_32=y
+CONFIG_COMPAT=y
+CONFIG_COMPAT_FOR_U64_ALIGNMENT=y
+CONFIG_SYSVIPC_COMPAT=y
+# end of Binary Emulations
+
+#
+# Firmware Drivers
+#
+CONFIG_EDD=y
+CONFIG_EDD_OFF=y
+CONFIG_FIRMWARE_MEMMAP=y
+CONFIG_DMIID=y
+CONFIG_DMI_SYSFS=m
+CONFIG_DMI_SCAN_MACHINE_NON_EFI_FALLBACK=y
+CONFIG_ISCSI_IBFT_FIND=y
+CONFIG_ISCSI_IBFT=m
+# CONFIG_FW_CFG_SYSFS is not set
+# CONFIG_GOOGLE_FIRMWARE is not set
+
+#
+# EFI (Extensible Firmware Interface) Support
+#
+CONFIG_EFI_VARS=y
+CONFIG_EFI_ESRT=y
+CONFIG_EFI_VARS_PSTORE=m
+# CONFIG_EFI_VARS_PSTORE_DEFAULT_DISABLE is not set
+CONFIG_EFI_RUNTIME_MAP=y
+# CONFIG_EFI_FAKE_MEMMAP is not set
+CONFIG_EFI_RUNTIME_WRAPPERS=y
+# CONFIG_EFI_BOOTLOADER_CONTROL is not set
+# CONFIG_EFI_CAPSULE_LOADER is not set
+# CONFIG_EFI_TEST is not set
+# CONFIG_APPLE_PROPERTIES is not set
+# CONFIG_RESET_ATTACK_MITIGATION is not set
+# CONFIG_EFI_RCI2_TABLE is not set
+# CONFIG_EFI_DISABLE_PCI_DMA is not set
+# end of EFI (Extensible Firmware Interface) Support
+
+CONFIG_UEFI_CPER=y
+CONFIG_UEFI_CPER_X86=y
+CONFIG_EFI_EARLYCON=y
+
+#
+# Tegra firmware driver
+#
+# end of Tegra firmware driver
+# end of Firmware Drivers
+
+CONFIG_HAVE_KVM=y
+CONFIG_HAVE_KVM_IRQCHIP=y
+CONFIG_HAVE_KVM_IRQFD=y
+CONFIG_HAVE_KVM_IRQ_ROUTING=y
+CONFIG_HAVE_KVM_EVENTFD=y
+CONFIG_KVM_MMIO=y
+CONFIG_KVM_ASYNC_PF=y
+CONFIG_HAVE_KVM_MSI=y
+CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT=y
+CONFIG_KVM_VFIO=y
+CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT=y
+CONFIG_KVM_COMPAT=y
+CONFIG_HAVE_KVM_IRQ_BYPASS=y
+CONFIG_HAVE_KVM_NO_POLL=y
+CONFIG_VIRTUALIZATION=y
+CONFIG_KVM=m
+CONFIG_KVM_WERROR=y
+CONFIG_KVM_INTEL=m
+CONFIG_KVM_AMD=m
+CONFIG_KVM_AMD_SEV=y
+# CONFIG_KVM_MMU_AUDIT is not set
+CONFIG_AS_AVX512=y
+CONFIG_AS_SHA1_NI=y
+CONFIG_AS_SHA256_NI=y
+
+#
+# General architecture-dependent options
+#
+CONFIG_CRASH_CORE=y
+CONFIG_KEXEC_CORE=y
+CONFIG_HOTPLUG_SMT=y
+CONFIG_OPROFILE=m
+# CONFIG_OPROFILE_EVENT_MULTIPLEX is not set
+CONFIG_HAVE_OPROFILE=y
+CONFIG_OPROFILE_NMI_TIMER=y
+CONFIG_KPROBES=y
+CONFIG_JUMP_LABEL=y
+# CONFIG_STATIC_KEYS_SELFTEST is not set
+CONFIG_OPTPROBES=y
+CONFIG_KPROBES_ON_FTRACE=y
+CONFIG_UPROBES=y
+CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y
+CONFIG_ARCH_USE_BUILTIN_BSWAP=y
+CONFIG_KRETPROBES=y
+CONFIG_USER_RETURN_NOTIFIER=y
+CONFIG_HAVE_IOREMAP_PROT=y
+CONFIG_HAVE_KPROBES=y
+CONFIG_HAVE_KRETPROBES=y
+CONFIG_HAVE_OPTPROBES=y
+CONFIG_HAVE_KPROBES_ON_FTRACE=y
+CONFIG_HAVE_FUNCTION_ERROR_INJECTION=y
+CONFIG_HAVE_NMI=y
+CONFIG_HAVE_ARCH_TRACEHOOK=y
+CONFIG_HAVE_DMA_CONTIGUOUS=y
+CONFIG_GENERIC_SMP_IDLE_THREAD=y
+CONFIG_ARCH_HAS_FORTIFY_SOURCE=y
+CONFIG_ARCH_HAS_SET_MEMORY=y
+CONFIG_ARCH_HAS_SET_DIRECT_MAP=y
+CONFIG_HAVE_ARCH_THREAD_STRUCT_WHITELIST=y
+CONFIG_ARCH_WANTS_DYNAMIC_TASK_STRUCT=y
+CONFIG_HAVE_ASM_MODVERSIONS=y
+CONFIG_HAVE_REGS_AND_STACK_ACCESS_API=y
+CONFIG_HAVE_RSEQ=y
+CONFIG_HAVE_FUNCTION_ARG_ACCESS_API=y
+CONFIG_HAVE_CLK=y
+CONFIG_HAVE_HW_BREAKPOINT=y
+CONFIG_HAVE_MIXED_BREAKPOINTS_REGS=y
+CONFIG_HAVE_USER_RETURN_NOTIFIER=y
+CONFIG_HAVE_PERF_EVENTS_NMI=y
+CONFIG_HAVE_HARDLOCKUP_DETECTOR_PERF=y
+CONFIG_HAVE_PERF_REGS=y
+CONFIG_HAVE_PERF_USER_STACK_DUMP=y
+CONFIG_HAVE_ARCH_JUMP_LABEL=y
+CONFIG_HAVE_ARCH_JUMP_LABEL_RELATIVE=y
+CONFIG_MMU_GATHER_TABLE_FREE=y
+CONFIG_MMU_GATHER_RCU_TABLE_FREE=y
+CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG=y
+CONFIG_HAVE_ALIGNED_STRUCT_PAGE=y
+CONFIG_HAVE_CMPXCHG_LOCAL=y
+CONFIG_HAVE_CMPXCHG_DOUBLE=y
+CONFIG_ARCH_WANT_COMPAT_IPC_PARSE_VERSION=y
+CONFIG_ARCH_WANT_OLD_COMPAT_IPC=y
+CONFIG_HAVE_ARCH_SECCOMP_FILTER=y
+CONFIG_SECCOMP_FILTER=y
+CONFIG_HAVE_ARCH_STACKLEAK=y
+CONFIG_HAVE_STACKPROTECTOR=y
+CONFIG_CC_HAS_STACKPROTECTOR_NONE=y
+CONFIG_STACKPROTECTOR=y
+CONFIG_STACKPROTECTOR_STRONG=y
+CONFIG_HAVE_ARCH_WITHIN_STACK_FRAMES=y
+CONFIG_HAVE_CONTEXT_TRACKING=y
+CONFIG_HAVE_VIRT_CPU_ACCOUNTING_GEN=y
+CONFIG_HAVE_IRQ_TIME_ACCOUNTING=y
+CONFIG_HAVE_MOVE_PMD=y
+CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE=y
+CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD=y
+CONFIG_HAVE_ARCH_HUGE_VMAP=y
+CONFIG_ARCH_WANT_HUGE_PMD_SHARE=y
+CONFIG_HAVE_ARCH_SOFT_DIRTY=y
+CONFIG_HAVE_MOD_ARCH_SPECIFIC=y
+CONFIG_MODULES_USE_ELF_RELA=y
+CONFIG_HAVE_IRQ_EXIT_ON_IRQ_STACK=y
+CONFIG_ARCH_HAS_ELF_RANDOMIZE=y
+CONFIG_HAVE_ARCH_MMAP_RND_BITS=y
+CONFIG_HAVE_EXIT_THREAD=y
+CONFIG_ARCH_MMAP_RND_BITS=28
+CONFIG_HAVE_ARCH_MMAP_RND_COMPAT_BITS=y
+CONFIG_ARCH_MMAP_RND_COMPAT_BITS=8
+CONFIG_HAVE_ARCH_COMPAT_MMAP_BASES=y
+CONFIG_HAVE_COPY_THREAD_TLS=y
+CONFIG_HAVE_STACK_VALIDATION=y
+CONFIG_HAVE_RELIABLE_STACKTRACE=y
+CONFIG_OLD_SIGSUSPEND3=y
+CONFIG_COMPAT_OLD_SIGACTION=y
+CONFIG_COMPAT_32BIT_TIME=y
+CONFIG_HAVE_ARCH_VMAP_STACK=y
+CONFIG_VMAP_STACK=y
+CONFIG_ARCH_HAS_STRICT_KERNEL_RWX=y
+CONFIG_STRICT_KERNEL_RWX=y
+CONFIG_ARCH_HAS_STRICT_MODULE_RWX=y
+CONFIG_STRICT_MODULE_RWX=y
+CONFIG_HAVE_ARCH_PREL32_RELOCATIONS=y
+CONFIG_ARCH_USE_MEMREMAP_PROT=y
+# CONFIG_LOCK_EVENT_COUNTS is not set
+CONFIG_ARCH_HAS_MEM_ENCRYPT=y
+
+#
+# GCOV-based kernel profiling
+#
+# CONFIG_GCOV_KERNEL is not set
+CONFIG_ARCH_HAS_GCOV_PROFILE_ALL=y
+# end of GCOV-based kernel profiling
+
+CONFIG_HAVE_GCC_PLUGINS=y
+# end of General architecture-dependent options
+
+CONFIG_RT_MUTEXES=y
+CONFIG_BASE_SMALL=0
+CONFIG_MODULE_SIG_FORMAT=y
+CONFIG_MODULES=y
+# CONFIG_MODULE_FORCE_LOAD is not set
+CONFIG_MODULE_UNLOAD=y
+# CONFIG_MODULE_FORCE_UNLOAD is not set
+CONFIG_MODVERSIONS=y
+CONFIG_ASM_MODVERSIONS=y
+CONFIG_MODULE_SRCVERSION_ALL=y
+CONFIG_MODULE_SIG=y
+# CONFIG_MODULE_SIG_FORCE is not set
+CONFIG_MODULE_SIG_ALL=y
+# CONFIG_MODULE_SIG_SHA1 is not set
+# CONFIG_MODULE_SIG_SHA224 is not set
+# CONFIG_MODULE_SIG_SHA256 is not set
+# CONFIG_MODULE_SIG_SHA384 is not set
+CONFIG_MODULE_SIG_SHA512=y
+CONFIG_MODULE_SIG_HASH="sha512"
+# CONFIG_MODULE_COMPRESS is not set
+# CONFIG_MODULE_ALLOW_MISSING_NAMESPACE_IMPORTS is not set
+CONFIG_UNUSED_SYMBOLS=y
+CONFIG_MODULES_TREE_LOOKUP=y
+CONFIG_BLOCK=y
+CONFIG_BLK_SCSI_REQUEST=y
+CONFIG_BLK_CGROUP_RWSTAT=y
+CONFIG_BLK_DEV_BSG=y
+CONFIG_BLK_DEV_BSGLIB=y
+CONFIG_BLK_DEV_INTEGRITY=y
+CONFIG_BLK_DEV_INTEGRITY_T10=y
+# CONFIG_BLK_DEV_ZONED is not set
+CONFIG_BLK_DEV_THROTTLING=y
+# CONFIG_BLK_DEV_THROTTLING_LOW is not set
+CONFIG_BLK_CMDLINE_PARSER=y
+# CONFIG_BLK_WBT is not set
+# CONFIG_BLK_CGROUP_IOLATENCY is not set
+# CONFIG_BLK_CGROUP_IOCOST is not set
+CONFIG_BLK_DEBUG_FS=y
+# CONFIG_BLK_SED_OPAL is not set
+
+#
+# Partition Types
+#
+CONFIG_PARTITION_ADVANCED=y
+CONFIG_ACORN_PARTITION=y
+CONFIG_ACORN_PARTITION_CUMANA=y
+CONFIG_ACORN_PARTITION_EESOX=y
+CONFIG_ACORN_PARTITION_ICS=y
+CONFIG_ACORN_PARTITION_ADFS=y
+CONFIG_ACORN_PARTITION_POWERTEC=y
+CONFIG_ACORN_PARTITION_RISCIX=y
+CONFIG_AIX_PARTITION=y
+CONFIG_OSF_PARTITION=y
+CONFIG_AMIGA_PARTITION=y
+CONFIG_ATARI_PARTITION=y
+CONFIG_MAC_PARTITION=y
+CONFIG_MSDOS_PARTITION=y
+CONFIG_BSD_DISKLABEL=y
+CONFIG_MINIX_SUBPARTITION=y
+CONFIG_SOLARIS_X86_PARTITION=y
+CONFIG_UNIXWARE_DISKLABEL=y
+CONFIG_LDM_PARTITION=y
+# CONFIG_LDM_DEBUG is not set
+CONFIG_SGI_PARTITION=y
+CONFIG_ULTRIX_PARTITION=y
+CONFIG_SUN_PARTITION=y
+CONFIG_KARMA_PARTITION=y
+CONFIG_EFI_PARTITION=y
+CONFIG_SYSV68_PARTITION=y
+CONFIG_CMDLINE_PARTITION=y
+# end of Partition Types
+
+CONFIG_BLOCK_COMPAT=y
+CONFIG_BLK_MQ_PCI=y
+CONFIG_BLK_MQ_VIRTIO=y
+CONFIG_BLK_MQ_RDMA=y
+CONFIG_BLK_PM=y
+
+#
+# IO Schedulers
+#
+CONFIG_MQ_IOSCHED_DEADLINE=y
+CONFIG_MQ_IOSCHED_KYBER=y
+# CONFIG_IOSCHED_BFQ is not set
+# end of IO Schedulers
+
+CONFIG_PREEMPT_NOTIFIERS=y
+CONFIG_PADATA=y
+CONFIG_ASN1=y
+CONFIG_UNINLINE_SPIN_UNLOCK=y
+CONFIG_ARCH_SUPPORTS_ATOMIC_RMW=y
+CONFIG_MUTEX_SPIN_ON_OWNER=y
+CONFIG_RWSEM_SPIN_ON_OWNER=y
+CONFIG_LOCK_SPIN_ON_OWNER=y
+CONFIG_ARCH_USE_QUEUED_SPINLOCKS=y
+CONFIG_QUEUED_SPINLOCKS=y
+CONFIG_ARCH_USE_QUEUED_RWLOCKS=y
+CONFIG_QUEUED_RWLOCKS=y
+CONFIG_ARCH_HAS_SYNC_CORE_BEFORE_USERMODE=y
+CONFIG_ARCH_HAS_SYSCALL_WRAPPER=y
+CONFIG_FREEZER=y
+
+#
+# Executable file formats
+#
+CONFIG_BINFMT_ELF=y
+CONFIG_COMPAT_BINFMT_ELF=y
+CONFIG_ELFCORE=y
+CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y
+CONFIG_BINFMT_SCRIPT=y
+CONFIG_BINFMT_MISC=m
+CONFIG_COREDUMP=y
+# end of Executable file formats
+
+#
+# Memory Management options
+#
+CONFIG_SELECT_MEMORY_MODEL=y
+CONFIG_SPARSEMEM_MANUAL=y
+CONFIG_SPARSEMEM=y
+CONFIG_NEED_MULTIPLE_NODES=y
+CONFIG_HAVE_MEMORY_PRESENT=y
+CONFIG_SPARSEMEM_EXTREME=y
+CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y
+CONFIG_SPARSEMEM_VMEMMAP=y
+CONFIG_HAVE_MEMBLOCK_NODE_MAP=y
+CONFIG_HAVE_FAST_GUP=y
+CONFIG_NUMA_KEEP_MEMINFO=y
+CONFIG_MEMORY_ISOLATION=y
+CONFIG_HAVE_BOOTMEM_INFO_NODE=y
+CONFIG_MEMORY_HOTPLUG=y
+CONFIG_MEMORY_HOTPLUG_SPARSE=y
+# CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE is not set
+CONFIG_MEMORY_HOTREMOVE=y
+CONFIG_SPLIT_PTLOCK_CPUS=4
+CONFIG_MEMORY_BALLOON=y
+CONFIG_BALLOON_COMPACTION=y
+CONFIG_COMPACTION=y
+CONFIG_PAGE_REPORTING=y
+CONFIG_MIGRATION=y
+CONFIG_CONTIG_ALLOC=y
+CONFIG_PHYS_ADDR_T_64BIT=y
+CONFIG_BOUNCE=y
+CONFIG_VIRT_TO_BUS=y
+CONFIG_MMU_NOTIFIER=y
+CONFIG_KSM=y
+CONFIG_DEFAULT_MMAP_MIN_ADDR=65536
+CONFIG_ARCH_SUPPORTS_MEMORY_FAILURE=y
+CONFIG_MEMORY_FAILURE=y
+CONFIG_HWPOISON_INJECT=m
+CONFIG_TRANSPARENT_HUGEPAGE=y
+CONFIG_TRANSPARENT_HUGEPAGE_ALWAYS=y
+# CONFIG_TRANSPARENT_HUGEPAGE_MADVISE is not set
+CONFIG_ARCH_WANTS_THP_SWAP=y
+CONFIG_THP_SWAP=y
+CONFIG_CLEANCACHE=y
+CONFIG_FRONTSWAP=y
+CONFIG_CMA=y
+# CONFIG_CMA_DEBUG is not set
+# CONFIG_CMA_DEBUGFS is not set
+CONFIG_CMA_AREAS=7
+CONFIG_MEM_SOFT_DIRTY=y
+CONFIG_ZSWAP=y
+# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_DEFLATE is not set
+CONFIG_ZSWAP_COMPRESSOR_DEFAULT_LZO=y
+# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_842 is not set
+# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_LZ4 is not set
+# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_LZ4HC is not set
+# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_ZSTD is not set
+CONFIG_ZSWAP_COMPRESSOR_DEFAULT="lzo"
+CONFIG_ZSWAP_ZPOOL_DEFAULT_ZBUD=y
+# CONFIG_ZSWAP_ZPOOL_DEFAULT_Z3FOLD is not set
+# CONFIG_ZSWAP_ZPOOL_DEFAULT_ZSMALLOC is not set
+CONFIG_ZSWAP_ZPOOL_DEFAULT="zbud"
+# CONFIG_ZSWAP_DEFAULT_ON is not set
+CONFIG_ZPOOL=y
+CONFIG_ZBUD=y
+# CONFIG_Z3FOLD is not set
+CONFIG_ZSMALLOC=y
+CONFIG_PGTABLE_MAPPING=y
+# CONFIG_ZSMALLOC_STAT is not set
+CONFIG_GENERIC_EARLY_IOREMAP=y
+# CONFIG_DEFERRED_STRUCT_PAGE_INIT is not set
+# CONFIG_IDLE_PAGE_TRACKING is not set
+CONFIG_ARCH_HAS_PTE_DEVMAP=y
+# CONFIG_ZONE_DEVICE is not set
+CONFIG_ARCH_USES_HIGH_VMA_FLAGS=y
+CONFIG_ARCH_HAS_PKEYS=y
+# CONFIG_PERCPU_STATS is not set
+# CONFIG_GUP_BENCHMARK is not set
+# CONFIG_READ_ONLY_THP_FOR_FS is not set
+CONFIG_ARCH_HAS_PTE_SPECIAL=y
+# end of Memory Management options
+
+CONFIG_NET=y
+CONFIG_NET_INGRESS=y
+CONFIG_NET_EGRESS=y
+CONFIG_NET_REDIRECT=y
+CONFIG_SKB_EXTENSIONS=y
+
+#
+# Networking options
+#
+CONFIG_PACKET=y
+CONFIG_PACKET_DIAG=m
+CONFIG_UNIX=y
+CONFIG_UNIX_SCM=y
+CONFIG_UNIX_DIAG=m
+# CONFIG_TLS is not set
+CONFIG_XFRM=y
+CONFIG_XFRM_ALGO=m
+CONFIG_XFRM_USER=m
+# CONFIG_XFRM_INTERFACE is not set
+# CONFIG_XFRM_SUB_POLICY is not set
+# CONFIG_XFRM_MIGRATE is not set
+CONFIG_XFRM_STATISTICS=y
+CONFIG_XFRM_IPCOMP=m
+CONFIG_NET_KEY=m
+# CONFIG_NET_KEY_MIGRATE is not set
+# CONFIG_SMC is not set
+# CONFIG_XDP_SOCKETS is not set
+CONFIG_INET=y
+CONFIG_IP_MULTICAST=y
+CONFIG_IP_ADVANCED_ROUTER=y
+CONFIG_IP_FIB_TRIE_STATS=y
+CONFIG_IP_MULTIPLE_TABLES=y
+CONFIG_IP_ROUTE_MULTIPATH=y
+CONFIG_IP_ROUTE_VERBOSE=y
+CONFIG_IP_ROUTE_CLASSID=y
+CONFIG_IP_PNP=y
+CONFIG_IP_PNP_DHCP=y
+# CONFIG_IP_PNP_BOOTP is not set
+# CONFIG_IP_PNP_RARP is not set
+CONFIG_NET_IPIP=m
+CONFIG_NET_IPGRE_DEMUX=m
+CONFIG_NET_IP_TUNNEL=m
+CONFIG_NET_IPGRE=m
+CONFIG_NET_IPGRE_BROADCAST=y
+CONFIG_IP_MROUTE_COMMON=y
+CONFIG_IP_MROUTE=y
+# CONFIG_IP_MROUTE_MULTIPLE_TABLES is not set
+CONFIG_IP_PIMSM_V1=y
+CONFIG_IP_PIMSM_V2=y
+CONFIG_SYN_COOKIES=y
+CONFIG_NET_IPVTI=m
+CONFIG_NET_UDP_TUNNEL=m
+CONFIG_NET_FOU=m
+CONFIG_NET_FOU_IP_TUNNELS=y
+CONFIG_INET_AH=m
+CONFIG_INET_ESP=m
+# CONFIG_INET_ESP_OFFLOAD is not set
+# CONFIG_INET_ESPINTCP is not set
+CONFIG_INET_IPCOMP=m
+CONFIG_INET_XFRM_TUNNEL=m
+CONFIG_INET_TUNNEL=m
+CONFIG_INET_DIAG=m
+CONFIG_INET_TCP_DIAG=m
+CONFIG_INET_UDP_DIAG=m
+# CONFIG_INET_RAW_DIAG is not set
+# CONFIG_INET_DIAG_DESTROY is not set
+CONFIG_TCP_CONG_ADVANCED=y
+CONFIG_TCP_CONG_BIC=m
+CONFIG_TCP_CONG_CUBIC=y
+CONFIG_TCP_CONG_WESTWOOD=m
+CONFIG_TCP_CONG_HTCP=m
+CONFIG_TCP_CONG_HSTCP=m
+CONFIG_TCP_CONG_HYBLA=m
+CONFIG_TCP_CONG_VEGAS=m
+# CONFIG_TCP_CONG_NV is not set
+CONFIG_TCP_CONG_SCALABLE=m
+CONFIG_TCP_CONG_LP=m
+CONFIG_TCP_CONG_VENO=m
+CONFIG_TCP_CONG_YEAH=m
+CONFIG_TCP_CONG_ILLINOIS=m
+CONFIG_TCP_CONG_DCTCP=m
+# CONFIG_TCP_CONG_CDG is not set
+# CONFIG_TCP_CONG_BBR is not set
+CONFIG_DEFAULT_CUBIC=y
+# CONFIG_DEFAULT_RENO is not set
+CONFIG_DEFAULT_TCP_CONG="cubic"
+CONFIG_TCP_MD5SIG=y
+CONFIG_IPV6=y
+CONFIG_IPV6_ROUTER_PREF=y
+CONFIG_IPV6_ROUTE_INFO=y
+# CONFIG_IPV6_OPTIMISTIC_DAD is not set
+CONFIG_INET6_AH=m
+CONFIG_INET6_ESP=m
+# CONFIG_INET6_ESP_OFFLOAD is not set
+CONFIG_INET6_IPCOMP=m
+CONFIG_IPV6_MIP6=m
+# CONFIG_IPV6_ILA is not set
+CONFIG_INET6_XFRM_TUNNEL=m
+CONFIG_INET6_TUNNEL=m
+CONFIG_IPV6_VTI=m
+CONFIG_IPV6_SIT=m
+CONFIG_IPV6_SIT_6RD=y
+CONFIG_IPV6_NDISC_NODETYPE=y
+CONFIG_IPV6_TUNNEL=m
+CONFIG_IPV6_GRE=m
+CONFIG_IPV6_FOU=m
+CONFIG_IPV6_FOU_TUNNEL=m
+CONFIG_IPV6_MULTIPLE_TABLES=y
+CONFIG_IPV6_SUBTREES=y
+CONFIG_IPV6_MROUTE=y
+CONFIG_IPV6_MROUTE_MULTIPLE_TABLES=y
+CONFIG_IPV6_PIMSM_V2=y
+# CONFIG_IPV6_SEG6_LWTUNNEL is not set
+# CONFIG_IPV6_SEG6_HMAC is not set
+# CONFIG_IPV6_RPL_LWTUNNEL is not set
+CONFIG_NETLABEL=y
+# CONFIG_MPTCP is not set
+CONFIG_NETWORK_SECMARK=y
+CONFIG_NET_PTP_CLASSIFY=y
+# CONFIG_NETWORK_PHY_TIMESTAMPING is not set
+CONFIG_NETFILTER=y
+CONFIG_NETFILTER_ADVANCED=y
+CONFIG_BRIDGE_NETFILTER=m
+
+#
+# Core Netfilter Configuration
+#
+CONFIG_NETFILTER_INGRESS=y
+CONFIG_NETFILTER_NETLINK=m
+CONFIG_NETFILTER_FAMILY_BRIDGE=y
+CONFIG_NETFILTER_FAMILY_ARP=y
+CONFIG_NETFILTER_NETLINK_ACCT=m
+CONFIG_NETFILTER_NETLINK_QUEUE=m
+CONFIG_NETFILTER_NETLINK_LOG=m
+CONFIG_NETFILTER_NETLINK_OSF=m
+CONFIG_NF_CONNTRACK=m
+CONFIG_NF_LOG_COMMON=m
+# CONFIG_NF_LOG_NETDEV is not set
+CONFIG_NETFILTER_CONNCOUNT=m
+CONFIG_NF_CONNTRACK_MARK=y
+CONFIG_NF_CONNTRACK_SECMARK=y
+CONFIG_NF_CONNTRACK_ZONES=y
+# CONFIG_NF_CONNTRACK_PROCFS is not set
+CONFIG_NF_CONNTRACK_EVENTS=y
+CONFIG_NF_CONNTRACK_TIMEOUT=y
+CONFIG_NF_CONNTRACK_TIMESTAMP=y
+CONFIG_NF_CONNTRACK_LABELS=y
+CONFIG_NF_CT_PROTO_DCCP=y
+CONFIG_NF_CT_PROTO_GRE=y
+CONFIG_NF_CT_PROTO_SCTP=y
+CONFIG_NF_CT_PROTO_UDPLITE=y
+CONFIG_NF_CONNTRACK_AMANDA=m
+CONFIG_NF_CONNTRACK_FTP=m
+CONFIG_NF_CONNTRACK_H323=m
+CONFIG_NF_CONNTRACK_IRC=m
+CONFIG_NF_CONNTRACK_BROADCAST=m
+CONFIG_NF_CONNTRACK_NETBIOS_NS=m
+CONFIG_NF_CONNTRACK_SNMP=m
+CONFIG_NF_CONNTRACK_PPTP=m
+CONFIG_NF_CONNTRACK_SANE=m
+CONFIG_NF_CONNTRACK_SIP=m
+CONFIG_NF_CONNTRACK_TFTP=m
+CONFIG_NF_CT_NETLINK=m
+CONFIG_NF_CT_NETLINK_TIMEOUT=m
+# CONFIG_NETFILTER_NETLINK_GLUE_CT is not set
+CONFIG_NF_NAT=m
+CONFIG_NF_NAT_AMANDA=m
+CONFIG_NF_NAT_FTP=m
+CONFIG_NF_NAT_IRC=m
+CONFIG_NF_NAT_SIP=m
+CONFIG_NF_NAT_TFTP=m
+CONFIG_NF_NAT_REDIRECT=y
+CONFIG_NF_NAT_MASQUERADE=y
+CONFIG_NETFILTER_SYNPROXY=m
+CONFIG_NF_TABLES=m
+CONFIG_NF_TABLES_INET=y
+CONFIG_NF_TABLES_NETDEV=y
+CONFIG_NFT_NUMGEN=m
+CONFIG_NFT_CT=m
+# CONFIG_NFT_FLOW_OFFLOAD is not set
+CONFIG_NFT_COUNTER=m
+# CONFIG_NFT_CONNLIMIT is not set
+CONFIG_NFT_LOG=m
+CONFIG_NFT_LIMIT=m
+CONFIG_NFT_MASQ=m
+CONFIG_NFT_REDIR=m
+CONFIG_NFT_NAT=m
+# CONFIG_NFT_TUNNEL is not set
+# CONFIG_NFT_OBJREF is not set
+CONFIG_NFT_QUEUE=m
+# CONFIG_NFT_QUOTA is not set
+CONFIG_NFT_REJECT=m
+CONFIG_NFT_REJECT_INET=m
+CONFIG_NFT_COMPAT=m
+CONFIG_NFT_HASH=m
+CONFIG_NFT_FIB=m
+# CONFIG_NFT_FIB_INET is not set
+# CONFIG_NFT_XFRM is not set
+CONFIG_NFT_SOCKET=m
+CONFIG_NFT_OSF=m
+CONFIG_NFT_TPROXY=m
+CONFIG_NFT_SYNPROXY=m
+CONFIG_NF_DUP_NETDEV=m
+CONFIG_NFT_DUP_NETDEV=m
+CONFIG_NFT_FWD_NETDEV=m
+# CONFIG_NFT_FIB_NETDEV is not set
+CONFIG_NF_FLOW_TABLE_INET=m
+CONFIG_NF_FLOW_TABLE=m
+CONFIG_NETFILTER_XTABLES=m
+
+#
+# Xtables combined modules
+#
+CONFIG_NETFILTER_XT_MARK=m
+CONFIG_NETFILTER_XT_CONNMARK=m
+CONFIG_NETFILTER_XT_SET=m
+
+#
+# Xtables targets
+#
+CONFIG_NETFILTER_XT_TARGET_AUDIT=m
+CONFIG_NETFILTER_XT_TARGET_CHECKSUM=m
+CONFIG_NETFILTER_XT_TARGET_CLASSIFY=m
+CONFIG_NETFILTER_XT_TARGET_CONNMARK=m
+CONFIG_NETFILTER_XT_TARGET_CONNSECMARK=m
+CONFIG_NETFILTER_XT_TARGET_CT=m
+CONFIG_NETFILTER_XT_TARGET_DSCP=m
+CONFIG_NETFILTER_XT_TARGET_HL=m
+CONFIG_NETFILTER_XT_TARGET_HMARK=m
+CONFIG_NETFILTER_XT_TARGET_IDLETIMER=m
+CONFIG_NETFILTER_XT_TARGET_LED=m
+CONFIG_NETFILTER_XT_TARGET_LOG=m
+CONFIG_NETFILTER_XT_TARGET_MARK=m
+CONFIG_NETFILTER_XT_NAT=m
+CONFIG_NETFILTER_XT_TARGET_NETMAP=m
+CONFIG_NETFILTER_XT_TARGET_NFLOG=m
+CONFIG_NETFILTER_XT_TARGET_NFQUEUE=m
+# CONFIG_NETFILTER_XT_TARGET_NOTRACK is not set
+CONFIG_NETFILTER_XT_TARGET_RATEEST=m
+CONFIG_NETFILTER_XT_TARGET_REDIRECT=m
+CONFIG_NETFILTER_XT_TARGET_MASQUERADE=m
+CONFIG_NETFILTER_XT_TARGET_TEE=m
+CONFIG_NETFILTER_XT_TARGET_TPROXY=m
+CONFIG_NETFILTER_XT_TARGET_TRACE=m
+CONFIG_NETFILTER_XT_TARGET_SECMARK=m
+CONFIG_NETFILTER_XT_TARGET_TCPMSS=m
+CONFIG_NETFILTER_XT_TARGET_TCPOPTSTRIP=m
+
+#
+# Xtables matches
+#
+CONFIG_NETFILTER_XT_MATCH_ADDRTYPE=m
+CONFIG_NETFILTER_XT_MATCH_BPF=m
+CONFIG_NETFILTER_XT_MATCH_CGROUP=m
+CONFIG_NETFILTER_XT_MATCH_CLUSTER=m
+CONFIG_NETFILTER_XT_MATCH_COMMENT=m
+CONFIG_NETFILTER_XT_MATCH_CONNBYTES=m
+CONFIG_NETFILTER_XT_MATCH_CONNLABEL=m
+CONFIG_NETFILTER_XT_MATCH_CONNLIMIT=m
+CONFIG_NETFILTER_XT_MATCH_CONNMARK=m
+CONFIG_NETFILTER_XT_MATCH_CONNTRACK=m
+CONFIG_NETFILTER_XT_MATCH_CPU=m
+CONFIG_NETFILTER_XT_MATCH_DCCP=m
+CONFIG_NETFILTER_XT_MATCH_DEVGROUP=m
+CONFIG_NETFILTER_XT_MATCH_DSCP=m
+CONFIG_NETFILTER_XT_MATCH_ECN=m
+CONFIG_NETFILTER_XT_MATCH_ESP=m
+CONFIG_NETFILTER_XT_MATCH_HASHLIMIT=m
+CONFIG_NETFILTER_XT_MATCH_HELPER=m
+CONFIG_NETFILTER_XT_MATCH_HL=m
+CONFIG_NETFILTER_XT_MATCH_IPCOMP=m
+CONFIG_NETFILTER_XT_MATCH_IPRANGE=m
+CONFIG_NETFILTER_XT_MATCH_IPVS=m
+CONFIG_NETFILTER_XT_MATCH_L2TP=m
+CONFIG_NETFILTER_XT_MATCH_LENGTH=m
+CONFIG_NETFILTER_XT_MATCH_LIMIT=m
+CONFIG_NETFILTER_XT_MATCH_MAC=m
+CONFIG_NETFILTER_XT_MATCH_MARK=m
+CONFIG_NETFILTER_XT_MATCH_MULTIPORT=m
+CONFIG_NETFILTER_XT_MATCH_NFACCT=m
+CONFIG_NETFILTER_XT_MATCH_OSF=m
+CONFIG_NETFILTER_XT_MATCH_OWNER=m
+CONFIG_NETFILTER_XT_MATCH_POLICY=m
+CONFIG_NETFILTER_XT_MATCH_PHYSDEV=m
+CONFIG_NETFILTER_XT_MATCH_PKTTYPE=m
+CONFIG_NETFILTER_XT_MATCH_QUOTA=m
+CONFIG_NETFILTER_XT_MATCH_RATEEST=m
+CONFIG_NETFILTER_XT_MATCH_REALM=m
+CONFIG_NETFILTER_XT_MATCH_RECENT=m
+CONFIG_NETFILTER_XT_MATCH_SCTP=m
+CONFIG_NETFILTER_XT_MATCH_SOCKET=m
+CONFIG_NETFILTER_XT_MATCH_STATE=m
+CONFIG_NETFILTER_XT_MATCH_STATISTIC=m
+CONFIG_NETFILTER_XT_MATCH_STRING=m
+CONFIG_NETFILTER_XT_MATCH_TCPMSS=m
+CONFIG_NETFILTER_XT_MATCH_TIME=m
+CONFIG_NETFILTER_XT_MATCH_U32=m
+# end of Core Netfilter Configuration
+
+CONFIG_IP_SET=m
+CONFIG_IP_SET_MAX=256
+CONFIG_IP_SET_BITMAP_IP=m
+CONFIG_IP_SET_BITMAP_IPMAC=m
+CONFIG_IP_SET_BITMAP_PORT=m
+CONFIG_IP_SET_HASH_IP=m
+CONFIG_IP_SET_HASH_IPMARK=m
+CONFIG_IP_SET_HASH_IPPORT=m
+CONFIG_IP_SET_HASH_IPPORTIP=m
+CONFIG_IP_SET_HASH_IPPORTNET=m
+# CONFIG_IP_SET_HASH_IPMAC is not set
+CONFIG_IP_SET_HASH_MAC=m
+CONFIG_IP_SET_HASH_NETPORTNET=m
+CONFIG_IP_SET_HASH_NET=m
+CONFIG_IP_SET_HASH_NETNET=m
+CONFIG_IP_SET_HASH_NETPORT=m
+CONFIG_IP_SET_HASH_NETIFACE=m
+CONFIG_IP_SET_LIST_SET=m
+CONFIG_IP_VS=m
+CONFIG_IP_VS_IPV6=y
+# CONFIG_IP_VS_DEBUG is not set
+CONFIG_IP_VS_TAB_BITS=12
+
+#
+# IPVS transport protocol load balancing support
+#
+CONFIG_IP_VS_PROTO_TCP=y
+CONFIG_IP_VS_PROTO_UDP=y
+CONFIG_IP_VS_PROTO_AH_ESP=y
+CONFIG_IP_VS_PROTO_ESP=y
+CONFIG_IP_VS_PROTO_AH=y
+CONFIG_IP_VS_PROTO_SCTP=y
+
+#
+# IPVS scheduler
+#
+CONFIG_IP_VS_RR=m
+CONFIG_IP_VS_WRR=m
+CONFIG_IP_VS_LC=m
+CONFIG_IP_VS_WLC=m
+CONFIG_IP_VS_FO=m
+# CONFIG_IP_VS_OVF is not set
+CONFIG_IP_VS_LBLC=m
+CONFIG_IP_VS_LBLCR=m
+CONFIG_IP_VS_DH=m
+CONFIG_IP_VS_SH=m
+# CONFIG_IP_VS_MH is not set
+CONFIG_IP_VS_SED=m
+CONFIG_IP_VS_NQ=m
+
+#
+# IPVS SH scheduler
+#
+CONFIG_IP_VS_SH_TAB_BITS=8
+
+#
+# IPVS MH scheduler
+#
+CONFIG_IP_VS_MH_TAB_INDEX=12
+
+#
+# IPVS application helper
+#
+CONFIG_IP_VS_FTP=m
+CONFIG_IP_VS_NFCT=y
+CONFIG_IP_VS_PE_SIP=m
+
+#
+# IP: Netfilter Configuration
+#
+CONFIG_NF_DEFRAG_IPV4=m
+CONFIG_NF_SOCKET_IPV4=m
+CONFIG_NF_TPROXY_IPV4=m
+CONFIG_NF_TABLES_IPV4=y
+CONFIG_NFT_REJECT_IPV4=m
+CONFIG_NFT_DUP_IPV4=m
+CONFIG_NFT_FIB_IPV4=m
+CONFIG_NF_TABLES_ARP=y
+CONFIG_NF_FLOW_TABLE_IPV4=m
+CONFIG_NF_DUP_IPV4=m
+CONFIG_NF_LOG_ARP=m
+CONFIG_NF_LOG_IPV4=m
+CONFIG_NF_REJECT_IPV4=m
+CONFIG_NF_NAT_SNMP_BASIC=m
+CONFIG_NF_NAT_PPTP=m
+CONFIG_NF_NAT_H323=m
+CONFIG_IP_NF_IPTABLES=m
+CONFIG_IP_NF_MATCH_AH=m
+CONFIG_IP_NF_MATCH_ECN=m
+CONFIG_IP_NF_MATCH_RPFILTER=m
+CONFIG_IP_NF_MATCH_TTL=m
+CONFIG_IP_NF_FILTER=m
+CONFIG_IP_NF_TARGET_REJECT=m
+CONFIG_IP_NF_TARGET_SYNPROXY=m
+CONFIG_IP_NF_NAT=m
+CONFIG_IP_NF_TARGET_MASQUERADE=m
+CONFIG_IP_NF_TARGET_NETMAP=m
+CONFIG_IP_NF_TARGET_REDIRECT=m
+CONFIG_IP_NF_MANGLE=m
+CONFIG_IP_NF_TARGET_CLUSTERIP=m
+CONFIG_IP_NF_TARGET_ECN=m
+CONFIG_IP_NF_TARGET_TTL=m
+CONFIG_IP_NF_RAW=m
+CONFIG_IP_NF_SECURITY=m
+CONFIG_IP_NF_ARPTABLES=m
+CONFIG_IP_NF_ARPFILTER=m
+CONFIG_IP_NF_ARP_MANGLE=m
+# end of IP: Netfilter Configuration
+
+#
+# IPv6: Netfilter Configuration
+#
+CONFIG_NF_SOCKET_IPV6=m
+CONFIG_NF_TPROXY_IPV6=m
+CONFIG_NF_TABLES_IPV6=y
+CONFIG_NFT_REJECT_IPV6=m
+CONFIG_NFT_DUP_IPV6=m
+CONFIG_NFT_FIB_IPV6=m
+CONFIG_NF_FLOW_TABLE_IPV6=m
+CONFIG_NF_DUP_IPV6=m
+CONFIG_NF_REJECT_IPV6=m
+CONFIG_NF_LOG_IPV6=m
+CONFIG_IP6_NF_IPTABLES=m
+CONFIG_IP6_NF_MATCH_AH=m
+CONFIG_IP6_NF_MATCH_EUI64=m
+CONFIG_IP6_NF_MATCH_FRAG=m
+CONFIG_IP6_NF_MATCH_OPTS=m
+CONFIG_IP6_NF_MATCH_HL=m
+CONFIG_IP6_NF_MATCH_IPV6HEADER=m
+CONFIG_IP6_NF_MATCH_MH=m
+CONFIG_IP6_NF_MATCH_RPFILTER=m
+CONFIG_IP6_NF_MATCH_RT=m
+# CONFIG_IP6_NF_MATCH_SRH is not set
+CONFIG_IP6_NF_TARGET_HL=m
+CONFIG_IP6_NF_FILTER=m
+CONFIG_IP6_NF_TARGET_REJECT=m
+CONFIG_IP6_NF_TARGET_SYNPROXY=m
+CONFIG_IP6_NF_MANGLE=m
+CONFIG_IP6_NF_RAW=m
+CONFIG_IP6_NF_SECURITY=m
+CONFIG_IP6_NF_NAT=m
+CONFIG_IP6_NF_TARGET_MASQUERADE=m
+CONFIG_IP6_NF_TARGET_NPT=m
+# end of IPv6: Netfilter Configuration
+
+CONFIG_NF_DEFRAG_IPV6=m
+
+#
+# DECnet: Netfilter Configuration
+#
+CONFIG_DECNET_NF_GRABULATOR=m
+# end of DECnet: Netfilter Configuration
+
+CONFIG_NF_TABLES_BRIDGE=m
+CONFIG_NFT_BRIDGE_META=m
+CONFIG_NFT_BRIDGE_REJECT=m
+CONFIG_NF_LOG_BRIDGE=m
+# CONFIG_NF_CONNTRACK_BRIDGE is not set
+CONFIG_BRIDGE_NF_EBTABLES=m
+CONFIG_BRIDGE_EBT_BROUTE=m
+CONFIG_BRIDGE_EBT_T_FILTER=m
+CONFIG_BRIDGE_EBT_T_NAT=m
+CONFIG_BRIDGE_EBT_802_3=m
+CONFIG_BRIDGE_EBT_AMONG=m
+CONFIG_BRIDGE_EBT_ARP=m
+CONFIG_BRIDGE_EBT_IP=m
+CONFIG_BRIDGE_EBT_IP6=m
+CONFIG_BRIDGE_EBT_LIMIT=m
+CONFIG_BRIDGE_EBT_MARK=m
+CONFIG_BRIDGE_EBT_PKTTYPE=m
+CONFIG_BRIDGE_EBT_STP=m
+CONFIG_BRIDGE_EBT_VLAN=m
+CONFIG_BRIDGE_EBT_ARPREPLY=m
+CONFIG_BRIDGE_EBT_DNAT=m
+CONFIG_BRIDGE_EBT_MARK_T=m
+CONFIG_BRIDGE_EBT_REDIRECT=m
+CONFIG_BRIDGE_EBT_SNAT=m
+CONFIG_BRIDGE_EBT_LOG=m
+CONFIG_BRIDGE_EBT_NFLOG=m
+# CONFIG_BPFILTER is not set
+CONFIG_IP_DCCP=m
+CONFIG_INET_DCCP_DIAG=m
+
+#
+# DCCP CCIDs Configuration
+#
+# CONFIG_IP_DCCP_CCID2_DEBUG is not set
+# CONFIG_IP_DCCP_CCID3 is not set
+# end of DCCP CCIDs Configuration
+
+#
+# DCCP Kernel Hacking
+#
+# CONFIG_IP_DCCP_DEBUG is not set
+# end of DCCP Kernel Hacking
+
+CONFIG_IP_SCTP=m
+# CONFIG_SCTP_DBG_OBJCNT is not set
+# CONFIG_SCTP_DEFAULT_COOKIE_HMAC_MD5 is not set
+CONFIG_SCTP_DEFAULT_COOKIE_HMAC_SHA1=y
+# CONFIG_SCTP_DEFAULT_COOKIE_HMAC_NONE is not set
+CONFIG_SCTP_COOKIE_HMAC_MD5=y
+CONFIG_SCTP_COOKIE_HMAC_SHA1=y
+CONFIG_INET_SCTP_DIAG=m
+CONFIG_RDS=m
+CONFIG_RDS_RDMA=m
+CONFIG_RDS_TCP=m
+# CONFIG_RDS_DEBUG is not set
+CONFIG_TIPC=m
+CONFIG_TIPC_MEDIA_IB=y
+CONFIG_TIPC_MEDIA_UDP=y
+CONFIG_TIPC_CRYPTO=y
+CONFIG_TIPC_DIAG=m
+# CONFIG_ATM is not set
+CONFIG_L2TP=m
+CONFIG_L2TP_DEBUGFS=m
+CONFIG_L2TP_V3=y
+CONFIG_L2TP_IP=m
+CONFIG_L2TP_ETH=m
+CONFIG_STP=m
+CONFIG_GARP=m
+CONFIG_MRP=m
+CONFIG_BRIDGE=m
+CONFIG_BRIDGE_IGMP_SNOOPING=y
+CONFIG_BRIDGE_VLAN_FILTERING=y
+CONFIG_HAVE_NET_DSA=y
+CONFIG_NET_DSA=m
+# CONFIG_NET_DSA_TAG_AR9331 is not set
+CONFIG_NET_DSA_TAG_BRCM_COMMON=m
+CONFIG_NET_DSA_TAG_BRCM=m
+CONFIG_NET_DSA_TAG_BRCM_PREPEND=m
+# CONFIG_NET_DSA_TAG_GSWIP is not set
+CONFIG_NET_DSA_TAG_DSA=m
+CONFIG_NET_DSA_TAG_EDSA=m
+# CONFIG_NET_DSA_TAG_MTK is not set
+# CONFIG_NET_DSA_TAG_KSZ is not set
+# CONFIG_NET_DSA_TAG_OCELOT is not set
+# CONFIG_NET_DSA_TAG_QCA is not set
+# CONFIG_NET_DSA_TAG_LAN9303 is not set
+# CONFIG_NET_DSA_TAG_SJA1105 is not set
+CONFIG_NET_DSA_TAG_TRAILER=m
+CONFIG_VLAN_8021Q=m
+CONFIG_VLAN_8021Q_GVRP=y
+CONFIG_VLAN_8021Q_MVRP=y
+CONFIG_DECNET=m
+# CONFIG_DECNET_ROUTER is not set
+CONFIG_LLC=m
+CONFIG_LLC2=m
+# CONFIG_ATALK is not set
+CONFIG_X25=m
+CONFIG_LAPB=m
+CONFIG_PHONET=m
+CONFIG_6LOWPAN=m
+# CONFIG_6LOWPAN_DEBUGFS is not set
+CONFIG_6LOWPAN_NHC=m
+CONFIG_6LOWPAN_NHC_DEST=m
+CONFIG_6LOWPAN_NHC_FRAGMENT=m
+CONFIG_6LOWPAN_NHC_HOP=m
+CONFIG_6LOWPAN_NHC_IPV6=m
+CONFIG_6LOWPAN_NHC_MOBILITY=m
+CONFIG_6LOWPAN_NHC_ROUTING=m
+CONFIG_6LOWPAN_NHC_UDP=m
+# CONFIG_6LOWPAN_GHC_EXT_HDR_HOP is not set
+# CONFIG_6LOWPAN_GHC_UDP is not set
+# CONFIG_6LOWPAN_GHC_ICMPV6 is not set
+# CONFIG_6LOWPAN_GHC_EXT_HDR_DEST is not set
+# CONFIG_6LOWPAN_GHC_EXT_HDR_FRAG is not set
+# CONFIG_6LOWPAN_GHC_EXT_HDR_ROUTE is not set
+CONFIG_IEEE802154=m
+# CONFIG_IEEE802154_NL802154_EXPERIMENTAL is not set
+CONFIG_IEEE802154_SOCKET=m
+CONFIG_IEEE802154_6LOWPAN=m
+CONFIG_MAC802154=m
+CONFIG_NET_SCHED=y
+
+#
+# Queueing/Scheduling
+#
+CONFIG_NET_SCH_CBQ=m
+CONFIG_NET_SCH_HTB=m
+CONFIG_NET_SCH_HFSC=m
+CONFIG_NET_SCH_PRIO=m
+CONFIG_NET_SCH_MULTIQ=m
+CONFIG_NET_SCH_RED=m
+CONFIG_NET_SCH_SFB=m
+CONFIG_NET_SCH_SFQ=m
+CONFIG_NET_SCH_TEQL=m
+CONFIG_NET_SCH_TBF=m
+# CONFIG_NET_SCH_CBS is not set
+# CONFIG_NET_SCH_ETF is not set
+# CONFIG_NET_SCH_TAPRIO is not set
+CONFIG_NET_SCH_GRED=m
+CONFIG_NET_SCH_DSMARK=m
+CONFIG_NET_SCH_NETEM=m
+CONFIG_NET_SCH_DRR=m
+CONFIG_NET_SCH_MQPRIO=m
+# CONFIG_NET_SCH_SKBPRIO is not set
+CONFIG_NET_SCH_CHOKE=m
+CONFIG_NET_SCH_QFQ=m
+CONFIG_NET_SCH_CODEL=m
+CONFIG_NET_SCH_FQ_CODEL=m
+# CONFIG_NET_SCH_CAKE is not set
+CONFIG_NET_SCH_FQ=m
+CONFIG_NET_SCH_HHF=m
+CONFIG_NET_SCH_PIE=m
+# CONFIG_NET_SCH_FQ_PIE is not set
+CONFIG_NET_SCH_INGRESS=m
+CONFIG_NET_SCH_PLUG=m
+# CONFIG_NET_SCH_ETS is not set
+# CONFIG_NET_SCH_DEFAULT is not set
+
+#
+# Classification
+#
+CONFIG_NET_CLS=y
+CONFIG_NET_CLS_BASIC=m
+CONFIG_NET_CLS_TCINDEX=m
+CONFIG_NET_CLS_ROUTE4=m
+CONFIG_NET_CLS_FW=m
+CONFIG_NET_CLS_U32=m
+# CONFIG_CLS_U32_PERF is not set
+CONFIG_CLS_U32_MARK=y
+CONFIG_NET_CLS_RSVP=m
+CONFIG_NET_CLS_RSVP6=m
+CONFIG_NET_CLS_FLOW=m
+CONFIG_NET_CLS_CGROUP=m
+CONFIG_NET_CLS_BPF=m
+# CONFIG_NET_CLS_FLOWER is not set
+# CONFIG_NET_CLS_MATCHALL is not set
+CONFIG_NET_EMATCH=y
+CONFIG_NET_EMATCH_STACK=32
+CONFIG_NET_EMATCH_CMP=m
+CONFIG_NET_EMATCH_NBYTE=m
+CONFIG_NET_EMATCH_U32=m
+CONFIG_NET_EMATCH_META=m
+CONFIG_NET_EMATCH_TEXT=m
+CONFIG_NET_EMATCH_IPSET=m
+# CONFIG_NET_EMATCH_IPT is not set
+CONFIG_NET_CLS_ACT=y
+CONFIG_NET_ACT_POLICE=m
+CONFIG_NET_ACT_GACT=m
+CONFIG_GACT_PROB=y
+CONFIG_NET_ACT_MIRRED=m
+# CONFIG_NET_ACT_SAMPLE is not set
+CONFIG_NET_ACT_IPT=m
+CONFIG_NET_ACT_NAT=m
+CONFIG_NET_ACT_PEDIT=m
+CONFIG_NET_ACT_SIMP=m
+CONFIG_NET_ACT_SKBEDIT=m
+CONFIG_NET_ACT_CSUM=m
+# CONFIG_NET_ACT_MPLS is not set
+CONFIG_NET_ACT_VLAN=m
+# CONFIG_NET_ACT_BPF is not set
+# CONFIG_NET_ACT_CONNMARK is not set
+# CONFIG_NET_ACT_CTINFO is not set
+# CONFIG_NET_ACT_SKBMOD is not set
+# CONFIG_NET_ACT_IFE is not set
+# CONFIG_NET_ACT_TUNNEL_KEY is not set
+# CONFIG_NET_ACT_CT is not set
+# CONFIG_NET_TC_SKB_EXT is not set
+CONFIG_NET_SCH_FIFO=y
+CONFIG_DCB=y
+CONFIG_DNS_RESOLVER=y
+CONFIG_BATMAN_ADV=m
+CONFIG_BATMAN_ADV_BATMAN_V=y
+CONFIG_BATMAN_ADV_BLA=y
+CONFIG_BATMAN_ADV_DAT=y
+CONFIG_BATMAN_ADV_NC=y
+CONFIG_BATMAN_ADV_MCAST=y
+# CONFIG_BATMAN_ADV_DEBUGFS is not set
+# CONFIG_BATMAN_ADV_DEBUG is not set
+# CONFIG_BATMAN_ADV_SYSFS is not set
+# CONFIG_BATMAN_ADV_TRACING is not set
+CONFIG_OPENVSWITCH=m
+CONFIG_OPENVSWITCH_GRE=m
+CONFIG_OPENVSWITCH_VXLAN=m
+CONFIG_OPENVSWITCH_GENEVE=m
+CONFIG_VSOCKETS=m
+CONFIG_VSOCKETS_DIAG=m
+CONFIG_VSOCKETS_LOOPBACK=m
+CONFIG_VMWARE_VMCI_VSOCKETS=m
+# CONFIG_VIRTIO_VSOCKETS is not set
+CONFIG_VIRTIO_VSOCKETS_COMMON=m
+# CONFIG_HYPERV_VSOCKETS is not set
+CONFIG_NETLINK_DIAG=m
+CONFIG_MPLS=y
+CONFIG_NET_MPLS_GSO=m
+# CONFIG_MPLS_ROUTING is not set
+CONFIG_NET_NSH=m
+CONFIG_HSR=m
+CONFIG_NET_SWITCHDEV=y
+CONFIG_NET_L3_MASTER_DEV=y
+# CONFIG_NET_NCSI is not set
+CONFIG_RPS=y
+CONFIG_RFS_ACCEL=y
+CONFIG_XPS=y
+CONFIG_CGROUP_NET_PRIO=y
+CONFIG_CGROUP_NET_CLASSID=y
+CONFIG_NET_RX_BUSY_POLL=y
+CONFIG_BQL=y
+CONFIG_BPF_JIT=y
+CONFIG_NET_FLOW_LIMIT=y
+
+#
+# Network testing
+#
+CONFIG_NET_PKTGEN=m
+# CONFIG_NET_DROP_MONITOR is not set
+# end of Network testing
+# end of Networking options
+
+# CONFIG_HAMRADIO is not set
+# CONFIG_CAN is not set
+# CONFIG_BT is not set
+CONFIG_AF_RXRPC=m
+# CONFIG_AF_RXRPC_IPV6 is not set
+# CONFIG_AF_RXRPC_INJECT_LOSS is not set
+# CONFIG_AF_RXRPC_DEBUG is not set
+# CONFIG_RXKAD is not set
+# CONFIG_AF_KCM is not set
+CONFIG_FIB_RULES=y
+# CONFIG_WIRELESS is not set
+# CONFIG_WIMAX is not set
+# CONFIG_RFKILL is not set
+CONFIG_NET_9P=m
+CONFIG_NET_9P_VIRTIO=m
+# CONFIG_NET_9P_XEN is not set
+CONFIG_NET_9P_RDMA=m
+# CONFIG_NET_9P_DEBUG is not set
+# CONFIG_CAIF is not set
+CONFIG_CEPH_LIB=m
+CONFIG_CEPH_LIB_PRETTYDEBUG=y
+CONFIG_CEPH_LIB_USE_DNS_RESOLVER=y
+# CONFIG_NFC is not set
+# CONFIG_PSAMPLE is not set
+# CONFIG_NET_IFE is not set
+# CONFIG_LWTUNNEL is not set
+CONFIG_DST_CACHE=y
+CONFIG_GRO_CELLS=y
+CONFIG_NET_DEVLINK=y
+CONFIG_PAGE_POOL=y
+CONFIG_FAILOVER=y
+CONFIG_ETHTOOL_NETLINK=y
+CONFIG_HAVE_EBPF_JIT=y
+
+#
+# Device Drivers
+#
+CONFIG_HAVE_EISA=y
+# CONFIG_EISA is not set
+CONFIG_HAVE_PCI=y
+CONFIG_PCI=y
+CONFIG_PCI_DOMAINS=y
+CONFIG_PCIEPORTBUS=y
+CONFIG_HOTPLUG_PCI_PCIE=y
+CONFIG_PCIEAER=y
+# CONFIG_PCIEAER_INJECT is not set
+# CONFIG_PCIE_ECRC is not set
+CONFIG_PCIEASPM=y
+CONFIG_PCIEASPM_DEFAULT=y
+# CONFIG_PCIEASPM_POWERSAVE is not set
+# CONFIG_PCIEASPM_POWER_SUPERSAVE is not set
+# CONFIG_PCIEASPM_PERFORMANCE is not set
+CONFIG_PCIE_PME=y
+# CONFIG_PCIE_DPC is not set
+# CONFIG_PCIE_PTM is not set
+# CONFIG_PCIE_BW is not set
+CONFIG_PCI_MSI=y
+CONFIG_PCI_MSI_IRQ_DOMAIN=y
+CONFIG_PCI_QUIRKS=y
+# CONFIG_PCI_DEBUG is not set
+CONFIG_PCI_REALLOC_ENABLE_AUTO=y
+CONFIG_PCI_STUB=m
+# CONFIG_PCI_PF_STUB is not set
+CONFIG_XEN_PCIDEV_FRONTEND=m
+CONFIG_PCI_ATS=y
+CONFIG_PCI_LOCKLESS_CONFIG=y
+CONFIG_PCI_IOV=y
+CONFIG_PCI_PRI=y
+CONFIG_PCI_PASID=y
+CONFIG_PCI_LABEL=y
+# CONFIG_PCI_HYPERV is not set
+CONFIG_HOTPLUG_PCI=y
+CONFIG_HOTPLUG_PCI_ACPI=y
+CONFIG_HOTPLUG_PCI_ACPI_IBM=m
+CONFIG_HOTPLUG_PCI_CPCI=y
+CONFIG_HOTPLUG_PCI_CPCI_ZT5550=m
+CONFIG_HOTPLUG_PCI_CPCI_GENERIC=m
+# CONFIG_HOTPLUG_PCI_SHPC is not set
+
+#
+# PCI controller drivers
+#
+# CONFIG_VMD is not set
+# CONFIG_PCI_HYPERV_INTERFACE is not set
+
+#
+# DesignWare PCI Core Support
+#
+# CONFIG_PCIE_DW_PLAT_HOST is not set
+# CONFIG_PCI_MESON is not set
+# end of DesignWare PCI Core Support
+
+#
+# Mobiveil PCIe Core Support
+#
+# end of Mobiveil PCIe Core Support
+
+#
+# Cadence PCIe controllers support
+#
+# end of Cadence PCIe controllers support
+# end of PCI controller drivers
+
+#
+# PCI Endpoint
+#
+# CONFIG_PCI_ENDPOINT is not set
+# end of PCI Endpoint
+
+#
+# PCI switch controller drivers
+#
+# CONFIG_PCI_SW_SWITCHTEC is not set
+# end of PCI switch controller drivers
+
+CONFIG_PCCARD=m
+CONFIG_PCMCIA=m
+CONFIG_PCMCIA_LOAD_CIS=y
+CONFIG_CARDBUS=y
+
+#
+# PC-card bridges
+#
+CONFIG_YENTA=m
+CONFIG_YENTA_O2=y
+CONFIG_YENTA_RICOH=y
+CONFIG_YENTA_TI=y
+CONFIG_YENTA_ENE_TUNE=y
+CONFIG_YENTA_TOSHIBA=y
+CONFIG_PD6729=m
+CONFIG_I82092=m
+CONFIG_PCCARD_NONSTATIC=y
+# CONFIG_RAPIDIO is not set
+
+#
+# Generic Driver Options
+#
+CONFIG_UEVENT_HELPER=y
+CONFIG_UEVENT_HELPER_PATH=""
+CONFIG_DEVTMPFS=y
+CONFIG_DEVTMPFS_MOUNT=y
+# CONFIG_STANDALONE is not set
+CONFIG_PREVENT_FIRMWARE_BUILD=y
+
+#
+# Firmware loader
+#
+CONFIG_FW_LOADER=y
+CONFIG_FW_LOADER_PAGED_BUF=y
+CONFIG_EXTRA_FIRMWARE=""
+CONFIG_FW_LOADER_USER_HELPER=y
+# CONFIG_FW_LOADER_USER_HELPER_FALLBACK is not set
+# CONFIG_FW_LOADER_COMPRESS is not set
+CONFIG_FW_CACHE=y
+# end of Firmware loader
+
+CONFIG_ALLOW_DEV_COREDUMP=y
+# CONFIG_DEBUG_DRIVER is not set
+# CONFIG_DEBUG_DEVRES is not set
+# CONFIG_DEBUG_TEST_DRIVER_REMOVE is not set
+# CONFIG_TEST_ASYNC_DRIVER_PROBE is not set
+CONFIG_SYS_HYPERVISOR=y
+CONFIG_GENERIC_CPU_AUTOPROBE=y
+CONFIG_GENERIC_CPU_VULNERABILITIES=y
+CONFIG_REGMAP=y
+CONFIG_REGMAP_I2C=y
+CONFIG_REGMAP_MMIO=y
+CONFIG_REGMAP_IRQ=y
+# end of Generic Driver Options
+
+#
+# Bus devices
+#
+# CONFIG_MHI_BUS is not set
+# end of Bus devices
+
+CONFIG_CONNECTOR=y
+CONFIG_PROC_EVENTS=y
+# CONFIG_GNSS is not set
+# CONFIG_MTD is not set
+# CONFIG_OF is not set
+CONFIG_ARCH_MIGHT_HAVE_PC_PARPORT=y
+CONFIG_PARPORT=m
+CONFIG_PARPORT_PC=m
+CONFIG_PARPORT_SERIAL=m
+CONFIG_PARPORT_PC_FIFO=y
+# CONFIG_PARPORT_PC_SUPERIO is not set
+CONFIG_PARPORT_PC_PCMCIA=m
+CONFIG_PARPORT_AX88796=m
+CONFIG_PARPORT_1284=y
+CONFIG_PARPORT_NOT_PC=y
+CONFIG_PNP=y
+# CONFIG_PNP_DEBUG_MESSAGES is not set
+
+#
+# Protocols
+#
+CONFIG_PNPACPI=y
+CONFIG_BLK_DEV=y
+CONFIG_BLK_DEV_NULL_BLK=m
+CONFIG_BLK_DEV_FD=m
+CONFIG_CDROM=y
+# CONFIG_PARIDE is not set
+CONFIG_BLK_DEV_PCIESSD_MTIP32XX=m
+CONFIG_ZRAM=m
+# CONFIG_ZRAM_WRITEBACK is not set
+# CONFIG_ZRAM_MEMORY_TRACKING is not set
+CONFIG_BLK_DEV_UMEM=m
+CONFIG_BLK_DEV_LOOP=y
+CONFIG_BLK_DEV_LOOP_MIN_COUNT=8
+CONFIG_BLK_DEV_CRYPTOLOOP=m
+CONFIG_BLK_DEV_DRBD=m
+# CONFIG_DRBD_FAULT_INJECTION is not set
+CONFIG_BLK_DEV_NBD=m
+CONFIG_BLK_DEV_SKD=m
+CONFIG_BLK_DEV_SX8=m
+CONFIG_BLK_DEV_RAM=y
+CONFIG_BLK_DEV_RAM_COUNT=16
+CONFIG_BLK_DEV_RAM_SIZE=65536
+CONFIG_CDROM_PKTCDVD=m
+CONFIG_CDROM_PKTCDVD_BUFFERS=8
+# CONFIG_CDROM_PKTCDVD_WCACHE is not set
+CONFIG_ATA_OVER_ETH=m
+CONFIG_XEN_BLKDEV_FRONTEND=y
+CONFIG_XEN_BLKDEV_BACKEND=m
+CONFIG_VIRTIO_BLK=y
+CONFIG_BLK_DEV_RBD=m
+CONFIG_BLK_DEV_RSXX=m
+
+#
+# NVME Support
+#
+CONFIG_NVME_CORE=m
+CONFIG_BLK_DEV_NVME=m
+# CONFIG_NVME_MULTIPATH is not set
+# CONFIG_NVME_HWMON is not set
+# CONFIG_NVME_RDMA is not set
+# CONFIG_NVME_FC is not set
+# CONFIG_NVME_TCP is not set
+# CONFIG_NVME_TARGET is not set
+# end of NVME Support
+
+#
+# Misc devices
+#
+CONFIG_SENSORS_LIS3LV02D=m
+CONFIG_AD525X_DPOT=m
+CONFIG_AD525X_DPOT_I2C=m
+CONFIG_DUMMY_IRQ=m
+CONFIG_IBM_ASM=m
+CONFIG_PHANTOM=m
+CONFIG_TIFM_CORE=m
+CONFIG_TIFM_7XX1=m
+CONFIG_ICS932S401=m
+CONFIG_ENCLOSURE_SERVICES=m
+CONFIG_HP_ILO=m
+CONFIG_APDS9802ALS=m
+CONFIG_ISL29003=m
+CONFIG_ISL29020=m
+CONFIG_SENSORS_TSL2550=m
+CONFIG_SENSORS_BH1770=m
+CONFIG_SENSORS_APDS990X=m
+CONFIG_HMC6352=m
+CONFIG_DS1682=m
+CONFIG_VMWARE_BALLOON=m
+CONFIG_SRAM=y
+# CONFIG_PCI_ENDPOINT_TEST is not set
+# CONFIG_XILINX_SDFEC is not set
+CONFIG_PVPANIC=m
+CONFIG_C2PORT=m
+CONFIG_C2PORT_DURAMAR_2150=m
+
+#
+# EEPROM support
+#
+CONFIG_EEPROM_AT24=m
+CONFIG_EEPROM_LEGACY=m
+CONFIG_EEPROM_MAX6875=m
+CONFIG_EEPROM_93CX6=m
+# CONFIG_EEPROM_IDT_89HPESX is not set
+# CONFIG_EEPROM_EE1004 is not set
+# end of EEPROM support
+
+CONFIG_CB710_CORE=m
+# CONFIG_CB710_DEBUG is not set
+CONFIG_CB710_DEBUG_ASSUMPTIONS=y
+
+#
+# Texas Instruments shared transport line discipline
+#
+CONFIG_TI_ST=m
+# end of Texas Instruments shared transport line discipline
+
+CONFIG_SENSORS_LIS3_I2C=m
+CONFIG_ALTERA_STAPL=m
+CONFIG_INTEL_MEI=m
+CONFIG_INTEL_MEI_ME=m
+CONFIG_INTEL_MEI_TXE=m
+CONFIG_VMWARE_VMCI=m
+
+#
+# Intel MIC & related support
+#
+CONFIG_INTEL_MIC_BUS=m
+# CONFIG_SCIF_BUS is not set
+# CONFIG_VOP_BUS is not set
+# end of Intel MIC & related support
+
+CONFIG_GENWQE=m
+CONFIG_GENWQE_PLATFORM_ERROR_RECOVERY=0
+CONFIG_ECHO=m
+# CONFIG_MISC_ALCOR_PCI is not set
+# CONFIG_MISC_RTSX_PCI is not set
+# CONFIG_MISC_RTSX_USB is not set
+# CONFIG_HABANA_AI is not set
+# CONFIG_UACCE is not set
+# end of Misc devices
+
+CONFIG_HAVE_IDE=y
+# CONFIG_IDE is not set
+
+#
+# SCSI device support
+#
+CONFIG_SCSI_MOD=y
+CONFIG_RAID_ATTRS=m
+CONFIG_SCSI=y
+CONFIG_SCSI_DMA=y
+CONFIG_SCSI_NETLINK=y
+CONFIG_SCSI_PROC_FS=y
+
+#
+# SCSI support type (disk, tape, CD-ROM)
+#
+CONFIG_BLK_DEV_SD=y
+CONFIG_CHR_DEV_ST=m
+CONFIG_BLK_DEV_SR=y
+CONFIG_CHR_DEV_SG=y
+CONFIG_CHR_DEV_SCH=m
+CONFIG_SCSI_ENCLOSURE=m
+CONFIG_SCSI_CONSTANTS=y
+CONFIG_SCSI_LOGGING=y
+CONFIG_SCSI_SCAN_ASYNC=y
+
+#
+# SCSI Transports
+#
+CONFIG_SCSI_SPI_ATTRS=m
+CONFIG_SCSI_FC_ATTRS=m
+CONFIG_SCSI_ISCSI_ATTRS=m
+CONFIG_SCSI_SAS_ATTRS=m
+CONFIG_SCSI_SAS_LIBSAS=m
+CONFIG_SCSI_SAS_ATA=y
+CONFIG_SCSI_SAS_HOST_SMP=y
+CONFIG_SCSI_SRP_ATTRS=m
+# end of SCSI Transports
+
+CONFIG_SCSI_LOWLEVEL=y
+CONFIG_ISCSI_TCP=m
+CONFIG_ISCSI_BOOT_SYSFS=m
+CONFIG_SCSI_CXGB3_ISCSI=m
+CONFIG_SCSI_CXGB4_ISCSI=m
+CONFIG_SCSI_BNX2_ISCSI=m
+CONFIG_SCSI_BNX2X_FCOE=m
+CONFIG_BE2ISCSI=m
+CONFIG_BLK_DEV_3W_XXXX_RAID=m
+CONFIG_SCSI_HPSA=m
+CONFIG_SCSI_3W_9XXX=m
+CONFIG_SCSI_3W_SAS=m
+CONFIG_SCSI_ACARD=m
+CONFIG_SCSI_AACRAID=m
+CONFIG_SCSI_AIC7XXX=m
+CONFIG_AIC7XXX_CMDS_PER_DEVICE=8
+CONFIG_AIC7XXX_RESET_DELAY_MS=5000
+# CONFIG_AIC7XXX_DEBUG_ENABLE is not set
+CONFIG_AIC7XXX_DEBUG_MASK=0
+CONFIG_AIC7XXX_REG_PRETTY_PRINT=y
+CONFIG_SCSI_AIC79XX=m
+CONFIG_AIC79XX_CMDS_PER_DEVICE=32
+CONFIG_AIC79XX_RESET_DELAY_MS=5000
+# CONFIG_AIC79XX_DEBUG_ENABLE is not set
+CONFIG_AIC79XX_DEBUG_MASK=0
+CONFIG_AIC79XX_REG_PRETTY_PRINT=y
+CONFIG_SCSI_AIC94XX=m
+# CONFIG_AIC94XX_DEBUG is not set
+CONFIG_SCSI_MVSAS=m
+# CONFIG_SCSI_MVSAS_DEBUG is not set
+# CONFIG_SCSI_MVSAS_TASKLET is not set
+CONFIG_SCSI_MVUMI=m
+CONFIG_SCSI_DPT_I2O=m
+CONFIG_SCSI_ADVANSYS=m
+CONFIG_SCSI_ARCMSR=m
+CONFIG_SCSI_ESAS2R=m
+CONFIG_MEGARAID_NEWGEN=y
+CONFIG_MEGARAID_MM=m
+CONFIG_MEGARAID_MAILBOX=m
+CONFIG_MEGARAID_LEGACY=m
+CONFIG_MEGARAID_SAS=m
+CONFIG_SCSI_MPT3SAS=m
+CONFIG_SCSI_MPT2SAS_MAX_SGE=128
+CONFIG_SCSI_MPT3SAS_MAX_SGE=128
+CONFIG_SCSI_MPT2SAS=m
+# CONFIG_SCSI_SMARTPQI is not set
+CONFIG_SCSI_UFSHCD=m
+CONFIG_SCSI_UFSHCD_PCI=m
+# CONFIG_SCSI_UFS_DWC_TC_PCI is not set
+CONFIG_SCSI_UFSHCD_PLATFORM=m
+# CONFIG_SCSI_UFS_CDNS_PLATFORM is not set
+# CONFIG_SCSI_UFS_DWC_TC_PLATFORM is not set
+# CONFIG_SCSI_UFS_BSG is not set
+CONFIG_SCSI_HPTIOP=m
+CONFIG_SCSI_BUSLOGIC=m
+CONFIG_SCSI_FLASHPOINT=y
+# CONFIG_SCSI_MYRB is not set
+# CONFIG_SCSI_MYRS is not set
+CONFIG_VMWARE_PVSCSI=m
+CONFIG_XEN_SCSI_FRONTEND=m
+CONFIG_HYPERV_STORAGE=m
+CONFIG_LIBFC=m
+CONFIG_LIBFCOE=m
+CONFIG_FCOE=m
+CONFIG_FCOE_FNIC=m
+# CONFIG_SCSI_SNIC is not set
+CONFIG_SCSI_DMX3191D=m
+CONFIG_SCSI_FDOMAIN=m
+# CONFIG_SCSI_FDOMAIN_PCI is not set
+CONFIG_SCSI_GDTH=m
+CONFIG_SCSI_ISCI=m
+CONFIG_SCSI_IPS=m
+CONFIG_SCSI_INITIO=m
+CONFIG_SCSI_INIA100=m
+CONFIG_SCSI_PPA=m
+CONFIG_SCSI_IMM=m
+# CONFIG_SCSI_IZIP_EPP16 is not set
+# CONFIG_SCSI_IZIP_SLOW_CTR is not set
+CONFIG_SCSI_STEX=m
+CONFIG_SCSI_SYM53C8XX_2=m
+CONFIG_SCSI_SYM53C8XX_DMA_ADDRESSING_MODE=1
+CONFIG_SCSI_SYM53C8XX_DEFAULT_TAGS=16
+CONFIG_SCSI_SYM53C8XX_MAX_TAGS=64
+CONFIG_SCSI_SYM53C8XX_MMIO=y
+CONFIG_SCSI_IPR=m
+CONFIG_SCSI_IPR_TRACE=y
+CONFIG_SCSI_IPR_DUMP=y
+CONFIG_SCSI_QLOGIC_1280=m
+CONFIG_SCSI_QLA_FC=m
+CONFIG_TCM_QLA2XXX=m
+# CONFIG_TCM_QLA2XXX_DEBUG is not set
+CONFIG_SCSI_QLA_ISCSI=m
+CONFIG_SCSI_LPFC=m
+# CONFIG_SCSI_LPFC_DEBUG_FS is not set
+CONFIG_SCSI_DC395x=m
+CONFIG_SCSI_AM53C974=m
+CONFIG_SCSI_WD719X=m
+CONFIG_SCSI_DEBUG=m
+CONFIG_SCSI_PMCRAID=m
+CONFIG_SCSI_PM8001=m
+CONFIG_SCSI_BFA_FC=m
+CONFIG_SCSI_VIRTIO=m
+CONFIG_SCSI_CHELSIO_FCOE=m
+CONFIG_SCSI_LOWLEVEL_PCMCIA=y
+CONFIG_PCMCIA_AHA152X=m
+CONFIG_PCMCIA_FDOMAIN=m
+CONFIG_PCMCIA_QLOGIC=m
+CONFIG_PCMCIA_SYM53C500=m
+# CONFIG_SCSI_DH is not set
+# end of SCSI device support
+
+CONFIG_ATA=y
+CONFIG_SATA_HOST=y
+CONFIG_PATA_TIMINGS=y
+CONFIG_ATA_VERBOSE_ERROR=y
+CONFIG_ATA_FORCE=y
+CONFIG_ATA_ACPI=y
+CONFIG_SATA_ZPODD=y
+CONFIG_SATA_PMP=y
+
+#
+# Controllers with non-SFF native interface
+#
+CONFIG_SATA_AHCI=m
+CONFIG_SATA_MOBILE_LPM_POLICY=0
+CONFIG_SATA_AHCI_PLATFORM=m
+CONFIG_SATA_INIC162X=m
+CONFIG_SATA_ACARD_AHCI=m
+CONFIG_SATA_SIL24=m
+CONFIG_ATA_SFF=y
+
+#
+# SFF controllers with custom DMA interface
+#
+CONFIG_PDC_ADMA=m
+CONFIG_SATA_QSTOR=m
+CONFIG_SATA_SX4=m
+CONFIG_ATA_BMDMA=y
+
+#
+# SATA SFF controllers with BMDMA
+#
+CONFIG_ATA_PIIX=y
+# CONFIG_SATA_DWC is not set
+CONFIG_SATA_MV=m
+CONFIG_SATA_NV=m
+CONFIG_SATA_PROMISE=m
+CONFIG_SATA_SIL=m
+CONFIG_SATA_SIS=m
+CONFIG_SATA_SVW=m
+CONFIG_SATA_ULI=m
+CONFIG_SATA_VIA=m
+CONFIG_SATA_VITESSE=m
+
+#
+# PATA SFF controllers with BMDMA
+#
+CONFIG_PATA_ALI=m
+CONFIG_PATA_AMD=m
+CONFIG_PATA_ARTOP=m
+CONFIG_PATA_ATIIXP=m
+CONFIG_PATA_ATP867X=m
+CONFIG_PATA_CMD64X=m
+CONFIG_PATA_CYPRESS=m
+CONFIG_PATA_EFAR=m
+CONFIG_PATA_HPT366=m
+CONFIG_PATA_HPT37X=m
+CONFIG_PATA_HPT3X2N=m
+CONFIG_PATA_HPT3X3=m
+# CONFIG_PATA_HPT3X3_DMA is not set
+CONFIG_PATA_IT8213=m
+CONFIG_PATA_IT821X=m
+CONFIG_PATA_JMICRON=m
+CONFIG_PATA_MARVELL=m
+CONFIG_PATA_NETCELL=m
+CONFIG_PATA_NINJA32=m
+CONFIG_PATA_NS87415=m
+CONFIG_PATA_OLDPIIX=m
+CONFIG_PATA_OPTIDMA=m
+CONFIG_PATA_PDC2027X=m
+CONFIG_PATA_PDC_OLD=m
+CONFIG_PATA_RADISYS=m
+CONFIG_PATA_RDC=m
+CONFIG_PATA_SCH=m
+CONFIG_PATA_SERVERWORKS=m
+CONFIG_PATA_SIL680=m
+CONFIG_PATA_SIS=y
+CONFIG_PATA_TOSHIBA=m
+CONFIG_PATA_TRIFLEX=m
+CONFIG_PATA_VIA=m
+CONFIG_PATA_WINBOND=m
+
+#
+# PIO-only SFF controllers
+#
+CONFIG_PATA_CMD640_PCI=m
+CONFIG_PATA_MPIIX=m
+CONFIG_PATA_NS87410=m
+CONFIG_PATA_OPTI=m
+CONFIG_PATA_PCMCIA=m
+CONFIG_PATA_PLATFORM=m
+CONFIG_PATA_RZ1000=m
+
+#
+# Generic fallback / legacy drivers
+#
+CONFIG_PATA_ACPI=m
+CONFIG_ATA_GENERIC=y
+CONFIG_PATA_LEGACY=m
+CONFIG_MD=y
+CONFIG_BLK_DEV_MD=y
+CONFIG_MD_AUTODETECT=y
+CONFIG_MD_LINEAR=m
+CONFIG_MD_RAID0=m
+CONFIG_MD_RAID1=m
+CONFIG_MD_RAID10=m
+CONFIG_MD_RAID456=m
+CONFIG_MD_MULTIPATH=m
+CONFIG_MD_FAULTY=m
+# CONFIG_MD_CLUSTER is not set
+CONFIG_BCACHE=m
+# CONFIG_BCACHE_DEBUG is not set
+# CONFIG_BCACHE_CLOSURES_DEBUG is not set
+CONFIG_BLK_DEV_DM_BUILTIN=y
+CONFIG_BLK_DEV_DM=y
+# CONFIG_DM_DEBUG is not set
+CONFIG_DM_BUFIO=m
+# CONFIG_DM_DEBUG_BLOCK_MANAGER_LOCKING is not set
+CONFIG_DM_BIO_PRISON=m
+CONFIG_DM_PERSISTENT_DATA=m
+# CONFIG_DM_UNSTRIPED is not set
+CONFIG_DM_CRYPT=m
+CONFIG_DM_SNAPSHOT=m
+CONFIG_DM_THIN_PROVISIONING=m
+CONFIG_DM_CACHE=m
+CONFIG_DM_CACHE_SMQ=m
+# CONFIG_DM_WRITECACHE is not set
+CONFIG_DM_ERA=m
+# CONFIG_DM_CLONE is not set
+CONFIG_DM_MIRROR=m
+CONFIG_DM_LOG_USERSPACE=m
+CONFIG_DM_RAID=m
+CONFIG_DM_ZERO=m
+CONFIG_DM_MULTIPATH=m
+CONFIG_DM_MULTIPATH_QL=m
+CONFIG_DM_MULTIPATH_ST=m
+CONFIG_DM_DELAY=m
+# CONFIG_DM_DUST is not set
+# CONFIG_DM_INIT is not set
+CONFIG_DM_UEVENT=y
+CONFIG_DM_FLAKEY=m
+CONFIG_DM_VERITY=m
+# CONFIG_DM_VERITY_VERIFY_ROOTHASH_SIG is not set
+# CONFIG_DM_VERITY_FEC is not set
+CONFIG_DM_SWITCH=m
+# CONFIG_DM_LOG_WRITES is not set
+# CONFIG_DM_INTEGRITY is not set
+CONFIG_TARGET_CORE=m
+CONFIG_TCM_IBLOCK=m
+CONFIG_TCM_FILEIO=m
+CONFIG_TCM_PSCSI=m
+CONFIG_TCM_USER2=m
+CONFIG_LOOPBACK_TARGET=m
+CONFIG_TCM_FC=m
+CONFIG_ISCSI_TARGET=m
+# CONFIG_ISCSI_TARGET_CXGB4 is not set
+CONFIG_FUSION=y
+CONFIG_FUSION_SPI=m
+CONFIG_FUSION_FC=m
+CONFIG_FUSION_SAS=m
+CONFIG_FUSION_MAX_SGE=128
+CONFIG_FUSION_CTL=m
+CONFIG_FUSION_LAN=m
+CONFIG_FUSION_LOGGING=y
+
+#
+# IEEE 1394 (FireWire) support
+#
+# CONFIG_FIREWIRE is not set
+# CONFIG_FIREWIRE_NOSY is not set
+# end of IEEE 1394 (FireWire) support
+
+# CONFIG_MACINTOSH_DRIVERS is not set
+CONFIG_NETDEVICES=y
+CONFIG_MII=m
+CONFIG_NET_CORE=y
+CONFIG_BONDING=m
+CONFIG_DUMMY=m
+# CONFIG_WIREGUARD is not set
+CONFIG_EQUALIZER=m
+CONFIG_NET_FC=y
+CONFIG_IFB=m
+CONFIG_NET_TEAM=m
+CONFIG_NET_TEAM_MODE_BROADCAST=m
+CONFIG_NET_TEAM_MODE_ROUNDROBIN=m
+CONFIG_NET_TEAM_MODE_RANDOM=m
+CONFIG_NET_TEAM_MODE_ACTIVEBACKUP=m
+CONFIG_NET_TEAM_MODE_LOADBALANCE=m
+CONFIG_MACVLAN=m
+CONFIG_MACVTAP=m
+CONFIG_IPVLAN_L3S=y
+CONFIG_IPVLAN=m
+# CONFIG_IPVTAP is not set
+CONFIG_VXLAN=m
+CONFIG_GENEVE=m
+# CONFIG_BAREUDP is not set
+# CONFIG_GTP is not set
+# CONFIG_MACSEC is not set
+CONFIG_NETCONSOLE=m
+CONFIG_NETCONSOLE_DYNAMIC=y
+CONFIG_NETPOLL=y
+CONFIG_NET_POLL_CONTROLLER=y
+CONFIG_TUN=y
+CONFIG_TAP=m
+# CONFIG_TUN_VNET_CROSS_LE is not set
+CONFIG_VETH=m
+CONFIG_VIRTIO_NET=y
+CONFIG_NLMON=m
+# CONFIG_NET_VRF is not set
+CONFIG_SUNGEM_PHY=m
+# CONFIG_ARCNET is not set
+
+#
+# Distributed Switch Architecture drivers
+#
+CONFIG_B53=m
+# CONFIG_B53_MDIO_DRIVER is not set
+# CONFIG_B53_MMAP_DRIVER is not set
+# CONFIG_B53_SRAB_DRIVER is not set
+# CONFIG_B53_SERDES is not set
+CONFIG_NET_DSA_BCM_SF2=m
+# CONFIG_NET_DSA_LOOP is not set
+# CONFIG_NET_DSA_LANTIQ_GSWIP is not set
+# CONFIG_NET_DSA_MT7530 is not set
+CONFIG_NET_DSA_MV88E6060=m
+# CONFIG_NET_DSA_MICROCHIP_KSZ9477 is not set
+# CONFIG_NET_DSA_MICROCHIP_KSZ8795 is not set
+CONFIG_NET_DSA_MV88E6XXX=m
+CONFIG_NET_DSA_MV88E6XXX_GLOBAL2=y
+# CONFIG_NET_DSA_MV88E6XXX_PTP is not set
+# CONFIG_NET_DSA_AR9331 is not set
+# CONFIG_NET_DSA_QCA8K is not set
+# CONFIG_NET_DSA_REALTEK_SMI is not set
+# CONFIG_NET_DSA_SMSC_LAN9303_I2C is not set
+# CONFIG_NET_DSA_SMSC_LAN9303_MDIO is not set
+# CONFIG_NET_DSA_VITESSE_VSC73XX_PLATFORM is not set
+# end of Distributed Switch Architecture drivers
+
+CONFIG_ETHERNET=y
+CONFIG_MDIO=m
+CONFIG_NET_VENDOR_3COM=y
+CONFIG_PCMCIA_3C574=m
+CONFIG_PCMCIA_3C589=m
+CONFIG_VORTEX=m
+CONFIG_TYPHOON=m
+CONFIG_NET_VENDOR_ADAPTEC=y
+CONFIG_ADAPTEC_STARFIRE=m
+CONFIG_NET_VENDOR_AGERE=y
+CONFIG_ET131X=m
+CONFIG_NET_VENDOR_ALACRITECH=y
+# CONFIG_SLICOSS is not set
+CONFIG_NET_VENDOR_ALTEON=y
+CONFIG_ACENIC=m
+# CONFIG_ACENIC_OMIT_TIGON_I is not set
+CONFIG_ALTERA_TSE=m
+CONFIG_NET_VENDOR_AMAZON=y
+# CONFIG_ENA_ETHERNET is not set
+CONFIG_NET_VENDOR_AMD=y
+CONFIG_AMD8111_ETH=m
+CONFIG_PCNET32=m
+CONFIG_PCMCIA_NMCLAN=m
+# CONFIG_AMD_XGBE is not set
+CONFIG_NET_VENDOR_AQUANTIA=y
+# CONFIG_AQTION is not set
+CONFIG_NET_VENDOR_ARC=y
+CONFIG_NET_VENDOR_ATHEROS=y
+CONFIG_ATL2=m
+CONFIG_ATL1=m
+CONFIG_ATL1E=m
+CONFIG_ATL1C=m
+CONFIG_ALX=m
+CONFIG_NET_VENDOR_AURORA=y
+# CONFIG_AURORA_NB8800 is not set
+CONFIG_NET_VENDOR_BROADCOM=y
+CONFIG_B44=m
+CONFIG_B44_PCI_AUTOSELECT=y
+CONFIG_B44_PCICORE_AUTOSELECT=y
+CONFIG_B44_PCI=y
+CONFIG_BCMGENET=m
+CONFIG_BNX2=m
+CONFIG_CNIC=m
+CONFIG_TIGON3=m
+CONFIG_TIGON3_HWMON=y
+CONFIG_BNX2X=m
+CONFIG_BNX2X_SRIOV=y
+# CONFIG_SYSTEMPORT is not set
+# CONFIG_BNXT is not set
+CONFIG_NET_VENDOR_BROCADE=y
+CONFIG_BNA=m
+CONFIG_NET_VENDOR_CADENCE=y
+# CONFIG_MACB is not set
+CONFIG_NET_VENDOR_CAVIUM=y
+# CONFIG_THUNDER_NIC_PF is not set
+# CONFIG_THUNDER_NIC_VF is not set
+# CONFIG_THUNDER_NIC_BGX is not set
+# CONFIG_THUNDER_NIC_RGX is not set
+# CONFIG_CAVIUM_PTP is not set
+# CONFIG_LIQUIDIO is not set
+# CONFIG_LIQUIDIO_VF is not set
+CONFIG_NET_VENDOR_CHELSIO=y
+CONFIG_CHELSIO_T1=m
+CONFIG_CHELSIO_T1_1G=y
+CONFIG_CHELSIO_T3=m
+CONFIG_CHELSIO_T4=m
+CONFIG_CHELSIO_T4_DCB=y
+# CONFIG_CHELSIO_T4_FCOE is not set
+CONFIG_CHELSIO_T4VF=m
+CONFIG_CHELSIO_LIB=m
+CONFIG_NET_VENDOR_CISCO=y
+CONFIG_ENIC=m
+CONFIG_NET_VENDOR_CORTINA=y
+CONFIG_CX_ECAT=m
+CONFIG_DNET=m
+CONFIG_NET_VENDOR_DEC=y
+CONFIG_NET_TULIP=y
+CONFIG_DE2104X=m
+CONFIG_DE2104X_DSL=0
+CONFIG_TULIP=m
+# CONFIG_TULIP_MWI is not set
+# CONFIG_TULIP_MMIO is not set
+# CONFIG_TULIP_NAPI is not set
+CONFIG_DE4X5=m
+CONFIG_WINBOND_840=m
+CONFIG_DM9102=m
+CONFIG_ULI526X=m
+CONFIG_PCMCIA_XIRCOM=m
+CONFIG_NET_VENDOR_DLINK=y
+CONFIG_DL2K=m
+CONFIG_SUNDANCE=m
+# CONFIG_SUNDANCE_MMIO is not set
+CONFIG_NET_VENDOR_EMULEX=y
+CONFIG_BE2NET=m
+CONFIG_BE2NET_HWMON=y
+CONFIG_BE2NET_BE2=y
+CONFIG_BE2NET_BE3=y
+CONFIG_BE2NET_LANCER=y
+CONFIG_BE2NET_SKYHAWK=y
+CONFIG_NET_VENDOR_EZCHIP=y
+CONFIG_NET_VENDOR_FUJITSU=y
+CONFIG_PCMCIA_FMVJ18X=m
+CONFIG_NET_VENDOR_GOOGLE=y
+# CONFIG_GVE is not set
+CONFIG_NET_VENDOR_HUAWEI=y
+# CONFIG_HINIC is not set
+CONFIG_NET_VENDOR_I825XX=y
+CONFIG_NET_VENDOR_INTEL=y
+CONFIG_E100=m
+CONFIG_E1000=m
+CONFIG_E1000E=m
+CONFIG_E1000E_HWTS=y
+CONFIG_IGB=m
+CONFIG_IGB_HWMON=y
+CONFIG_IGB_DCA=y
+CONFIG_IGBVF=m
+CONFIG_IXGB=m
+CONFIG_IXGBE=m
+CONFIG_IXGBE_HWMON=y
+CONFIG_IXGBE_DCA=y
+CONFIG_IXGBE_DCB=y
+CONFIG_IXGBEVF=m
+CONFIG_I40E=m
+CONFIG_I40E_DCB=y
+CONFIG_IAVF=m
+CONFIG_I40EVF=m
+# CONFIG_ICE is not set
+CONFIG_FM10K=m
+# CONFIG_IGC is not set
+CONFIG_JME=m
+CONFIG_NET_VENDOR_MARVELL=y
+CONFIG_MVMDIO=m
+CONFIG_SKGE=m
+# CONFIG_SKGE_DEBUG is not set
+CONFIG_SKGE_GENESIS=y
+CONFIG_SKY2=m
+# CONFIG_SKY2_DEBUG is not set
+CONFIG_NET_VENDOR_MELLANOX=y
+CONFIG_MLX4_EN=m
+CONFIG_MLX4_EN_DCB=y
+CONFIG_MLX4_CORE=m
+CONFIG_MLX4_DEBUG=y
+CONFIG_MLX4_CORE_GEN2=y
+CONFIG_MLX5_CORE=m
+# CONFIG_MLX5_FPGA is not set
+# CONFIG_MLX5_CORE_EN is not set
+# CONFIG_MLXSW_CORE is not set
+# CONFIG_MLXFW is not set
+CONFIG_NET_VENDOR_MICREL=y
+CONFIG_KS8842=m
+CONFIG_KS8851_MLL=m
+CONFIG_KSZ884X_PCI=m
+CONFIG_NET_VENDOR_MICROCHIP=y
+# CONFIG_LAN743X is not set
+CONFIG_NET_VENDOR_MICROSEMI=y
+# CONFIG_MSCC_OCELOT_SWITCH is not set
+CONFIG_NET_VENDOR_MYRI=y
+CONFIG_MYRI10GE=m
+CONFIG_MYRI10GE_DCA=y
+CONFIG_FEALNX=m
+CONFIG_NET_VENDOR_NATSEMI=y
+CONFIG_NATSEMI=m
+CONFIG_NS83820=m
+CONFIG_NET_VENDOR_NETERION=y
+CONFIG_S2IO=m
+CONFIG_VXGE=m
+# CONFIG_VXGE_DEBUG_TRACE_ALL is not set
+CONFIG_NET_VENDOR_NETRONOME=y
+# CONFIG_NFP is not set
+CONFIG_NET_VENDOR_NI=y
+# CONFIG_NI_XGE_MANAGEMENT_ENET is not set
+CONFIG_NET_VENDOR_8390=y
+CONFIG_PCMCIA_AXNET=m
+CONFIG_NE2K_PCI=m
+CONFIG_PCMCIA_PCNET=m
+CONFIG_NET_VENDOR_NVIDIA=y
+CONFIG_FORCEDETH=m
+CONFIG_NET_VENDOR_OKI=y
+CONFIG_ETHOC=m
+CONFIG_NET_VENDOR_PACKET_ENGINES=y
+CONFIG_HAMACHI=m
+CONFIG_YELLOWFIN=m
+CONFIG_NET_VENDOR_PENSANDO=y
+# CONFIG_IONIC is not set
+CONFIG_NET_VENDOR_QLOGIC=y
+CONFIG_QLA3XXX=m
+CONFIG_QLCNIC=m
+CONFIG_QLCNIC_SRIOV=y
+CONFIG_QLCNIC_DCB=y
+CONFIG_QLCNIC_HWMON=y
+CONFIG_NETXEN_NIC=m
+# CONFIG_QED is not set
+CONFIG_NET_VENDOR_QUALCOMM=y
+# CONFIG_QCOM_EMAC is not set
+# CONFIG_RMNET is not set
+CONFIG_NET_VENDOR_RDC=y
+CONFIG_R6040=m
+CONFIG_NET_VENDOR_REALTEK=y
+CONFIG_ATP=m
+CONFIG_8139CP=m
+CONFIG_8139TOO=m
+CONFIG_8139TOO_PIO=y
+# CONFIG_8139TOO_TUNE_TWISTER is not set
+CONFIG_8139TOO_8129=y
+# CONFIG_8139_OLD_RX_RESET is not set
+CONFIG_R8169=m
+CONFIG_NET_VENDOR_RENESAS=y
+CONFIG_NET_VENDOR_ROCKER=y
+# CONFIG_ROCKER is not set
+CONFIG_NET_VENDOR_SAMSUNG=y
+CONFIG_SXGBE_ETH=m
+CONFIG_NET_VENDOR_SEEQ=y
+CONFIG_NET_VENDOR_SOLARFLARE=y
+CONFIG_SFC=m
+CONFIG_SFC_MCDI_MON=y
+CONFIG_SFC_SRIOV=y
+CONFIG_SFC_MCDI_LOGGING=y
+# CONFIG_SFC_FALCON is not set
+CONFIG_NET_VENDOR_SILAN=y
+CONFIG_SC92031=m
+CONFIG_NET_VENDOR_SIS=y
+CONFIG_SIS900=m
+CONFIG_SIS190=m
+CONFIG_NET_VENDOR_SMSC=y
+CONFIG_PCMCIA_SMC91C92=m
+CONFIG_EPIC100=m
+CONFIG_SMSC911X=m
+CONFIG_SMSC9420=m
+CONFIG_NET_VENDOR_SOCIONEXT=y
+CONFIG_NET_VENDOR_STMICRO=y
+CONFIG_STMMAC_ETH=m
+# CONFIG_STMMAC_SELFTESTS is not set
+CONFIG_STMMAC_PLATFORM=m
+CONFIG_DWMAC_GENERIC=m
+CONFIG_DWMAC_INTEL=m
+# CONFIG_STMMAC_PCI is not set
+CONFIG_NET_VENDOR_SUN=y
+CONFIG_HAPPYMEAL=m
+CONFIG_SUNGEM=m
+CONFIG_CASSINI=m
+CONFIG_NIU=m
+CONFIG_NET_VENDOR_SYNOPSYS=y
+# CONFIG_DWC_XLGMAC is not set
+CONFIG_NET_VENDOR_TEHUTI=y
+CONFIG_TEHUTI=m
+CONFIG_NET_VENDOR_TI=y
+# CONFIG_TI_CPSW_PHY_SEL is not set
+CONFIG_TLAN=m
+CONFIG_NET_VENDOR_VIA=y
+CONFIG_VIA_RHINE=m
+CONFIG_VIA_RHINE_MMIO=y
+CONFIG_VIA_VELOCITY=m
+CONFIG_NET_VENDOR_WIZNET=y
+CONFIG_WIZNET_W5100=m
+CONFIG_WIZNET_W5300=m
+# CONFIG_WIZNET_BUS_DIRECT is not set
+# CONFIG_WIZNET_BUS_INDIRECT is not set
+CONFIG_WIZNET_BUS_ANY=y
+CONFIG_NET_VENDOR_XILINX=y
+# CONFIG_XILINX_AXI_EMAC is not set
+# CONFIG_XILINX_LL_TEMAC is not set
+CONFIG_NET_VENDOR_XIRCOM=y
+CONFIG_PCMCIA_XIRC2PS=m
+CONFIG_FDDI=y
+CONFIG_DEFXX=m
+# CONFIG_DEFXX_MMIO is not set
+CONFIG_SKFP=m
+# CONFIG_HIPPI is not set
+CONFIG_NET_SB1000=m
+CONFIG_MDIO_DEVICE=y
+CONFIG_MDIO_BUS=y
+CONFIG_MDIO_BCM_UNIMAC=m
+CONFIG_MDIO_BITBANG=m
+CONFIG_MDIO_GPIO=m
+# CONFIG_MDIO_MSCC_MIIM is not set
+# CONFIG_MDIO_MVUSB is not set
+# CONFIG_MDIO_THUNDER is not set
+CONFIG_MDIO_XPCS=m
+CONFIG_PHYLINK=m
+CONFIG_PHYLIB=y
+CONFIG_SWPHY=y
+# CONFIG_LED_TRIGGER_PHY is not set
+
+#
+# MII PHY device drivers
+#
+# CONFIG_SFP is not set
+# CONFIG_ADIN_PHY is not set
+CONFIG_AMD_PHY=m
+# CONFIG_AQUANTIA_PHY is not set
+# CONFIG_AX88796B_PHY is not set
+CONFIG_BCM7XXX_PHY=m
+CONFIG_BCM87XX_PHY=m
+CONFIG_BCM_NET_PHYLIB=m
+CONFIG_BROADCOM_PHY=m
+# CONFIG_BCM84881_PHY is not set
+CONFIG_CICADA_PHY=m
+# CONFIG_CORTINA_PHY is not set
+CONFIG_DAVICOM_PHY=m
+# CONFIG_DP83822_PHY is not set
+# CONFIG_DP83TC811_PHY is not set
+# CONFIG_DP83848_PHY is not set
+# CONFIG_DP83867_PHY is not set
+# CONFIG_DP83869_PHY is not set
+CONFIG_FIXED_PHY=y
+CONFIG_ICPLUS_PHY=m
+# CONFIG_INTEL_XWAY_PHY is not set
+CONFIG_LSI_ET1011C_PHY=m
+CONFIG_LXT_PHY=m
+CONFIG_MARVELL_PHY=m
+# CONFIG_MARVELL_10G_PHY is not set
+CONFIG_MICREL_PHY=m
+# CONFIG_MICROCHIP_PHY is not set
+# CONFIG_MICROCHIP_T1_PHY is not set
+# CONFIG_MICROSEMI_PHY is not set
+CONFIG_NATIONAL_PHY=m
+# CONFIG_NXP_TJA11XX_PHY is not set
+CONFIG_QSEMI_PHY=m
+CONFIG_REALTEK_PHY=m
+# CONFIG_RENESAS_PHY is not set
+# CONFIG_ROCKCHIP_PHY is not set
+CONFIG_SMSC_PHY=m
+CONFIG_STE10XP=m
+# CONFIG_TERANETICS_PHY is not set
+CONFIG_VITESSE_PHY=m
+# CONFIG_XILINX_GMII2RGMII is not set
+CONFIG_PLIP=m
+CONFIG_PPP=y
+CONFIG_PPP_BSDCOMP=m
+CONFIG_PPP_DEFLATE=m
+CONFIG_PPP_FILTER=y
+CONFIG_PPP_MPPE=m
+CONFIG_PPP_MULTILINK=y
+CONFIG_PPPOE=m
+CONFIG_PPTP=m
+CONFIG_PPPOL2TP=m
+CONFIG_PPP_ASYNC=m
+CONFIG_PPP_SYNC_TTY=m
+CONFIG_SLIP=m
+CONFIG_SLHC=y
+CONFIG_SLIP_COMPRESSED=y
+CONFIG_SLIP_SMART=y
+CONFIG_SLIP_MODE_SLIP6=y
+# CONFIG_USB_NET_DRIVERS is not set
+# CONFIG_WLAN is not set
+
+#
+# Enable WiMAX (Networking options) to see the WiMAX drivers
+#
+# CONFIG_WAN is not set
+CONFIG_IEEE802154_DRIVERS=m
+CONFIG_IEEE802154_FAKELB=m
+# CONFIG_IEEE802154_ATUSB is not set
+# CONFIG_IEEE802154_HWSIM is not set
+CONFIG_XEN_NETDEV_FRONTEND=y
+CONFIG_XEN_NETDEV_BACKEND=m
+CONFIG_VMXNET3=m
+# CONFIG_FUJITSU_ES is not set
+CONFIG_HYPERV_NET=m
+# CONFIG_NETDEVSIM is not set
+CONFIG_NET_FAILOVER=y
+# CONFIG_ISDN is not set
+# CONFIG_NVM is not set
+
+#
+# Input device support
+#
+CONFIG_INPUT=y
+CONFIG_INPUT_LEDS=y
+CONFIG_INPUT_FF_MEMLESS=m
+CONFIG_INPUT_POLLDEV=m
+CONFIG_INPUT_SPARSEKMAP=m
+CONFIG_INPUT_MATRIXKMAP=m
+
+#
+# Userland interfaces
+#
+CONFIG_INPUT_MOUSEDEV=y
+CONFIG_INPUT_MOUSEDEV_PSAUX=y
+CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
+CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
+CONFIG_INPUT_JOYDEV=m
+CONFIG_INPUT_EVDEV=y
+CONFIG_INPUT_EVBUG=m
+
+#
+# Input Device Drivers
+#
+CONFIG_INPUT_KEYBOARD=y
+CONFIG_KEYBOARD_ADP5520=m
+CONFIG_KEYBOARD_ADP5588=m
+CONFIG_KEYBOARD_ADP5589=m
+CONFIG_KEYBOARD_ATKBD=y
+# CONFIG_KEYBOARD_QT1050 is not set
+CONFIG_KEYBOARD_QT1070=m
+CONFIG_KEYBOARD_QT2160=m
+# CONFIG_KEYBOARD_DLINK_DIR685 is not set
+CONFIG_KEYBOARD_LKKBD=m
+CONFIG_KEYBOARD_GPIO=m
+CONFIG_KEYBOARD_GPIO_POLLED=m
+CONFIG_KEYBOARD_TCA6416=m
+CONFIG_KEYBOARD_TCA8418=m
+CONFIG_KEYBOARD_MATRIX=m
+CONFIG_KEYBOARD_LM8323=m
+CONFIG_KEYBOARD_LM8333=m
+CONFIG_KEYBOARD_MAX7359=m
+CONFIG_KEYBOARD_MCS=m
+CONFIG_KEYBOARD_MPR121=m
+CONFIG_KEYBOARD_NEWTON=m
+CONFIG_KEYBOARD_OPENCORES=m
+CONFIG_KEYBOARD_SAMSUNG=m
+CONFIG_KEYBOARD_STOWAWAY=m
+CONFIG_KEYBOARD_SUNKBD=m
+# CONFIG_KEYBOARD_TM2_TOUCHKEY is not set
+CONFIG_KEYBOARD_TWL4030=m
+CONFIG_KEYBOARD_XTKBD=m
+CONFIG_KEYBOARD_CROS_EC=m
+CONFIG_INPUT_MOUSE=y
+CONFIG_MOUSE_PS2=m
+CONFIG_MOUSE_PS2_ALPS=y
+CONFIG_MOUSE_PS2_BYD=y
+CONFIG_MOUSE_PS2_LOGIPS2PP=y
+CONFIG_MOUSE_PS2_SYNAPTICS=y
+CONFIG_MOUSE_PS2_SYNAPTICS_SMBUS=y
+CONFIG_MOUSE_PS2_CYPRESS=y
+CONFIG_MOUSE_PS2_LIFEBOOK=y
+CONFIG_MOUSE_PS2_TRACKPOINT=y
+CONFIG_MOUSE_PS2_ELANTECH=y
+CONFIG_MOUSE_PS2_ELANTECH_SMBUS=y
+CONFIG_MOUSE_PS2_SENTELIC=y
+CONFIG_MOUSE_PS2_TOUCHKIT=y
+CONFIG_MOUSE_PS2_FOCALTECH=y
+# CONFIG_MOUSE_PS2_VMMOUSE is not set
+CONFIG_MOUSE_PS2_SMBUS=y
+CONFIG_MOUSE_SERIAL=m
+CONFIG_MOUSE_APPLETOUCH=m
+CONFIG_MOUSE_BCM5974=m
+CONFIG_MOUSE_CYAPA=m
+CONFIG_MOUSE_ELAN_I2C=m
+CONFIG_MOUSE_ELAN_I2C_I2C=y
+CONFIG_MOUSE_ELAN_I2C_SMBUS=y
+CONFIG_MOUSE_VSXXXAA=m
+CONFIG_MOUSE_GPIO=m
+CONFIG_MOUSE_SYNAPTICS_I2C=m
+CONFIG_MOUSE_SYNAPTICS_USB=m
+# CONFIG_INPUT_JOYSTICK is not set
+# CONFIG_INPUT_TABLET is not set
+# CONFIG_INPUT_TOUCHSCREEN is not set
+# CONFIG_INPUT_MISC is not set
+CONFIG_RMI4_CORE=m
+# CONFIG_RMI4_I2C is not set
+# CONFIG_RMI4_SMB is not set
+CONFIG_RMI4_F03=y
+CONFIG_RMI4_F03_SERIO=m
+CONFIG_RMI4_2D_SENSOR=y
+CONFIG_RMI4_F11=y
+CONFIG_RMI4_F12=y
+CONFIG_RMI4_F30=y
+# CONFIG_RMI4_F34 is not set
+# CONFIG_RMI4_F55 is not set
+
+#
+# Hardware I/O ports
+#
+CONFIG_SERIO=y
+CONFIG_ARCH_MIGHT_HAVE_PC_SERIO=y
+CONFIG_SERIO_I8042=y
+CONFIG_SERIO_SERPORT=m
+CONFIG_SERIO_CT82C710=m
+CONFIG_SERIO_PARKBD=m
+CONFIG_SERIO_PCIPS2=m
+CONFIG_SERIO_LIBPS2=y
+CONFIG_SERIO_RAW=m
+CONFIG_SERIO_ALTERA_PS2=m
+CONFIG_SERIO_PS2MULT=m
+CONFIG_SERIO_ARC_PS2=m
+CONFIG_HYPERV_KEYBOARD=m
+# CONFIG_SERIO_GPIO_PS2 is not set
+# CONFIG_USERIO is not set
+CONFIG_GAMEPORT=m
+CONFIG_GAMEPORT_NS558=m
+CONFIG_GAMEPORT_L4=m
+CONFIG_GAMEPORT_EMU10K1=m
+CONFIG_GAMEPORT_FM801=m
+# end of Hardware I/O ports
+# end of Input device support
+
+#
+# Character devices
+#
+CONFIG_TTY=y
+CONFIG_VT=y
+CONFIG_CONSOLE_TRANSLATIONS=y
+CONFIG_VT_CONSOLE=y
+CONFIG_VT_CONSOLE_SLEEP=y
+CONFIG_HW_CONSOLE=y
+CONFIG_VT_HW_CONSOLE_BINDING=y
+CONFIG_UNIX98_PTYS=y
+CONFIG_LEGACY_PTYS=y
+CONFIG_LEGACY_PTY_COUNT=0
+CONFIG_LDISC_AUTOLOAD=y
+
+#
+# Serial drivers
+#
+CONFIG_SERIAL_EARLYCON=y
+CONFIG_SERIAL_8250=y
+# CONFIG_SERIAL_8250_DEPRECATED_OPTIONS is not set
+CONFIG_SERIAL_8250_PNP=y
+# CONFIG_SERIAL_8250_16550A_VARIANTS is not set
+# CONFIG_SERIAL_8250_FINTEK is not set
+CONFIG_SERIAL_8250_CONSOLE=y
+CONFIG_SERIAL_8250_DMA=y
+CONFIG_SERIAL_8250_PCI=y
+CONFIG_SERIAL_8250_EXAR=y
+CONFIG_SERIAL_8250_CS=m
+CONFIG_SERIAL_8250_NR_UARTS=48
+CONFIG_SERIAL_8250_RUNTIME_UARTS=32
+CONFIG_SERIAL_8250_EXTENDED=y
+CONFIG_SERIAL_8250_MANY_PORTS=y
+CONFIG_SERIAL_8250_SHARE_IRQ=y
+# CONFIG_SERIAL_8250_DETECT_IRQ is not set
+CONFIG_SERIAL_8250_RSA=y
+CONFIG_SERIAL_8250_DWLIB=y
+CONFIG_SERIAL_8250_DW=m
+# CONFIG_SERIAL_8250_RT288X is not set
+CONFIG_SERIAL_8250_LPSS=y
+CONFIG_SERIAL_8250_MID=y
+
+#
+# Non-8250 serial port support
+#
+CONFIG_SERIAL_KGDB_NMI=y
+# CONFIG_SERIAL_UARTLITE is not set
+CONFIG_SERIAL_CORE=y
+CONFIG_SERIAL_CORE_CONSOLE=y
+CONFIG_CONSOLE_POLL=y
+CONFIG_SERIAL_JSM=m
+CONFIG_SERIAL_SCCNXP=y
+CONFIG_SERIAL_SCCNXP_CONSOLE=y
+CONFIG_SERIAL_SC16IS7XX_CORE=m
+CONFIG_SERIAL_SC16IS7XX=m
+CONFIG_SERIAL_SC16IS7XX_I2C=y
+CONFIG_SERIAL_ALTERA_JTAGUART=m
+CONFIG_SERIAL_ALTERA_UART=m
+CONFIG_SERIAL_ALTERA_UART_MAXPORTS=4
+CONFIG_SERIAL_ALTERA_UART_BAUDRATE=115200
+CONFIG_SERIAL_ARC=m
+CONFIG_SERIAL_ARC_NR_PORTS=1
+CONFIG_SERIAL_RP2=m
+CONFIG_SERIAL_RP2_NR_UARTS=32
+CONFIG_SERIAL_FSL_LPUART=m
+# CONFIG_SERIAL_FSL_LINFLEXUART is not set
+# CONFIG_SERIAL_SPRD is not set
+# end of Serial drivers
+
+CONFIG_SERIAL_MCTRL_GPIO=y
+CONFIG_SERIAL_NONSTANDARD=y
+CONFIG_ROCKETPORT=m
+CONFIG_CYCLADES=m
+# CONFIG_CYZ_INTR is not set
+CONFIG_MOXA_INTELLIO=m
+CONFIG_MOXA_SMARTIO=m
+CONFIG_SYNCLINK=m
+CONFIG_SYNCLINKMP=m
+CONFIG_SYNCLINK_GT=m
+CONFIG_ISI=m
+CONFIG_N_HDLC=m
+CONFIG_N_GSM=m
+CONFIG_NOZOMI=m
+# CONFIG_NULL_TTY is not set
+CONFIG_TRACE_ROUTER=m
+CONFIG_TRACE_SINK=m
+CONFIG_HVC_DRIVER=y
+CONFIG_HVC_IRQ=y
+CONFIG_HVC_XEN=y
+CONFIG_HVC_XEN_FRONTEND=y
+# CONFIG_SERIAL_DEV_BUS is not set
+CONFIG_TTY_PRINTK=y
+CONFIG_TTY_PRINTK_LEVEL=6
+CONFIG_PRINTER=m
+# CONFIG_LP_CONSOLE is not set
+CONFIG_PPDEV=m
+CONFIG_VIRTIO_CONSOLE=y
+CONFIG_IPMI_HANDLER=m
+CONFIG_IPMI_DMI_DECODE=y
+CONFIG_IPMI_PLAT_DATA=y
+# CONFIG_IPMI_PANIC_EVENT is not set
+CONFIG_IPMI_DEVICE_INTERFACE=m
+CONFIG_IPMI_SI=m
+CONFIG_IPMI_SSIF=m
+CONFIG_IPMI_WATCHDOG=m
+CONFIG_IPMI_POWEROFF=m
+CONFIG_HW_RANDOM=y
+CONFIG_HW_RANDOM_TIMERIOMEM=m
+CONFIG_HW_RANDOM_INTEL=m
+CONFIG_HW_RANDOM_AMD=m
+CONFIG_HW_RANDOM_VIA=m
+CONFIG_HW_RANDOM_VIRTIO=m
+CONFIG_APPLICOM=m
+
+#
+# PCMCIA character devices
+#
+CONFIG_SYNCLINK_CS=m
+CONFIG_CARDMAN_4000=m
+CONFIG_CARDMAN_4040=m
+# CONFIG_SCR24X is not set
+CONFIG_IPWIRELESS=m
+# end of PCMCIA character devices
+
+CONFIG_MWAVE=m
+CONFIG_DEVMEM=y
+# CONFIG_DEVKMEM is not set
+CONFIG_NVRAM=m
+CONFIG_RAW_DRIVER=m
+CONFIG_MAX_RAW_DEVS=256
+CONFIG_DEVPORT=y
+CONFIG_HPET=y
+CONFIG_HPET_MMAP=y
+CONFIG_HPET_MMAP_DEFAULT=y
+CONFIG_HANGCHECK_TIMER=m
+CONFIG_TCG_TPM=y
+CONFIG_HW_RANDOM_TPM=y
+CONFIG_TCG_TIS_CORE=y
+CONFIG_TCG_TIS=y
+CONFIG_TCG_TIS_I2C_ATMEL=m
+CONFIG_TCG_TIS_I2C_INFINEON=m
+CONFIG_TCG_TIS_I2C_NUVOTON=m
+CONFIG_TCG_NSC=m
+CONFIG_TCG_ATMEL=m
+CONFIG_TCG_INFINEON=m
+CONFIG_TCG_XEN=m
+CONFIG_TCG_CRB=y
+# CONFIG_TCG_VTPM_PROXY is not set
+# CONFIG_TCG_TIS_ST33ZP24_I2C is not set
+CONFIG_TELCLOCK=m
+CONFIG_XILLYBUS=m
+CONFIG_XILLYBUS_PCIE=m
+# end of Character devices
+
+# CONFIG_RANDOM_TRUST_CPU is not set
+# CONFIG_RANDOM_TRUST_BOOTLOADER is not set
+
+#
+# I2C support
+#
+CONFIG_I2C=y
+CONFIG_ACPI_I2C_OPREGION=y
+CONFIG_I2C_BOARDINFO=y
+CONFIG_I2C_COMPAT=y
+CONFIG_I2C_CHARDEV=y
+CONFIG_I2C_MUX=m
+
+#
+# Multiplexer I2C Chip support
+#
+CONFIG_I2C_MUX_GPIO=m
+# CONFIG_I2C_MUX_LTC4306 is not set
+CONFIG_I2C_MUX_PCA9541=m
+CONFIG_I2C_MUX_PCA954x=m
+# CONFIG_I2C_MUX_REG is not set
+# CONFIG_I2C_MUX_MLXCPLD is not set
+# end of Multiplexer I2C Chip support
+
+CONFIG_I2C_HELPER_AUTO=y
+CONFIG_I2C_SMBUS=m
+CONFIG_I2C_ALGOBIT=m
+CONFIG_I2C_ALGOPCA=m
+
+#
+# I2C Hardware Bus support
+#
+
+#
+# PC SMBus host controller drivers
+#
+CONFIG_I2C_ALI1535=m
+CONFIG_I2C_ALI1563=m
+CONFIG_I2C_ALI15X3=m
+CONFIG_I2C_AMD756=m
+CONFIG_I2C_AMD756_S4882=m
+CONFIG_I2C_AMD8111=m
+# CONFIG_I2C_AMD_MP2 is not set
+CONFIG_I2C_I801=m
+CONFIG_I2C_ISCH=m
+CONFIG_I2C_ISMT=m
+CONFIG_I2C_PIIX4=m
+CONFIG_I2C_NFORCE2=m
+CONFIG_I2C_NFORCE2_S4985=m
+# CONFIG_I2C_NVIDIA_GPU is not set
+CONFIG_I2C_SIS5595=m
+CONFIG_I2C_SIS630=m
+CONFIG_I2C_SIS96X=m
+CONFIG_I2C_VIA=m
+CONFIG_I2C_VIAPRO=m
+
+#
+# ACPI drivers
+#
+CONFIG_I2C_SCMI=m
+
+#
+# I2C system bus drivers (mostly embedded / system-on-chip)
+#
+CONFIG_I2C_CBUS_GPIO=m
+CONFIG_I2C_DESIGNWARE_CORE=m
+CONFIG_I2C_DESIGNWARE_PLATFORM=m
+# CONFIG_I2C_DESIGNWARE_SLAVE is not set
+CONFIG_I2C_DESIGNWARE_PCI=m
+# CONFIG_I2C_DESIGNWARE_BAYTRAIL is not set
+# CONFIG_I2C_EMEV2 is not set
+CONFIG_I2C_GPIO=m
+# CONFIG_I2C_GPIO_FAULT_INJECTOR is not set
+CONFIG_I2C_KEMPLD=m
+CONFIG_I2C_OCORES=m
+CONFIG_I2C_PCA_PLATFORM=m
+CONFIG_I2C_SIMTEC=m
+CONFIG_I2C_XILINX=m
+
+#
+# External I2C/SMBus adapter drivers
+#
+CONFIG_I2C_DIOLAN_U2C=m
+CONFIG_I2C_DLN2=m
+CONFIG_I2C_PARPORT=m
+CONFIG_I2C_ROBOTFUZZ_OSIF=m
+CONFIG_I2C_TAOS_EVM=m
+CONFIG_I2C_TINY_USB=m
+CONFIG_I2C_VIPERBOARD=m
+
+#
+# Other I2C/SMBus bus drivers
+#
+# CONFIG_I2C_MLXCPLD is not set
+CONFIG_I2C_CROS_EC_TUNNEL=m
+# end of I2C Hardware Bus support
+
+CONFIG_I2C_STUB=m
+# CONFIG_I2C_SLAVE is not set
+# CONFIG_I2C_DEBUG_CORE is not set
+# CONFIG_I2C_DEBUG_ALGO is not set
+# CONFIG_I2C_DEBUG_BUS is not set
+# end of I2C support
+
+# CONFIG_I3C is not set
+# CONFIG_SPI is not set
+# CONFIG_SPMI is not set
+# CONFIG_HSI is not set
+CONFIG_PPS=m
+# CONFIG_PPS_DEBUG is not set
+
+#
+# PPS clients support
+#
+# CONFIG_PPS_CLIENT_KTIMER is not set
+CONFIG_PPS_CLIENT_LDISC=m
+CONFIG_PPS_CLIENT_PARPORT=m
+CONFIG_PPS_CLIENT_GPIO=m
+
+#
+# PPS generators support
+#
+
+#
+# PTP clock support
+#
+CONFIG_PTP_1588_CLOCK=m
+
+#
+# Enable PHYLIB and NETWORK_PHY_TIMESTAMPING to see the additional clocks.
+#
+CONFIG_PTP_1588_CLOCK_KVM=m
+# CONFIG_PTP_1588_CLOCK_IDT82P33 is not set
+# CONFIG_PTP_1588_CLOCK_IDTCM is not set
+# CONFIG_PTP_1588_CLOCK_VMW is not set
+# end of PTP clock support
+
+CONFIG_PINCTRL=y
+CONFIG_PINMUX=y
+CONFIG_PINCONF=y
+CONFIG_GENERIC_PINCONF=y
+# CONFIG_DEBUG_PINCTRL is not set
+# CONFIG_PINCTRL_AMD is not set
+# CONFIG_PINCTRL_MCP23S08 is not set
+# CONFIG_PINCTRL_SX150X is not set
+CONFIG_PINCTRL_BAYTRAIL=y
+CONFIG_PINCTRL_CHERRYVIEW=m
+# CONFIG_PINCTRL_LYNXPOINT is not set
+# CONFIG_PINCTRL_BROXTON is not set
+# CONFIG_PINCTRL_CANNONLAKE is not set
+# CONFIG_PINCTRL_CEDARFORK is not set
+# CONFIG_PINCTRL_DENVERTON is not set
+# CONFIG_PINCTRL_GEMINILAKE is not set
+# CONFIG_PINCTRL_ICELAKE is not set
+# CONFIG_PINCTRL_LEWISBURG is not set
+# CONFIG_PINCTRL_SUNRISEPOINT is not set
+# CONFIG_PINCTRL_TIGERLAKE is not set
+CONFIG_GPIOLIB=y
+CONFIG_GPIOLIB_FASTPATH_LIMIT=512
+CONFIG_GPIO_ACPI=y
+CONFIG_GPIOLIB_IRQCHIP=y
+# CONFIG_DEBUG_GPIO is not set
+CONFIG_GPIO_SYSFS=y
+CONFIG_GPIO_GENERIC=m
+CONFIG_GPIO_MAX730X=m
+
+#
+# Memory mapped GPIO drivers
+#
+# CONFIG_GPIO_AMDPT is not set
+# CONFIG_GPIO_DWAPB is not set
+# CONFIG_GPIO_EXAR is not set
+CONFIG_GPIO_GENERIC_PLATFORM=m
+CONFIG_GPIO_ICH=m
+# CONFIG_GPIO_MB86S7X is not set
+CONFIG_GPIO_VX855=m
+# CONFIG_GPIO_XILINX is not set
+# CONFIG_GPIO_AMD_FCH is not set
+# end of Memory mapped GPIO drivers
+
+#
+# Port-mapped I/O GPIO drivers
+#
+CONFIG_GPIO_F7188X=m
+# CONFIG_GPIO_IT87 is not set
+CONFIG_GPIO_SCH=m
+CONFIG_GPIO_SCH311X=m
+# CONFIG_GPIO_WINBOND is not set
+# CONFIG_GPIO_WS16C48 is not set
+# end of Port-mapped I/O GPIO drivers
+
+#
+# I2C GPIO expanders
+#
+CONFIG_GPIO_ADP5588=m
+CONFIG_GPIO_MAX7300=m
+CONFIG_GPIO_MAX732X=m
+CONFIG_GPIO_PCA953X=m
+CONFIG_GPIO_PCF857X=m
+# CONFIG_GPIO_TPIC2810 is not set
+# end of I2C GPIO expanders
+
+#
+# MFD GPIO expanders
+#
+CONFIG_GPIO_ADP5520=m
+CONFIG_GPIO_ARIZONA=m
+CONFIG_GPIO_DA9052=m
+CONFIG_GPIO_DA9055=m
+CONFIG_GPIO_DLN2=m
+CONFIG_GPIO_JANZ_TTL=m
+CONFIG_GPIO_KEMPLD=m
+CONFIG_GPIO_LP3943=m
+CONFIG_GPIO_PALMAS=y
+CONFIG_GPIO_RC5T583=y
+CONFIG_GPIO_TPS6586X=y
+CONFIG_GPIO_TPS65910=y
+CONFIG_GPIO_TPS65912=m
+CONFIG_GPIO_TWL4030=m
+CONFIG_GPIO_TWL6040=m
+CONFIG_GPIO_WM831X=m
+CONFIG_GPIO_WM8350=m
+CONFIG_GPIO_WM8994=m
+# end of MFD GPIO expanders
+
+#
+# PCI GPIO expanders
+#
+CONFIG_GPIO_AMD8111=m
+# CONFIG_GPIO_BT8XX is not set
+CONFIG_GPIO_ML_IOH=m
+# CONFIG_GPIO_PCI_IDIO_16 is not set
+# CONFIG_GPIO_PCIE_IDIO_24 is not set
+CONFIG_GPIO_RDC321X=m
+# end of PCI GPIO expanders
+
+#
+# USB GPIO expanders
+#
+CONFIG_GPIO_VIPERBOARD=m
+# end of USB GPIO expanders
+
+# CONFIG_GPIO_MOCKUP is not set
+CONFIG_W1=m
+CONFIG_W1_CON=y
+
+#
+# 1-wire Bus Masters
+#
+CONFIG_W1_MASTER_MATROX=m
+CONFIG_W1_MASTER_DS2490=m
+CONFIG_W1_MASTER_DS2482=m
+CONFIG_W1_MASTER_DS1WM=m
+CONFIG_W1_MASTER_GPIO=m
+# CONFIG_W1_MASTER_SGI is not set
+# end of 1-wire Bus Masters
+
+#
+# 1-wire Slaves
+#
+CONFIG_W1_SLAVE_THERM=m
+CONFIG_W1_SLAVE_SMEM=m
+# CONFIG_W1_SLAVE_DS2405 is not set
+CONFIG_W1_SLAVE_DS2408=m
+CONFIG_W1_SLAVE_DS2408_READBACK=y
+CONFIG_W1_SLAVE_DS2413=m
+CONFIG_W1_SLAVE_DS2406=m
+CONFIG_W1_SLAVE_DS2423=m
+# CONFIG_W1_SLAVE_DS2805 is not set
+# CONFIG_W1_SLAVE_DS2430 is not set
+CONFIG_W1_SLAVE_DS2431=m
+CONFIG_W1_SLAVE_DS2433=m
+# CONFIG_W1_SLAVE_DS2433_CRC is not set
+# CONFIG_W1_SLAVE_DS2438 is not set
+# CONFIG_W1_SLAVE_DS250X is not set
+CONFIG_W1_SLAVE_DS2780=m
+CONFIG_W1_SLAVE_DS2781=m
+CONFIG_W1_SLAVE_DS28E04=m
+# CONFIG_W1_SLAVE_DS28E17 is not set
+# end of 1-wire Slaves
+
+CONFIG_POWER_AVS=y
+# CONFIG_QCOM_CPR is not set
+CONFIG_POWER_RESET=y
+# CONFIG_POWER_RESET_RESTART is not set
+CONFIG_POWER_SUPPLY=y
+# CONFIG_POWER_SUPPLY_DEBUG is not set
+CONFIG_POWER_SUPPLY_HWMON=y
+CONFIG_PDA_POWER=m
+CONFIG_MAX8925_POWER=m
+CONFIG_WM831X_BACKUP=m
+CONFIG_WM831X_POWER=m
+CONFIG_WM8350_POWER=m
+CONFIG_TEST_POWER=m
+CONFIG_BATTERY_88PM860X=m
+# CONFIG_CHARGER_ADP5061 is not set
+CONFIG_BATTERY_DS2760=m
+CONFIG_BATTERY_DS2780=m
+CONFIG_BATTERY_DS2781=m
+CONFIG_BATTERY_DS2782=m
+CONFIG_BATTERY_SBS=m
+# CONFIG_CHARGER_SBS is not set
+# CONFIG_MANAGER_SBS is not set
+# CONFIG_BATTERY_BQ27XXX is not set
+CONFIG_BATTERY_DA9030=m
+CONFIG_BATTERY_DA9052=m
+CONFIG_BATTERY_MAX17040=m
+CONFIG_BATTERY_MAX17042=m
+# CONFIG_BATTERY_MAX1721X is not set
+CONFIG_CHARGER_88PM860X=m
+CONFIG_CHARGER_PCF50633=m
+CONFIG_CHARGER_ISP1704=m
+CONFIG_CHARGER_MAX8903=m
+CONFIG_CHARGER_LP8727=m
+CONFIG_CHARGER_GPIO=m
+# CONFIG_CHARGER_LT3651 is not set
+CONFIG_CHARGER_MAX14577=m
+# CONFIG_CHARGER_MAX77693 is not set
+CONFIG_CHARGER_BQ2415X=m
+CONFIG_CHARGER_BQ24190=m
+# CONFIG_CHARGER_BQ24257 is not set
+CONFIG_CHARGER_BQ24735=m
+# CONFIG_CHARGER_BQ25890 is not set
+CONFIG_CHARGER_SMB347=m
+CONFIG_CHARGER_TPS65090=m
+# CONFIG_BATTERY_GAUGE_LTC2941 is not set
+# CONFIG_CHARGER_RT9455 is not set
+# CONFIG_CHARGER_CROS_USBPD is not set
+CONFIG_HWMON=y
+CONFIG_HWMON_VID=m
+# CONFIG_HWMON_DEBUG_CHIP is not set
+
+#
+# Native drivers
+#
+CONFIG_SENSORS_ABITUGURU=m
+CONFIG_SENSORS_ABITUGURU3=m
+CONFIG_SENSORS_AD7414=m
+CONFIG_SENSORS_AD7418=m
+CONFIG_SENSORS_ADM1021=m
+CONFIG_SENSORS_ADM1025=m
+CONFIG_SENSORS_ADM1026=m
+CONFIG_SENSORS_ADM1029=m
+CONFIG_SENSORS_ADM1031=m
+# CONFIG_SENSORS_ADM1177 is not set
+CONFIG_SENSORS_ADM9240=m
+CONFIG_SENSORS_ADT7X10=m
+CONFIG_SENSORS_ADT7410=m
+CONFIG_SENSORS_ADT7411=m
+CONFIG_SENSORS_ADT7462=m
+CONFIG_SENSORS_ADT7470=m
+CONFIG_SENSORS_ADT7475=m
+# CONFIG_SENSORS_AS370 is not set
+CONFIG_SENSORS_ASC7621=m
+# CONFIG_SENSORS_AXI_FAN_CONTROL is not set
+CONFIG_SENSORS_K8TEMP=m
+CONFIG_SENSORS_K10TEMP=m
+CONFIG_SENSORS_FAM15H_POWER=m
+CONFIG_SENSORS_APPLESMC=m
+CONFIG_SENSORS_ASB100=m
+# CONFIG_SENSORS_ASPEED is not set
+CONFIG_SENSORS_ATXP1=m
+# CONFIG_SENSORS_DRIVETEMP is not set
+CONFIG_SENSORS_DS620=m
+CONFIG_SENSORS_DS1621=m
+CONFIG_SENSORS_DELL_SMM=m
+CONFIG_SENSORS_DA9052_ADC=m
+CONFIG_SENSORS_DA9055=m
+CONFIG_SENSORS_I5K_AMB=m
+CONFIG_SENSORS_F71805F=m
+CONFIG_SENSORS_F71882FG=m
+CONFIG_SENSORS_F75375S=m
+CONFIG_SENSORS_MC13783_ADC=m
+CONFIG_SENSORS_FSCHMD=m
+# CONFIG_SENSORS_FTSTEUTATES is not set
+CONFIG_SENSORS_GL518SM=m
+CONFIG_SENSORS_GL520SM=m
+CONFIG_SENSORS_G760A=m
+CONFIG_SENSORS_G762=m
+CONFIG_SENSORS_HIH6130=m
+CONFIG_SENSORS_IBMAEM=m
+CONFIG_SENSORS_IBMPEX=m
+CONFIG_SENSORS_I5500=m
+CONFIG_SENSORS_CORETEMP=m
+CONFIG_SENSORS_IT87=m
+CONFIG_SENSORS_JC42=m
+CONFIG_SENSORS_POWR1220=m
+CONFIG_SENSORS_LINEAGE=m
+CONFIG_SENSORS_LTC2945=m
+# CONFIG_SENSORS_LTC2947_I2C is not set
+# CONFIG_SENSORS_LTC2990 is not set
+CONFIG_SENSORS_LTC4151=m
+CONFIG_SENSORS_LTC4215=m
+CONFIG_SENSORS_LTC4222=m
+CONFIG_SENSORS_LTC4245=m
+CONFIG_SENSORS_LTC4260=m
+CONFIG_SENSORS_LTC4261=m
+CONFIG_SENSORS_MAX16065=m
+CONFIG_SENSORS_MAX1619=m
+CONFIG_SENSORS_MAX1668=m
+CONFIG_SENSORS_MAX197=m
+# CONFIG_SENSORS_MAX31730 is not set
+# CONFIG_SENSORS_MAX6621 is not set
+CONFIG_SENSORS_MAX6639=m
+CONFIG_SENSORS_MAX6642=m
+CONFIG_SENSORS_MAX6650=m
+CONFIG_SENSORS_MAX6697=m
+# CONFIG_SENSORS_MAX31790 is not set
+CONFIG_SENSORS_MCP3021=m
+# CONFIG_SENSORS_TC654 is not set
+CONFIG_SENSORS_MENF21BMC_HWMON=m
+CONFIG_SENSORS_LM63=m
+CONFIG_SENSORS_LM73=m
+CONFIG_SENSORS_LM75=m
+CONFIG_SENSORS_LM77=m
+CONFIG_SENSORS_LM78=m
+CONFIG_SENSORS_LM80=m
+CONFIG_SENSORS_LM83=m
+CONFIG_SENSORS_LM85=m
+CONFIG_SENSORS_LM87=m
+CONFIG_SENSORS_LM90=m
+CONFIG_SENSORS_LM92=m
+CONFIG_SENSORS_LM93=m
+CONFIG_SENSORS_LM95234=m
+CONFIG_SENSORS_LM95241=m
+CONFIG_SENSORS_LM95245=m
+CONFIG_SENSORS_PC87360=m
+CONFIG_SENSORS_PC87427=m
+CONFIG_SENSORS_NTC_THERMISTOR=m
+CONFIG_SENSORS_NCT6683=m
+CONFIG_SENSORS_NCT6775=m
+CONFIG_SENSORS_NCT7802=m
+# CONFIG_SENSORS_NCT7904 is not set
+# CONFIG_SENSORS_NPCM7XX is not set
+CONFIG_SENSORS_PCF8591=m
+CONFIG_PMBUS=m
+CONFIG_SENSORS_PMBUS=m
+CONFIG_SENSORS_ADM1275=m
+# CONFIG_SENSORS_BEL_PFE is not set
+# CONFIG_SENSORS_IBM_CFFPS is not set
+# CONFIG_SENSORS_INSPUR_IPSPS is not set
+# CONFIG_SENSORS_IR35221 is not set
+# CONFIG_SENSORS_IR38064 is not set
+# CONFIG_SENSORS_IRPS5401 is not set
+# CONFIG_SENSORS_ISL68137 is not set
+CONFIG_SENSORS_LM25066=m
+CONFIG_SENSORS_LTC2978=m
+# CONFIG_SENSORS_LTC3815 is not set
+CONFIG_SENSORS_MAX16064=m
+# CONFIG_SENSORS_MAX20730 is not set
+# CONFIG_SENSORS_MAX20751 is not set
+# CONFIG_SENSORS_MAX31785 is not set
+CONFIG_SENSORS_MAX34440=m
+CONFIG_SENSORS_MAX8688=m
+# CONFIG_SENSORS_PXE1610 is not set
+CONFIG_SENSORS_TPS40422=m
+# CONFIG_SENSORS_TPS53679 is not set
+CONFIG_SENSORS_UCD9000=m
+CONFIG_SENSORS_UCD9200=m
+# CONFIG_SENSORS_XDPE122 is not set
+CONFIG_SENSORS_ZL6100=m
+CONFIG_SENSORS_SHT15=m
+CONFIG_SENSORS_SHT21=m
+# CONFIG_SENSORS_SHT3x is not set
+CONFIG_SENSORS_SHTC1=m
+CONFIG_SENSORS_SIS5595=m
+CONFIG_SENSORS_DME1737=m
+CONFIG_SENSORS_EMC1403=m
+CONFIG_SENSORS_EMC2103=m
+CONFIG_SENSORS_EMC6W201=m
+CONFIG_SENSORS_SMSC47M1=m
+CONFIG_SENSORS_SMSC47M192=m
+CONFIG_SENSORS_SMSC47B397=m
+CONFIG_SENSORS_SCH56XX_COMMON=m
+CONFIG_SENSORS_SCH5627=m
+CONFIG_SENSORS_SCH5636=m
+# CONFIG_SENSORS_STTS751 is not set
+CONFIG_SENSORS_SMM665=m
+CONFIG_SENSORS_ADC128D818=m
+CONFIG_SENSORS_ADS7828=m
+CONFIG_SENSORS_AMC6821=m
+CONFIG_SENSORS_INA209=m
+CONFIG_SENSORS_INA2XX=m
+# CONFIG_SENSORS_INA3221 is not set
+# CONFIG_SENSORS_TC74 is not set
+CONFIG_SENSORS_THMC50=m
+CONFIG_SENSORS_TMP102=m
+CONFIG_SENSORS_TMP103=m
+# CONFIG_SENSORS_TMP108 is not set
+CONFIG_SENSORS_TMP401=m
+CONFIG_SENSORS_TMP421=m
+# CONFIG_SENSORS_TMP513 is not set
+CONFIG_SENSORS_VIA_CPUTEMP=m
+CONFIG_SENSORS_VIA686A=m
+CONFIG_SENSORS_VT1211=m
+CONFIG_SENSORS_VT8231=m
+# CONFIG_SENSORS_W83773G is not set
+CONFIG_SENSORS_W83781D=m
+CONFIG_SENSORS_W83791D=m
+CONFIG_SENSORS_W83792D=m
+CONFIG_SENSORS_W83793=m
+CONFIG_SENSORS_W83795=m
+# CONFIG_SENSORS_W83795_FANCTRL is not set
+CONFIG_SENSORS_W83L785TS=m
+CONFIG_SENSORS_W83L786NG=m
+CONFIG_SENSORS_W83627HF=m
+CONFIG_SENSORS_W83627EHF=m
+CONFIG_SENSORS_WM831X=m
+CONFIG_SENSORS_WM8350=m
+# CONFIG_SENSORS_XGENE is not set
+
+#
+# ACPI drivers
+#
+CONFIG_SENSORS_ACPI_POWER=m
+CONFIG_SENSORS_ATK0110=m
+CONFIG_THERMAL=y
+# CONFIG_THERMAL_STATISTICS is not set
+CONFIG_THERMAL_EMERGENCY_POWEROFF_DELAY_MS=0
+CONFIG_THERMAL_HWMON=y
+CONFIG_THERMAL_WRITABLE_TRIPS=y
+CONFIG_THERMAL_DEFAULT_GOV_STEP_WISE=y
+# CONFIG_THERMAL_DEFAULT_GOV_FAIR_SHARE is not set
+# CONFIG_THERMAL_DEFAULT_GOV_USER_SPACE is not set
+CONFIG_THERMAL_GOV_FAIR_SHARE=y
+CONFIG_THERMAL_GOV_STEP_WISE=y
+CONFIG_THERMAL_GOV_BANG_BANG=y
+CONFIG_THERMAL_GOV_USER_SPACE=y
+# CONFIG_CLOCK_THERMAL is not set
+# CONFIG_DEVFREQ_THERMAL is not set
+CONFIG_THERMAL_EMULATION=y
+
+#
+# Intel thermal drivers
+#
+CONFIG_INTEL_POWERCLAMP=m
+CONFIG_X86_PKG_TEMP_THERMAL=m
+CONFIG_INTEL_SOC_DTS_IOSF_CORE=m
+CONFIG_INTEL_SOC_DTS_THERMAL=m
+
+#
+# ACPI INT340X thermal drivers
+#
+CONFIG_INT340X_THERMAL=m
+CONFIG_ACPI_THERMAL_REL=m
+# CONFIG_INT3406_THERMAL is not set
+CONFIG_PROC_THERMAL_MMIO_RAPL=y
+# end of ACPI INT340X thermal drivers
+
+# CONFIG_INTEL_PCH_THERMAL is not set
+# end of Intel thermal drivers
+
+CONFIG_WATCHDOG=y
+CONFIG_WATCHDOG_CORE=y
+# CONFIG_WATCHDOG_NOWAYOUT is not set
+CONFIG_WATCHDOG_HANDLE_BOOT_ENABLED=y
+CONFIG_WATCHDOG_OPEN_TIMEOUT=0
+# CONFIG_WATCHDOG_SYSFS is not set
+
+#
+# Watchdog Pretimeout Governors
+#
+# CONFIG_WATCHDOG_PRETIMEOUT_GOV is not set
+
+#
+# Watchdog Device Drivers
+#
+CONFIG_SOFT_WATCHDOG=m
+CONFIG_DA9052_WATCHDOG=m
+CONFIG_DA9055_WATCHDOG=m
+CONFIG_DA9063_WATCHDOG=m
+CONFIG_MENF21BMC_WATCHDOG=m
+# CONFIG_WDAT_WDT is not set
+CONFIG_WM831X_WATCHDOG=m
+CONFIG_WM8350_WATCHDOG=m
+CONFIG_XILINX_WATCHDOG=m
+# CONFIG_ZIIRAVE_WATCHDOG is not set
+# CONFIG_CADENCE_WATCHDOG is not set
+CONFIG_DW_WATCHDOG=m
+CONFIG_TWL4030_WATCHDOG=m
+# CONFIG_MAX63XX_WATCHDOG is not set
+CONFIG_RETU_WATCHDOG=m
+CONFIG_ACQUIRE_WDT=m
+CONFIG_ADVANTECH_WDT=m
+CONFIG_ALIM1535_WDT=m
+CONFIG_ALIM7101_WDT=m
+# CONFIG_EBC_C384_WDT is not set
+CONFIG_F71808E_WDT=m
+CONFIG_SP5100_TCO=m
+CONFIG_SBC_FITPC2_WATCHDOG=m
+CONFIG_EUROTECH_WDT=m
+CONFIG_IB700_WDT=m
+CONFIG_IBMASR=m
+CONFIG_WAFER_WDT=m
+CONFIG_I6300ESB_WDT=m
+CONFIG_IE6XX_WDT=m
+CONFIG_ITCO_WDT=m
+CONFIG_ITCO_VENDOR_SUPPORT=y
+CONFIG_IT8712F_WDT=m
+CONFIG_IT87_WDT=m
+CONFIG_HP_WATCHDOG=m
+CONFIG_HPWDT_NMI_DECODING=y
+CONFIG_KEMPLD_WDT=m
+CONFIG_SC1200_WDT=m
+CONFIG_PC87413_WDT=m
+CONFIG_NV_TCO=m
+CONFIG_60XX_WDT=m
+CONFIG_CPU5_WDT=m
+CONFIG_SMSC_SCH311X_WDT=m
+CONFIG_SMSC37B787_WDT=m
+# CONFIG_TQMX86_WDT is not set
+CONFIG_VIA_WDT=m
+CONFIG_W83627HF_WDT=m
+CONFIG_W83877F_WDT=m
+CONFIG_W83977F_WDT=m
+CONFIG_MACHZ_WDT=m
+CONFIG_SBC_EPX_C3_WATCHDOG=m
+# CONFIG_INTEL_MEI_WDT is not set
+# CONFIG_NI903X_WDT is not set
+# CONFIG_NIC7018_WDT is not set
+CONFIG_MEN_A21_WDT=m
+CONFIG_XEN_WDT=m
+
+#
+# PCI-based Watchdog Cards
+#
+CONFIG_PCIPCWATCHDOG=m
+CONFIG_WDTPCI=m
+
+#
+# USB-based Watchdog Cards
+#
+CONFIG_USBPCWATCHDOG=m
+CONFIG_SSB_POSSIBLE=y
+CONFIG_SSB=m
+CONFIG_SSB_SPROM=y
+CONFIG_SSB_PCIHOST_POSSIBLE=y
+CONFIG_SSB_PCIHOST=y
+CONFIG_SSB_PCMCIAHOST_POSSIBLE=y
+# CONFIG_SSB_PCMCIAHOST is not set
+CONFIG_SSB_DRIVER_PCICORE_POSSIBLE=y
+CONFIG_SSB_DRIVER_PCICORE=y
+CONFIG_SSB_DRIVER_GPIO=y
+CONFIG_BCMA_POSSIBLE=y
+CONFIG_BCMA=m
+CONFIG_BCMA_HOST_PCI_POSSIBLE=y
+CONFIG_BCMA_HOST_PCI=y
+CONFIG_BCMA_HOST_SOC=y
+CONFIG_BCMA_DRIVER_PCI=y
+CONFIG_BCMA_SFLASH=y
+CONFIG_BCMA_DRIVER_GMAC_CMN=y
+CONFIG_BCMA_DRIVER_GPIO=y
+# CONFIG_BCMA_DEBUG is not set
+
+#
+# Multifunction device drivers
+#
+CONFIG_MFD_CORE=y
+CONFIG_MFD_AS3711=y
+CONFIG_PMIC_ADP5520=y
+CONFIG_MFD_AAT2870_CORE=y
+CONFIG_MFD_BCM590XX=m
+# CONFIG_MFD_BD9571MWV is not set
+# CONFIG_MFD_AXP20X_I2C is not set
+CONFIG_MFD_CROS_EC_DEV=m
+# CONFIG_MFD_MADERA is not set
+CONFIG_PMIC_DA903X=y
+CONFIG_PMIC_DA9052=y
+CONFIG_MFD_DA9052_I2C=y
+CONFIG_MFD_DA9055=y
+# CONFIG_MFD_DA9062 is not set
+CONFIG_MFD_DA9063=y
+# CONFIG_MFD_DA9150 is not set
+CONFIG_MFD_DLN2=m
+CONFIG_MFD_MC13XXX=m
+CONFIG_MFD_MC13XXX_I2C=m
+CONFIG_HTC_PASIC3=m
+CONFIG_HTC_I2CPLD=y
+# CONFIG_MFD_INTEL_QUARK_I2C_GPIO is not set
+CONFIG_LPC_ICH=m
+CONFIG_LPC_SCH=m
+# CONFIG_INTEL_SOC_PMIC_CHTDC_TI is not set
+# CONFIG_MFD_INTEL_LPSS_ACPI is not set
+# CONFIG_MFD_INTEL_LPSS_PCI is not set
+# CONFIG_MFD_IQS62X is not set
+CONFIG_MFD_JANZ_CMODIO=m
+CONFIG_MFD_KEMPLD=m
+CONFIG_MFD_88PM800=m
+CONFIG_MFD_88PM805=m
+CONFIG_MFD_88PM860X=y
+CONFIG_MFD_MAX14577=y
+CONFIG_MFD_MAX77693=y
+# CONFIG_MFD_MAX77843 is not set
+CONFIG_MFD_MAX8907=m
+CONFIG_MFD_MAX8925=y
+CONFIG_MFD_MAX8997=y
+CONFIG_MFD_MAX8998=y
+# CONFIG_MFD_MT6397 is not set
+CONFIG_MFD_MENF21BMC=m
+CONFIG_MFD_VIPERBOARD=m
+CONFIG_MFD_RETU=m
+CONFIG_MFD_PCF50633=m
+CONFIG_PCF50633_ADC=m
+CONFIG_PCF50633_GPIO=m
+CONFIG_MFD_RDC321X=m
+# CONFIG_MFD_RT5033 is not set
+CONFIG_MFD_RC5T583=y
+CONFIG_MFD_SEC_CORE=y
+CONFIG_MFD_SI476X_CORE=m
+CONFIG_MFD_SM501=m
+CONFIG_MFD_SM501_GPIO=y
+# CONFIG_MFD_SKY81452 is not set
+CONFIG_MFD_SMSC=y
+CONFIG_ABX500_CORE=y
+CONFIG_AB3100_CORE=y
+CONFIG_AB3100_OTP=m
+CONFIG_MFD_SYSCON=y
+CONFIG_MFD_TI_AM335X_TSCADC=m
+CONFIG_MFD_LP3943=m
+CONFIG_MFD_LP8788=y
+# CONFIG_MFD_TI_LMU is not set
+CONFIG_MFD_PALMAS=y
+# CONFIG_TPS6105X is not set
+CONFIG_TPS65010=m
+CONFIG_TPS6507X=m
+# CONFIG_MFD_TPS65086 is not set
+CONFIG_MFD_TPS65090=y
+# CONFIG_MFD_TI_LP873X is not set
+CONFIG_MFD_TPS6586X=y
+CONFIG_MFD_TPS65910=y
+CONFIG_MFD_TPS65912=y
+CONFIG_MFD_TPS65912_I2C=y
+CONFIG_MFD_TPS80031=y
+CONFIG_TWL4030_CORE=y
+CONFIG_MFD_TWL4030_AUDIO=y
+CONFIG_TWL6040_CORE=y
+CONFIG_MFD_WL1273_CORE=m
+CONFIG_MFD_LM3533=m
+# CONFIG_MFD_TQMX86 is not set
+CONFIG_MFD_VX855=m
+CONFIG_MFD_ARIZONA=y
+CONFIG_MFD_ARIZONA_I2C=m
+# CONFIG_MFD_CS47L24 is not set
+CONFIG_MFD_WM5102=y
+CONFIG_MFD_WM5110=y
+CONFIG_MFD_WM8997=y
+# CONFIG_MFD_WM8998 is not set
+CONFIG_MFD_WM8400=y
+CONFIG_MFD_WM831X=y
+CONFIG_MFD_WM831X_I2C=y
+CONFIG_MFD_WM8350=y
+CONFIG_MFD_WM8350_I2C=y
+CONFIG_MFD_WM8994=m
+# end of Multifunction device drivers
+
+# CONFIG_REGULATOR is not set
+# CONFIG_RC_CORE is not set
+# CONFIG_MEDIA_SUPPORT is not set
+
+#
+# Graphics support
+#
+# CONFIG_AGP is not set
+CONFIG_VGA_ARB=y
+CONFIG_VGA_ARB_MAX_GPUS=16
+# CONFIG_VGA_SWITCHEROO is not set
+# CONFIG_DRM is not set
+
+#
+# ARM devices
+#
+# end of ARM devices
+
+# CONFIG_DRM_XEN is not set
+
+#
+# Frame buffer Devices
+#
+# CONFIG_FB is not set
+# end of Frame buffer Devices
+
+#
+# Backlight & LCD device support
+#
+CONFIG_LCD_CLASS_DEVICE=m
+CONFIG_LCD_PLATFORM=m
+CONFIG_BACKLIGHT_CLASS_DEVICE=y
+CONFIG_BACKLIGHT_GENERIC=m
+CONFIG_BACKLIGHT_LM3533=m
+CONFIG_BACKLIGHT_DA903X=m
+CONFIG_BACKLIGHT_DA9052=m
+CONFIG_BACKLIGHT_MAX8925=m
+CONFIG_BACKLIGHT_APPLE=m
+# CONFIG_BACKLIGHT_QCOM_WLED is not set
+CONFIG_BACKLIGHT_SAHARA=m
+CONFIG_BACKLIGHT_WM831X=m
+CONFIG_BACKLIGHT_ADP5520=m
+CONFIG_BACKLIGHT_ADP8860=m
+CONFIG_BACKLIGHT_ADP8870=m
+CONFIG_BACKLIGHT_88PM860X=m
+CONFIG_BACKLIGHT_PCF50633=m
+CONFIG_BACKLIGHT_AAT2870=m
+CONFIG_BACKLIGHT_LM3639=m
+CONFIG_BACKLIGHT_PANDORA=m
+CONFIG_BACKLIGHT_AS3711=m
+CONFIG_BACKLIGHT_GPIO=m
+CONFIG_BACKLIGHT_LV5207LP=m
+CONFIG_BACKLIGHT_BD6107=m
+# CONFIG_BACKLIGHT_ARCXCNN is not set
+# end of Backlight & LCD device support
+
+#
+# Console display driver support
+#
+CONFIG_VGA_CONSOLE=y
+# CONFIG_VGACON_SOFT_SCROLLBACK is not set
+CONFIG_DUMMY_CONSOLE=y
+CONFIG_DUMMY_CONSOLE_COLUMNS=80
+CONFIG_DUMMY_CONSOLE_ROWS=25
+# end of Console display driver support
+# end of Graphics support
+
+# CONFIG_SOUND is not set
+
+#
+# HID support
+#
+CONFIG_HID=m
+CONFIG_HID_BATTERY_STRENGTH=y
+CONFIG_HIDRAW=y
+CONFIG_UHID=m
+CONFIG_HID_GENERIC=m
+
+#
+# Special HID drivers
+#
+CONFIG_HID_A4TECH=m
+# CONFIG_HID_ACCUTOUCH is not set
+CONFIG_HID_ACRUX=m
+CONFIG_HID_ACRUX_FF=y
+CONFIG_HID_APPLE=m
+CONFIG_HID_APPLEIR=m
+# CONFIG_HID_ASUS is not set
+CONFIG_HID_AUREAL=m
+CONFIG_HID_BELKIN=m
+# CONFIG_HID_BETOP_FF is not set
+# CONFIG_HID_BIGBEN_FF is not set
+CONFIG_HID_CHERRY=m
+CONFIG_HID_CHICONY=m
+# CONFIG_HID_CORSAIR is not set
+# CONFIG_HID_COUGAR is not set
+# CONFIG_HID_MACALLY is not set
+# CONFIG_HID_CMEDIA is not set
+CONFIG_HID_CP2112=m
+# CONFIG_HID_CREATIVE_SB0540 is not set
+CONFIG_HID_CYPRESS=m
+CONFIG_HID_DRAGONRISE=m
+CONFIG_DRAGONRISE_FF=y
+CONFIG_HID_EMS_FF=m
+# CONFIG_HID_ELAN is not set
+CONFIG_HID_ELECOM=m
+CONFIG_HID_ELO=m
+CONFIG_HID_EZKEY=m
+# CONFIG_HID_GEMBIRD is not set
+# CONFIG_HID_GFRM is not set
+# CONFIG_HID_GLORIOUS is not set
+CONFIG_HID_HOLTEK=m
+CONFIG_HOLTEK_FF=y
+# CONFIG_HID_GOOGLE_HAMMER is not set
+CONFIG_HID_GT683R=m
+CONFIG_HID_KEYTOUCH=m
+CONFIG_HID_KYE=m
+CONFIG_HID_UCLOGIC=m
+CONFIG_HID_WALTOP=m
+# CONFIG_HID_VIEWSONIC is not set
+CONFIG_HID_GYRATION=m
+CONFIG_HID_ICADE=m
+# CONFIG_HID_ITE is not set
+# CONFIG_HID_JABRA is not set
+CONFIG_HID_TWINHAN=m
+CONFIG_HID_KENSINGTON=m
+CONFIG_HID_LCPOWER=m
+CONFIG_HID_LED=m
+CONFIG_HID_LENOVO=m
+CONFIG_HID_LOGITECH=m
+CONFIG_HID_LOGITECH_DJ=m
+CONFIG_HID_LOGITECH_HIDPP=m
+CONFIG_LOGITECH_FF=y
+CONFIG_LOGIRUMBLEPAD2_FF=y
+CONFIG_LOGIG940_FF=y
+CONFIG_LOGIWHEELS_FF=y
+CONFIG_HID_MAGICMOUSE=m
+# CONFIG_HID_MALTRON is not set
+# CONFIG_HID_MAYFLASH is not set
+# CONFIG_HID_REDRAGON is not set
+CONFIG_HID_MICROSOFT=m
+CONFIG_HID_MONTEREY=m
+CONFIG_HID_MULTITOUCH=m
+# CONFIG_HID_NTI is not set
+CONFIG_HID_NTRIG=m
+CONFIG_HID_ORTEK=m
+CONFIG_HID_PANTHERLORD=m
+CONFIG_PANTHERLORD_FF=y
+CONFIG_HID_PENMOUNT=m
+CONFIG_HID_PETALYNX=m
+CONFIG_HID_PICOLCD=m
+CONFIG_HID_PICOLCD_BACKLIGHT=y
+CONFIG_HID_PICOLCD_LCD=y
+CONFIG_HID_PICOLCD_LEDS=y
+CONFIG_HID_PLANTRONICS=m
+CONFIG_HID_PRIMAX=m
+# CONFIG_HID_RETRODE is not set
+CONFIG_HID_ROCCAT=m
+CONFIG_HID_SAITEK=m
+CONFIG_HID_SAMSUNG=m
+CONFIG_HID_SONY=m
+CONFIG_SONY_FF=y
+CONFIG_HID_SPEEDLINK=m
+# CONFIG_HID_STEAM is not set
+CONFIG_HID_STEELSERIES=m
+CONFIG_HID_SUNPLUS=m
+CONFIG_HID_RMI=m
+CONFIG_HID_GREENASIA=m
+CONFIG_GREENASIA_FF=y
+CONFIG_HID_HYPERV_MOUSE=m
+CONFIG_HID_SMARTJOYPLUS=m
+CONFIG_SMARTJOYPLUS_FF=y
+CONFIG_HID_TIVO=m
+CONFIG_HID_TOPSEED=m
+CONFIG_HID_THINGM=m
+CONFIG_HID_THRUSTMASTER=m
+CONFIG_THRUSTMASTER_FF=y
+# CONFIG_HID_UDRAW_PS3 is not set
+# CONFIG_HID_U2FZERO is not set
+CONFIG_HID_WACOM=m
+CONFIG_HID_WIIMOTE=m
+CONFIG_HID_XINMO=m
+CONFIG_HID_ZEROPLUS=m
+CONFIG_ZEROPLUS_FF=y
+CONFIG_HID_ZYDACRON=m
+CONFIG_HID_SENSOR_HUB=m
+# CONFIG_HID_SENSOR_CUSTOM_SENSOR is not set
+# CONFIG_HID_ALPS is not set
+# CONFIG_HID_MCP2221 is not set
+# end of Special HID drivers
+
+#
+# USB HID support
+#
+CONFIG_USB_HID=m
+CONFIG_HID_PID=y
+CONFIG_USB_HIDDEV=y
+
+#
+# USB HID Boot Protocol drivers
+#
+CONFIG_USB_KBD=m
+CONFIG_USB_MOUSE=m
+# end of USB HID Boot Protocol drivers
+# end of USB HID support
+
+#
+# I2C HID support
+#
+CONFIG_I2C_HID=m
+# end of I2C HID support
+
+#
+# Intel ISH HID support
+#
+# CONFIG_INTEL_ISH_HID is not set
+# end of Intel ISH HID support
+# end of HID support
+
+CONFIG_USB_OHCI_LITTLE_ENDIAN=y
+CONFIG_USB_SUPPORT=y
+CONFIG_USB_COMMON=y
+CONFIG_USB_LED_TRIG=y
+CONFIG_USB_ULPI_BUS=m
+# CONFIG_USB_CONN_GPIO is not set
+CONFIG_USB_ARCH_HAS_HCD=y
+CONFIG_USB=y
+CONFIG_USB_PCI=y
+CONFIG_USB_ANNOUNCE_NEW_DEVICES=y
+
+#
+# Miscellaneous USB options
+#
+CONFIG_USB_DEFAULT_PERSIST=y
+CONFIG_USB_DYNAMIC_MINORS=y
+# CONFIG_USB_OTG is not set
+# CONFIG_USB_OTG_WHITELIST is not set
+# CONFIG_USB_OTG_BLACKLIST_HUB is not set
+# CONFIG_USB_LEDS_TRIGGER_USBPORT is not set
+CONFIG_USB_AUTOSUSPEND_DELAY=2
+CONFIG_USB_MON=m
+
+#
+# USB Host Controller Drivers
+#
+CONFIG_USB_C67X00_HCD=m
+CONFIG_USB_XHCI_HCD=y
+# CONFIG_USB_XHCI_DBGCAP is not set
+CONFIG_USB_XHCI_PCI=y
+CONFIG_USB_XHCI_PLATFORM=m
+CONFIG_USB_EHCI_HCD=y
+CONFIG_USB_EHCI_ROOT_HUB_TT=y
+CONFIG_USB_EHCI_TT_NEWSCHED=y
+CONFIG_USB_EHCI_PCI=y
+# CONFIG_USB_EHCI_FSL is not set
+CONFIG_USB_EHCI_HCD_PLATFORM=y
+CONFIG_USB_OXU210HP_HCD=m
+CONFIG_USB_ISP116X_HCD=m
+CONFIG_USB_FOTG210_HCD=m
+CONFIG_USB_OHCI_HCD=y
+CONFIG_USB_OHCI_HCD_PCI=y
+CONFIG_USB_OHCI_HCD_PLATFORM=y
+CONFIG_USB_UHCI_HCD=y
+CONFIG_USB_U132_HCD=m
+CONFIG_USB_SL811_HCD=m
+CONFIG_USB_SL811_HCD_ISO=y
+CONFIG_USB_SL811_CS=m
+CONFIG_USB_R8A66597_HCD=m
+CONFIG_USB_HCD_BCMA=m
+CONFIG_USB_HCD_SSB=m
+# CONFIG_USB_HCD_TEST_MODE is not set
+
+#
+# USB Device Class drivers
+#
+CONFIG_USB_ACM=m
+CONFIG_USB_PRINTER=m
+CONFIG_USB_WDM=m
+CONFIG_USB_TMC=m
+
+#
+# NOTE: USB_STORAGE depends on SCSI but BLK_DEV_SD may
+#
+
+#
+# also be needed; see USB_STORAGE Help for more info
+#
+CONFIG_USB_STORAGE=m
+# CONFIG_USB_STORAGE_DEBUG is not set
+CONFIG_USB_STORAGE_REALTEK=m
+CONFIG_REALTEK_AUTOPM=y
+CONFIG_USB_STORAGE_DATAFAB=m
+CONFIG_USB_STORAGE_FREECOM=m
+CONFIG_USB_STORAGE_ISD200=m
+CONFIG_USB_STORAGE_USBAT=m
+CONFIG_USB_STORAGE_SDDR09=m
+CONFIG_USB_STORAGE_SDDR55=m
+CONFIG_USB_STORAGE_JUMPSHOT=m
+CONFIG_USB_STORAGE_ALAUDA=m
+CONFIG_USB_STORAGE_ONETOUCH=m
+CONFIG_USB_STORAGE_KARMA=m
+CONFIG_USB_STORAGE_CYPRESS_ATACB=m
+CONFIG_USB_STORAGE_ENE_UB6250=m
+CONFIG_USB_UAS=m
+
+#
+# USB Imaging devices
+#
+CONFIG_USB_MDC800=m
+CONFIG_USB_MICROTEK=m
+CONFIG_USBIP_CORE=m
+CONFIG_USBIP_VHCI_HCD=m
+CONFIG_USBIP_VHCI_HC_PORTS=8
+CONFIG_USBIP_VHCI_NR_HCS=1
+CONFIG_USBIP_HOST=m
+# CONFIG_USBIP_VUDC is not set
+# CONFIG_USBIP_DEBUG is not set
+# CONFIG_USB_CDNS3 is not set
+CONFIG_USB_MUSB_HDRC=m
+# CONFIG_USB_MUSB_HOST is not set
+# CONFIG_USB_MUSB_GADGET is not set
+CONFIG_USB_MUSB_DUAL_ROLE=y
+
+#
+# Platform Glue Layer
+#
+
+#
+# MUSB DMA mode
+#
+CONFIG_MUSB_PIO_ONLY=y
+CONFIG_USB_DWC3=m
+# CONFIG_USB_DWC3_ULPI is not set
+# CONFIG_USB_DWC3_HOST is not set
+# CONFIG_USB_DWC3_GADGET is not set
+CONFIG_USB_DWC3_DUAL_ROLE=y
+
+#
+# Platform Glue Driver Support
+#
+CONFIG_USB_DWC3_PCI=m
+CONFIG_USB_DWC3_HAPS=m
+CONFIG_USB_DWC2=y
+CONFIG_USB_DWC2_HOST=y
+
+#
+# Gadget/Dual-role mode requires USB Gadget support to be enabled
+#
+CONFIG_USB_DWC2_PCI=m
+# CONFIG_USB_DWC2_DEBUG is not set
+# CONFIG_USB_DWC2_TRACK_MISSED_SOFS is not set
+CONFIG_USB_CHIPIDEA=m
+CONFIG_USB_CHIPIDEA_PCI=m
+CONFIG_USB_CHIPIDEA_UDC=y
+CONFIG_USB_CHIPIDEA_HOST=y
+# CONFIG_USB_ISP1760 is not set
+
+#
+# USB port drivers
+#
+CONFIG_USB_USS720=m
+CONFIG_USB_SERIAL=m
+CONFIG_USB_SERIAL_GENERIC=y
+CONFIG_USB_SERIAL_SIMPLE=m
+CONFIG_USB_SERIAL_AIRCABLE=m
+CONFIG_USB_SERIAL_ARK3116=m
+CONFIG_USB_SERIAL_BELKIN=m
+CONFIG_USB_SERIAL_CH341=m
+CONFIG_USB_SERIAL_WHITEHEAT=m
+CONFIG_USB_SERIAL_DIGI_ACCELEPORT=m
+CONFIG_USB_SERIAL_CP210X=m
+CONFIG_USB_SERIAL_CYPRESS_M8=m
+CONFIG_USB_SERIAL_EMPEG=m
+CONFIG_USB_SERIAL_FTDI_SIO=m
+CONFIG_USB_SERIAL_VISOR=m
+CONFIG_USB_SERIAL_IPAQ=m
+CONFIG_USB_SERIAL_IR=m
+CONFIG_USB_SERIAL_EDGEPORT=m
+CONFIG_USB_SERIAL_EDGEPORT_TI=m
+CONFIG_USB_SERIAL_F81232=m
+# CONFIG_USB_SERIAL_F8153X is not set
+CONFIG_USB_SERIAL_GARMIN=m
+CONFIG_USB_SERIAL_IPW=m
+CONFIG_USB_SERIAL_IUU=m
+CONFIG_USB_SERIAL_KEYSPAN_PDA=m
+CONFIG_USB_SERIAL_KEYSPAN=m
+CONFIG_USB_SERIAL_KLSI=m
+CONFIG_USB_SERIAL_KOBIL_SCT=m
+CONFIG_USB_SERIAL_MCT_U232=m
+CONFIG_USB_SERIAL_METRO=m
+CONFIG_USB_SERIAL_MOS7720=m
+CONFIG_USB_SERIAL_MOS7715_PARPORT=y
+CONFIG_USB_SERIAL_MOS7840=m
+CONFIG_USB_SERIAL_MXUPORT=m
+CONFIG_USB_SERIAL_NAVMAN=m
+CONFIG_USB_SERIAL_PL2303=m
+CONFIG_USB_SERIAL_OTI6858=m
+CONFIG_USB_SERIAL_QCAUX=m
+CONFIG_USB_SERIAL_QUALCOMM=m
+CONFIG_USB_SERIAL_SPCP8X5=m
+CONFIG_USB_SERIAL_SAFE=m
+# CONFIG_USB_SERIAL_SAFE_PADDED is not set
+CONFIG_USB_SERIAL_SIERRAWIRELESS=m
+CONFIG_USB_SERIAL_SYMBOL=m
+CONFIG_USB_SERIAL_TI=m
+CONFIG_USB_SERIAL_CYBERJACK=m
+CONFIG_USB_SERIAL_XIRCOM=m
+CONFIG_USB_SERIAL_WWAN=m
+CONFIG_USB_SERIAL_OPTION=m
+CONFIG_USB_SERIAL_OMNINET=m
+CONFIG_USB_SERIAL_OPTICON=m
+CONFIG_USB_SERIAL_XSENS_MT=m
+CONFIG_USB_SERIAL_WISHBONE=m
+CONFIG_USB_SERIAL_SSU100=m
+CONFIG_USB_SERIAL_QT2=m
+# CONFIG_USB_SERIAL_UPD78F0730 is not set
+CONFIG_USB_SERIAL_DEBUG=m
+
+#
+# USB Miscellaneous drivers
+#
+CONFIG_USB_EMI62=m
+CONFIG_USB_EMI26=m
+CONFIG_USB_ADUTUX=m
+CONFIG_USB_SEVSEG=m
+CONFIG_USB_LEGOTOWER=m
+CONFIG_USB_LCD=m
+CONFIG_USB_CYPRESS_CY7C63=m
+CONFIG_USB_CYTHERM=m
+CONFIG_USB_IDMOUSE=m
+CONFIG_USB_FTDI_ELAN=m
+CONFIG_USB_APPLEDISPLAY=m
+# CONFIG_APPLE_MFI_FASTCHARGE is not set
+CONFIG_USB_SISUSBVGA=m
+# CONFIG_USB_SISUSBVGA_CON is not set
+CONFIG_USB_LD=m
+CONFIG_USB_TRANCEVIBRATOR=m
+CONFIG_USB_IOWARRIOR=m
+CONFIG_USB_TEST=m
+CONFIG_USB_EHSET_TEST_FIXTURE=m
+CONFIG_USB_ISIGHTFW=m
+CONFIG_USB_YUREX=m
+CONFIG_USB_EZUSB_FX2=m
+# CONFIG_USB_HUB_USB251XB is not set
+CONFIG_USB_HSIC_USB3503=m
+# CONFIG_USB_HSIC_USB4604 is not set
+CONFIG_USB_LINK_LAYER_TEST=m
+# CONFIG_USB_CHAOSKEY is not set
+
+#
+# USB Physical Layer drivers
+#
+CONFIG_USB_PHY=y
+CONFIG_NOP_USB_XCEIV=m
+CONFIG_USB_GPIO_VBUS=m
+CONFIG_TAHVO_USB=m
+CONFIG_TAHVO_USB_HOST_BY_DEFAULT=y
+CONFIG_USB_ISP1301=m
+# end of USB Physical Layer drivers
+
+CONFIG_USB_GADGET=m
+# CONFIG_USB_GADGET_DEBUG is not set
+# CONFIG_USB_GADGET_DEBUG_FILES is not set
+# CONFIG_USB_GADGET_DEBUG_FS is not set
+CONFIG_USB_GADGET_VBUS_DRAW=2
+CONFIG_USB_GADGET_STORAGE_NUM_BUFFERS=2
+# CONFIG_U_SERIAL_CONSOLE is not set
+
+#
+# USB Peripheral Controller
+#
+CONFIG_USB_FOTG210_UDC=m
+CONFIG_USB_GR_UDC=m
+CONFIG_USB_R8A66597=m
+CONFIG_USB_PXA27X=m
+CONFIG_USB_MV_UDC=m
+CONFIG_USB_MV_U3D=m
+CONFIG_USB_SNP_CORE=m
+# CONFIG_USB_M66592 is not set
+CONFIG_USB_BDC_UDC=m
+
+#
+# Platform Support
+#
+CONFIG_USB_BDC_PCI=m
+CONFIG_USB_AMD5536UDC=m
+CONFIG_USB_NET2272=m
+CONFIG_USB_NET2272_DMA=y
+CONFIG_USB_NET2280=m
+CONFIG_USB_GOKU=m
+CONFIG_USB_EG20T=m
+# CONFIG_USB_DUMMY_HCD is not set
+# end of USB Peripheral Controller
+
+CONFIG_USB_LIBCOMPOSITE=m
+CONFIG_USB_F_ACM=m
+CONFIG_USB_F_SS_LB=m
+CONFIG_USB_U_SERIAL=m
+CONFIG_USB_U_ETHER=m
+CONFIG_USB_F_SERIAL=m
+CONFIG_USB_F_OBEX=m
+CONFIG_USB_F_NCM=m
+CONFIG_USB_F_ECM=m
+CONFIG_USB_F_PHONET=m
+CONFIG_USB_F_EEM=m
+CONFIG_USB_F_SUBSET=m
+CONFIG_USB_F_RNDIS=m
+CONFIG_USB_F_MASS_STORAGE=m
+CONFIG_USB_F_FS=m
+CONFIG_USB_F_HID=m
+CONFIG_USB_F_PRINTER=m
+CONFIG_USB_F_TCM=m
+CONFIG_USB_CONFIGFS=m
+CONFIG_USB_CONFIGFS_SERIAL=y
+CONFIG_USB_CONFIGFS_ACM=y
+CONFIG_USB_CONFIGFS_OBEX=y
+CONFIG_USB_CONFIGFS_NCM=y
+CONFIG_USB_CONFIGFS_ECM=y
+CONFIG_USB_CONFIGFS_ECM_SUBSET=y
+CONFIG_USB_CONFIGFS_RNDIS=y
+CONFIG_USB_CONFIGFS_EEM=y
+CONFIG_USB_CONFIGFS_PHONET=y
+CONFIG_USB_CONFIGFS_MASS_STORAGE=y
+CONFIG_USB_CONFIGFS_F_LB_SS=y
+CONFIG_USB_CONFIGFS_F_FS=y
+CONFIG_USB_CONFIGFS_F_HID=y
+# CONFIG_USB_CONFIGFS_F_PRINTER is not set
+# CONFIG_USB_CONFIGFS_F_TCM is not set
+
+#
+# USB Gadget precomposed configurations
+#
+CONFIG_USB_ZERO=m
+CONFIG_USB_ETH=m
+CONFIG_USB_ETH_RNDIS=y
+CONFIG_USB_ETH_EEM=y
+CONFIG_USB_G_NCM=m
+CONFIG_USB_GADGETFS=m
+CONFIG_USB_FUNCTIONFS=m
+CONFIG_USB_FUNCTIONFS_ETH=y
+CONFIG_USB_FUNCTIONFS_RNDIS=y
+CONFIG_USB_FUNCTIONFS_GENERIC=y
+CONFIG_USB_MASS_STORAGE=m
+CONFIG_USB_GADGET_TARGET=m
+CONFIG_USB_G_SERIAL=m
+CONFIG_USB_G_PRINTER=m
+CONFIG_USB_CDC_COMPOSITE=m
+CONFIG_USB_G_NOKIA=m
+CONFIG_USB_G_ACM_MS=m
+# CONFIG_USB_G_MULTI is not set
+CONFIG_USB_G_HID=m
+CONFIG_USB_G_DBGP=m
+# CONFIG_USB_G_DBGP_PRINTK is not set
+CONFIG_USB_G_DBGP_SERIAL=y
+# CONFIG_USB_RAW_GADGET is not set
+# end of USB Gadget precomposed configurations
+
+# CONFIG_TYPEC is not set
+CONFIG_USB_ROLE_SWITCH=m
+# CONFIG_USB_ROLES_INTEL_XHCI is not set
+# CONFIG_MMC is not set
+# CONFIG_MEMSTICK is not set
+CONFIG_NEW_LEDS=y
+CONFIG_LEDS_CLASS=y
+# CONFIG_LEDS_CLASS_FLASH is not set
+# CONFIG_LEDS_BRIGHTNESS_HW_CHANGED is not set
+
+#
+# LED drivers
+#
+CONFIG_LEDS_88PM860X=m
+# CONFIG_LEDS_APU is not set
+CONFIG_LEDS_LM3530=m
+# CONFIG_LEDS_LM3532 is not set
+CONFIG_LEDS_LM3533=m
+CONFIG_LEDS_LM3642=m
+CONFIG_LEDS_PCA9532=m
+CONFIG_LEDS_PCA9532_GPIO=y
+CONFIG_LEDS_GPIO=m
+CONFIG_LEDS_LP3944=m
+# CONFIG_LEDS_LP3952 is not set
+CONFIG_LEDS_LP55XX_COMMON=m
+CONFIG_LEDS_LP5521=m
+CONFIG_LEDS_LP5523=m
+CONFIG_LEDS_LP5562=m
+CONFIG_LEDS_LP8501=m
+CONFIG_LEDS_LP8788=m
+CONFIG_LEDS_CLEVO_MAIL=m
+CONFIG_LEDS_PCA955X=m
+# CONFIG_LEDS_PCA955X_GPIO is not set
+CONFIG_LEDS_PCA963X=m
+CONFIG_LEDS_WM831X_STATUS=m
+CONFIG_LEDS_WM8350=m
+CONFIG_LEDS_DA903X=m
+CONFIG_LEDS_DA9052=m
+CONFIG_LEDS_BD2802=m
+CONFIG_LEDS_INTEL_SS4200=m
+CONFIG_LEDS_ADP5520=m
+CONFIG_LEDS_MC13783=m
+CONFIG_LEDS_TCA6507=m
+# CONFIG_LEDS_TLC591XX is not set
+CONFIG_LEDS_MAX8997=m
+CONFIG_LEDS_LM355x=m
+CONFIG_LEDS_MENF21BMC=m
+
+#
+# LED driver for blink(1) USB RGB LED is under Special HID drivers (HID_THINGM)
+#
+CONFIG_LEDS_BLINKM=m
+# CONFIG_LEDS_MLXCPLD is not set
+# CONFIG_LEDS_MLXREG is not set
+# CONFIG_LEDS_USER is not set
+# CONFIG_LEDS_NIC78BX is not set
+# CONFIG_LEDS_TI_LMU_COMMON is not set
+
+#
+# LED Triggers
+#
+CONFIG_LEDS_TRIGGERS=y
+CONFIG_LEDS_TRIGGER_TIMER=m
+CONFIG_LEDS_TRIGGER_ONESHOT=m
+# CONFIG_LEDS_TRIGGER_DISK is not set
+CONFIG_LEDS_TRIGGER_HEARTBEAT=m
+CONFIG_LEDS_TRIGGER_BACKLIGHT=m
+CONFIG_LEDS_TRIGGER_CPU=y
+# CONFIG_LEDS_TRIGGER_ACTIVITY is not set
+CONFIG_LEDS_TRIGGER_GPIO=m
+CONFIG_LEDS_TRIGGER_DEFAULT_ON=m
+
+#
+# iptables trigger is under Netfilter config (LED target)
+#
+CONFIG_LEDS_TRIGGER_TRANSIENT=m
+CONFIG_LEDS_TRIGGER_CAMERA=m
+# CONFIG_LEDS_TRIGGER_PANIC is not set
+# CONFIG_LEDS_TRIGGER_NETDEV is not set
+# CONFIG_LEDS_TRIGGER_PATTERN is not set
+CONFIG_LEDS_TRIGGER_AUDIO=m
+# CONFIG_ACCESSIBILITY is not set
+CONFIG_INFINIBAND=m
+CONFIG_INFINIBAND_USER_MAD=m
+CONFIG_INFINIBAND_USER_ACCESS=m
+# CONFIG_INFINIBAND_EXP_LEGACY_VERBS_NEW_UAPI is not set
+CONFIG_INFINIBAND_USER_MEM=y
+CONFIG_INFINIBAND_ON_DEMAND_PAGING=y
+CONFIG_INFINIBAND_ADDR_TRANS=y
+CONFIG_INFINIBAND_ADDR_TRANS_CONFIGFS=y
+CONFIG_INFINIBAND_MTHCA=m
+# CONFIG_INFINIBAND_MTHCA_DEBUG is not set
+CONFIG_INFINIBAND_CXGB4=m
+# CONFIG_INFINIBAND_EFA is not set
+# CONFIG_INFINIBAND_I40IW is not set
+CONFIG_MLX4_INFINIBAND=m
+CONFIG_MLX5_INFINIBAND=m
+CONFIG_INFINIBAND_OCRDMA=m
+# CONFIG_INFINIBAND_VMWARE_PVRDMA is not set
+CONFIG_INFINIBAND_USNIC=m
+# CONFIG_INFINIBAND_BNXT_RE is not set
+# CONFIG_INFINIBAND_RDMAVT is not set
+# CONFIG_RDMA_RXE is not set
+# CONFIG_RDMA_SIW is not set
+CONFIG_INFINIBAND_IPOIB=m
+CONFIG_INFINIBAND_IPOIB_CM=y
+# CONFIG_INFINIBAND_IPOIB_DEBUG is not set
+CONFIG_INFINIBAND_SRP=m
+CONFIG_INFINIBAND_SRPT=m
+CONFIG_INFINIBAND_ISER=m
+CONFIG_INFINIBAND_ISERT=m
+# CONFIG_INFINIBAND_OPA_VNIC is not set
+CONFIG_EDAC_ATOMIC_SCRUB=y
+CONFIG_EDAC_SUPPORT=y
+CONFIG_EDAC=y
+# CONFIG_EDAC_LEGACY_SYSFS is not set
+# CONFIG_EDAC_DEBUG is not set
+CONFIG_EDAC_DECODE_MCE=m
+# CONFIG_EDAC_GHES is not set
+CONFIG_EDAC_AMD64=m
+# CONFIG_EDAC_AMD64_ERROR_INJECTION is not set
+CONFIG_EDAC_E752X=m
+CONFIG_EDAC_I82975X=m
+CONFIG_EDAC_I3000=m
+CONFIG_EDAC_I3200=m
+CONFIG_EDAC_IE31200=m
+CONFIG_EDAC_X38=m
+CONFIG_EDAC_I5400=m
+CONFIG_EDAC_I7CORE=m
+CONFIG_EDAC_I5000=m
+CONFIG_EDAC_I5100=m
+CONFIG_EDAC_I7300=m
+CONFIG_EDAC_SBRIDGE=m
+# CONFIG_EDAC_SKX is not set
+# CONFIG_EDAC_I10NM is not set
+# CONFIG_EDAC_PND2 is not set
+CONFIG_RTC_LIB=y
+CONFIG_RTC_MC146818_LIB=y
+CONFIG_RTC_CLASS=y
+CONFIG_RTC_HCTOSYS=y
+CONFIG_RTC_HCTOSYS_DEVICE="rtc0"
+CONFIG_RTC_SYSTOHC=y
+CONFIG_RTC_SYSTOHC_DEVICE="rtc0"
+# CONFIG_RTC_DEBUG is not set
+CONFIG_RTC_NVMEM=y
+
+#
+# RTC interfaces
+#
+CONFIG_RTC_INTF_SYSFS=y
+CONFIG_RTC_INTF_PROC=y
+CONFIG_RTC_INTF_DEV=y
+# CONFIG_RTC_INTF_DEV_UIE_EMUL is not set
+# CONFIG_RTC_DRV_TEST is not set
+
+#
+# I2C RTC drivers
+#
+CONFIG_RTC_DRV_88PM860X=m
+CONFIG_RTC_DRV_88PM80X=m
+# CONFIG_RTC_DRV_ABB5ZES3 is not set
+# CONFIG_RTC_DRV_ABEOZ9 is not set
+# CONFIG_RTC_DRV_ABX80X is not set
+CONFIG_RTC_DRV_DS1307=m
+# CONFIG_RTC_DRV_DS1307_CENTURY is not set
+CONFIG_RTC_DRV_DS1374=m
+CONFIG_RTC_DRV_DS1374_WDT=y
+CONFIG_RTC_DRV_DS1672=m
+CONFIG_RTC_DRV_LP8788=m
+CONFIG_RTC_DRV_MAX6900=m
+CONFIG_RTC_DRV_MAX8907=m
+CONFIG_RTC_DRV_MAX8925=m
+CONFIG_RTC_DRV_MAX8998=m
+CONFIG_RTC_DRV_MAX8997=m
+CONFIG_RTC_DRV_RS5C372=m
+CONFIG_RTC_DRV_ISL1208=m
+CONFIG_RTC_DRV_ISL12022=m
+CONFIG_RTC_DRV_X1205=m
+CONFIG_RTC_DRV_PCF8523=m
+CONFIG_RTC_DRV_PCF85063=m
+# CONFIG_RTC_DRV_PCF85363 is not set
+CONFIG_RTC_DRV_PCF8563=m
+CONFIG_RTC_DRV_PCF8583=m
+CONFIG_RTC_DRV_M41T80=m
+CONFIG_RTC_DRV_M41T80_WDT=y
+CONFIG_RTC_DRV_BQ32K=m
+CONFIG_RTC_DRV_PALMAS=m
+CONFIG_RTC_DRV_TPS6586X=m
+CONFIG_RTC_DRV_TPS65910=m
+CONFIG_RTC_DRV_TPS80031=m
+CONFIG_RTC_DRV_RC5T583=m
+CONFIG_RTC_DRV_S35390A=m
+CONFIG_RTC_DRV_FM3130=m
+# CONFIG_RTC_DRV_RX8010 is not set
+CONFIG_RTC_DRV_RX8581=m
+CONFIG_RTC_DRV_RX8025=m
+CONFIG_RTC_DRV_EM3027=m
+# CONFIG_RTC_DRV_RV3028 is not set
+# CONFIG_RTC_DRV_RV8803 is not set
+CONFIG_RTC_DRV_S5M=m
+# CONFIG_RTC_DRV_SD3078 is not set
+
+#
+# SPI RTC drivers
+#
+CONFIG_RTC_I2C_AND_SPI=y
+
+#
+# SPI and I2C RTC drivers
+#
+CONFIG_RTC_DRV_DS3232=m
+CONFIG_RTC_DRV_DS3232_HWMON=y
+CONFIG_RTC_DRV_PCF2127=m
+CONFIG_RTC_DRV_RV3029C2=m
+CONFIG_RTC_DRV_RV3029_HWMON=y
+
+#
+# Platform RTC drivers
+#
+CONFIG_RTC_DRV_CMOS=y
+CONFIG_RTC_DRV_DS1286=m
+CONFIG_RTC_DRV_DS1511=m
+CONFIG_RTC_DRV_DS1553=m
+# CONFIG_RTC_DRV_DS1685_FAMILY is not set
+CONFIG_RTC_DRV_DS1742=m
+CONFIG_RTC_DRV_DS2404=m
+CONFIG_RTC_DRV_DA9052=m
+CONFIG_RTC_DRV_DA9055=m
+CONFIG_RTC_DRV_DA9063=m
+CONFIG_RTC_DRV_STK17TA8=m
+CONFIG_RTC_DRV_M48T86=m
+CONFIG_RTC_DRV_M48T35=m
+CONFIG_RTC_DRV_M48T59=m
+CONFIG_RTC_DRV_MSM6242=m
+CONFIG_RTC_DRV_BQ4802=m
+CONFIG_RTC_DRV_RP5C01=m
+CONFIG_RTC_DRV_V3020=m
+CONFIG_RTC_DRV_WM831X=m
+CONFIG_RTC_DRV_WM8350=m
+CONFIG_RTC_DRV_PCF50633=m
+CONFIG_RTC_DRV_AB3100=m
+# CONFIG_RTC_DRV_CROS_EC is not set
+
+#
+# on-CPU RTC drivers
+#
+# CONFIG_RTC_DRV_FTRTC010 is not set
+CONFIG_RTC_DRV_MC13XXX=m
+
+#
+# HID Sensor RTC drivers
+#
+CONFIG_DMADEVICES=y
+# CONFIG_DMADEVICES_DEBUG is not set
+
+#
+# DMA Devices
+#
+CONFIG_DMA_ENGINE=y
+CONFIG_DMA_VIRTUAL_CHANNELS=y
+CONFIG_DMA_ACPI=y
+# CONFIG_ALTERA_MSGDMA is not set
+# CONFIG_INTEL_IDMA64 is not set
+# CONFIG_INTEL_IDXD is not set
+CONFIG_INTEL_IOATDMA=m
+CONFIG_INTEL_MIC_X100_DMA=m
+# CONFIG_PLX_DMA is not set
+# CONFIG_QCOM_HIDMA_MGMT is not set
+# CONFIG_QCOM_HIDMA is not set
+CONFIG_DW_DMAC_CORE=y
+CONFIG_DW_DMAC=m
+CONFIG_DW_DMAC_PCI=y
+# CONFIG_DW_EDMA is not set
+# CONFIG_DW_EDMA_PCIE is not set
+CONFIG_HSU_DMA=y
+# CONFIG_SF_PDMA is not set
+
+#
+# DMA Clients
+#
+CONFIG_ASYNC_TX_DMA=y
+# CONFIG_DMATEST is not set
+CONFIG_DMA_ENGINE_RAID=y
+
+#
+# DMABUF options
+#
+# CONFIG_SYNC_FILE is not set
+# CONFIG_DMABUF_MOVE_NOTIFY is not set
+# CONFIG_DMABUF_HEAPS is not set
+# end of DMABUF options
+
+CONFIG_DCA=m
+CONFIG_AUXDISPLAY=y
+# CONFIG_HD44780 is not set
+CONFIG_KS0108=m
+CONFIG_KS0108_PORT=0x378
+CONFIG_KS0108_DELAY=2
+# CONFIG_IMG_ASCII_LCD is not set
+# CONFIG_PARPORT_PANEL is not set
+# CONFIG_CHARLCD_BL_OFF is not set
+# CONFIG_CHARLCD_BL_ON is not set
+CONFIG_CHARLCD_BL_FLASH=y
+# CONFIG_PANEL is not set
+CONFIG_UIO=m
+CONFIG_UIO_CIF=m
+CONFIG_UIO_PDRV_GENIRQ=m
+CONFIG_UIO_DMEM_GENIRQ=m
+CONFIG_UIO_AEC=m
+CONFIG_UIO_SERCOS3=m
+CONFIG_UIO_PCI_GENERIC=m
+CONFIG_UIO_NETX=m
+# CONFIG_UIO_PRUSS is not set
+CONFIG_UIO_MF624=m
+# CONFIG_UIO_HV_GENERIC is not set
+CONFIG_VFIO_IOMMU_TYPE1=m
+CONFIG_VFIO_VIRQFD=m
+CONFIG_VFIO=m
+# CONFIG_VFIO_NOIOMMU is not set
+CONFIG_VFIO_PCI=m
+CONFIG_VFIO_PCI_VGA=y
+CONFIG_VFIO_PCI_MMAP=y
+CONFIG_VFIO_PCI_INTX=y
+CONFIG_VFIO_PCI_IGD=y
+# CONFIG_VFIO_MDEV is not set
+CONFIG_IRQ_BYPASS_MANAGER=m
+CONFIG_VIRT_DRIVERS=y
+# CONFIG_VBOXGUEST is not set
+CONFIG_VIRTIO=y
+CONFIG_VIRTIO_MENU=y
+CONFIG_VIRTIO_PCI=y
+CONFIG_VIRTIO_PCI_LEGACY=y
+CONFIG_VIRTIO_BALLOON=y
+# CONFIG_VIRTIO_INPUT is not set
+CONFIG_VIRTIO_MMIO=y
+CONFIG_VIRTIO_MMIO_CMDLINE_DEVICES=y
+# CONFIG_VDPA is not set
+CONFIG_VHOST_IOTLB=m
+CONFIG_VHOST_DPN=y
+CONFIG_VHOST=m
+CONFIG_VHOST_MENU=y
+CONFIG_VHOST_NET=m
+CONFIG_VHOST_SCSI=m
+# CONFIG_VHOST_VSOCK is not set
+# CONFIG_VHOST_CROSS_ENDIAN_LEGACY is not set
+
+#
+# Microsoft Hyper-V guest support
+#
+CONFIG_HYPERV=m
+CONFIG_HYPERV_TIMER=y
+CONFIG_HYPERV_UTILS=m
+CONFIG_HYPERV_BALLOON=m
+# end of Microsoft Hyper-V guest support
+
+#
+# Xen driver support
+#
+CONFIG_XEN_BALLOON=y
+CONFIG_XEN_BALLOON_MEMORY_HOTPLUG=y
+CONFIG_XEN_BALLOON_MEMORY_HOTPLUG_LIMIT=512
+CONFIG_XEN_SCRUB_PAGES_DEFAULT=y
+CONFIG_XEN_DEV_EVTCHN=m
+CONFIG_XEN_BACKEND=y
+CONFIG_XENFS=m
+CONFIG_XEN_COMPAT_XENFS=y
+CONFIG_XEN_SYS_HYPERVISOR=y
+CONFIG_XEN_XENBUS_FRONTEND=y
+CONFIG_XEN_GNTDEV=m
+CONFIG_XEN_GRANT_DEV_ALLOC=m
+# CONFIG_XEN_GRANT_DMA_ALLOC is not set
+CONFIG_SWIOTLB_XEN=y
+CONFIG_XEN_PCIDEV_BACKEND=m
+# CONFIG_XEN_PVCALLS_FRONTEND is not set
+# CONFIG_XEN_PVCALLS_BACKEND is not set
+CONFIG_XEN_SCSI_BACKEND=m
+CONFIG_XEN_PRIVCMD=m
+CONFIG_XEN_ACPI_PROCESSOR=y
+CONFIG_XEN_MCE_LOG=y
+CONFIG_XEN_HAVE_PVMMU=y
+CONFIG_XEN_EFI=y
+CONFIG_XEN_AUTO_XLATE=y
+CONFIG_XEN_ACPI=y
+CONFIG_XEN_SYMS=y
+CONFIG_XEN_HAVE_VPMU=y
+# end of Xen driver support
+
+# CONFIG_GREYBUS is not set
+# CONFIG_STAGING is not set
+CONFIG_X86_PLATFORM_DEVICES=y
+CONFIG_ACPI_WMI=m
+CONFIG_WMI_BMOF=m
+CONFIG_ALIENWARE_WMI=m
+# CONFIG_HUAWEI_WMI is not set
+# CONFIG_INTEL_WMI_THUNDERBOLT is not set
+CONFIG_MXM_WMI=m
+# CONFIG_PEAQ_WMI is not set
+# CONFIG_XIAOMI_WMI is not set
+CONFIG_ACERHDF=m
+# CONFIG_ACER_WIRELESS is not set
+CONFIG_ACER_WMI=m
+CONFIG_APPLE_GMUX=m
+CONFIG_ASUS_LAPTOP=m
+# CONFIG_ASUS_WIRELESS is not set
+CONFIG_ASUS_WMI=m
+CONFIG_ASUS_NB_WMI=m
+CONFIG_EEEPC_LAPTOP=m
+CONFIG_EEEPC_WMI=m
+CONFIG_DCDBAS=m
+# CONFIG_DELL_SMBIOS is not set
+CONFIG_DELL_RBU=m
+CONFIG_DELL_SMO8800=m
+CONFIG_DELL_WMI_AIO=m
+# CONFIG_DELL_WMI_LED is not set
+CONFIG_FUJITSU_LAPTOP=m
+CONFIG_FUJITSU_TABLET=m
+# CONFIG_GPD_POCKET_FAN is not set
+CONFIG_HP_ACCEL=m
+CONFIG_HP_WIRELESS=m
+CONFIG_HP_WMI=m
+CONFIG_IBM_RTL=m
+CONFIG_SENSORS_HDAPS=m
+CONFIG_THINKPAD_ACPI=m
+CONFIG_THINKPAD_ACPI_DEBUGFACILITIES=y
+# CONFIG_THINKPAD_ACPI_DEBUG is not set
+# CONFIG_THINKPAD_ACPI_UNSAFE_LEDS is not set
+CONFIG_THINKPAD_ACPI_VIDEO=y
+CONFIG_THINKPAD_ACPI_HOTKEY_POLL=y
+# CONFIG_INTEL_ATOMISP2_PM is not set
+# CONFIG_INTEL_HID_EVENT is not set
+# CONFIG_INTEL_INT0002_VGPIO is not set
+CONFIG_INTEL_MENLOW=m
+# CONFIG_INTEL_VBTN is not set
+# CONFIG_SURFACE_3_BUTTON is not set
+# CONFIG_SURFACE_3_POWER_OPREGION is not set
+# CONFIG_SURFACE_PRO3_BUTTON is not set
+CONFIG_MSI_WMI=m
+# CONFIG_PCENGINES_APU2 is not set
+CONFIG_SAMSUNG_LAPTOP=m
+CONFIG_SAMSUNG_Q10=m
+CONFIG_TOSHIBA_BT_RFKILL=m
+CONFIG_TOSHIBA_HAPS=m
+# CONFIG_TOSHIBA_WMI is not set
+CONFIG_ACPI_CMPC=m
+# CONFIG_LG_LAPTOP is not set
+CONFIG_PANASONIC_LAPTOP=m
+# CONFIG_SYSTEM76_ACPI is not set
+CONFIG_TOPSTAR_LAPTOP=m
+# CONFIG_I2C_MULTI_INSTANTIATE is not set
+# CONFIG_MLX_PLATFORM is not set
+CONFIG_INTEL_IPS=m
+CONFIG_INTEL_RST=m
+CONFIG_INTEL_SMARTCONNECT=m
+
+#
+# Intel Speed Select Technology interface support
+#
+# CONFIG_INTEL_SPEED_SELECT_INTERFACE is not set
+# end of Intel Speed Select Technology interface support
+
+# CONFIG_INTEL_TURBO_MAX_3 is not set
+# CONFIG_INTEL_UNCORE_FREQ_CONTROL is not set
+# CONFIG_INTEL_PMC_CORE is not set
+# CONFIG_INTEL_PMC_IPC is not set
+# CONFIG_INTEL_PUNIT_IPC is not set
+CONFIG_PMC_ATOM=y
+CONFIG_MFD_CROS_EC=m
+CONFIG_CHROME_PLATFORMS=y
+# CONFIG_CHROMEOS_LAPTOP is not set
+# CONFIG_CHROMEOS_PSTORE is not set
+# CONFIG_CHROMEOS_TBMC is not set
+CONFIG_CROS_EC=m
+# CONFIG_CROS_EC_I2C is not set
+# CONFIG_CROS_EC_LPC is not set
+CONFIG_CROS_EC_PROTO=y
+# CONFIG_CROS_KBD_LED_BACKLIGHT is not set
+CONFIG_CROS_EC_CHARDEV=m
+CONFIG_CROS_EC_LIGHTBAR=m
+CONFIG_CROS_EC_DEBUGFS=m
+CONFIG_CROS_EC_SENSORHUB=m
+CONFIG_CROS_EC_SYSFS=m
+CONFIG_CROS_USBPD_NOTIFY=m
+# CONFIG_MELLANOX_PLATFORM is not set
+CONFIG_CLKDEV_LOOKUP=y
+CONFIG_HAVE_CLK_PREPARE=y
+CONFIG_COMMON_CLK=y
+
+#
+# Common Clock Framework
+#
+CONFIG_COMMON_CLK_WM831X=m
+# CONFIG_COMMON_CLK_MAX9485 is not set
+# CONFIG_COMMON_CLK_SI5341 is not set
+CONFIG_COMMON_CLK_SI5351=m
+# CONFIG_COMMON_CLK_SI544 is not set
+# CONFIG_COMMON_CLK_CDCE706 is not set
+# CONFIG_COMMON_CLK_CS2000_CP is not set
+CONFIG_COMMON_CLK_S2MPS11=m
+CONFIG_CLK_TWL6040=m
+CONFIG_COMMON_CLK_PALMAS=m
+# end of Common Clock Framework
+
+# CONFIG_HWSPINLOCK is not set
+
+#
+# Clock Source drivers
+#
+CONFIG_CLKEVT_I8253=y
+CONFIG_I8253_LOCK=y
+CONFIG_CLKBLD_I8253=y
+# end of Clock Source drivers
+
+CONFIG_MAILBOX=y
+CONFIG_PCC=y
+# CONFIG_ALTERA_MBOX is not set
+CONFIG_IOMMU_IOVA=y
+CONFIG_IOASID=y
+CONFIG_IOMMU_API=y
+CONFIG_IOMMU_SUPPORT=y
+
+#
+# Generic IOMMU Pagetable Support
+#
+# end of Generic IOMMU Pagetable Support
+
+# CONFIG_IOMMU_DEBUGFS is not set
+# CONFIG_IOMMU_DEFAULT_PASSTHROUGH is not set
+CONFIG_IOMMU_DMA=y
+CONFIG_AMD_IOMMU=y
+CONFIG_AMD_IOMMU_V2=m
+CONFIG_DMAR_TABLE=y
+CONFIG_INTEL_IOMMU=y
+# CONFIG_INTEL_IOMMU_SVM is not set
+# CONFIG_INTEL_IOMMU_DEFAULT_ON is not set
+CONFIG_INTEL_IOMMU_FLOPPY_WA=y
+# CONFIG_INTEL_IOMMU_SCALABLE_MODE_DEFAULT_ON is not set
+CONFIG_IRQ_REMAP=y
+CONFIG_HYPERV_IOMMU=y
+
+#
+# Remoteproc drivers
+#
+# CONFIG_REMOTEPROC is not set
+# end of Remoteproc drivers
+
+#
+# Rpmsg drivers
+#
+# CONFIG_RPMSG_QCOM_GLINK_RPM is not set
+# CONFIG_RPMSG_VIRTIO is not set
+# end of Rpmsg drivers
+
+# CONFIG_SOUNDWIRE is not set
+
+#
+# SOC (System On Chip) specific Drivers
+#
+
+#
+# Amlogic SoC drivers
+#
+# end of Amlogic SoC drivers
+
+#
+# Aspeed SoC drivers
+#
+# end of Aspeed SoC drivers
+
+#
+# Broadcom SoC drivers
+#
+# end of Broadcom SoC drivers
+
+#
+# NXP/Freescale QorIQ SoC drivers
+#
+# end of NXP/Freescale QorIQ SoC drivers
+
+#
+# i.MX SoC drivers
+#
+# end of i.MX SoC drivers
+
+#
+# Qualcomm SoC drivers
+#
+# end of Qualcomm SoC drivers
+
+CONFIG_SOC_TI=y
+
+#
+# Xilinx SoC drivers
+#
+# CONFIG_XILINX_VCU is not set
+# end of Xilinx SoC drivers
+# end of SOC (System On Chip) specific Drivers
+
+CONFIG_PM_DEVFREQ=y
+
+#
+# DEVFREQ Governors
+#
+CONFIG_DEVFREQ_GOV_SIMPLE_ONDEMAND=y
+CONFIG_DEVFREQ_GOV_PERFORMANCE=y
+CONFIG_DEVFREQ_GOV_POWERSAVE=y
+CONFIG_DEVFREQ_GOV_USERSPACE=y
+# CONFIG_DEVFREQ_GOV_PASSIVE is not set
+
+#
+# DEVFREQ Drivers
+#
+# CONFIG_PM_DEVFREQ_EVENT is not set
+CONFIG_EXTCON=y
+
+#
+# Extcon Device Drivers
+#
+# CONFIG_EXTCON_FSA9480 is not set
+CONFIG_EXTCON_GPIO=m
+# CONFIG_EXTCON_INTEL_INT3496 is not set
+CONFIG_EXTCON_MAX14577=m
+# CONFIG_EXTCON_MAX3355 is not set
+CONFIG_EXTCON_MAX77693=m
+CONFIG_EXTCON_MAX8997=m
+CONFIG_EXTCON_PALMAS=m
+# CONFIG_EXTCON_PTN5150 is not set
+CONFIG_EXTCON_RT8973A=m
+CONFIG_EXTCON_SM5502=m
+# CONFIG_EXTCON_USB_GPIO is not set
+# CONFIG_EXTCON_USBC_CROS_EC is not set
+CONFIG_MEMORY=y
+# CONFIG_IIO is not set
+CONFIG_NTB=m
+# CONFIG_NTB_MSI is not set
+# CONFIG_NTB_AMD is not set
+# CONFIG_NTB_IDT is not set
+# CONFIG_NTB_INTEL is not set
+# CONFIG_NTB_SWITCHTEC is not set
+# CONFIG_NTB_PINGPONG is not set
+# CONFIG_NTB_TOOL is not set
+# CONFIG_NTB_PERF is not set
+# CONFIG_NTB_TRANSPORT is not set
+# CONFIG_VME_BUS is not set
+# CONFIG_PWM is not set
+
+#
+# IRQ chip support
+#
+# end of IRQ chip support
+
+# CONFIG_IPACK_BUS is not set
+CONFIG_RESET_CONTROLLER=y
+# CONFIG_RESET_BRCMSTB_RESCAL is not set
+# CONFIG_RESET_TI_SYSCON is not set
+
+#
+# PHY Subsystem
+#
+CONFIG_GENERIC_PHY=y
+CONFIG_BCM_KONA_USB2_PHY=m
+# CONFIG_PHY_PXA_28NM_HSIC is not set
+# CONFIG_PHY_PXA_28NM_USB2 is not set
+# CONFIG_PHY_QCOM_USB_HS is not set
+# CONFIG_PHY_QCOM_USB_HSIC is not set
+CONFIG_PHY_SAMSUNG_USB2=m
+# CONFIG_PHY_TUSB1210 is not set
+# CONFIG_PHY_INTEL_EMMC is not set
+# end of PHY Subsystem
+
+CONFIG_POWERCAP=y
+CONFIG_INTEL_RAPL_CORE=m
+CONFIG_INTEL_RAPL=m
+# CONFIG_IDLE_INJECT is not set
+# CONFIG_MCB is not set
+
+#
+# Performance monitor support
+#
+# end of Performance monitor support
+
+CONFIG_RAS=y
+# CONFIG_RAS_CEC is not set
+# CONFIG_USB4 is not set
+
+#
+# Android
+#
+# CONFIG_ANDROID is not set
+# end of Android
+
+# CONFIG_LIBNVDIMM is not set
+# CONFIG_DAX is not set
+CONFIG_NVMEM=y
+CONFIG_NVMEM_SYSFS=y
+
+#
+# HW tracing support
+#
+# CONFIG_STM is not set
+# CONFIG_INTEL_TH is not set
+# end of HW tracing support
+
+# CONFIG_FPGA is not set
+# CONFIG_TEE is not set
+CONFIG_PM_OPP=y
+# CONFIG_UNISYS_VISORBUS is not set
+# CONFIG_SIOX is not set
+# CONFIG_SLIMBUS is not set
+# CONFIG_INTERCONNECT is not set
+# CONFIG_COUNTER is not set
+# CONFIG_MOST is not set
+# end of Device Drivers
+
+#
+# File systems
+#
+CONFIG_DCACHE_WORD_ACCESS=y
+# CONFIG_VALIDATE_FS_PARSER is not set
+CONFIG_FS_IOMAP=y
+# CONFIG_EXT2_FS is not set
+# CONFIG_EXT3_FS is not set
+CONFIG_EXT4_FS=y
+CONFIG_EXT4_USE_FOR_EXT2=y
+CONFIG_EXT4_FS_POSIX_ACL=y
+CONFIG_EXT4_FS_SECURITY=y
+# CONFIG_EXT4_DEBUG is not set
+CONFIG_JBD2=y
+# CONFIG_JBD2_DEBUG is not set
+CONFIG_FS_MBCACHE=y
+CONFIG_REISERFS_FS=m
+# CONFIG_REISERFS_CHECK is not set
+# CONFIG_REISERFS_PROC_INFO is not set
+CONFIG_REISERFS_FS_XATTR=y
+CONFIG_REISERFS_FS_POSIX_ACL=y
+CONFIG_REISERFS_FS_SECURITY=y
+CONFIG_JFS_FS=m
+CONFIG_JFS_POSIX_ACL=y
+CONFIG_JFS_SECURITY=y
+# CONFIG_JFS_DEBUG is not set
+CONFIG_JFS_STATISTICS=y
+CONFIG_XFS_FS=m
+CONFIG_XFS_QUOTA=y
+CONFIG_XFS_POSIX_ACL=y
+CONFIG_XFS_RT=y
+# CONFIG_XFS_ONLINE_SCRUB is not set
+# CONFIG_XFS_WARN is not set
+# CONFIG_XFS_DEBUG is not set
+CONFIG_GFS2_FS=m
+CONFIG_GFS2_FS_LOCKING_DLM=y
+CONFIG_OCFS2_FS=m
+CONFIG_OCFS2_FS_O2CB=m
+CONFIG_OCFS2_FS_USERSPACE_CLUSTER=m
+CONFIG_OCFS2_FS_STATS=y
+CONFIG_OCFS2_DEBUG_MASKLOG=y
+# CONFIG_OCFS2_DEBUG_FS is not set
+CONFIG_BTRFS_FS=m
+CONFIG_BTRFS_FS_POSIX_ACL=y
+# CONFIG_BTRFS_FS_CHECK_INTEGRITY is not set
+# CONFIG_BTRFS_FS_RUN_SANITY_TESTS is not set
+# CONFIG_BTRFS_DEBUG is not set
+# CONFIG_BTRFS_ASSERT is not set
+# CONFIG_BTRFS_FS_REF_VERIFY is not set
+CONFIG_NILFS2_FS=m
+CONFIG_F2FS_FS=m
+CONFIG_F2FS_STAT_FS=y
+CONFIG_F2FS_FS_XATTR=y
+CONFIG_F2FS_FS_POSIX_ACL=y
+CONFIG_F2FS_FS_SECURITY=y
+# CONFIG_F2FS_CHECK_FS is not set
+# CONFIG_F2FS_IO_TRACE is not set
+# CONFIG_F2FS_FAULT_INJECTION is not set
+# CONFIG_F2FS_FS_COMPRESSION is not set
+# CONFIG_FS_DAX is not set
+CONFIG_FS_POSIX_ACL=y
+CONFIG_EXPORTFS=y
+# CONFIG_EXPORTFS_BLOCK_OPS is not set
+CONFIG_FILE_LOCKING=y
+CONFIG_MANDATORY_FILE_LOCKING=y
+CONFIG_FS_ENCRYPTION=y
+CONFIG_FS_ENCRYPTION_ALGS=m
+# CONFIG_FS_VERITY is not set
+CONFIG_FSNOTIFY=y
+CONFIG_DNOTIFY=y
+CONFIG_INOTIFY_USER=y
+CONFIG_FANOTIFY=y
+CONFIG_FANOTIFY_ACCESS_PERMISSIONS=y
+CONFIG_QUOTA=y
+CONFIG_QUOTA_NETLINK_INTERFACE=y
+# CONFIG_PRINT_QUOTA_WARNING is not set
+# CONFIG_QUOTA_DEBUG is not set
+CONFIG_QUOTA_TREE=m
+CONFIG_QFMT_V1=m
+CONFIG_QFMT_V2=m
+CONFIG_QUOTACTL=y
+CONFIG_QUOTACTL_COMPAT=y
+CONFIG_AUTOFS4_FS=m
+CONFIG_AUTOFS_FS=m
+CONFIG_FUSE_FS=y
+CONFIG_CUSE=m
+# CONFIG_VIRTIO_FS is not set
+CONFIG_OVERLAY_FS=m
+# CONFIG_OVERLAY_FS_REDIRECT_DIR is not set
+CONFIG_OVERLAY_FS_REDIRECT_ALWAYS_FOLLOW=y
+# CONFIG_OVERLAY_FS_INDEX is not set
+# CONFIG_OVERLAY_FS_XINO_AUTO is not set
+# CONFIG_OVERLAY_FS_METACOPY is not set
+
+#
+# Caches
+#
+CONFIG_FSCACHE=m
+CONFIG_FSCACHE_STATS=y
+# CONFIG_FSCACHE_HISTOGRAM is not set
+# CONFIG_FSCACHE_DEBUG is not set
+# CONFIG_FSCACHE_OBJECT_LIST is not set
+CONFIG_CACHEFILES=m
+# CONFIG_CACHEFILES_DEBUG is not set
+# CONFIG_CACHEFILES_HISTOGRAM is not set
+# end of Caches
+
+#
+# CD-ROM/DVD Filesystems
+#
+CONFIG_ISO9660_FS=m
+CONFIG_JOLIET=y
+CONFIG_ZISOFS=y
+CONFIG_UDF_FS=m
+# end of CD-ROM/DVD Filesystems
+
+#
+# DOS/FAT/EXFAT/NT Filesystems
+#
+CONFIG_FAT_FS=y
+CONFIG_MSDOS_FS=m
+CONFIG_VFAT_FS=y
+CONFIG_FAT_DEFAULT_CODEPAGE=437
+CONFIG_FAT_DEFAULT_IOCHARSET="iso8859-1"
+# CONFIG_FAT_DEFAULT_UTF8 is not set
+# CONFIG_EXFAT_FS is not set
+CONFIG_NTFS_FS=m
+# CONFIG_NTFS_DEBUG is not set
+# CONFIG_NTFS_RW is not set
+# end of DOS/FAT/EXFAT/NT Filesystems
+
+#
+# Pseudo filesystems
+#
+CONFIG_PROC_FS=y
+CONFIG_PROC_KCORE=y
+CONFIG_PROC_VMCORE=y
+# CONFIG_PROC_VMCORE_DEVICE_DUMP is not set
+CONFIG_PROC_SYSCTL=y
+CONFIG_PROC_PAGE_MONITOR=y
+CONFIG_PROC_CHILDREN=y
+CONFIG_PROC_PID_ARCH_STATUS=y
+CONFIG_KERNFS=y
+CONFIG_SYSFS=y
+CONFIG_TMPFS=y
+CONFIG_TMPFS_POSIX_ACL=y
+CONFIG_TMPFS_XATTR=y
+CONFIG_HUGETLBFS=y
+CONFIG_HUGETLB_PAGE=y
+CONFIG_MEMFD_CREATE=y
+CONFIG_ARCH_HAS_GIGANTIC_PAGE=y
+CONFIG_CONFIGFS_FS=m
+CONFIG_EFIVAR_FS=y
+# end of Pseudo filesystems
+
+CONFIG_MISC_FILESYSTEMS=y
+# CONFIG_ORANGEFS_FS is not set
+CONFIG_ADFS_FS=m
+# CONFIG_ADFS_FS_RW is not set
+CONFIG_AFFS_FS=m
+CONFIG_ECRYPT_FS=y
+CONFIG_ECRYPT_FS_MESSAGING=y
+CONFIG_HFS_FS=m
+CONFIG_HFSPLUS_FS=m
+CONFIG_BEFS_FS=m
+# CONFIG_BEFS_DEBUG is not set
+CONFIG_BFS_FS=m
+CONFIG_EFS_FS=m
+CONFIG_CRAMFS=m
+CONFIG_CRAMFS_BLOCKDEV=y
+CONFIG_SQUASHFS=m
+# CONFIG_SQUASHFS_FILE_CACHE is not set
+CONFIG_SQUASHFS_FILE_DIRECT=y
+# CONFIG_SQUASHFS_DECOMP_SINGLE is not set
+# CONFIG_SQUASHFS_DECOMP_MULTI is not set
+CONFIG_SQUASHFS_DECOMP_MULTI_PERCPU=y
+CONFIG_SQUASHFS_XATTR=y
+CONFIG_SQUASHFS_ZLIB=y
+# CONFIG_SQUASHFS_LZ4 is not set
+CONFIG_SQUASHFS_LZO=y
+CONFIG_SQUASHFS_XZ=y
+# CONFIG_SQUASHFS_ZSTD is not set
+# CONFIG_SQUASHFS_4K_DEVBLK_SIZE is not set
+# CONFIG_SQUASHFS_EMBEDDED is not set
+CONFIG_SQUASHFS_FRAGMENT_CACHE_SIZE=3
+CONFIG_VXFS_FS=m
+CONFIG_MINIX_FS=m
+CONFIG_OMFS_FS=m
+CONFIG_HPFS_FS=m
+CONFIG_QNX4FS_FS=m
+CONFIG_QNX6FS_FS=m
+# CONFIG_QNX6FS_DEBUG is not set
+CONFIG_ROMFS_FS=m
+CONFIG_ROMFS_BACKED_BY_BLOCK=y
+CONFIG_ROMFS_ON_BLOCK=y
+CONFIG_PSTORE=y
+CONFIG_PSTORE_DEFLATE_COMPRESS=y
+# CONFIG_PSTORE_LZO_COMPRESS is not set
+# CONFIG_PSTORE_LZ4_COMPRESS is not set
+# CONFIG_PSTORE_LZ4HC_COMPRESS is not set
+# CONFIG_PSTORE_842_COMPRESS is not set
+# CONFIG_PSTORE_ZSTD_COMPRESS is not set
+CONFIG_PSTORE_COMPRESS=y
+CONFIG_PSTORE_DEFLATE_COMPRESS_DEFAULT=y
+CONFIG_PSTORE_COMPRESS_DEFAULT="deflate"
+# CONFIG_PSTORE_CONSOLE is not set
+# CONFIG_PSTORE_PMSG is not set
+# CONFIG_PSTORE_FTRACE is not set
+CONFIG_PSTORE_RAM=m
+CONFIG_SYSV_FS=m
+CONFIG_UFS_FS=m
+# CONFIG_UFS_FS_WRITE is not set
+# CONFIG_UFS_DEBUG is not set
+# CONFIG_EROFS_FS is not set
+CONFIG_NETWORK_FILESYSTEMS=y
+CONFIG_NFS_FS=m
+CONFIG_NFS_V2=m
+CONFIG_NFS_V3=m
+CONFIG_NFS_V3_ACL=y
+CONFIG_NFS_V4=m
+CONFIG_NFS_SWAP=y
+CONFIG_NFS_V4_1=y
+CONFIG_NFS_V4_2=y
+CONFIG_PNFS_FILE_LAYOUT=m
+CONFIG_PNFS_BLOCK=m
+CONFIG_PNFS_FLEXFILE_LAYOUT=m
+CONFIG_NFS_V4_1_IMPLEMENTATION_ID_DOMAIN="kernel.org"
+CONFIG_NFS_V4_1_MIGRATION=y
+CONFIG_NFS_V4_SECURITY_LABEL=y
+CONFIG_NFS_FSCACHE=y
+# CONFIG_NFS_USE_LEGACY_DNS is not set
+CONFIG_NFS_USE_KERNEL_DNS=y
+CONFIG_NFS_DEBUG=y
+CONFIG_NFS_DISABLE_UDP_SUPPORT=y
+CONFIG_NFSD=m
+CONFIG_NFSD_V2_ACL=y
+CONFIG_NFSD_V3=y
+CONFIG_NFSD_V3_ACL=y
+CONFIG_NFSD_V4=y
+# CONFIG_NFSD_BLOCKLAYOUT is not set
+# CONFIG_NFSD_SCSILAYOUT is not set
+# CONFIG_NFSD_FLEXFILELAYOUT is not set
+CONFIG_NFSD_V4_SECURITY_LABEL=y
+CONFIG_GRACE_PERIOD=m
+CONFIG_LOCKD=m
+CONFIG_LOCKD_V4=y
+CONFIG_NFS_ACL_SUPPORT=m
+CONFIG_NFS_COMMON=y
+CONFIG_SUNRPC=m
+CONFIG_SUNRPC_GSS=m
+CONFIG_SUNRPC_BACKCHANNEL=y
+CONFIG_SUNRPC_SWAP=y
+CONFIG_RPCSEC_GSS_KRB5=m
+# CONFIG_SUNRPC_DISABLE_INSECURE_ENCTYPES is not set
+CONFIG_SUNRPC_DEBUG=y
+CONFIG_SUNRPC_XPRT_RDMA=m
+CONFIG_CEPH_FS=m
+CONFIG_CEPH_FSCACHE=y
+CONFIG_CEPH_FS_POSIX_ACL=y
+CONFIG_CEPH_FS_SECURITY_LABEL=y
+CONFIG_CIFS=m
+# CONFIG_CIFS_STATS2 is not set
+CONFIG_CIFS_ALLOW_INSECURE_LEGACY=y
+CONFIG_CIFS_WEAK_PW_HASH=y
+CONFIG_CIFS_UPCALL=y
+CONFIG_CIFS_XATTR=y
+CONFIG_CIFS_POSIX=y
+CONFIG_CIFS_DEBUG=y
+# CONFIG_CIFS_DEBUG2 is not set
+# CONFIG_CIFS_DEBUG_DUMP_KEYS is not set
+CONFIG_CIFS_DFS_UPCALL=y
+# CONFIG_CIFS_SMB_DIRECT is not set
+CONFIG_CIFS_FSCACHE=y
+CONFIG_CODA_FS=m
+CONFIG_AFS_FS=m
+# CONFIG_AFS_DEBUG is not set
+CONFIG_AFS_FSCACHE=y
+# CONFIG_AFS_DEBUG_CURSOR is not set
+CONFIG_9P_FS=m
+CONFIG_9P_FSCACHE=y
+CONFIG_9P_FS_POSIX_ACL=y
+CONFIG_9P_FS_SECURITY=y
+CONFIG_NLS=y
+CONFIG_NLS_DEFAULT="utf8"
+CONFIG_NLS_CODEPAGE_437=y
+CONFIG_NLS_CODEPAGE_737=m
+CONFIG_NLS_CODEPAGE_775=m
+CONFIG_NLS_CODEPAGE_850=m
+CONFIG_NLS_CODEPAGE_852=m
+CONFIG_NLS_CODEPAGE_855=m
+CONFIG_NLS_CODEPAGE_857=m
+CONFIG_NLS_CODEPAGE_860=m
+CONFIG_NLS_CODEPAGE_861=m
+CONFIG_NLS_CODEPAGE_862=m
+CONFIG_NLS_CODEPAGE_863=m
+CONFIG_NLS_CODEPAGE_864=m
+CONFIG_NLS_CODEPAGE_865=m
+CONFIG_NLS_CODEPAGE_866=m
+CONFIG_NLS_CODEPAGE_869=m
+CONFIG_NLS_CODEPAGE_936=m
+CONFIG_NLS_CODEPAGE_950=m
+CONFIG_NLS_CODEPAGE_932=m
+CONFIG_NLS_CODEPAGE_949=m
+CONFIG_NLS_CODEPAGE_874=m
+CONFIG_NLS_ISO8859_8=m
+CONFIG_NLS_CODEPAGE_1250=m
+CONFIG_NLS_CODEPAGE_1251=m
+CONFIG_NLS_ASCII=m
+CONFIG_NLS_ISO8859_1=m
+CONFIG_NLS_ISO8859_2=m
+CONFIG_NLS_ISO8859_3=m
+CONFIG_NLS_ISO8859_4=m
+CONFIG_NLS_ISO8859_5=m
+CONFIG_NLS_ISO8859_6=m
+CONFIG_NLS_ISO8859_7=m
+CONFIG_NLS_ISO8859_9=m
+CONFIG_NLS_ISO8859_13=m
+CONFIG_NLS_ISO8859_14=m
+CONFIG_NLS_ISO8859_15=m
+CONFIG_NLS_KOI8_R=m
+CONFIG_NLS_KOI8_U=m
+CONFIG_NLS_MAC_ROMAN=m
+CONFIG_NLS_MAC_CELTIC=m
+CONFIG_NLS_MAC_CENTEURO=m
+CONFIG_NLS_MAC_CROATIAN=m
+CONFIG_NLS_MAC_CYRILLIC=m
+CONFIG_NLS_MAC_GAELIC=m
+CONFIG_NLS_MAC_GREEK=m
+CONFIG_NLS_MAC_ICELAND=m
+CONFIG_NLS_MAC_INUIT=m
+CONFIG_NLS_MAC_ROMANIAN=m
+CONFIG_NLS_MAC_TURKISH=m
+CONFIG_NLS_UTF8=m
+CONFIG_DLM=m
+# CONFIG_DLM_DEBUG is not set
+# CONFIG_UNICODE is not set
+CONFIG_IO_WQ=y
+# end of File systems
+
+#
+# Security options
+#
+CONFIG_KEYS=y
+# CONFIG_KEYS_REQUEST_CACHE is not set
+CONFIG_PERSISTENT_KEYRINGS=y
+CONFIG_BIG_KEYS=y
+CONFIG_TRUSTED_KEYS=y
+CONFIG_ENCRYPTED_KEYS=y
+# CONFIG_KEY_DH_OPERATIONS is not set
+# CONFIG_SECURITY_DMESG_RESTRICT is not set
+CONFIG_SECURITY=y
+CONFIG_SECURITY_WRITABLE_HOOKS=y
+CONFIG_SECURITYFS=y
+CONFIG_SECURITY_NETWORK=y
+CONFIG_PAGE_TABLE_ISOLATION=y
+# CONFIG_SECURITY_INFINIBAND is not set
+CONFIG_SECURITY_NETWORK_XFRM=y
+CONFIG_SECURITY_PATH=y
+CONFIG_INTEL_TXT=y
+CONFIG_LSM_MMAP_MIN_ADDR=0
+CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR=y
+CONFIG_HARDENED_USERCOPY=y
+CONFIG_HARDENED_USERCOPY_FALLBACK=y
+# CONFIG_HARDENED_USERCOPY_PAGESPAN is not set
+# CONFIG_FORTIFY_SOURCE is not set
+# CONFIG_STATIC_USERMODEHELPER is not set
+CONFIG_SECURITY_SELINUX=y
+CONFIG_SECURITY_SELINUX_BOOTPARAM=y
+CONFIG_SECURITY_SELINUX_DISABLE=y
+CONFIG_SECURITY_SELINUX_DEVELOP=y
+CONFIG_SECURITY_SELINUX_AVC_STATS=y
+CONFIG_SECURITY_SELINUX_CHECKREQPROT_VALUE=1
+CONFIG_SECURITY_SELINUX_SIDTAB_HASH_BITS=9
+CONFIG_SECURITY_SELINUX_SID2STR_CACHE_SIZE=256
+CONFIG_SECURITY_SMACK=y
+# CONFIG_SECURITY_SMACK_BRINGUP is not set
+# CONFIG_SECURITY_SMACK_NETFILTER is not set
+# CONFIG_SECURITY_SMACK_APPEND_SIGNALS is not set
+CONFIG_SECURITY_TOMOYO=y
+CONFIG_SECURITY_TOMOYO_MAX_ACCEPT_ENTRY=2048
+CONFIG_SECURITY_TOMOYO_MAX_AUDIT_LOG=1024
+# CONFIG_SECURITY_TOMOYO_OMIT_USERSPACE_LOADER is not set
+CONFIG_SECURITY_TOMOYO_POLICY_LOADER="/sbin/tomoyo-init"
+CONFIG_SECURITY_TOMOYO_ACTIVATION_TRIGGER="/sbin/init"
+# CONFIG_SECURITY_TOMOYO_INSECURE_BUILTIN_SETTING is not set
+CONFIG_SECURITY_APPARMOR=y
+CONFIG_SECURITY_APPARMOR_HASH=y
+CONFIG_SECURITY_APPARMOR_HASH_DEFAULT=y
+# CONFIG_SECURITY_APPARMOR_DEBUG is not set
+# CONFIG_SECURITY_LOADPIN is not set
+CONFIG_SECURITY_YAMA=y
+# CONFIG_SECURITY_SAFESETID is not set
+# CONFIG_SECURITY_LOCKDOWN_LSM is not set
+CONFIG_INTEGRITY=y
+CONFIG_INTEGRITY_SIGNATURE=y
+CONFIG_INTEGRITY_ASYMMETRIC_KEYS=y
+CONFIG_INTEGRITY_TRUSTED_KEYRING=y
+CONFIG_INTEGRITY_AUDIT=y
+CONFIG_IMA=y
+CONFIG_IMA_MEASURE_PCR_IDX=10
+CONFIG_IMA_LSM_RULES=y
+# CONFIG_IMA_TEMPLATE is not set
+CONFIG_IMA_NG_TEMPLATE=y
+# CONFIG_IMA_SIG_TEMPLATE is not set
+CONFIG_IMA_DEFAULT_TEMPLATE="ima-ng"
+CONFIG_IMA_DEFAULT_HASH_SHA1=y
+# CONFIG_IMA_DEFAULT_HASH_SHA256 is not set
+# CONFIG_IMA_DEFAULT_HASH_SHA512 is not set
+CONFIG_IMA_DEFAULT_HASH="sha1"
+# CONFIG_IMA_WRITE_POLICY is not set
+# CONFIG_IMA_READ_POLICY is not set
+CONFIG_IMA_APPRAISE=y
+# CONFIG_IMA_ARCH_POLICY is not set
+# CONFIG_IMA_APPRAISE_BUILD_POLICY is not set
+CONFIG_IMA_APPRAISE_BOOTPARAM=y
+# CONFIG_IMA_APPRAISE_MODSIG is not set
+CONFIG_IMA_TRUSTED_KEYRING=y
+# CONFIG_IMA_BLACKLIST_KEYRING is not set
+# CONFIG_IMA_LOAD_X509 is not set
+CONFIG_IMA_MEASURE_ASYMMETRIC_KEYS=y
+CONFIG_IMA_QUEUE_EARLY_BOOT_KEYS=y
+# CONFIG_IMA_SECURE_AND_OR_TRUSTED_BOOT is not set
+CONFIG_EVM=y
+CONFIG_EVM_ATTR_FSUUID=y
+CONFIG_EVM_EXTRA_SMACK_XATTRS=y
+# CONFIG_EVM_ADD_XATTRS is not set
+# CONFIG_EVM_LOAD_X509 is not set
+# CONFIG_DEFAULT_SECURITY_SELINUX is not set
+# CONFIG_DEFAULT_SECURITY_SMACK is not set
+# CONFIG_DEFAULT_SECURITY_TOMOYO is not set
+CONFIG_DEFAULT_SECURITY_APPARMOR=y
+# CONFIG_DEFAULT_SECURITY_DAC is not set
+CONFIG_LSM="lockdown,yama,loadpin,safesetid,integrity,apparmor,selinux,smack,tomoyo,bpf"
+
+#
+# Kernel hardening options
+#
+
+#
+# Memory initialization
+#
+CONFIG_INIT_STACK_NONE=y
+# CONFIG_INIT_ON_ALLOC_DEFAULT_ON is not set
+# CONFIG_INIT_ON_FREE_DEFAULT_ON is not set
+# end of Memory initialization
+# end of Kernel hardening options
+# end of Security options
+
+CONFIG_XOR_BLOCKS=m
+CONFIG_ASYNC_CORE=m
+CONFIG_ASYNC_MEMCPY=m
+CONFIG_ASYNC_XOR=m
+CONFIG_ASYNC_PQ=m
+CONFIG_ASYNC_RAID6_RECOV=m
+CONFIG_CRYPTO=y
+
+#
+# Crypto core or helper
+#
+CONFIG_CRYPTO_ALGAPI=y
+CONFIG_CRYPTO_ALGAPI2=y
+CONFIG_CRYPTO_AEAD=y
+CONFIG_CRYPTO_AEAD2=y
+CONFIG_CRYPTO_SKCIPHER=y
+CONFIG_CRYPTO_SKCIPHER2=y
+CONFIG_CRYPTO_HASH=y
+CONFIG_CRYPTO_HASH2=y
+CONFIG_CRYPTO_RNG=y
+CONFIG_CRYPTO_RNG2=y
+CONFIG_CRYPTO_RNG_DEFAULT=y
+CONFIG_CRYPTO_AKCIPHER2=y
+CONFIG_CRYPTO_AKCIPHER=y
+CONFIG_CRYPTO_KPP2=y
+CONFIG_CRYPTO_KPP=m
+CONFIG_CRYPTO_ACOMP2=y
+CONFIG_CRYPTO_MANAGER=y
+CONFIG_CRYPTO_MANAGER2=y
+CONFIG_CRYPTO_USER=m
+CONFIG_CRYPTO_MANAGER_DISABLE_TESTS=y
+CONFIG_CRYPTO_GF128MUL=y
+CONFIG_CRYPTO_NULL=y
+CONFIG_CRYPTO_NULL2=y
+CONFIG_CRYPTO_PCRYPT=m
+CONFIG_CRYPTO_CRYPTD=m
+CONFIG_CRYPTO_AUTHENC=m
+CONFIG_CRYPTO_TEST=m
+CONFIG_CRYPTO_SIMD=m
+CONFIG_CRYPTO_GLUE_HELPER_X86=m
+CONFIG_CRYPTO_ENGINE=m
+
+#
+# Public-key cryptography
+#
+CONFIG_CRYPTO_RSA=y
+CONFIG_CRYPTO_DH=m
+# CONFIG_CRYPTO_ECDH is not set
+# CONFIG_CRYPTO_ECRDSA is not set
+# CONFIG_CRYPTO_CURVE25519 is not set
+# CONFIG_CRYPTO_CURVE25519_X86 is not set
+
+#
+# Authenticated Encryption with Associated Data
+#
+CONFIG_CRYPTO_CCM=m
+CONFIG_CRYPTO_GCM=y
+# CONFIG_CRYPTO_CHACHA20POLY1305 is not set
+# CONFIG_CRYPTO_AEGIS128 is not set
+# CONFIG_CRYPTO_AEGIS128_AESNI_SSE2 is not set
+CONFIG_CRYPTO_SEQIV=y
+CONFIG_CRYPTO_ECHAINIV=m
+
+#
+# Block modes
+#
+CONFIG_CRYPTO_CBC=y
+# CONFIG_CRYPTO_CFB is not set
+CONFIG_CRYPTO_CTR=y
+CONFIG_CRYPTO_CTS=m
+CONFIG_CRYPTO_ECB=y
+CONFIG_CRYPTO_LRW=m
+# CONFIG_CRYPTO_OFB is not set
+CONFIG_CRYPTO_PCBC=m
+CONFIG_CRYPTO_XTS=m
+# CONFIG_CRYPTO_KEYWRAP is not set
+# CONFIG_CRYPTO_NHPOLY1305_SSE2 is not set
+# CONFIG_CRYPTO_NHPOLY1305_AVX2 is not set
+# CONFIG_CRYPTO_ADIANTUM is not set
+CONFIG_CRYPTO_ESSIV=m
+
+#
+# Hash modes
+#
+CONFIG_CRYPTO_CMAC=m
+CONFIG_CRYPTO_HMAC=y
+CONFIG_CRYPTO_XCBC=m
+CONFIG_CRYPTO_VMAC=m
+
+#
+# Digest
+#
+CONFIG_CRYPTO_CRC32C=y
+CONFIG_CRYPTO_CRC32C_INTEL=y
+CONFIG_CRYPTO_CRC32=m
+CONFIG_CRYPTO_CRC32_PCLMUL=m
+CONFIG_CRYPTO_XXHASH=m
+CONFIG_CRYPTO_BLAKE2B=m
+# CONFIG_CRYPTO_BLAKE2S is not set
+# CONFIG_CRYPTO_BLAKE2S_X86 is not set
+CONFIG_CRYPTO_CRCT10DIF=y
+CONFIG_CRYPTO_CRCT10DIF_PCLMUL=m
+CONFIG_CRYPTO_GHASH=y
+# CONFIG_CRYPTO_POLY1305 is not set
+# CONFIG_CRYPTO_POLY1305_X86_64 is not set
+CONFIG_CRYPTO_MD4=m
+CONFIG_CRYPTO_MD5=y
+CONFIG_CRYPTO_MICHAEL_MIC=m
+CONFIG_CRYPTO_RMD128=m
+CONFIG_CRYPTO_RMD160=m
+CONFIG_CRYPTO_RMD256=m
+CONFIG_CRYPTO_RMD320=m
+CONFIG_CRYPTO_SHA1=y
+CONFIG_CRYPTO_SHA1_SSSE3=m
+CONFIG_CRYPTO_SHA256_SSSE3=m
+CONFIG_CRYPTO_SHA512_SSSE3=m
+CONFIG_CRYPTO_SHA256=y
+CONFIG_CRYPTO_SHA512=y
+# CONFIG_CRYPTO_SHA3 is not set
+# CONFIG_CRYPTO_SM3 is not set
+# CONFIG_CRYPTO_STREEBOG is not set
+CONFIG_CRYPTO_TGR192=m
+CONFIG_CRYPTO_WP512=m
+CONFIG_CRYPTO_GHASH_CLMUL_NI_INTEL=m
+
+#
+# Ciphers
+#
+CONFIG_CRYPTO_AES=y
+# CONFIG_CRYPTO_AES_TI is not set
+CONFIG_CRYPTO_AES_NI_INTEL=m
+CONFIG_CRYPTO_ANUBIS=m
+CONFIG_CRYPTO_ARC4=m
+CONFIG_CRYPTO_BLOWFISH=m
+CONFIG_CRYPTO_BLOWFISH_COMMON=m
+CONFIG_CRYPTO_BLOWFISH_X86_64=m
+CONFIG_CRYPTO_CAMELLIA=m
+CONFIG_CRYPTO_CAMELLIA_X86_64=m
+CONFIG_CRYPTO_CAMELLIA_AESNI_AVX_X86_64=m
+CONFIG_CRYPTO_CAMELLIA_AESNI_AVX2_X86_64=m
+CONFIG_CRYPTO_CAST_COMMON=m
+CONFIG_CRYPTO_CAST5=m
+CONFIG_CRYPTO_CAST5_AVX_X86_64=m
+CONFIG_CRYPTO_CAST6=m
+CONFIG_CRYPTO_CAST6_AVX_X86_64=m
+CONFIG_CRYPTO_DES=m
+CONFIG_CRYPTO_DES3_EDE_X86_64=m
+CONFIG_CRYPTO_FCRYPT=m
+CONFIG_CRYPTO_KHAZAD=m
+CONFIG_CRYPTO_SALSA20=m
+# CONFIG_CRYPTO_CHACHA20 is not set
+# CONFIG_CRYPTO_CHACHA20_X86_64 is not set
+CONFIG_CRYPTO_SEED=m
+CONFIG_CRYPTO_SERPENT=m
+CONFIG_CRYPTO_SERPENT_SSE2_X86_64=m
+CONFIG_CRYPTO_SERPENT_AVX_X86_64=m
+CONFIG_CRYPTO_SERPENT_AVX2_X86_64=m
+# CONFIG_CRYPTO_SM4 is not set
+CONFIG_CRYPTO_TEA=m
+CONFIG_CRYPTO_TWOFISH=m
+CONFIG_CRYPTO_TWOFISH_COMMON=m
+CONFIG_CRYPTO_TWOFISH_X86_64=m
+CONFIG_CRYPTO_TWOFISH_X86_64_3WAY=m
+CONFIG_CRYPTO_TWOFISH_AVX_X86_64=m
+
+#
+# Compression
+#
+CONFIG_CRYPTO_DEFLATE=y
+CONFIG_CRYPTO_LZO=y
+# CONFIG_CRYPTO_842 is not set
+CONFIG_CRYPTO_LZ4=m
+CONFIG_CRYPTO_LZ4HC=m
+# CONFIG_CRYPTO_ZSTD is not set
+
+#
+# Random Number Generation
+#
+CONFIG_CRYPTO_ANSI_CPRNG=m
+CONFIG_CRYPTO_DRBG_MENU=y
+CONFIG_CRYPTO_DRBG_HMAC=y
+CONFIG_CRYPTO_DRBG_HASH=y
+CONFIG_CRYPTO_DRBG_CTR=y
+CONFIG_CRYPTO_DRBG=y
+CONFIG_CRYPTO_JITTERENTROPY=y
+CONFIG_CRYPTO_USER_API=m
+CONFIG_CRYPTO_USER_API_HASH=m
+CONFIG_CRYPTO_USER_API_SKCIPHER=m
+# CONFIG_CRYPTO_USER_API_RNG is not set
+# CONFIG_CRYPTO_USER_API_AEAD is not set
+# CONFIG_CRYPTO_STATS is not set
+CONFIG_CRYPTO_HASH_INFO=y
+
+#
+# Crypto library routines
+#
+CONFIG_CRYPTO_LIB_AES=y
+CONFIG_CRYPTO_LIB_ARC4=m
+# CONFIG_CRYPTO_LIB_BLAKE2S is not set
+# CONFIG_CRYPTO_LIB_CHACHA is not set
+# CONFIG_CRYPTO_LIB_CURVE25519 is not set
+CONFIG_CRYPTO_LIB_DES=m
+CONFIG_CRYPTO_LIB_POLY1305_RSIZE=11
+# CONFIG_CRYPTO_LIB_POLY1305 is not set
+# CONFIG_CRYPTO_LIB_CHACHA20POLY1305 is not set
+CONFIG_CRYPTO_LIB_SHA256=y
+CONFIG_CRYPTO_HW=y
+CONFIG_CRYPTO_DEV_PADLOCK=y
+CONFIG_CRYPTO_DEV_PADLOCK_AES=m
+CONFIG_CRYPTO_DEV_PADLOCK_SHA=m
+# CONFIG_CRYPTO_DEV_ATMEL_ECC is not set
+# CONFIG_CRYPTO_DEV_ATMEL_SHA204A is not set
+CONFIG_CRYPTO_DEV_CCP=y
+CONFIG_CRYPTO_DEV_CCP_DD=m
+CONFIG_CRYPTO_DEV_SP_CCP=y
+CONFIG_CRYPTO_DEV_CCP_CRYPTO=m
+CONFIG_CRYPTO_DEV_SP_PSP=y
+# CONFIG_CRYPTO_DEV_CCP_DEBUGFS is not set
+CONFIG_CRYPTO_DEV_QAT=m
+CONFIG_CRYPTO_DEV_QAT_DH895xCC=m
+# CONFIG_CRYPTO_DEV_QAT_C3XXX is not set
+# CONFIG_CRYPTO_DEV_QAT_C62X is not set
+# CONFIG_CRYPTO_DEV_QAT_DH895xCCVF is not set
+# CONFIG_CRYPTO_DEV_QAT_C3XXXVF is not set
+# CONFIG_CRYPTO_DEV_QAT_C62XVF is not set
+# CONFIG_CRYPTO_DEV_NITROX_CNN55XX is not set
+# CONFIG_CRYPTO_DEV_CHELSIO is not set
+CONFIG_CRYPTO_DEV_VIRTIO=m
+# CONFIG_CRYPTO_DEV_SAFEXCEL is not set
+# CONFIG_CRYPTO_DEV_AMLOGIC_GXL is not set
+CONFIG_ASYMMETRIC_KEY_TYPE=y
+CONFIG_ASYMMETRIC_PUBLIC_KEY_SUBTYPE=y
+# CONFIG_ASYMMETRIC_TPM_KEY_SUBTYPE is not set
+CONFIG_X509_CERTIFICATE_PARSER=y
+# CONFIG_PKCS8_PRIVATE_KEY_PARSER is not set
+CONFIG_PKCS7_MESSAGE_PARSER=y
+CONFIG_PKCS7_TEST_KEY=m
+# CONFIG_SIGNED_PE_FILE_VERIFICATION is not set
+
+#
+# Certificates for signature checking
+#
+CONFIG_MODULE_SIG_KEY="certs/signing_key.pem"
+CONFIG_SYSTEM_TRUSTED_KEYRING=y
+CONFIG_SYSTEM_TRUSTED_KEYS=""
+# CONFIG_SYSTEM_EXTRA_CERTIFICATE is not set
+# CONFIG_SECONDARY_TRUSTED_KEYRING is not set
+# CONFIG_SYSTEM_BLACKLIST_KEYRING is not set
+# end of Certificates for signature checking
+
+CONFIG_BINARY_PRINTF=y
+
+#
+# Library routines
+#
+CONFIG_RAID6_PQ=m
+CONFIG_RAID6_PQ_BENCHMARK=y
+# CONFIG_PACKING is not set
+CONFIG_BITREVERSE=y
+CONFIG_GENERIC_STRNCPY_FROM_USER=y
+CONFIG_GENERIC_STRNLEN_USER=y
+CONFIG_GENERIC_NET_UTILS=y
+CONFIG_GENERIC_FIND_FIRST_BIT=y
+CONFIG_CORDIC=m
+CONFIG_RATIONAL=y
+CONFIG_GENERIC_PCI_IOMAP=y
+CONFIG_GENERIC_IOMAP=y
+CONFIG_ARCH_USE_CMPXCHG_LOCKREF=y
+CONFIG_ARCH_HAS_FAST_MULTIPLIER=y
+CONFIG_CRC_CCITT=y
+CONFIG_CRC16=y
+CONFIG_CRC_T10DIF=y
+CONFIG_CRC_ITU_T=m
+CONFIG_CRC32=y
+# CONFIG_CRC32_SELFTEST is not set
+CONFIG_CRC32_SLICEBY8=y
+# CONFIG_CRC32_SLICEBY4 is not set
+# CONFIG_CRC32_SARWATE is not set
+# CONFIG_CRC32_BIT is not set
+CONFIG_CRC64=m
+# CONFIG_CRC4 is not set
+CONFIG_CRC7=m
+CONFIG_LIBCRC32C=m
+CONFIG_CRC8=m
+CONFIG_XXHASH=y
+# CONFIG_RANDOM32_SELFTEST is not set
+CONFIG_ZLIB_INFLATE=y
+CONFIG_ZLIB_DEFLATE=y
+CONFIG_LZO_COMPRESS=y
+CONFIG_LZO_DECOMPRESS=y
+CONFIG_LZ4_COMPRESS=m
+CONFIG_LZ4HC_COMPRESS=m
+CONFIG_LZ4_DECOMPRESS=y
+CONFIG_ZSTD_COMPRESS=m
+CONFIG_ZSTD_DECOMPRESS=m
+CONFIG_XZ_DEC=y
+CONFIG_XZ_DEC_X86=y
+CONFIG_XZ_DEC_POWERPC=y
+CONFIG_XZ_DEC_IA64=y
+CONFIG_XZ_DEC_ARM=y
+CONFIG_XZ_DEC_ARMTHUMB=y
+CONFIG_XZ_DEC_SPARC=y
+CONFIG_XZ_DEC_BCJ=y
+CONFIG_XZ_DEC_TEST=m
+CONFIG_DECOMPRESS_GZIP=y
+CONFIG_DECOMPRESS_BZIP2=y
+CONFIG_DECOMPRESS_LZMA=y
+CONFIG_DECOMPRESS_XZ=y
+CONFIG_DECOMPRESS_LZO=y
+CONFIG_DECOMPRESS_LZ4=y
+CONFIG_GENERIC_ALLOCATOR=y
+CONFIG_REED_SOLOMON=m
+CONFIG_REED_SOLOMON_ENC8=y
+CONFIG_REED_SOLOMON_DEC8=y
+CONFIG_TEXTSEARCH=y
+CONFIG_TEXTSEARCH_KMP=m
+CONFIG_TEXTSEARCH_BM=m
+CONFIG_TEXTSEARCH_FSM=m
+CONFIG_BTREE=y
+CONFIG_INTERVAL_TREE=y
+CONFIG_XARRAY_MULTI=y
+CONFIG_ASSOCIATIVE_ARRAY=y
+CONFIG_HAS_IOMEM=y
+CONFIG_HAS_IOPORT_MAP=y
+CONFIG_HAS_DMA=y
+CONFIG_NEED_SG_DMA_LENGTH=y
+CONFIG_NEED_DMA_MAP_STATE=y
+CONFIG_ARCH_DMA_ADDR_T_64BIT=y
+CONFIG_SWIOTLB=y
+# CONFIG_DMA_CMA is not set
+# CONFIG_DMA_API_DEBUG is not set
+CONFIG_SGL_ALLOC=y
+CONFIG_IOMMU_HELPER=y
+CONFIG_CHECK_SIGNATURE=y
+CONFIG_CPU_RMAP=y
+CONFIG_DQL=y
+CONFIG_GLOB=y
+# CONFIG_GLOB_SELFTEST is not set
+CONFIG_NLATTR=y
+CONFIG_LRU_CACHE=m
+CONFIG_CLZ_TAB=y
+CONFIG_IRQ_POLL=y
+CONFIG_MPILIB=y
+CONFIG_SIGNATURE=y
+CONFIG_DIMLIB=y
+CONFIG_OID_REGISTRY=y
+CONFIG_UCS2_STRING=y
+CONFIG_HAVE_GENERIC_VDSO=y
+CONFIG_GENERIC_GETTIMEOFDAY=y
+CONFIG_GENERIC_VDSO_TIME_NS=y
+CONFIG_FONT_SUPPORT=y
+CONFIG_FONT_8x16=y
+CONFIG_FONT_AUTOSELECT=y
+CONFIG_SG_POOL=y
+CONFIG_ARCH_HAS_PMEM_API=y
+CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE=y
+CONFIG_ARCH_HAS_UACCESS_MCSAFE=y
+CONFIG_ARCH_STACKWALK=y
+CONFIG_SBITMAP=y
+# CONFIG_STRING_SELFTEST is not set
+# end of Library routines
+
+#
+# Kernel hacking
+#
+
+#
+# printk and dmesg options
+#
+CONFIG_PRINTK_TIME=y
+# CONFIG_PRINTK_CALLER is not set
+CONFIG_CONSOLE_LOGLEVEL_DEFAULT=7
+CONFIG_CONSOLE_LOGLEVEL_QUIET=4
+CONFIG_MESSAGE_LOGLEVEL_DEFAULT=4
+CONFIG_BOOT_PRINTK_DELAY=y
+CONFIG_DYNAMIC_DEBUG=y
+CONFIG_SYMBOLIC_ERRNAME=y
+CONFIG_DEBUG_BUGVERBOSE=y
+# end of printk and dmesg options
+
+#
+# Compile-time checks and compiler options
+#
+CONFIG_DEBUG_INFO=y
+CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y
+# CONFIG_DEBUG_INFO_REDUCED is not set
+# CONFIG_DEBUG_INFO_SPLIT is not set
+# CONFIG_DEBUG_INFO_DWARF4 is not set
+# CONFIG_DEBUG_INFO_BTF is not set
+# CONFIG_GDB_SCRIPTS is not set
+CONFIG_ENABLE_MUST_CHECK=y
+CONFIG_FRAME_WARN=1024
+# CONFIG_STRIP_ASM_SYMS is not set
+CONFIG_READABLE_ASM=y
+# CONFIG_HEADERS_INSTALL is not set
+# CONFIG_DEBUG_SECTION_MISMATCH is not set
+CONFIG_SECTION_MISMATCH_WARN_ONLY=y
+CONFIG_STACK_VALIDATION=y
+# CONFIG_DEBUG_FORCE_WEAK_PER_CPU is not set
+# end of Compile-time checks and compiler options
+
+#
+# Generic Kernel Debugging Instruments
+#
+CONFIG_MAGIC_SYSRQ=y
+CONFIG_MAGIC_SYSRQ_DEFAULT_ENABLE=0x1
+CONFIG_MAGIC_SYSRQ_SERIAL=y
+CONFIG_MAGIC_SYSRQ_SERIAL_SEQUENCE=""
+CONFIG_DEBUG_FS=y
+CONFIG_HAVE_ARCH_KGDB=y
+CONFIG_KGDB=y
+CONFIG_KGDB_SERIAL_CONSOLE=y
+# CONFIG_KGDB_TESTS is not set
+CONFIG_KGDB_LOW_LEVEL_TRAP=y
+CONFIG_KGDB_KDB=y
+CONFIG_KDB_DEFAULT_ENABLE=0x1
+CONFIG_KDB_KEYBOARD=y
+CONFIG_KDB_CONTINUE_CATASTROPHIC=0
+CONFIG_ARCH_HAS_UBSAN_SANITIZE_ALL=y
+# CONFIG_UBSAN is not set
+# end of Generic Kernel Debugging Instruments
+
+CONFIG_DEBUG_KERNEL=y
+CONFIG_DEBUG_MISC=y
+
+#
+# Memory Debugging
+#
+# CONFIG_PAGE_EXTENSION is not set
+# CONFIG_DEBUG_PAGEALLOC is not set
+# CONFIG_PAGE_OWNER is not set
+# CONFIG_PAGE_POISONING is not set
+# CONFIG_DEBUG_PAGE_REF is not set
+# CONFIG_DEBUG_RODATA_TEST is not set
+CONFIG_GENERIC_PTDUMP=y
+# CONFIG_PTDUMP_DEBUGFS is not set
+# CONFIG_DEBUG_OBJECTS is not set
+# CONFIG_SLUB_DEBUG_ON is not set
+# CONFIG_SLUB_STATS is not set
+CONFIG_HAVE_DEBUG_KMEMLEAK=y
+# CONFIG_DEBUG_KMEMLEAK is not set
+# CONFIG_DEBUG_STACK_USAGE is not set
+CONFIG_SCHED_STACK_END_CHECK=y
+CONFIG_DEBUG_VM=y
+# CONFIG_DEBUG_VM_VMACACHE is not set
+# CONFIG_DEBUG_VM_RB is not set
+# CONFIG_DEBUG_VM_PGFLAGS is not set
+CONFIG_ARCH_HAS_DEBUG_VIRTUAL=y
+# CONFIG_DEBUG_VIRTUAL is not set
+# CONFIG_DEBUG_MEMORY_INIT is not set
+CONFIG_MEMORY_NOTIFIER_ERROR_INJECT=m
+# CONFIG_DEBUG_PER_CPU_MAPS is not set
+CONFIG_HAVE_ARCH_KASAN=y
+CONFIG_HAVE_ARCH_KASAN_VMALLOC=y
+CONFIG_CC_HAS_KASAN_GENERIC=y
+# CONFIG_KASAN is not set
+CONFIG_KASAN_STACK=1
+# end of Memory Debugging
+
+# CONFIG_DEBUG_SHIRQ is not set
+
+#
+# Debug Oops, Lockups and Hangs
+#
+# CONFIG_PANIC_ON_OOPS is not set
+CONFIG_PANIC_ON_OOPS_VALUE=0
+CONFIG_PANIC_TIMEOUT=0
+CONFIG_LOCKUP_DETECTOR=y
+CONFIG_SOFTLOCKUP_DETECTOR=y
+# CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC is not set
+CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC_VALUE=0
+CONFIG_HARDLOCKUP_DETECTOR_PERF=y
+CONFIG_HARDLOCKUP_CHECK_TIMESTAMP=y
+CONFIG_HARDLOCKUP_DETECTOR=y
+# CONFIG_BOOTPARAM_HARDLOCKUP_PANIC is not set
+CONFIG_BOOTPARAM_HARDLOCKUP_PANIC_VALUE=0
+CONFIG_DETECT_HUNG_TASK=y
+CONFIG_DEFAULT_HUNG_TASK_TIMEOUT=120
+# CONFIG_BOOTPARAM_HUNG_TASK_PANIC is not set
+CONFIG_BOOTPARAM_HUNG_TASK_PANIC_VALUE=0
+# CONFIG_WQ_WATCHDOG is not set
+# CONFIG_TEST_LOCKUP is not set
+# end of Debug Oops, Lockups and Hangs
+
+#
+# Scheduler Debugging
+#
+CONFIG_SCHED_DEBUG=y
+CONFIG_SCHED_INFO=y
+CONFIG_SCHEDSTATS=y
+# end of Scheduler Debugging
+
+# CONFIG_DEBUG_TIMEKEEPING is not set
+
+#
+# Lock Debugging (spinlocks, mutexes, etc...)
+#
+CONFIG_LOCK_DEBUGGING_SUPPORT=y
+CONFIG_PROVE_LOCKING=y
+# CONFIG_PROVE_RAW_LOCK_NESTING is not set
+# CONFIG_LOCK_STAT is not set
+CONFIG_DEBUG_RT_MUTEXES=y
+CONFIG_DEBUG_SPINLOCK=y
+CONFIG_DEBUG_MUTEXES=y
+CONFIG_DEBUG_WW_MUTEX_SLOWPATH=y
+CONFIG_DEBUG_RWSEMS=y
+CONFIG_DEBUG_LOCK_ALLOC=y
+CONFIG_LOCKDEP=y
+# CONFIG_DEBUG_LOCKDEP is not set
+CONFIG_DEBUG_ATOMIC_SLEEP=y
+# CONFIG_DEBUG_LOCKING_API_SELFTESTS is not set
+CONFIG_LOCK_TORTURE_TEST=m
+# CONFIG_WW_MUTEX_SELFTEST is not set
+# end of Lock Debugging (spinlocks, mutexes, etc...)
+
+CONFIG_TRACE_IRQFLAGS=y
+CONFIG_STACKTRACE=y
+# CONFIG_WARN_ALL_UNSEEDED_RANDOM is not set
+# CONFIG_DEBUG_KOBJECT is not set
+
+#
+# Debug kernel data structures
+#
+CONFIG_DEBUG_LIST=y
+# CONFIG_DEBUG_PLIST is not set
+# CONFIG_DEBUG_SG is not set
+# CONFIG_DEBUG_NOTIFIERS is not set
+# CONFIG_BUG_ON_DATA_CORRUPTION is not set
+# end of Debug kernel data structures
+
+# CONFIG_DEBUG_CREDENTIALS is not set
+
+#
+# RCU Debugging
+#
+CONFIG_PROVE_RCU=y
+CONFIG_TORTURE_TEST=m
+# CONFIG_RCU_PERF_TEST is not set
+# CONFIG_RCU_TORTURE_TEST is not set
+CONFIG_RCU_CPU_STALL_TIMEOUT=60
+# CONFIG_RCU_TRACE is not set
+# CONFIG_RCU_EQS_DEBUG is not set
+# end of RCU Debugging
+
+# CONFIG_DEBUG_WQ_FORCE_RR_CPU is not set
+# CONFIG_DEBUG_BLOCK_EXT_DEVT is not set
+# CONFIG_CPU_HOTPLUG_STATE_CONTROL is not set
+CONFIG_LATENCYTOP=y
+CONFIG_USER_STACKTRACE_SUPPORT=y
+CONFIG_NOP_TRACER=y
+CONFIG_HAVE_FUNCTION_TRACER=y
+CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y
+CONFIG_HAVE_DYNAMIC_FTRACE=y
+CONFIG_HAVE_DYNAMIC_FTRACE_WITH_REGS=y
+CONFIG_HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS=y
+CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y
+CONFIG_HAVE_SYSCALL_TRACEPOINTS=y
+CONFIG_HAVE_FENTRY=y
+CONFIG_HAVE_C_RECORDMCOUNT=y
+CONFIG_TRACER_MAX_TRACE=y
+CONFIG_TRACE_CLOCK=y
+CONFIG_RING_BUFFER=y
+CONFIG_EVENT_TRACING=y
+CONFIG_CONTEXT_SWITCH_TRACER=y
+CONFIG_RING_BUFFER_ALLOW_SWAP=y
+CONFIG_PREEMPTIRQ_TRACEPOINTS=y
+CONFIG_TRACING=y
+CONFIG_GENERIC_TRACER=y
+CONFIG_TRACING_SUPPORT=y
+CONFIG_FTRACE=y
+# CONFIG_BOOTTIME_TRACING is not set
+CONFIG_FUNCTION_TRACER=y
+CONFIG_FUNCTION_GRAPH_TRACER=y
+CONFIG_DYNAMIC_FTRACE=y
+CONFIG_DYNAMIC_FTRACE_WITH_REGS=y
+CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS=y
+CONFIG_FUNCTION_PROFILER=y
+CONFIG_STACK_TRACER=y
+# CONFIG_PREEMPTIRQ_EVENTS is not set
+# CONFIG_IRQSOFF_TRACER is not set
+CONFIG_SCHED_TRACER=y
+# CONFIG_HWLAT_TRACER is not set
+CONFIG_MMIOTRACE=y
+CONFIG_FTRACE_SYSCALLS=y
+CONFIG_TRACER_SNAPSHOT=y
+# CONFIG_TRACER_SNAPSHOT_PER_CPU_SWAP is not set
+CONFIG_BRANCH_PROFILE_NONE=y
+# CONFIG_PROFILE_ANNOTATED_BRANCHES is not set
+# CONFIG_PROFILE_ALL_BRANCHES is not set
+CONFIG_BLK_DEV_IO_TRACE=y
+CONFIG_KPROBE_EVENTS=y
+# CONFIG_KPROBE_EVENTS_ON_NOTRACE is not set
+CONFIG_UPROBE_EVENTS=y
+CONFIG_BPF_EVENTS=y
+CONFIG_DYNAMIC_EVENTS=y
+CONFIG_PROBE_EVENTS=y
+# CONFIG_BPF_KPROBE_OVERRIDE is not set
+CONFIG_FTRACE_MCOUNT_RECORD=y
+# CONFIG_HIST_TRIGGERS is not set
+# CONFIG_TRACE_EVENT_INJECT is not set
+# CONFIG_TRACEPOINT_BENCHMARK is not set
+# CONFIG_RING_BUFFER_BENCHMARK is not set
+# CONFIG_TRACE_EVAL_MAP_FILE is not set
+# CONFIG_FTRACE_STARTUP_TEST is not set
+# CONFIG_RING_BUFFER_STARTUP_TEST is not set
+# CONFIG_MMIOTRACE_TEST is not set
+# CONFIG_PREEMPTIRQ_DELAY_TEST is not set
+# CONFIG_KPROBE_EVENT_GEN_TEST is not set
+# CONFIG_PROVIDE_OHCI1394_DMA_INIT is not set
+# CONFIG_SAMPLES is not set
+CONFIG_ARCH_HAS_DEVMEM_IS_ALLOWED=y
+CONFIG_STRICT_DEVMEM=y
+# CONFIG_IO_STRICT_DEVMEM is not set
+
+#
+# x86 Debugging
+#
+CONFIG_TRACE_IRQFLAGS_SUPPORT=y
+CONFIG_EARLY_PRINTK_USB=y
+# CONFIG_X86_VERBOSE_BOOTUP is not set
+CONFIG_EARLY_PRINTK=y
+CONFIG_EARLY_PRINTK_DBGP=y
+# CONFIG_EARLY_PRINTK_USB_XDBC is not set
+# CONFIG_EFI_PGT_DUMP is not set
+# CONFIG_DEBUG_WX is not set
+CONFIG_DOUBLEFAULT=y
+# CONFIG_DEBUG_TLBFLUSH is not set
+# CONFIG_IOMMU_DEBUG is not set
+CONFIG_HAVE_MMIOTRACE_SUPPORT=y
+# CONFIG_X86_DECODER_SELFTEST is not set
+# CONFIG_IO_DELAY_0X80 is not set
+CONFIG_IO_DELAY_0XED=y
+# CONFIG_IO_DELAY_UDELAY is not set
+# CONFIG_IO_DELAY_NONE is not set
+# CONFIG_DEBUG_BOOT_PARAMS is not set
+# CONFIG_CPA_DEBUG is not set
+# CONFIG_DEBUG_ENTRY is not set
+# CONFIG_DEBUG_NMI_SELFTEST is not set
+CONFIG_X86_DEBUG_FPU=y
+# CONFIG_PUNIT_ATOM_DEBUG is not set
+CONFIG_UNWINDER_ORC=y
+# CONFIG_UNWINDER_FRAME_POINTER is not set
+# CONFIG_UNWINDER_GUESS is not set
+# end of x86 Debugging
+
+#
+# Kernel Testing and Coverage
+#
+# CONFIG_KUNIT is not set
+CONFIG_NOTIFIER_ERROR_INJECTION=m
+CONFIG_PM_NOTIFIER_ERROR_INJECT=m
+# CONFIG_NETDEV_NOTIFIER_ERROR_INJECT is not set
+CONFIG_FUNCTION_ERROR_INJECTION=y
+# CONFIG_FAULT_INJECTION is not set
+CONFIG_ARCH_HAS_KCOV=y
+CONFIG_CC_HAS_SANCOV_TRACE_PC=y
+# CONFIG_KCOV is not set
+CONFIG_RUNTIME_TESTING_MENU=y
+# CONFIG_LKDTM is not set
+# CONFIG_TEST_LIST_SORT is not set
+# CONFIG_TEST_MIN_HEAP is not set
+# CONFIG_TEST_SORT is not set
+# CONFIG_KPROBES_SANITY_TEST is not set
+# CONFIG_BACKTRACE_SELF_TEST is not set
+CONFIG_RBTREE_TEST=m
+# CONFIG_REED_SOLOMON_TEST is not set
+CONFIG_INTERVAL_TREE_TEST=m
+CONFIG_PERCPU_TEST=m
+# CONFIG_ATOMIC64_SELFTEST is not set
+CONFIG_ASYNC_RAID6_TEST=m
+# CONFIG_TEST_HEXDUMP is not set
+CONFIG_TEST_STRING_HELPERS=m
+# CONFIG_TEST_STRSCPY is not set
+CONFIG_TEST_KSTRTOX=m
+# CONFIG_TEST_PRINTF is not set
+# CONFIG_TEST_BITMAP is not set
+# CONFIG_TEST_BITFIELD is not set
+# CONFIG_TEST_UUID is not set
+# CONFIG_TEST_XARRAY is not set
+# CONFIG_TEST_OVERFLOW is not set
+# CONFIG_TEST_RHASHTABLE is not set
+# CONFIG_TEST_HASH is not set
+# CONFIG_TEST_IDA is not set
+CONFIG_TEST_LKM=m
+# CONFIG_TEST_VMALLOC is not set
+CONFIG_TEST_USER_COPY=m
+CONFIG_TEST_BPF=m
+# CONFIG_TEST_BLACKHOLE_DEV is not set
+# CONFIG_FIND_BIT_BENCHMARK is not set
+CONFIG_TEST_FIRMWARE=m
+# CONFIG_TEST_SYSCTL is not set
+CONFIG_TEST_UDELAY=m
+# CONFIG_TEST_STATIC_KEYS is not set
+# CONFIG_TEST_KMOD is not set
+# CONFIG_TEST_MEMCAT_P is not set
+# CONFIG_TEST_STACKINIT is not set
+# CONFIG_TEST_MEMINIT is not set
+CONFIG_MEMTEST=y
+# CONFIG_HYPERV_TESTING is not set
+# end of Kernel Testing and Coverage
+# end of Kernel hacking
+EOF
--- /dev/null
+#! /usr/bin/env bash
+
+cat > .config << EOF
+#
+# Automatically generated file; DO NOT EDIT.
+# Linux/x86 5.7.0-rc5 Kernel Configuration
+#
+
+#
+# Compiler: gcc (GCC) 8.3.1 20190311 (Red Hat 8.3.1-3)
+#
+CONFIG_CC_IS_GCC=y
+CONFIG_GCC_VERSION=80301
+CONFIG_LD_VERSION=230000000
+CONFIG_CLANG_VERSION=0
+CONFIG_CC_CAN_LINK=y
+CONFIG_CC_HAS_ASM_GOTO=y
+CONFIG_CC_HAS_ASM_INLINE=y
+CONFIG_IRQ_WORK=y
+CONFIG_BUILDTIME_TABLE_SORT=y
+CONFIG_THREAD_INFO_IN_TASK=y
+
+#
+# General setup
+#
+CONFIG_INIT_ENV_ARG_LIMIT=32
+# CONFIG_COMPILE_TEST is not set
+CONFIG_LOCALVERSION=""
+CONFIG_LOCALVERSION_AUTO=y
+CONFIG_BUILD_SALT=""
+CONFIG_HAVE_KERNEL_GZIP=y
+CONFIG_HAVE_KERNEL_BZIP2=y
+CONFIG_HAVE_KERNEL_LZMA=y
+CONFIG_HAVE_KERNEL_XZ=y
+CONFIG_HAVE_KERNEL_LZO=y
+CONFIG_HAVE_KERNEL_LZ4=y
+CONFIG_KERNEL_GZIP=y
+# CONFIG_KERNEL_BZIP2 is not set
+# CONFIG_KERNEL_LZMA is not set
+# CONFIG_KERNEL_XZ is not set
+# CONFIG_KERNEL_LZO is not set
+# CONFIG_KERNEL_LZ4 is not set
+CONFIG_DEFAULT_HOSTNAME="(none)"
+CONFIG_SWAP=y
+CONFIG_SYSVIPC=y
+CONFIG_SYSVIPC_SYSCTL=y
+CONFIG_POSIX_MQUEUE=y
+CONFIG_POSIX_MQUEUE_SYSCTL=y
+CONFIG_CROSS_MEMORY_ATTACH=y
+# CONFIG_USELIB is not set
+CONFIG_AUDIT=y
+CONFIG_HAVE_ARCH_AUDITSYSCALL=y
+CONFIG_AUDITSYSCALL=y
+
+#
+# IRQ subsystem
+#
+CONFIG_GENERIC_IRQ_PROBE=y
+CONFIG_GENERIC_IRQ_SHOW=y
+CONFIG_GENERIC_IRQ_EFFECTIVE_AFF_MASK=y
+CONFIG_GENERIC_PENDING_IRQ=y
+CONFIG_GENERIC_IRQ_MIGRATION=y
+CONFIG_GENERIC_IRQ_INJECTION=y
+CONFIG_HARDIRQS_SW_RESEND=y
+CONFIG_IRQ_DOMAIN=y
+CONFIG_IRQ_DOMAIN_HIERARCHY=y
+CONFIG_GENERIC_MSI_IRQ=y
+CONFIG_GENERIC_MSI_IRQ_DOMAIN=y
+CONFIG_IRQ_MSI_IOMMU=y
+CONFIG_GENERIC_IRQ_MATRIX_ALLOCATOR=y
+CONFIG_GENERIC_IRQ_RESERVATION_MODE=y
+CONFIG_IRQ_FORCED_THREADING=y
+CONFIG_SPARSE_IRQ=y
+# CONFIG_GENERIC_IRQ_DEBUGFS is not set
+# end of IRQ subsystem
+
+CONFIG_CLOCKSOURCE_WATCHDOG=y
+CONFIG_ARCH_CLOCKSOURCE_INIT=y
+CONFIG_CLOCKSOURCE_VALIDATE_LAST_CYCLE=y
+CONFIG_GENERIC_TIME_VSYSCALL=y
+CONFIG_GENERIC_CLOCKEVENTS=y
+CONFIG_GENERIC_CLOCKEVENTS_BROADCAST=y
+CONFIG_GENERIC_CLOCKEVENTS_MIN_ADJUST=y
+CONFIG_GENERIC_CMOS_UPDATE=y
+
+#
+# Timers subsystem
+#
+CONFIG_TICK_ONESHOT=y
+CONFIG_NO_HZ_COMMON=y
+# CONFIG_HZ_PERIODIC is not set
+# CONFIG_NO_HZ_IDLE is not set
+CONFIG_NO_HZ_FULL=y
+CONFIG_CONTEXT_TRACKING=y
+# CONFIG_CONTEXT_TRACKING_FORCE is not set
+CONFIG_NO_HZ=y
+CONFIG_HIGH_RES_TIMERS=y
+# end of Timers subsystem
+
+# CONFIG_PREEMPT_NONE is not set
+CONFIG_PREEMPT_VOLUNTARY=y
+# CONFIG_PREEMPT is not set
+CONFIG_PREEMPT_COUNT=y
+
+#
+# CPU/Task time and stats accounting
+#
+CONFIG_VIRT_CPU_ACCOUNTING=y
+CONFIG_VIRT_CPU_ACCOUNTING_GEN=y
+# CONFIG_IRQ_TIME_ACCOUNTING is not set
+CONFIG_HAVE_SCHED_AVG_IRQ=y
+# CONFIG_SCHED_THERMAL_PRESSURE is not set
+CONFIG_BSD_PROCESS_ACCT=y
+CONFIG_BSD_PROCESS_ACCT_V3=y
+CONFIG_TASKSTATS=y
+CONFIG_TASK_DELAY_ACCT=y
+CONFIG_TASK_XACCT=y
+CONFIG_TASK_IO_ACCOUNTING=y
+# CONFIG_PSI is not set
+# end of CPU/Task time and stats accounting
+
+CONFIG_CPU_ISOLATION=y
+
+#
+# RCU Subsystem
+#
+CONFIG_TREE_RCU=y
+# CONFIG_RCU_EXPERT is not set
+CONFIG_SRCU=y
+CONFIG_TREE_SRCU=y
+CONFIG_TASKS_RCU=y
+CONFIG_RCU_STALL_COMMON=y
+CONFIG_RCU_NEED_SEGCBLIST=y
+CONFIG_RCU_NOCB_CPU=y
+# end of RCU Subsystem
+
+CONFIG_BUILD_BIN2C=y
+# CONFIG_IKCONFIG is not set
+# CONFIG_IKHEADERS is not set
+CONFIG_LOG_BUF_SHIFT=18
+CONFIG_LOG_CPU_MAX_BUF_SHIFT=12
+CONFIG_PRINTK_SAFE_LOG_BUF_SHIFT=13
+CONFIG_HAVE_UNSTABLE_SCHED_CLOCK=y
+
+#
+# Scheduler features
+#
+# CONFIG_UCLAMP_TASK is not set
+# end of Scheduler features
+
+CONFIG_ARCH_SUPPORTS_NUMA_BALANCING=y
+CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH=y
+CONFIG_CC_HAS_INT128=y
+CONFIG_ARCH_SUPPORTS_INT128=y
+CONFIG_NUMA_BALANCING=y
+CONFIG_NUMA_BALANCING_DEFAULT_ENABLED=y
+CONFIG_CGROUPS=y
+CONFIG_PAGE_COUNTER=y
+CONFIG_MEMCG=y
+CONFIG_MEMCG_SWAP=y
+CONFIG_MEMCG_SWAP_ENABLED=y
+CONFIG_MEMCG_KMEM=y
+CONFIG_BLK_CGROUP=y
+CONFIG_CGROUP_WRITEBACK=y
+CONFIG_CGROUP_SCHED=y
+CONFIG_FAIR_GROUP_SCHED=y
+CONFIG_CFS_BANDWIDTH=y
+CONFIG_RT_GROUP_SCHED=y
+CONFIG_CGROUP_PIDS=y
+# CONFIG_CGROUP_RDMA is not set
+CONFIG_CGROUP_FREEZER=y
+CONFIG_CGROUP_HUGETLB=y
+CONFIG_CPUSETS=y
+CONFIG_PROC_PID_CPUSET=y
+CONFIG_CGROUP_DEVICE=y
+CONFIG_CGROUP_CPUACCT=y
+CONFIG_CGROUP_PERF=y
+CONFIG_CGROUP_BPF=y
+# CONFIG_CGROUP_DEBUG is not set
+CONFIG_SOCK_CGROUP_DATA=y
+CONFIG_NAMESPACES=y
+CONFIG_UTS_NS=y
+CONFIG_TIME_NS=y
+CONFIG_IPC_NS=y
+CONFIG_USER_NS=y
+CONFIG_PID_NS=y
+CONFIG_NET_NS=y
+# CONFIG_CHECKPOINT_RESTORE is not set
+CONFIG_SCHED_AUTOGROUP=y
+# CONFIG_SYSFS_DEPRECATED is not set
+CONFIG_RELAY=y
+CONFIG_BLK_DEV_INITRD=y
+CONFIG_INITRAMFS_SOURCE=""
+CONFIG_RD_GZIP=y
+CONFIG_RD_BZIP2=y
+CONFIG_RD_LZMA=y
+CONFIG_RD_XZ=y
+CONFIG_RD_LZO=y
+CONFIG_RD_LZ4=y
+# CONFIG_BOOT_CONFIG is not set
+CONFIG_CC_OPTIMIZE_FOR_PERFORMANCE=y
+# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
+CONFIG_SYSCTL=y
+CONFIG_HAVE_UID16=y
+CONFIG_SYSCTL_EXCEPTION_TRACE=y
+CONFIG_HAVE_PCSPKR_PLATFORM=y
+CONFIG_BPF=y
+# CONFIG_EXPERT is not set
+CONFIG_UID16=y
+CONFIG_MULTIUSER=y
+CONFIG_SGETMASK_SYSCALL=y
+CONFIG_SYSFS_SYSCALL=y
+CONFIG_FHANDLE=y
+CONFIG_POSIX_TIMERS=y
+CONFIG_PRINTK=y
+CONFIG_PRINTK_NMI=y
+CONFIG_BUG=y
+CONFIG_ELF_CORE=y
+CONFIG_PCSPKR_PLATFORM=y
+CONFIG_BASE_FULL=y
+CONFIG_FUTEX=y
+CONFIG_FUTEX_PI=y
+CONFIG_EPOLL=y
+CONFIG_SIGNALFD=y
+CONFIG_TIMERFD=y
+CONFIG_EVENTFD=y
+CONFIG_SHMEM=y
+CONFIG_AIO=y
+CONFIG_IO_URING=y
+CONFIG_ADVISE_SYSCALLS=y
+CONFIG_MEMBARRIER=y
+CONFIG_KALLSYMS=y
+CONFIG_KALLSYMS_ALL=y
+CONFIG_KALLSYMS_ABSOLUTE_PERCPU=y
+CONFIG_KALLSYMS_BASE_RELATIVE=y
+CONFIG_BPF_SYSCALL=y
+CONFIG_ARCH_WANT_DEFAULT_BPF_JIT=y
+CONFIG_BPF_JIT_DEFAULT_ON=y
+# CONFIG_USERFAULTFD is not set
+CONFIG_ARCH_HAS_MEMBARRIER_SYNC_CORE=y
+CONFIG_RSEQ=y
+# CONFIG_EMBEDDED is not set
+CONFIG_HAVE_PERF_EVENTS=y
+
+#
+# Kernel Performance Events And Counters
+#
+CONFIG_PERF_EVENTS=y
+# CONFIG_DEBUG_PERF_USE_VMALLOC is not set
+# end of Kernel Performance Events And Counters
+
+CONFIG_VM_EVENT_COUNTERS=y
+CONFIG_SLUB_DEBUG=y
+# CONFIG_COMPAT_BRK is not set
+# CONFIG_SLAB is not set
+CONFIG_SLUB=y
+CONFIG_SLAB_MERGE_DEFAULT=y
+# CONFIG_SLAB_FREELIST_RANDOM is not set
+# CONFIG_SLAB_FREELIST_HARDENED is not set
+# CONFIG_SHUFFLE_PAGE_ALLOCATOR is not set
+CONFIG_SLUB_CPU_PARTIAL=y
+CONFIG_SYSTEM_DATA_VERIFICATION=y
+CONFIG_PROFILING=y
+CONFIG_TRACEPOINTS=y
+# end of General setup
+
+CONFIG_64BIT=y
+CONFIG_X86_64=y
+CONFIG_X86=y
+CONFIG_INSTRUCTION_DECODER=y
+CONFIG_OUTPUT_FORMAT="elf64-x86-64"
+CONFIG_LOCKDEP_SUPPORT=y
+CONFIG_STACKTRACE_SUPPORT=y
+CONFIG_MMU=y
+CONFIG_ARCH_MMAP_RND_BITS_MIN=28
+CONFIG_ARCH_MMAP_RND_BITS_MAX=32
+CONFIG_ARCH_MMAP_RND_COMPAT_BITS_MIN=8
+CONFIG_ARCH_MMAP_RND_COMPAT_BITS_MAX=16
+CONFIG_GENERIC_ISA_DMA=y
+CONFIG_GENERIC_BUG=y
+CONFIG_GENERIC_BUG_RELATIVE_POINTERS=y
+CONFIG_ARCH_MAY_HAVE_PC_FDC=y
+CONFIG_GENERIC_CALIBRATE_DELAY=y
+CONFIG_ARCH_HAS_CPU_RELAX=y
+CONFIG_ARCH_HAS_CACHE_LINE_SIZE=y
+CONFIG_ARCH_HAS_FILTER_PGPROT=y
+CONFIG_HAVE_SETUP_PER_CPU_AREA=y
+CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK=y
+CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK=y
+CONFIG_ARCH_HIBERNATION_POSSIBLE=y
+CONFIG_ARCH_SUSPEND_POSSIBLE=y
+CONFIG_ARCH_WANT_GENERAL_HUGETLB=y
+CONFIG_ZONE_DMA32=y
+CONFIG_AUDIT_ARCH=y
+CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC=y
+CONFIG_HAVE_INTEL_TXT=y
+CONFIG_X86_64_SMP=y
+CONFIG_ARCH_SUPPORTS_UPROBES=y
+CONFIG_FIX_EARLYCON_MEM=y
+CONFIG_PGTABLE_LEVELS=5
+CONFIG_CC_HAS_SANE_STACKPROTECTOR=y
+
+#
+# Processor type and features
+#
+CONFIG_ZONE_DMA=y
+CONFIG_SMP=y
+CONFIG_X86_FEATURE_NAMES=y
+CONFIG_X86_X2APIC=y
+CONFIG_X86_MPPARSE=y
+# CONFIG_GOLDFISH is not set
+# CONFIG_RETPOLINE is not set
+# CONFIG_X86_CPU_RESCTRL is not set
+CONFIG_X86_EXTENDED_PLATFORM=y
+# CONFIG_X86_NUMACHIP is not set
+# CONFIG_X86_VSMP is not set
+CONFIG_X86_UV=y
+# CONFIG_X86_GOLDFISH is not set
+# CONFIG_X86_INTEL_MID is not set
+CONFIG_X86_INTEL_LPSS=y
+# CONFIG_X86_AMD_PLATFORM_DEVICE is not set
+CONFIG_IOSF_MBI=y
+# CONFIG_IOSF_MBI_DEBUG is not set
+CONFIG_X86_SUPPORTS_MEMORY_FAILURE=y
+CONFIG_SCHED_OMIT_FRAME_POINTER=y
+CONFIG_HYPERVISOR_GUEST=y
+CONFIG_PARAVIRT=y
+CONFIG_PARAVIRT_XXL=y
+# CONFIG_PARAVIRT_DEBUG is not set
+# CONFIG_PARAVIRT_SPINLOCKS is not set
+CONFIG_X86_HV_CALLBACK_VECTOR=y
+CONFIG_XEN=y
+CONFIG_XEN_PV=y
+CONFIG_XEN_PV_SMP=y
+CONFIG_XEN_DOM0=y
+CONFIG_XEN_PVHVM=y
+CONFIG_XEN_PVHVM_SMP=y
+CONFIG_XEN_512GB=y
+CONFIG_XEN_SAVE_RESTORE=y
+CONFIG_XEN_DEBUG_FS=y
+# CONFIG_XEN_PVH is not set
+CONFIG_KVM_GUEST=y
+CONFIG_ARCH_CPUIDLE_HALTPOLL=y
+# CONFIG_PVH is not set
+# CONFIG_KVM_DEBUG_FS is not set
+CONFIG_PARAVIRT_TIME_ACCOUNTING=y
+CONFIG_PARAVIRT_CLOCK=y
+# CONFIG_JAILHOUSE_GUEST is not set
+# CONFIG_ACRN_GUEST is not set
+# CONFIG_MK8 is not set
+# CONFIG_MPSC is not set
+# CONFIG_MCORE2 is not set
+# CONFIG_MATOM is not set
+CONFIG_GENERIC_CPU=y
+CONFIG_X86_INTERNODE_CACHE_SHIFT=6
+CONFIG_X86_L1_CACHE_SHIFT=6
+CONFIG_X86_TSC=y
+CONFIG_X86_CMPXCHG64=y
+CONFIG_X86_CMOV=y
+CONFIG_X86_MINIMUM_CPU_FAMILY=64
+CONFIG_X86_DEBUGCTLMSR=y
+CONFIG_IA32_FEAT_CTL=y
+CONFIG_X86_VMX_FEATURE_NAMES=y
+CONFIG_CPU_SUP_INTEL=y
+CONFIG_CPU_SUP_AMD=y
+CONFIG_CPU_SUP_HYGON=y
+CONFIG_CPU_SUP_CENTAUR=y
+CONFIG_CPU_SUP_ZHAOXIN=y
+CONFIG_HPET_TIMER=y
+CONFIG_HPET_EMULATE_RTC=y
+CONFIG_DMI=y
+# CONFIG_GART_IOMMU is not set
+# CONFIG_MAXSMP is not set
+CONFIG_NR_CPUS_RANGE_BEGIN=2
+CONFIG_NR_CPUS_RANGE_END=512
+CONFIG_NR_CPUS_DEFAULT=64
+CONFIG_NR_CPUS=8
+CONFIG_SCHED_SMT=y
+CONFIG_SCHED_MC=y
+CONFIG_SCHED_MC_PRIO=y
+CONFIG_X86_LOCAL_APIC=y
+CONFIG_X86_IO_APIC=y
+CONFIG_X86_REROUTE_FOR_BROKEN_BOOT_IRQS=y
+CONFIG_X86_MCE=y
+# CONFIG_X86_MCELOG_LEGACY is not set
+CONFIG_X86_MCE_INTEL=y
+CONFIG_X86_MCE_AMD=y
+CONFIG_X86_MCE_THRESHOLD=y
+CONFIG_X86_MCE_INJECT=m
+CONFIG_X86_THERMAL_VECTOR=y
+
+#
+# Performance monitoring
+#
+CONFIG_PERF_EVENTS_INTEL_UNCORE=y
+CONFIG_PERF_EVENTS_INTEL_RAPL=y
+CONFIG_PERF_EVENTS_INTEL_CSTATE=y
+# CONFIG_PERF_EVENTS_AMD_POWER is not set
+# end of Performance monitoring
+
+CONFIG_X86_16BIT=y
+CONFIG_X86_ESPFIX64=y
+CONFIG_X86_VSYSCALL_EMULATION=y
+CONFIG_X86_IOPL_IOPERM=y
+CONFIG_I8K=m
+CONFIG_MICROCODE=y
+CONFIG_MICROCODE_INTEL=y
+CONFIG_MICROCODE_AMD=y
+CONFIG_MICROCODE_OLD_INTERFACE=y
+CONFIG_X86_MSR=y
+CONFIG_X86_CPUID=y
+CONFIG_X86_5LEVEL=y
+CONFIG_X86_DIRECT_GBPAGES=y
+# CONFIG_X86_CPA_STATISTICS is not set
+# CONFIG_AMD_MEM_ENCRYPT is not set
+CONFIG_NUMA=y
+CONFIG_AMD_NUMA=y
+CONFIG_X86_64_ACPI_NUMA=y
+CONFIG_NODES_SPAN_OTHER_NODES=y
+# CONFIG_NUMA_EMU is not set
+CONFIG_NODES_SHIFT=9
+CONFIG_ARCH_SPARSEMEM_ENABLE=y
+CONFIG_ARCH_SPARSEMEM_DEFAULT=y
+CONFIG_ARCH_SELECT_MEMORY_MODEL=y
+# CONFIG_ARCH_MEMORY_PROBE is not set
+CONFIG_ARCH_PROC_KCORE_TEXT=y
+CONFIG_ILLEGAL_POINTER_VALUE=0xdead000000000000
+# CONFIG_X86_PMEM_LEGACY is not set
+CONFIG_X86_CHECK_BIOS_CORRUPTION=y
+# CONFIG_X86_BOOTPARAM_MEMORY_CORRUPTION_CHECK is not set
+CONFIG_X86_RESERVE_LOW=64
+CONFIG_MTRR=y
+CONFIG_MTRR_SANITIZER=y
+CONFIG_MTRR_SANITIZER_ENABLE_DEFAULT=0
+CONFIG_MTRR_SANITIZER_SPARE_REG_NR_DEFAULT=1
+CONFIG_X86_PAT=y
+CONFIG_ARCH_USES_PG_UNCACHED=y
+CONFIG_ARCH_RANDOM=y
+CONFIG_X86_SMAP=y
+CONFIG_X86_UMIP=y
+CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS=y
+CONFIG_X86_INTEL_TSX_MODE_OFF=y
+# CONFIG_X86_INTEL_TSX_MODE_ON is not set
+# CONFIG_X86_INTEL_TSX_MODE_AUTO is not set
+CONFIG_EFI=y
+CONFIG_EFI_STUB=y
+# CONFIG_EFI_MIXED is not set
+CONFIG_SECCOMP=y
+# CONFIG_HZ_100 is not set
+# CONFIG_HZ_250 is not set
+# CONFIG_HZ_300 is not set
+CONFIG_HZ_1000=y
+CONFIG_HZ=1000
+CONFIG_SCHED_HRTICK=y
+CONFIG_KEXEC=y
+CONFIG_KEXEC_FILE=y
+CONFIG_ARCH_HAS_KEXEC_PURGATORY=y
+# CONFIG_KEXEC_SIG is not set
+CONFIG_CRASH_DUMP=y
+CONFIG_KEXEC_JUMP=y
+CONFIG_PHYSICAL_START=0x1000000
+CONFIG_RELOCATABLE=y
+# CONFIG_RANDOMIZE_BASE is not set
+CONFIG_PHYSICAL_ALIGN=0x1000000
+CONFIG_DYNAMIC_MEMORY_LAYOUT=y
+CONFIG_HOTPLUG_CPU=y
+# CONFIG_BOOTPARAM_HOTPLUG_CPU0 is not set
+# CONFIG_DEBUG_HOTPLUG_CPU0 is not set
+# CONFIG_COMPAT_VDSO is not set
+# CONFIG_LEGACY_VSYSCALL_EMULATE is not set
+CONFIG_LEGACY_VSYSCALL_XONLY=y
+# CONFIG_LEGACY_VSYSCALL_NONE is not set
+# CONFIG_CMDLINE_BOOL is not set
+CONFIG_MODIFY_LDT_SYSCALL=y
+CONFIG_HAVE_LIVEPATCH=y
+# CONFIG_LIVEPATCH is not set
+# end of Processor type and features
+
+CONFIG_ARCH_HAS_ADD_PAGES=y
+CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y
+CONFIG_ARCH_ENABLE_MEMORY_HOTREMOVE=y
+CONFIG_USE_PERCPU_NUMA_NODE_ID=y
+CONFIG_ARCH_ENABLE_SPLIT_PMD_PTLOCK=y
+CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION=y
+CONFIG_ARCH_ENABLE_THP_MIGRATION=y
+
+#
+# Power management and ACPI options
+#
+CONFIG_ARCH_HIBERNATION_HEADER=y
+CONFIG_SUSPEND=y
+CONFIG_SUSPEND_FREEZER=y
+CONFIG_HIBERNATE_CALLBACKS=y
+CONFIG_HIBERNATION=y
+CONFIG_PM_STD_PARTITION=""
+CONFIG_PM_SLEEP=y
+CONFIG_PM_SLEEP_SMP=y
+# CONFIG_PM_AUTOSLEEP is not set
+# CONFIG_PM_WAKELOCKS is not set
+CONFIG_PM=y
+CONFIG_PM_DEBUG=y
+CONFIG_PM_ADVANCED_DEBUG=y
+# CONFIG_PM_TEST_SUSPEND is not set
+CONFIG_PM_SLEEP_DEBUG=y
+CONFIG_PM_TRACE=y
+CONFIG_PM_TRACE_RTC=y
+CONFIG_PM_CLK=y
+# CONFIG_WQ_POWER_EFFICIENT_DEFAULT is not set
+# CONFIG_ENERGY_MODEL is not set
+CONFIG_ARCH_SUPPORTS_ACPI=y
+CONFIG_ACPI=y
+CONFIG_ACPI_LEGACY_TABLES_LOOKUP=y
+CONFIG_ARCH_MIGHT_HAVE_ACPI_PDC=y
+CONFIG_ACPI_SYSTEM_POWER_STATES_SUPPORT=y
+# CONFIG_ACPI_DEBUGGER is not set
+CONFIG_ACPI_SPCR_TABLE=y
+CONFIG_ACPI_LPIT=y
+CONFIG_ACPI_SLEEP=y
+# CONFIG_ACPI_PROCFS_POWER is not set
+CONFIG_ACPI_REV_OVERRIDE_POSSIBLE=y
+CONFIG_ACPI_EC_DEBUGFS=m
+CONFIG_ACPI_AC=y
+CONFIG_ACPI_BATTERY=y
+CONFIG_ACPI_BUTTON=y
+CONFIG_ACPI_VIDEO=m
+CONFIG_ACPI_FAN=y
+# CONFIG_ACPI_TAD is not set
+CONFIG_ACPI_DOCK=y
+CONFIG_ACPI_CPU_FREQ_PSS=y
+CONFIG_ACPI_PROCESSOR_CSTATE=y
+CONFIG_ACPI_PROCESSOR_IDLE=y
+CONFIG_ACPI_CPPC_LIB=y
+CONFIG_ACPI_PROCESSOR=y
+CONFIG_ACPI_IPMI=m
+CONFIG_ACPI_HOTPLUG_CPU=y
+CONFIG_ACPI_PROCESSOR_AGGREGATOR=m
+CONFIG_ACPI_THERMAL=y
+CONFIG_ARCH_HAS_ACPI_TABLE_UPGRADE=y
+CONFIG_ACPI_TABLE_UPGRADE=y
+# CONFIG_ACPI_DEBUG is not set
+CONFIG_ACPI_PCI_SLOT=y
+CONFIG_ACPI_CONTAINER=y
+CONFIG_ACPI_HOTPLUG_MEMORY=y
+CONFIG_ACPI_HOTPLUG_IOAPIC=y
+CONFIG_ACPI_SBS=m
+CONFIG_ACPI_HED=y
+CONFIG_ACPI_CUSTOM_METHOD=m
+CONFIG_ACPI_BGRT=y
+# CONFIG_ACPI_NFIT is not set
+CONFIG_ACPI_NUMA=y
+# CONFIG_ACPI_HMAT is not set
+CONFIG_HAVE_ACPI_APEI=y
+CONFIG_HAVE_ACPI_APEI_NMI=y
+CONFIG_ACPI_APEI=y
+CONFIG_ACPI_APEI_GHES=y
+CONFIG_ACPI_APEI_PCIEAER=y
+CONFIG_ACPI_APEI_MEMORY_FAILURE=y
+# CONFIG_ACPI_APEI_EINJ is not set
+# CONFIG_ACPI_APEI_ERST_DEBUG is not set
+# CONFIG_DPTF_POWER is not set
+# CONFIG_ACPI_EXTLOG is not set
+# CONFIG_PMIC_OPREGION is not set
+# CONFIG_ACPI_CONFIGFS is not set
+CONFIG_X86_PM_TIMER=y
+CONFIG_SFI=y
+
+#
+# CPU Frequency scaling
+#
+CONFIG_CPU_FREQ=y
+CONFIG_CPU_FREQ_GOV_ATTR_SET=y
+CONFIG_CPU_FREQ_GOV_COMMON=y
+# CONFIG_CPU_FREQ_STAT is not set
+# CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE is not set
+# CONFIG_CPU_FREQ_DEFAULT_GOV_POWERSAVE is not set
+# CONFIG_CPU_FREQ_DEFAULT_GOV_USERSPACE is not set
+CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y
+# CONFIG_CPU_FREQ_DEFAULT_GOV_CONSERVATIVE is not set
+# CONFIG_CPU_FREQ_DEFAULT_GOV_SCHEDUTIL is not set
+CONFIG_CPU_FREQ_GOV_PERFORMANCE=y
+CONFIG_CPU_FREQ_GOV_POWERSAVE=y
+CONFIG_CPU_FREQ_GOV_USERSPACE=y
+CONFIG_CPU_FREQ_GOV_ONDEMAND=y
+CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y
+CONFIG_CPU_FREQ_GOV_SCHEDUTIL=y
+
+#
+# CPU frequency scaling drivers
+#
+CONFIG_X86_INTEL_PSTATE=y
+CONFIG_X86_PCC_CPUFREQ=m
+CONFIG_X86_ACPI_CPUFREQ=m
+CONFIG_X86_ACPI_CPUFREQ_CPB=y
+CONFIG_X86_POWERNOW_K8=m
+CONFIG_X86_AMD_FREQ_SENSITIVITY=m
+# CONFIG_X86_SPEEDSTEP_CENTRINO is not set
+CONFIG_X86_P4_CLOCKMOD=m
+
+#
+# shared options
+#
+CONFIG_X86_SPEEDSTEP_LIB=m
+# end of CPU Frequency scaling
+
+#
+# CPU Idle
+#
+CONFIG_CPU_IDLE=y
+# CONFIG_CPU_IDLE_GOV_LADDER is not set
+CONFIG_CPU_IDLE_GOV_MENU=y
+# CONFIG_CPU_IDLE_GOV_TEO is not set
+# CONFIG_CPU_IDLE_GOV_HALTPOLL is not set
+CONFIG_HALTPOLL_CPUIDLE=y
+# end of CPU Idle
+
+CONFIG_INTEL_IDLE=y
+# end of Power management and ACPI options
+
+#
+# Bus options (PCI etc.)
+#
+CONFIG_PCI_DIRECT=y
+CONFIG_PCI_MMCONFIG=y
+CONFIG_PCI_XEN=y
+CONFIG_MMCONF_FAM10H=y
+CONFIG_ISA_DMA_API=y
+CONFIG_AMD_NB=y
+# CONFIG_X86_SYSFB is not set
+# end of Bus options (PCI etc.)
+
+#
+# Binary Emulations
+#
+CONFIG_IA32_EMULATION=y
+# CONFIG_X86_X32 is not set
+CONFIG_COMPAT_32=y
+CONFIG_COMPAT=y
+CONFIG_COMPAT_FOR_U64_ALIGNMENT=y
+CONFIG_SYSVIPC_COMPAT=y
+# end of Binary Emulations
+
+#
+# Firmware Drivers
+#
+CONFIG_EDD=m
+# CONFIG_EDD_OFF is not set
+CONFIG_FIRMWARE_MEMMAP=y
+CONFIG_DMIID=y
+CONFIG_DMI_SYSFS=y
+CONFIG_DMI_SCAN_MACHINE_NON_EFI_FALLBACK=y
+CONFIG_ISCSI_IBFT_FIND=y
+CONFIG_ISCSI_IBFT=m
+# CONFIG_FW_CFG_SYSFS is not set
+# CONFIG_GOOGLE_FIRMWARE is not set
+
+#
+# EFI (Extensible Firmware Interface) Support
+#
+CONFIG_EFI_VARS=y
+CONFIG_EFI_ESRT=y
+CONFIG_EFI_VARS_PSTORE=y
+CONFIG_EFI_VARS_PSTORE_DEFAULT_DISABLE=y
+CONFIG_EFI_RUNTIME_MAP=y
+# CONFIG_EFI_FAKE_MEMMAP is not set
+CONFIG_EFI_RUNTIME_WRAPPERS=y
+# CONFIG_EFI_BOOTLOADER_CONTROL is not set
+# CONFIG_EFI_CAPSULE_LOADER is not set
+# CONFIG_EFI_TEST is not set
+# CONFIG_APPLE_PROPERTIES is not set
+# CONFIG_RESET_ATTACK_MITIGATION is not set
+# CONFIG_EFI_RCI2_TABLE is not set
+# CONFIG_EFI_DISABLE_PCI_DMA is not set
+# end of EFI (Extensible Firmware Interface) Support
+
+CONFIG_UEFI_CPER=y
+CONFIG_UEFI_CPER_X86=y
+CONFIG_EFI_EARLYCON=y
+
+#
+# Tegra firmware driver
+#
+# end of Tegra firmware driver
+# end of Firmware Drivers
+
+CONFIG_HAVE_KVM=y
+CONFIG_HAVE_KVM_IRQCHIP=y
+CONFIG_HAVE_KVM_IRQFD=y
+CONFIG_HAVE_KVM_IRQ_ROUTING=y
+CONFIG_HAVE_KVM_EVENTFD=y
+CONFIG_KVM_MMIO=y
+CONFIG_KVM_ASYNC_PF=y
+CONFIG_HAVE_KVM_MSI=y
+CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT=y
+CONFIG_KVM_VFIO=y
+CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT=y
+CONFIG_KVM_COMPAT=y
+CONFIG_HAVE_KVM_IRQ_BYPASS=y
+CONFIG_HAVE_KVM_NO_POLL=y
+CONFIG_VIRTUALIZATION=y
+CONFIG_KVM=m
+CONFIG_KVM_INTEL=m
+CONFIG_KVM_AMD=m
+CONFIG_KVM_AMD_SEV=y
+CONFIG_KVM_MMU_AUDIT=y
+CONFIG_AS_AVX512=y
+CONFIG_AS_SHA1_NI=y
+CONFIG_AS_SHA256_NI=y
+
+#
+# General architecture-dependent options
+#
+CONFIG_CRASH_CORE=y
+CONFIG_KEXEC_CORE=y
+CONFIG_HOTPLUG_SMT=y
+CONFIG_OPROFILE=m
+CONFIG_OPROFILE_EVENT_MULTIPLEX=y
+CONFIG_HAVE_OPROFILE=y
+CONFIG_OPROFILE_NMI_TIMER=y
+CONFIG_KPROBES=y
+CONFIG_JUMP_LABEL=y
+# CONFIG_STATIC_KEYS_SELFTEST is not set
+CONFIG_OPTPROBES=y
+CONFIG_KPROBES_ON_FTRACE=y
+CONFIG_UPROBES=y
+CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y
+CONFIG_ARCH_USE_BUILTIN_BSWAP=y
+CONFIG_KRETPROBES=y
+CONFIG_USER_RETURN_NOTIFIER=y
+CONFIG_HAVE_IOREMAP_PROT=y
+CONFIG_HAVE_KPROBES=y
+CONFIG_HAVE_KRETPROBES=y
+CONFIG_HAVE_OPTPROBES=y
+CONFIG_HAVE_KPROBES_ON_FTRACE=y
+CONFIG_HAVE_FUNCTION_ERROR_INJECTION=y
+CONFIG_HAVE_NMI=y
+CONFIG_HAVE_ARCH_TRACEHOOK=y
+CONFIG_HAVE_DMA_CONTIGUOUS=y
+CONFIG_GENERIC_SMP_IDLE_THREAD=y
+CONFIG_ARCH_HAS_FORTIFY_SOURCE=y
+CONFIG_ARCH_HAS_SET_MEMORY=y
+CONFIG_ARCH_HAS_SET_DIRECT_MAP=y
+CONFIG_HAVE_ARCH_THREAD_STRUCT_WHITELIST=y
+CONFIG_ARCH_WANTS_DYNAMIC_TASK_STRUCT=y
+CONFIG_HAVE_ASM_MODVERSIONS=y
+CONFIG_HAVE_REGS_AND_STACK_ACCESS_API=y
+CONFIG_HAVE_RSEQ=y
+CONFIG_HAVE_FUNCTION_ARG_ACCESS_API=y
+CONFIG_HAVE_CLK=y
+CONFIG_HAVE_HW_BREAKPOINT=y
+CONFIG_HAVE_MIXED_BREAKPOINTS_REGS=y
+CONFIG_HAVE_USER_RETURN_NOTIFIER=y
+CONFIG_HAVE_PERF_EVENTS_NMI=y
+CONFIG_HAVE_HARDLOCKUP_DETECTOR_PERF=y
+CONFIG_HAVE_PERF_REGS=y
+CONFIG_HAVE_PERF_USER_STACK_DUMP=y
+CONFIG_HAVE_ARCH_JUMP_LABEL=y
+CONFIG_HAVE_ARCH_JUMP_LABEL_RELATIVE=y
+CONFIG_MMU_GATHER_TABLE_FREE=y
+CONFIG_MMU_GATHER_RCU_TABLE_FREE=y
+CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG=y
+CONFIG_HAVE_ALIGNED_STRUCT_PAGE=y
+CONFIG_HAVE_CMPXCHG_LOCAL=y
+CONFIG_HAVE_CMPXCHG_DOUBLE=y
+CONFIG_ARCH_WANT_COMPAT_IPC_PARSE_VERSION=y
+CONFIG_ARCH_WANT_OLD_COMPAT_IPC=y
+CONFIG_HAVE_ARCH_SECCOMP_FILTER=y
+CONFIG_SECCOMP_FILTER=y
+CONFIG_HAVE_ARCH_STACKLEAK=y
+CONFIG_HAVE_STACKPROTECTOR=y
+CONFIG_CC_HAS_STACKPROTECTOR_NONE=y
+CONFIG_STACKPROTECTOR=y
+CONFIG_STACKPROTECTOR_STRONG=y
+CONFIG_HAVE_ARCH_WITHIN_STACK_FRAMES=y
+CONFIG_HAVE_CONTEXT_TRACKING=y
+CONFIG_HAVE_VIRT_CPU_ACCOUNTING_GEN=y
+CONFIG_HAVE_IRQ_TIME_ACCOUNTING=y
+CONFIG_HAVE_MOVE_PMD=y
+CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE=y
+CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD=y
+CONFIG_HAVE_ARCH_HUGE_VMAP=y
+CONFIG_ARCH_WANT_HUGE_PMD_SHARE=y
+CONFIG_HAVE_ARCH_SOFT_DIRTY=y
+CONFIG_HAVE_MOD_ARCH_SPECIFIC=y
+CONFIG_MODULES_USE_ELF_RELA=y
+CONFIG_HAVE_IRQ_EXIT_ON_IRQ_STACK=y
+CONFIG_ARCH_HAS_ELF_RANDOMIZE=y
+CONFIG_HAVE_ARCH_MMAP_RND_BITS=y
+CONFIG_HAVE_EXIT_THREAD=y
+CONFIG_ARCH_MMAP_RND_BITS=28
+CONFIG_HAVE_ARCH_MMAP_RND_COMPAT_BITS=y
+CONFIG_ARCH_MMAP_RND_COMPAT_BITS=8
+CONFIG_HAVE_ARCH_COMPAT_MMAP_BASES=y
+CONFIG_HAVE_COPY_THREAD_TLS=y
+CONFIG_HAVE_STACK_VALIDATION=y
+CONFIG_HAVE_RELIABLE_STACKTRACE=y
+CONFIG_OLD_SIGSUSPEND3=y
+CONFIG_COMPAT_OLD_SIGACTION=y
+CONFIG_COMPAT_32BIT_TIME=y
+CONFIG_HAVE_ARCH_VMAP_STACK=y
+CONFIG_VMAP_STACK=y
+CONFIG_ARCH_HAS_STRICT_KERNEL_RWX=y
+CONFIG_STRICT_KERNEL_RWX=y
+CONFIG_ARCH_HAS_STRICT_MODULE_RWX=y
+CONFIG_STRICT_MODULE_RWX=y
+CONFIG_HAVE_ARCH_PREL32_RELOCATIONS=y
+CONFIG_ARCH_USE_MEMREMAP_PROT=y
+# CONFIG_LOCK_EVENT_COUNTS is not set
+CONFIG_ARCH_HAS_MEM_ENCRYPT=y
+
+#
+# GCOV-based kernel profiling
+#
+# CONFIG_GCOV_KERNEL is not set
+CONFIG_ARCH_HAS_GCOV_PROFILE_ALL=y
+# end of GCOV-based kernel profiling
+
+CONFIG_HAVE_GCC_PLUGINS=y
+# end of General architecture-dependent options
+
+CONFIG_RT_MUTEXES=y
+CONFIG_BASE_SMALL=0
+CONFIG_MODULE_SIG_FORMAT=y
+CONFIG_MODULES=y
+# CONFIG_MODULE_FORCE_LOAD is not set
+CONFIG_MODULE_UNLOAD=y
+# CONFIG_MODULE_FORCE_UNLOAD is not set
+# CONFIG_MODVERSIONS is not set
+# CONFIG_MODULE_SRCVERSION_ALL is not set
+CONFIG_MODULE_SIG=y
+# CONFIG_MODULE_SIG_FORCE is not set
+CONFIG_MODULE_SIG_ALL=y
+# CONFIG_MODULE_SIG_SHA1 is not set
+# CONFIG_MODULE_SIG_SHA224 is not set
+CONFIG_MODULE_SIG_SHA256=y
+# CONFIG_MODULE_SIG_SHA384 is not set
+# CONFIG_MODULE_SIG_SHA512 is not set
+CONFIG_MODULE_SIG_HASH="sha256"
+# CONFIG_MODULE_COMPRESS is not set
+# CONFIG_MODULE_ALLOW_MISSING_NAMESPACE_IMPORTS is not set
+CONFIG_UNUSED_SYMBOLS=y
+CONFIG_MODULES_TREE_LOOKUP=y
+CONFIG_BLOCK=y
+CONFIG_BLK_SCSI_REQUEST=y
+CONFIG_BLK_CGROUP_RWSTAT=y
+CONFIG_BLK_DEV_BSG=y
+CONFIG_BLK_DEV_BSGLIB=y
+CONFIG_BLK_DEV_INTEGRITY=y
+CONFIG_BLK_DEV_INTEGRITY_T10=y
+# CONFIG_BLK_DEV_ZONED is not set
+CONFIG_BLK_DEV_THROTTLING=y
+# CONFIG_BLK_DEV_THROTTLING_LOW is not set
+# CONFIG_BLK_CMDLINE_PARSER is not set
+# CONFIG_BLK_WBT is not set
+# CONFIG_BLK_CGROUP_IOLATENCY is not set
+# CONFIG_BLK_CGROUP_IOCOST is not set
+CONFIG_BLK_DEBUG_FS=y
+# CONFIG_BLK_SED_OPAL is not set
+
+#
+# Partition Types
+#
+CONFIG_PARTITION_ADVANCED=y
+# CONFIG_ACORN_PARTITION is not set
+CONFIG_AIX_PARTITION=y
+CONFIG_OSF_PARTITION=y
+CONFIG_AMIGA_PARTITION=y
+# CONFIG_ATARI_PARTITION is not set
+CONFIG_MAC_PARTITION=y
+CONFIG_MSDOS_PARTITION=y
+CONFIG_BSD_DISKLABEL=y
+CONFIG_MINIX_SUBPARTITION=y
+CONFIG_SOLARIS_X86_PARTITION=y
+CONFIG_UNIXWARE_DISKLABEL=y
+CONFIG_LDM_PARTITION=y
+# CONFIG_LDM_DEBUG is not set
+CONFIG_SGI_PARTITION=y
+# CONFIG_ULTRIX_PARTITION is not set
+CONFIG_SUN_PARTITION=y
+CONFIG_KARMA_PARTITION=y
+CONFIG_EFI_PARTITION=y
+# CONFIG_SYSV68_PARTITION is not set
+# CONFIG_CMDLINE_PARTITION is not set
+# end of Partition Types
+
+CONFIG_BLOCK_COMPAT=y
+CONFIG_BLK_MQ_PCI=y
+CONFIG_BLK_MQ_VIRTIO=y
+CONFIG_BLK_MQ_RDMA=y
+CONFIG_BLK_PM=y
+
+#
+# IO Schedulers
+#
+CONFIG_MQ_IOSCHED_DEADLINE=y
+CONFIG_MQ_IOSCHED_KYBER=y
+# CONFIG_IOSCHED_BFQ is not set
+# end of IO Schedulers
+
+CONFIG_PREEMPT_NOTIFIERS=y
+CONFIG_PADATA=y
+CONFIG_ASN1=y
+CONFIG_UNINLINE_SPIN_UNLOCK=y
+CONFIG_ARCH_SUPPORTS_ATOMIC_RMW=y
+CONFIG_MUTEX_SPIN_ON_OWNER=y
+CONFIG_RWSEM_SPIN_ON_OWNER=y
+CONFIG_LOCK_SPIN_ON_OWNER=y
+CONFIG_ARCH_USE_QUEUED_SPINLOCKS=y
+CONFIG_QUEUED_SPINLOCKS=y
+CONFIG_ARCH_USE_QUEUED_RWLOCKS=y
+CONFIG_QUEUED_RWLOCKS=y
+CONFIG_ARCH_HAS_SYNC_CORE_BEFORE_USERMODE=y
+CONFIG_ARCH_HAS_SYSCALL_WRAPPER=y
+CONFIG_FREEZER=y
+
+#
+# Executable file formats
+#
+CONFIG_BINFMT_ELF=y
+CONFIG_COMPAT_BINFMT_ELF=y
+CONFIG_ELFCORE=y
+CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y
+CONFIG_BINFMT_SCRIPT=y
+CONFIG_BINFMT_MISC=m
+CONFIG_COREDUMP=y
+# end of Executable file formats
+
+#
+# Memory Management options
+#
+CONFIG_SELECT_MEMORY_MODEL=y
+CONFIG_SPARSEMEM_MANUAL=y
+CONFIG_SPARSEMEM=y
+CONFIG_NEED_MULTIPLE_NODES=y
+CONFIG_HAVE_MEMORY_PRESENT=y
+CONFIG_SPARSEMEM_EXTREME=y
+CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y
+CONFIG_SPARSEMEM_VMEMMAP=y
+CONFIG_HAVE_MEMBLOCK_NODE_MAP=y
+CONFIG_HAVE_FAST_GUP=y
+CONFIG_NUMA_KEEP_MEMINFO=y
+CONFIG_MEMORY_ISOLATION=y
+CONFIG_MEMORY_HOTPLUG=y
+CONFIG_MEMORY_HOTPLUG_SPARSE=y
+# CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE is not set
+# CONFIG_MEMORY_HOTREMOVE is not set
+CONFIG_SPLIT_PTLOCK_CPUS=4
+CONFIG_MEMORY_BALLOON=y
+CONFIG_BALLOON_COMPACTION=y
+CONFIG_COMPACTION=y
+CONFIG_PAGE_REPORTING=y
+CONFIG_MIGRATION=y
+CONFIG_CONTIG_ALLOC=y
+CONFIG_PHYS_ADDR_T_64BIT=y
+CONFIG_BOUNCE=y
+CONFIG_VIRT_TO_BUS=y
+CONFIG_MMU_NOTIFIER=y
+CONFIG_KSM=y
+CONFIG_DEFAULT_MMAP_MIN_ADDR=65536
+CONFIG_ARCH_SUPPORTS_MEMORY_FAILURE=y
+CONFIG_MEMORY_FAILURE=y
+CONFIG_HWPOISON_INJECT=m
+CONFIG_TRANSPARENT_HUGEPAGE=y
+# CONFIG_TRANSPARENT_HUGEPAGE_ALWAYS is not set
+CONFIG_TRANSPARENT_HUGEPAGE_MADVISE=y
+CONFIG_ARCH_WANTS_THP_SWAP=y
+CONFIG_THP_SWAP=y
+CONFIG_CLEANCACHE=y
+CONFIG_FRONTSWAP=y
+# CONFIG_CMA is not set
+CONFIG_ZSWAP=y
+# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_DEFLATE is not set
+CONFIG_ZSWAP_COMPRESSOR_DEFAULT_LZO=y
+# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_842 is not set
+# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_LZ4 is not set
+# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_LZ4HC is not set
+# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_ZSTD is not set
+CONFIG_ZSWAP_COMPRESSOR_DEFAULT="lzo"
+CONFIG_ZSWAP_ZPOOL_DEFAULT_ZBUD=y
+# CONFIG_ZSWAP_ZPOOL_DEFAULT_Z3FOLD is not set
+# CONFIG_ZSWAP_ZPOOL_DEFAULT_ZSMALLOC is not set
+CONFIG_ZSWAP_ZPOOL_DEFAULT="zbud"
+# CONFIG_ZSWAP_DEFAULT_ON is not set
+CONFIG_ZPOOL=y
+CONFIG_ZBUD=y
+# CONFIG_Z3FOLD is not set
+CONFIG_ZSMALLOC=y
+# CONFIG_PGTABLE_MAPPING is not set
+# CONFIG_ZSMALLOC_STAT is not set
+CONFIG_GENERIC_EARLY_IOREMAP=y
+# CONFIG_DEFERRED_STRUCT_PAGE_INIT is not set
+# CONFIG_IDLE_PAGE_TRACKING is not set
+CONFIG_ARCH_HAS_PTE_DEVMAP=y
+CONFIG_ARCH_USES_HIGH_VMA_FLAGS=y
+CONFIG_ARCH_HAS_PKEYS=y
+# CONFIG_PERCPU_STATS is not set
+# CONFIG_GUP_BENCHMARK is not set
+# CONFIG_READ_ONLY_THP_FOR_FS is not set
+CONFIG_ARCH_HAS_PTE_SPECIAL=y
+# end of Memory Management options
+
+CONFIG_NET=y
+CONFIG_NET_INGRESS=y
+CONFIG_NET_EGRESS=y
+CONFIG_NET_REDIRECT=y
+CONFIG_SKB_EXTENSIONS=y
+
+#
+# Networking options
+#
+CONFIG_PACKET=y
+CONFIG_PACKET_DIAG=m
+CONFIG_UNIX=y
+CONFIG_UNIX_SCM=y
+CONFIG_UNIX_DIAG=m
+# CONFIG_TLS is not set
+CONFIG_XFRM=y
+CONFIG_XFRM_ALGO=y
+CONFIG_XFRM_USER=y
+# CONFIG_XFRM_INTERFACE is not set
+CONFIG_XFRM_SUB_POLICY=y
+CONFIG_XFRM_MIGRATE=y
+CONFIG_XFRM_STATISTICS=y
+CONFIG_XFRM_IPCOMP=m
+CONFIG_NET_KEY=m
+CONFIG_NET_KEY_MIGRATE=y
+# CONFIG_SMC is not set
+CONFIG_INET=y
+CONFIG_IP_MULTICAST=y
+CONFIG_IP_ADVANCED_ROUTER=y
+CONFIG_IP_FIB_TRIE_STATS=y
+CONFIG_IP_MULTIPLE_TABLES=y
+CONFIG_IP_ROUTE_MULTIPATH=y
+CONFIG_IP_ROUTE_VERBOSE=y
+CONFIG_IP_ROUTE_CLASSID=y
+# CONFIG_IP_PNP is not set
+CONFIG_NET_IPIP=m
+CONFIG_NET_IPGRE_DEMUX=m
+CONFIG_NET_IP_TUNNEL=m
+CONFIG_NET_IPGRE=m
+CONFIG_NET_IPGRE_BROADCAST=y
+CONFIG_IP_MROUTE_COMMON=y
+CONFIG_IP_MROUTE=y
+CONFIG_IP_MROUTE_MULTIPLE_TABLES=y
+CONFIG_IP_PIMSM_V1=y
+CONFIG_IP_PIMSM_V2=y
+CONFIG_SYN_COOKIES=y
+CONFIG_NET_IPVTI=m
+CONFIG_NET_UDP_TUNNEL=m
+CONFIG_NET_FOU=m
+CONFIG_NET_FOU_IP_TUNNELS=y
+CONFIG_INET_AH=m
+CONFIG_INET_ESP=m
+# CONFIG_INET_ESP_OFFLOAD is not set
+# CONFIG_INET_ESPINTCP is not set
+CONFIG_INET_IPCOMP=m
+CONFIG_INET_XFRM_TUNNEL=m
+CONFIG_INET_TUNNEL=m
+CONFIG_INET_DIAG=m
+CONFIG_INET_TCP_DIAG=m
+CONFIG_INET_UDP_DIAG=m
+# CONFIG_INET_RAW_DIAG is not set
+# CONFIG_INET_DIAG_DESTROY is not set
+CONFIG_TCP_CONG_ADVANCED=y
+CONFIG_TCP_CONG_BIC=m
+CONFIG_TCP_CONG_CUBIC=y
+CONFIG_TCP_CONG_WESTWOOD=m
+CONFIG_TCP_CONG_HTCP=m
+CONFIG_TCP_CONG_HSTCP=m
+CONFIG_TCP_CONG_HYBLA=m
+CONFIG_TCP_CONG_VEGAS=m
+# CONFIG_TCP_CONG_NV is not set
+CONFIG_TCP_CONG_SCALABLE=m
+CONFIG_TCP_CONG_LP=m
+CONFIG_TCP_CONG_VENO=m
+CONFIG_TCP_CONG_YEAH=m
+CONFIG_TCP_CONG_ILLINOIS=m
+CONFIG_TCP_CONG_DCTCP=m
+# CONFIG_TCP_CONG_CDG is not set
+# CONFIG_TCP_CONG_BBR is not set
+CONFIG_DEFAULT_CUBIC=y
+# CONFIG_DEFAULT_RENO is not set
+CONFIG_DEFAULT_TCP_CONG="cubic"
+CONFIG_TCP_MD5SIG=y
+CONFIG_IPV6=y
+CONFIG_IPV6_ROUTER_PREF=y
+CONFIG_IPV6_ROUTE_INFO=y
+CONFIG_IPV6_OPTIMISTIC_DAD=y
+CONFIG_INET6_AH=m
+CONFIG_INET6_ESP=m
+# CONFIG_INET6_ESP_OFFLOAD is not set
+CONFIG_INET6_IPCOMP=m
+CONFIG_IPV6_MIP6=y
+# CONFIG_IPV6_ILA is not set
+CONFIG_INET6_XFRM_TUNNEL=m
+CONFIG_INET6_TUNNEL=m
+CONFIG_IPV6_VTI=m
+CONFIG_IPV6_SIT=m
+CONFIG_IPV6_SIT_6RD=y
+CONFIG_IPV6_NDISC_NODETYPE=y
+CONFIG_IPV6_TUNNEL=m
+# CONFIG_IPV6_GRE is not set
+CONFIG_IPV6_FOU=m
+CONFIG_IPV6_FOU_TUNNEL=m
+CONFIG_IPV6_MULTIPLE_TABLES=y
+CONFIG_IPV6_SUBTREES=y
+CONFIG_IPV6_MROUTE=y
+CONFIG_IPV6_MROUTE_MULTIPLE_TABLES=y
+CONFIG_IPV6_PIMSM_V2=y
+# CONFIG_IPV6_SEG6_LWTUNNEL is not set
+# CONFIG_IPV6_SEG6_HMAC is not set
+# CONFIG_IPV6_RPL_LWTUNNEL is not set
+CONFIG_NETLABEL=y
+# CONFIG_MPTCP is not set
+CONFIG_NETWORK_SECMARK=y
+CONFIG_NET_PTP_CLASSIFY=y
+CONFIG_NETWORK_PHY_TIMESTAMPING=y
+CONFIG_NETFILTER=y
+CONFIG_NETFILTER_ADVANCED=y
+CONFIG_BRIDGE_NETFILTER=m
+
+#
+# Core Netfilter Configuration
+#
+CONFIG_NETFILTER_INGRESS=y
+CONFIG_NETFILTER_NETLINK=m
+CONFIG_NETFILTER_FAMILY_BRIDGE=y
+CONFIG_NETFILTER_FAMILY_ARP=y
+CONFIG_NETFILTER_NETLINK_ACCT=m
+CONFIG_NETFILTER_NETLINK_QUEUE=m
+CONFIG_NETFILTER_NETLINK_LOG=m
+CONFIG_NETFILTER_NETLINK_OSF=m
+CONFIG_NF_CONNTRACK=m
+CONFIG_NF_LOG_COMMON=m
+# CONFIG_NF_LOG_NETDEV is not set
+CONFIG_NETFILTER_CONNCOUNT=m
+CONFIG_NF_CONNTRACK_MARK=y
+CONFIG_NF_CONNTRACK_SECMARK=y
+CONFIG_NF_CONNTRACK_ZONES=y
+CONFIG_NF_CONNTRACK_PROCFS=y
+CONFIG_NF_CONNTRACK_EVENTS=y
+# CONFIG_NF_CONNTRACK_TIMEOUT is not set
+CONFIG_NF_CONNTRACK_TIMESTAMP=y
+CONFIG_NF_CONNTRACK_LABELS=y
+CONFIG_NF_CT_PROTO_DCCP=y
+CONFIG_NF_CT_PROTO_GRE=y
+CONFIG_NF_CT_PROTO_SCTP=y
+CONFIG_NF_CT_PROTO_UDPLITE=y
+CONFIG_NF_CONNTRACK_AMANDA=m
+CONFIG_NF_CONNTRACK_FTP=m
+CONFIG_NF_CONNTRACK_H323=m
+CONFIG_NF_CONNTRACK_IRC=m
+CONFIG_NF_CONNTRACK_BROADCAST=m
+CONFIG_NF_CONNTRACK_NETBIOS_NS=m
+CONFIG_NF_CONNTRACK_SNMP=m
+CONFIG_NF_CONNTRACK_PPTP=m
+CONFIG_NF_CONNTRACK_SANE=m
+CONFIG_NF_CONNTRACK_SIP=m
+CONFIG_NF_CONNTRACK_TFTP=m
+CONFIG_NF_CT_NETLINK=m
+# CONFIG_NETFILTER_NETLINK_GLUE_CT is not set
+CONFIG_NF_NAT=m
+CONFIG_NF_NAT_AMANDA=m
+CONFIG_NF_NAT_FTP=m
+CONFIG_NF_NAT_IRC=m
+CONFIG_NF_NAT_SIP=m
+CONFIG_NF_NAT_TFTP=m
+CONFIG_NF_NAT_REDIRECT=y
+CONFIG_NF_NAT_MASQUERADE=y
+CONFIG_NETFILTER_SYNPROXY=m
+CONFIG_NF_TABLES=m
+CONFIG_NF_TABLES_INET=y
+CONFIG_NF_TABLES_NETDEV=y
+CONFIG_NFT_NUMGEN=m
+CONFIG_NFT_CT=m
+# CONFIG_NFT_FLOW_OFFLOAD is not set
+CONFIG_NFT_COUNTER=m
+# CONFIG_NFT_CONNLIMIT is not set
+CONFIG_NFT_LOG=m
+CONFIG_NFT_LIMIT=m
+CONFIG_NFT_MASQ=m
+CONFIG_NFT_REDIR=m
+CONFIG_NFT_NAT=m
+CONFIG_NFT_TUNNEL=m
+CONFIG_NFT_OBJREF=m
+CONFIG_NFT_QUEUE=m
+CONFIG_NFT_QUOTA=m
+CONFIG_NFT_REJECT=m
+CONFIG_NFT_REJECT_INET=m
+CONFIG_NFT_COMPAT=m
+CONFIG_NFT_HASH=m
+CONFIG_NFT_FIB=m
+# CONFIG_NFT_FIB_INET is not set
+# CONFIG_NFT_XFRM is not set
+CONFIG_NFT_SOCKET=m
+# CONFIG_NFT_OSF is not set
+# CONFIG_NFT_TPROXY is not set
+# CONFIG_NFT_SYNPROXY is not set
+CONFIG_NF_DUP_NETDEV=m
+CONFIG_NFT_DUP_NETDEV=m
+CONFIG_NFT_FWD_NETDEV=m
+# CONFIG_NFT_FIB_NETDEV is not set
+CONFIG_NF_FLOW_TABLE_INET=m
+CONFIG_NF_FLOW_TABLE=m
+CONFIG_NETFILTER_XTABLES=y
+
+#
+# Xtables combined modules
+#
+CONFIG_NETFILTER_XT_MARK=m
+CONFIG_NETFILTER_XT_CONNMARK=m
+CONFIG_NETFILTER_XT_SET=m
+
+#
+# Xtables targets
+#
+CONFIG_NETFILTER_XT_TARGET_AUDIT=m
+CONFIG_NETFILTER_XT_TARGET_CHECKSUM=m
+CONFIG_NETFILTER_XT_TARGET_CLASSIFY=m
+CONFIG_NETFILTER_XT_TARGET_CONNMARK=m
+CONFIG_NETFILTER_XT_TARGET_CONNSECMARK=m
+CONFIG_NETFILTER_XT_TARGET_CT=m
+CONFIG_NETFILTER_XT_TARGET_DSCP=m
+CONFIG_NETFILTER_XT_TARGET_HL=m
+CONFIG_NETFILTER_XT_TARGET_HMARK=m
+CONFIG_NETFILTER_XT_TARGET_IDLETIMER=m
+CONFIG_NETFILTER_XT_TARGET_LED=m
+CONFIG_NETFILTER_XT_TARGET_LOG=m
+CONFIG_NETFILTER_XT_TARGET_MARK=m
+CONFIG_NETFILTER_XT_NAT=m
+CONFIG_NETFILTER_XT_TARGET_NETMAP=m
+CONFIG_NETFILTER_XT_TARGET_NFLOG=m
+CONFIG_NETFILTER_XT_TARGET_NFQUEUE=m
+CONFIG_NETFILTER_XT_TARGET_NOTRACK=m
+CONFIG_NETFILTER_XT_TARGET_RATEEST=m
+CONFIG_NETFILTER_XT_TARGET_REDIRECT=m
+CONFIG_NETFILTER_XT_TARGET_MASQUERADE=m
+CONFIG_NETFILTER_XT_TARGET_TEE=m
+CONFIG_NETFILTER_XT_TARGET_TPROXY=m
+CONFIG_NETFILTER_XT_TARGET_TRACE=m
+CONFIG_NETFILTER_XT_TARGET_SECMARK=m
+CONFIG_NETFILTER_XT_TARGET_TCPMSS=m
+CONFIG_NETFILTER_XT_TARGET_TCPOPTSTRIP=m
+
+#
+# Xtables matches
+#
+CONFIG_NETFILTER_XT_MATCH_ADDRTYPE=m
+CONFIG_NETFILTER_XT_MATCH_BPF=m
+CONFIG_NETFILTER_XT_MATCH_CGROUP=m
+CONFIG_NETFILTER_XT_MATCH_CLUSTER=m
+CONFIG_NETFILTER_XT_MATCH_COMMENT=m
+CONFIG_NETFILTER_XT_MATCH_CONNBYTES=m
+CONFIG_NETFILTER_XT_MATCH_CONNLABEL=m
+CONFIG_NETFILTER_XT_MATCH_CONNLIMIT=m
+CONFIG_NETFILTER_XT_MATCH_CONNMARK=m
+CONFIG_NETFILTER_XT_MATCH_CONNTRACK=m
+CONFIG_NETFILTER_XT_MATCH_CPU=m
+CONFIG_NETFILTER_XT_MATCH_DCCP=m
+CONFIG_NETFILTER_XT_MATCH_DEVGROUP=m
+CONFIG_NETFILTER_XT_MATCH_DSCP=m
+CONFIG_NETFILTER_XT_MATCH_ECN=m
+CONFIG_NETFILTER_XT_MATCH_ESP=m
+CONFIG_NETFILTER_XT_MATCH_HASHLIMIT=m
+CONFIG_NETFILTER_XT_MATCH_HELPER=m
+CONFIG_NETFILTER_XT_MATCH_HL=m
+CONFIG_NETFILTER_XT_MATCH_IPCOMP=m
+CONFIG_NETFILTER_XT_MATCH_IPRANGE=m
+CONFIG_NETFILTER_XT_MATCH_IPVS=m
+CONFIG_NETFILTER_XT_MATCH_L2TP=m
+CONFIG_NETFILTER_XT_MATCH_LENGTH=m
+CONFIG_NETFILTER_XT_MATCH_LIMIT=m
+CONFIG_NETFILTER_XT_MATCH_MAC=m
+CONFIG_NETFILTER_XT_MATCH_MARK=m
+CONFIG_NETFILTER_XT_MATCH_MULTIPORT=m
+CONFIG_NETFILTER_XT_MATCH_NFACCT=m
+CONFIG_NETFILTER_XT_MATCH_OSF=m
+CONFIG_NETFILTER_XT_MATCH_OWNER=m
+CONFIG_NETFILTER_XT_MATCH_POLICY=m
+CONFIG_NETFILTER_XT_MATCH_PHYSDEV=m
+CONFIG_NETFILTER_XT_MATCH_PKTTYPE=m
+CONFIG_NETFILTER_XT_MATCH_QUOTA=m
+CONFIG_NETFILTER_XT_MATCH_RATEEST=m
+CONFIG_NETFILTER_XT_MATCH_REALM=m
+CONFIG_NETFILTER_XT_MATCH_RECENT=m
+CONFIG_NETFILTER_XT_MATCH_SCTP=m
+CONFIG_NETFILTER_XT_MATCH_SOCKET=m
+CONFIG_NETFILTER_XT_MATCH_STATE=m
+CONFIG_NETFILTER_XT_MATCH_STATISTIC=m
+CONFIG_NETFILTER_XT_MATCH_STRING=m
+CONFIG_NETFILTER_XT_MATCH_TCPMSS=m
+CONFIG_NETFILTER_XT_MATCH_TIME=m
+CONFIG_NETFILTER_XT_MATCH_U32=m
+# end of Core Netfilter Configuration
+
+CONFIG_IP_SET=m
+CONFIG_IP_SET_MAX=256
+CONFIG_IP_SET_BITMAP_IP=m
+CONFIG_IP_SET_BITMAP_IPMAC=m
+CONFIG_IP_SET_BITMAP_PORT=m
+CONFIG_IP_SET_HASH_IP=m
+CONFIG_IP_SET_HASH_IPMARK=m
+CONFIG_IP_SET_HASH_IPPORT=m
+CONFIG_IP_SET_HASH_IPPORTIP=m
+CONFIG_IP_SET_HASH_IPPORTNET=m
+# CONFIG_IP_SET_HASH_IPMAC is not set
+CONFIG_IP_SET_HASH_MAC=m
+CONFIG_IP_SET_HASH_NETPORTNET=m
+CONFIG_IP_SET_HASH_NET=m
+CONFIG_IP_SET_HASH_NETNET=m
+CONFIG_IP_SET_HASH_NETPORT=m
+CONFIG_IP_SET_HASH_NETIFACE=m
+CONFIG_IP_SET_LIST_SET=m
+CONFIG_IP_VS=m
+CONFIG_IP_VS_IPV6=y
+# CONFIG_IP_VS_DEBUG is not set
+CONFIG_IP_VS_TAB_BITS=12
+
+#
+# IPVS transport protocol load balancing support
+#
+CONFIG_IP_VS_PROTO_TCP=y
+CONFIG_IP_VS_PROTO_UDP=y
+CONFIG_IP_VS_PROTO_AH_ESP=y
+CONFIG_IP_VS_PROTO_ESP=y
+CONFIG_IP_VS_PROTO_AH=y
+CONFIG_IP_VS_PROTO_SCTP=y
+
+#
+# IPVS scheduler
+#
+CONFIG_IP_VS_RR=m
+CONFIG_IP_VS_WRR=m
+CONFIG_IP_VS_LC=m
+CONFIG_IP_VS_WLC=m
+CONFIG_IP_VS_FO=m
+# CONFIG_IP_VS_OVF is not set
+CONFIG_IP_VS_LBLC=m
+CONFIG_IP_VS_LBLCR=m
+CONFIG_IP_VS_DH=m
+CONFIG_IP_VS_SH=m
+# CONFIG_IP_VS_MH is not set
+CONFIG_IP_VS_SED=m
+CONFIG_IP_VS_NQ=m
+
+#
+# IPVS SH scheduler
+#
+CONFIG_IP_VS_SH_TAB_BITS=8
+
+#
+# IPVS MH scheduler
+#
+CONFIG_IP_VS_MH_TAB_INDEX=12
+
+#
+# IPVS application helper
+#
+CONFIG_IP_VS_FTP=m
+CONFIG_IP_VS_NFCT=y
+CONFIG_IP_VS_PE_SIP=m
+
+#
+# IP: Netfilter Configuration
+#
+CONFIG_NF_DEFRAG_IPV4=m
+CONFIG_NF_SOCKET_IPV4=m
+CONFIG_NF_TPROXY_IPV4=m
+CONFIG_NF_TABLES_IPV4=y
+CONFIG_NFT_REJECT_IPV4=m
+CONFIG_NFT_DUP_IPV4=m
+CONFIG_NFT_FIB_IPV4=m
+CONFIG_NF_TABLES_ARP=y
+# CONFIG_NF_FLOW_TABLE_IPV4 is not set
+CONFIG_NF_DUP_IPV4=m
+CONFIG_NF_LOG_ARP=m
+CONFIG_NF_LOG_IPV4=m
+CONFIG_NF_REJECT_IPV4=y
+CONFIG_NF_NAT_SNMP_BASIC=m
+CONFIG_NF_NAT_PPTP=m
+CONFIG_NF_NAT_H323=m
+CONFIG_IP_NF_IPTABLES=y
+CONFIG_IP_NF_MATCH_AH=m
+CONFIG_IP_NF_MATCH_ECN=m
+CONFIG_IP_NF_MATCH_RPFILTER=m
+CONFIG_IP_NF_MATCH_TTL=m
+CONFIG_IP_NF_FILTER=y
+CONFIG_IP_NF_TARGET_REJECT=y
+CONFIG_IP_NF_TARGET_SYNPROXY=m
+CONFIG_IP_NF_NAT=m
+CONFIG_IP_NF_TARGET_MASQUERADE=m
+CONFIG_IP_NF_TARGET_NETMAP=m
+CONFIG_IP_NF_TARGET_REDIRECT=m
+CONFIG_IP_NF_MANGLE=m
+CONFIG_IP_NF_TARGET_CLUSTERIP=m
+CONFIG_IP_NF_TARGET_ECN=m
+CONFIG_IP_NF_TARGET_TTL=m
+CONFIG_IP_NF_RAW=m
+CONFIG_IP_NF_SECURITY=m
+CONFIG_IP_NF_ARPTABLES=m
+CONFIG_IP_NF_ARPFILTER=m
+CONFIG_IP_NF_ARP_MANGLE=m
+# end of IP: Netfilter Configuration
+
+#
+# IPv6: Netfilter Configuration
+#
+CONFIG_NF_SOCKET_IPV6=m
+CONFIG_NF_TPROXY_IPV6=m
+CONFIG_NF_TABLES_IPV6=y
+CONFIG_NFT_REJECT_IPV6=m
+CONFIG_NFT_DUP_IPV6=m
+CONFIG_NFT_FIB_IPV6=m
+CONFIG_NF_FLOW_TABLE_IPV6=m
+CONFIG_NF_DUP_IPV6=m
+CONFIG_NF_REJECT_IPV6=m
+CONFIG_NF_LOG_IPV6=m
+CONFIG_IP6_NF_IPTABLES=m
+CONFIG_IP6_NF_MATCH_AH=m
+CONFIG_IP6_NF_MATCH_EUI64=m
+CONFIG_IP6_NF_MATCH_FRAG=m
+CONFIG_IP6_NF_MATCH_OPTS=m
+CONFIG_IP6_NF_MATCH_HL=m
+CONFIG_IP6_NF_MATCH_IPV6HEADER=m
+CONFIG_IP6_NF_MATCH_MH=m
+CONFIG_IP6_NF_MATCH_RPFILTER=m
+CONFIG_IP6_NF_MATCH_RT=m
+# CONFIG_IP6_NF_MATCH_SRH is not set
+CONFIG_IP6_NF_TARGET_HL=m
+CONFIG_IP6_NF_FILTER=m
+CONFIG_IP6_NF_TARGET_REJECT=m
+CONFIG_IP6_NF_TARGET_SYNPROXY=m
+CONFIG_IP6_NF_MANGLE=m
+CONFIG_IP6_NF_RAW=m
+CONFIG_IP6_NF_SECURITY=m
+CONFIG_IP6_NF_NAT=m
+CONFIG_IP6_NF_TARGET_MASQUERADE=m
+# CONFIG_IP6_NF_TARGET_NPT is not set
+# end of IPv6: Netfilter Configuration
+
+CONFIG_NF_DEFRAG_IPV6=m
+CONFIG_NF_TABLES_BRIDGE=m
+CONFIG_NFT_BRIDGE_META=m
+CONFIG_NFT_BRIDGE_REJECT=m
+CONFIG_NF_LOG_BRIDGE=m
+CONFIG_NF_CONNTRACK_BRIDGE=m
+CONFIG_BRIDGE_NF_EBTABLES=m
+CONFIG_BRIDGE_EBT_BROUTE=m
+CONFIG_BRIDGE_EBT_T_FILTER=m
+CONFIG_BRIDGE_EBT_T_NAT=m
+CONFIG_BRIDGE_EBT_802_3=m
+CONFIG_BRIDGE_EBT_AMONG=m
+CONFIG_BRIDGE_EBT_ARP=m
+CONFIG_BRIDGE_EBT_IP=m
+CONFIG_BRIDGE_EBT_IP6=m
+CONFIG_BRIDGE_EBT_LIMIT=m
+CONFIG_BRIDGE_EBT_MARK=m
+CONFIG_BRIDGE_EBT_PKTTYPE=m
+CONFIG_BRIDGE_EBT_STP=m
+CONFIG_BRIDGE_EBT_VLAN=m
+CONFIG_BRIDGE_EBT_ARPREPLY=m
+CONFIG_BRIDGE_EBT_DNAT=m
+CONFIG_BRIDGE_EBT_MARK_T=m
+CONFIG_BRIDGE_EBT_REDIRECT=m
+CONFIG_BRIDGE_EBT_SNAT=m
+CONFIG_BRIDGE_EBT_LOG=m
+CONFIG_BRIDGE_EBT_NFLOG=m
+# CONFIG_BPFILTER is not set
+CONFIG_IP_DCCP=m
+CONFIG_INET_DCCP_DIAG=m
+
+#
+# DCCP CCIDs Configuration
+#
+# CONFIG_IP_DCCP_CCID2_DEBUG is not set
+CONFIG_IP_DCCP_CCID3=y
+# CONFIG_IP_DCCP_CCID3_DEBUG is not set
+CONFIG_IP_DCCP_TFRC_LIB=y
+# end of DCCP CCIDs Configuration
+
+#
+# DCCP Kernel Hacking
+#
+# CONFIG_IP_DCCP_DEBUG is not set
+# end of DCCP Kernel Hacking
+
+CONFIG_IP_SCTP=m
+# CONFIG_SCTP_DBG_OBJCNT is not set
+# CONFIG_SCTP_DEFAULT_COOKIE_HMAC_MD5 is not set
+CONFIG_SCTP_DEFAULT_COOKIE_HMAC_SHA1=y
+# CONFIG_SCTP_DEFAULT_COOKIE_HMAC_NONE is not set
+CONFIG_SCTP_COOKIE_HMAC_MD5=y
+CONFIG_SCTP_COOKIE_HMAC_SHA1=y
+CONFIG_INET_SCTP_DIAG=m
+CONFIG_RDS=m
+CONFIG_RDS_RDMA=m
+CONFIG_RDS_TCP=m
+# CONFIG_RDS_DEBUG is not set
+CONFIG_TIPC=m
+# CONFIG_TIPC_MEDIA_IB is not set
+CONFIG_TIPC_MEDIA_UDP=y
+CONFIG_TIPC_CRYPTO=y
+CONFIG_TIPC_DIAG=m
+# CONFIG_ATM is not set
+CONFIG_L2TP=m
+CONFIG_L2TP_DEBUGFS=m
+CONFIG_L2TP_V3=y
+CONFIG_L2TP_IP=m
+CONFIG_L2TP_ETH=m
+CONFIG_STP=m
+CONFIG_GARP=m
+CONFIG_MRP=m
+CONFIG_BRIDGE=m
+CONFIG_BRIDGE_IGMP_SNOOPING=y
+CONFIG_BRIDGE_VLAN_FILTERING=y
+CONFIG_HAVE_NET_DSA=y
+CONFIG_NET_DSA=m
+# CONFIG_NET_DSA_TAG_AR9331 is not set
+CONFIG_NET_DSA_TAG_BRCM_COMMON=m
+CONFIG_NET_DSA_TAG_BRCM=m
+CONFIG_NET_DSA_TAG_BRCM_PREPEND=m
+# CONFIG_NET_DSA_TAG_GSWIP is not set
+CONFIG_NET_DSA_TAG_DSA=m
+CONFIG_NET_DSA_TAG_EDSA=m
+# CONFIG_NET_DSA_TAG_MTK is not set
+# CONFIG_NET_DSA_TAG_KSZ is not set
+# CONFIG_NET_DSA_TAG_OCELOT is not set
+# CONFIG_NET_DSA_TAG_QCA is not set
+# CONFIG_NET_DSA_TAG_LAN9303 is not set
+# CONFIG_NET_DSA_TAG_SJA1105 is not set
+CONFIG_NET_DSA_TAG_TRAILER=m
+CONFIG_VLAN_8021Q=m
+CONFIG_VLAN_8021Q_GVRP=y
+CONFIG_VLAN_8021Q_MVRP=y
+# CONFIG_DECNET is not set
+CONFIG_LLC=m
+# CONFIG_LLC2 is not set
+# CONFIG_ATALK is not set
+# CONFIG_X25 is not set
+# CONFIG_LAPB is not set
+# CONFIG_PHONET is not set
+CONFIG_6LOWPAN=m
+# CONFIG_6LOWPAN_DEBUGFS is not set
+CONFIG_6LOWPAN_NHC=m
+CONFIG_6LOWPAN_NHC_DEST=m
+CONFIG_6LOWPAN_NHC_FRAGMENT=m
+CONFIG_6LOWPAN_NHC_HOP=m
+CONFIG_6LOWPAN_NHC_IPV6=m
+CONFIG_6LOWPAN_NHC_MOBILITY=m
+CONFIG_6LOWPAN_NHC_ROUTING=m
+CONFIG_6LOWPAN_NHC_UDP=m
+# CONFIG_6LOWPAN_GHC_EXT_HDR_HOP is not set
+# CONFIG_6LOWPAN_GHC_UDP is not set
+# CONFIG_6LOWPAN_GHC_ICMPV6 is not set
+# CONFIG_6LOWPAN_GHC_EXT_HDR_DEST is not set
+# CONFIG_6LOWPAN_GHC_EXT_HDR_FRAG is not set
+# CONFIG_6LOWPAN_GHC_EXT_HDR_ROUTE is not set
+CONFIG_IEEE802154=m
+# CONFIG_IEEE802154_NL802154_EXPERIMENTAL is not set
+CONFIG_IEEE802154_SOCKET=m
+CONFIG_IEEE802154_6LOWPAN=m
+CONFIG_MAC802154=m
+CONFIG_NET_SCHED=y
+
+#
+# Queueing/Scheduling
+#
+CONFIG_NET_SCH_CBQ=m
+CONFIG_NET_SCH_HTB=m
+CONFIG_NET_SCH_HFSC=m
+CONFIG_NET_SCH_PRIO=m
+CONFIG_NET_SCH_MULTIQ=m
+CONFIG_NET_SCH_RED=m
+CONFIG_NET_SCH_SFB=m
+CONFIG_NET_SCH_SFQ=m
+CONFIG_NET_SCH_TEQL=m
+CONFIG_NET_SCH_TBF=m
+# CONFIG_NET_SCH_CBS is not set
+# CONFIG_NET_SCH_ETF is not set
+# CONFIG_NET_SCH_TAPRIO is not set
+CONFIG_NET_SCH_GRED=m
+CONFIG_NET_SCH_DSMARK=m
+CONFIG_NET_SCH_NETEM=m
+CONFIG_NET_SCH_DRR=m
+CONFIG_NET_SCH_MQPRIO=m
+# CONFIG_NET_SCH_SKBPRIO is not set
+CONFIG_NET_SCH_CHOKE=m
+CONFIG_NET_SCH_QFQ=m
+CONFIG_NET_SCH_CODEL=m
+CONFIG_NET_SCH_FQ_CODEL=y
+# CONFIG_NET_SCH_CAKE is not set
+CONFIG_NET_SCH_FQ=m
+CONFIG_NET_SCH_HHF=m
+CONFIG_NET_SCH_PIE=m
+# CONFIG_NET_SCH_FQ_PIE is not set
+CONFIG_NET_SCH_INGRESS=m
+CONFIG_NET_SCH_PLUG=m
+# CONFIG_NET_SCH_ETS is not set
+# CONFIG_NET_SCH_DEFAULT is not set
+
+#
+# Classification
+#
+CONFIG_NET_CLS=y
+CONFIG_NET_CLS_BASIC=m
+CONFIG_NET_CLS_TCINDEX=m
+CONFIG_NET_CLS_ROUTE4=m
+CONFIG_NET_CLS_FW=m
+CONFIG_NET_CLS_U32=m
+CONFIG_CLS_U32_PERF=y
+CONFIG_CLS_U32_MARK=y
+CONFIG_NET_CLS_RSVP=m
+CONFIG_NET_CLS_RSVP6=m
+CONFIG_NET_CLS_FLOW=m
+CONFIG_NET_CLS_CGROUP=y
+CONFIG_NET_CLS_BPF=m
+# CONFIG_NET_CLS_FLOWER is not set
+# CONFIG_NET_CLS_MATCHALL is not set
+CONFIG_NET_EMATCH=y
+CONFIG_NET_EMATCH_STACK=32
+CONFIG_NET_EMATCH_CMP=m
+CONFIG_NET_EMATCH_NBYTE=m
+CONFIG_NET_EMATCH_U32=m
+CONFIG_NET_EMATCH_META=m
+CONFIG_NET_EMATCH_TEXT=m
+CONFIG_NET_EMATCH_IPSET=m
+# CONFIG_NET_EMATCH_IPT is not set
+CONFIG_NET_CLS_ACT=y
+CONFIG_NET_ACT_POLICE=m
+CONFIG_NET_ACT_GACT=m
+CONFIG_GACT_PROB=y
+CONFIG_NET_ACT_MIRRED=m
+# CONFIG_NET_ACT_SAMPLE is not set
+CONFIG_NET_ACT_IPT=m
+CONFIG_NET_ACT_NAT=m
+CONFIG_NET_ACT_PEDIT=m
+CONFIG_NET_ACT_SIMP=m
+CONFIG_NET_ACT_SKBEDIT=m
+CONFIG_NET_ACT_CSUM=m
+# CONFIG_NET_ACT_MPLS is not set
+CONFIG_NET_ACT_VLAN=m
+# CONFIG_NET_ACT_BPF is not set
+# CONFIG_NET_ACT_CONNMARK is not set
+# CONFIG_NET_ACT_CTINFO is not set
+# CONFIG_NET_ACT_SKBMOD is not set
+# CONFIG_NET_ACT_IFE is not set
+# CONFIG_NET_ACT_TUNNEL_KEY is not set
+# CONFIG_NET_ACT_CT is not set
+# CONFIG_NET_TC_SKB_EXT is not set
+CONFIG_NET_SCH_FIFO=y
+CONFIG_DCB=y
+CONFIG_DNS_RESOLVER=m
+CONFIG_BATMAN_ADV=m
+CONFIG_BATMAN_ADV_BATMAN_V=y
+CONFIG_BATMAN_ADV_BLA=y
+CONFIG_BATMAN_ADV_DAT=y
+CONFIG_BATMAN_ADV_NC=y
+CONFIG_BATMAN_ADV_MCAST=y
+# CONFIG_BATMAN_ADV_DEBUGFS is not set
+# CONFIG_BATMAN_ADV_DEBUG is not set
+# CONFIG_BATMAN_ADV_SYSFS is not set
+# CONFIG_BATMAN_ADV_TRACING is not set
+CONFIG_OPENVSWITCH=m
+CONFIG_OPENVSWITCH_GRE=m
+CONFIG_OPENVSWITCH_VXLAN=m
+CONFIG_OPENVSWITCH_GENEVE=m
+CONFIG_VSOCKETS=m
+CONFIG_VSOCKETS_DIAG=m
+CONFIG_VSOCKETS_LOOPBACK=m
+CONFIG_VMWARE_VMCI_VSOCKETS=m
+# CONFIG_VIRTIO_VSOCKETS is not set
+CONFIG_VIRTIO_VSOCKETS_COMMON=m
+# CONFIG_HYPERV_VSOCKETS is not set
+CONFIG_NETLINK_DIAG=m
+CONFIG_MPLS=y
+CONFIG_NET_MPLS_GSO=m
+# CONFIG_MPLS_ROUTING is not set
+CONFIG_NET_NSH=m
+# CONFIG_HSR is not set
+CONFIG_NET_SWITCHDEV=y
+CONFIG_NET_L3_MASTER_DEV=y
+# CONFIG_NET_NCSI is not set
+CONFIG_RPS=y
+CONFIG_RFS_ACCEL=y
+CONFIG_XPS=y
+CONFIG_CGROUP_NET_PRIO=y
+CONFIG_CGROUP_NET_CLASSID=y
+CONFIG_NET_RX_BUSY_POLL=y
+CONFIG_BQL=y
+CONFIG_BPF_JIT=y
+CONFIG_NET_FLOW_LIMIT=y
+
+#
+# Network testing
+#
+CONFIG_NET_PKTGEN=m
+CONFIG_NET_DROP_MONITOR=y
+# end of Network testing
+# end of Networking options
+
+# CONFIG_HAMRADIO is not set
+# CONFIG_CAN is not set
+# CONFIG_BT is not set
+# CONFIG_AF_RXRPC is not set
+# CONFIG_AF_KCM is not set
+CONFIG_FIB_RULES=y
+# CONFIG_WIRELESS is not set
+# CONFIG_WIMAX is not set
+# CONFIG_RFKILL is not set
+CONFIG_NET_9P=m
+CONFIG_NET_9P_VIRTIO=m
+# CONFIG_NET_9P_XEN is not set
+CONFIG_NET_9P_RDMA=m
+# CONFIG_NET_9P_DEBUG is not set
+# CONFIG_CAIF is not set
+CONFIG_CEPH_LIB=m
+CONFIG_CEPH_LIB_PRETTYDEBUG=y
+CONFIG_CEPH_LIB_USE_DNS_RESOLVER=y
+# CONFIG_NFC is not set
+# CONFIG_PSAMPLE is not set
+# CONFIG_NET_IFE is not set
+# CONFIG_LWTUNNEL is not set
+CONFIG_DST_CACHE=y
+CONFIG_GRO_CELLS=y
+CONFIG_NET_DEVLINK=y
+CONFIG_PAGE_POOL=y
+CONFIG_FAILOVER=m
+CONFIG_ETHTOOL_NETLINK=y
+CONFIG_HAVE_EBPF_JIT=y
+
+#
+# Device Drivers
+#
+CONFIG_HAVE_EISA=y
+# CONFIG_EISA is not set
+CONFIG_HAVE_PCI=y
+CONFIG_PCI=y
+CONFIG_PCI_DOMAINS=y
+CONFIG_PCIEPORTBUS=y
+CONFIG_HOTPLUG_PCI_PCIE=y
+CONFIG_PCIEAER=y
+CONFIG_PCIEAER_INJECT=m
+CONFIG_PCIE_ECRC=y
+CONFIG_PCIEASPM=y
+CONFIG_PCIEASPM_DEFAULT=y
+# CONFIG_PCIEASPM_POWERSAVE is not set
+# CONFIG_PCIEASPM_POWER_SUPERSAVE is not set
+# CONFIG_PCIEASPM_PERFORMANCE is not set
+CONFIG_PCIE_PME=y
+# CONFIG_PCIE_DPC is not set
+# CONFIG_PCIE_PTM is not set
+# CONFIG_PCIE_BW is not set
+CONFIG_PCI_MSI=y
+CONFIG_PCI_MSI_IRQ_DOMAIN=y
+CONFIG_PCI_QUIRKS=y
+# CONFIG_PCI_DEBUG is not set
+# CONFIG_PCI_REALLOC_ENABLE_AUTO is not set
+CONFIG_PCI_STUB=y
+# CONFIG_PCI_PF_STUB is not set
+CONFIG_XEN_PCIDEV_FRONTEND=m
+CONFIG_PCI_ATS=y
+CONFIG_PCI_LOCKLESS_CONFIG=y
+CONFIG_PCI_IOV=y
+CONFIG_PCI_PRI=y
+CONFIG_PCI_PASID=y
+CONFIG_PCI_LABEL=y
+# CONFIG_PCI_HYPERV is not set
+CONFIG_HOTPLUG_PCI=y
+CONFIG_HOTPLUG_PCI_ACPI=y
+CONFIG_HOTPLUG_PCI_ACPI_IBM=m
+# CONFIG_HOTPLUG_PCI_CPCI is not set
+# CONFIG_HOTPLUG_PCI_SHPC is not set
+
+#
+# PCI controller drivers
+#
+# CONFIG_VMD is not set
+# CONFIG_PCI_HYPERV_INTERFACE is not set
+
+#
+# DesignWare PCI Core Support
+#
+# CONFIG_PCIE_DW_PLAT_HOST is not set
+# CONFIG_PCI_MESON is not set
+# end of DesignWare PCI Core Support
+
+#
+# Mobiveil PCIe Core Support
+#
+# end of Mobiveil PCIe Core Support
+
+#
+# Cadence PCIe controllers support
+#
+# end of Cadence PCIe controllers support
+# end of PCI controller drivers
+
+#
+# PCI Endpoint
+#
+# CONFIG_PCI_ENDPOINT is not set
+# end of PCI Endpoint
+
+#
+# PCI switch controller drivers
+#
+# CONFIG_PCI_SW_SWITCHTEC is not set
+# end of PCI switch controller drivers
+
+CONFIG_PCCARD=y
+CONFIG_PCMCIA=y
+CONFIG_PCMCIA_LOAD_CIS=y
+CONFIG_CARDBUS=y
+
+#
+# PC-card bridges
+#
+CONFIG_YENTA=m
+CONFIG_YENTA_O2=y
+CONFIG_YENTA_RICOH=y
+CONFIG_YENTA_TI=y
+CONFIG_YENTA_ENE_TUNE=y
+CONFIG_YENTA_TOSHIBA=y
+CONFIG_PD6729=m
+CONFIG_I82092=m
+CONFIG_PCCARD_NONSTATIC=y
+# CONFIG_RAPIDIO is not set
+
+#
+# Generic Driver Options
+#
+# CONFIG_UEVENT_HELPER is not set
+CONFIG_DEVTMPFS=y
+CONFIG_DEVTMPFS_MOUNT=y
+CONFIG_STANDALONE=y
+CONFIG_PREVENT_FIRMWARE_BUILD=y
+
+#
+# Firmware loader
+#
+CONFIG_FW_LOADER=y
+CONFIG_FW_LOADER_PAGED_BUF=y
+CONFIG_EXTRA_FIRMWARE=""
+CONFIG_FW_LOADER_USER_HELPER=y
+# CONFIG_FW_LOADER_USER_HELPER_FALLBACK is not set
+# CONFIG_FW_LOADER_COMPRESS is not set
+CONFIG_FW_CACHE=y
+# end of Firmware loader
+
+CONFIG_ALLOW_DEV_COREDUMP=y
+# CONFIG_DEBUG_DRIVER is not set
+CONFIG_DEBUG_DEVRES=y
+# CONFIG_DEBUG_TEST_DRIVER_REMOVE is not set
+# CONFIG_TEST_ASYNC_DRIVER_PROBE is not set
+CONFIG_SYS_HYPERVISOR=y
+CONFIG_GENERIC_CPU_AUTOPROBE=y
+CONFIG_GENERIC_CPU_VULNERABILITIES=y
+CONFIG_REGMAP=y
+CONFIG_REGMAP_I2C=m
+# end of Generic Driver Options
+
+#
+# Bus devices
+#
+# CONFIG_MHI_BUS is not set
+# end of Bus devices
+
+CONFIG_CONNECTOR=y
+CONFIG_PROC_EVENTS=y
+# CONFIG_GNSS is not set
+# CONFIG_MTD is not set
+# CONFIG_OF is not set
+CONFIG_ARCH_MIGHT_HAVE_PC_PARPORT=y
+CONFIG_PARPORT=m
+CONFIG_PARPORT_PC=m
+CONFIG_PARPORT_SERIAL=m
+# CONFIG_PARPORT_PC_FIFO is not set
+# CONFIG_PARPORT_PC_SUPERIO is not set
+CONFIG_PARPORT_PC_PCMCIA=m
+# CONFIG_PARPORT_AX88796 is not set
+CONFIG_PARPORT_1284=y
+CONFIG_PARPORT_NOT_PC=y
+CONFIG_PNP=y
+# CONFIG_PNP_DEBUG_MESSAGES is not set
+
+#
+# Protocols
+#
+CONFIG_PNPACPI=y
+CONFIG_BLK_DEV=y
+CONFIG_BLK_DEV_NULL_BLK=m
+CONFIG_BLK_DEV_FD=m
+CONFIG_CDROM=y
+# CONFIG_PARIDE is not set
+CONFIG_BLK_DEV_PCIESSD_MTIP32XX=m
+CONFIG_ZRAM=m
+# CONFIG_ZRAM_WRITEBACK is not set
+# CONFIG_ZRAM_MEMORY_TRACKING is not set
+CONFIG_BLK_DEV_UMEM=m
+CONFIG_BLK_DEV_LOOP=m
+CONFIG_BLK_DEV_LOOP_MIN_COUNT=0
+# CONFIG_BLK_DEV_CRYPTOLOOP is not set
+CONFIG_BLK_DEV_DRBD=m
+# CONFIG_DRBD_FAULT_INJECTION is not set
+CONFIG_BLK_DEV_NBD=m
+CONFIG_BLK_DEV_SKD=m
+CONFIG_BLK_DEV_SX8=m
+CONFIG_BLK_DEV_RAM=m
+CONFIG_BLK_DEV_RAM_COUNT=16
+CONFIG_BLK_DEV_RAM_SIZE=16384
+CONFIG_CDROM_PKTCDVD=m
+CONFIG_CDROM_PKTCDVD_BUFFERS=8
+# CONFIG_CDROM_PKTCDVD_WCACHE is not set
+CONFIG_ATA_OVER_ETH=m
+CONFIG_XEN_BLKDEV_FRONTEND=m
+CONFIG_XEN_BLKDEV_BACKEND=m
+CONFIG_VIRTIO_BLK=m
+CONFIG_BLK_DEV_RBD=m
+# CONFIG_BLK_DEV_RSXX is not set
+
+#
+# NVME Support
+#
+CONFIG_NVME_CORE=m
+CONFIG_BLK_DEV_NVME=m
+# CONFIG_NVME_MULTIPATH is not set
+# CONFIG_NVME_HWMON is not set
+# CONFIG_NVME_RDMA is not set
+# CONFIG_NVME_FC is not set
+# CONFIG_NVME_TCP is not set
+# CONFIG_NVME_TARGET is not set
+# end of NVME Support
+
+#
+# Misc devices
+#
+CONFIG_SENSORS_LIS3LV02D=m
+# CONFIG_AD525X_DPOT is not set
+# CONFIG_DUMMY_IRQ is not set
+# CONFIG_IBM_ASM is not set
+# CONFIG_PHANTOM is not set
+CONFIG_TIFM_CORE=m
+CONFIG_TIFM_7XX1=m
+# CONFIG_ICS932S401 is not set
+CONFIG_ENCLOSURE_SERVICES=m
+CONFIG_SGI_XP=m
+CONFIG_HP_ILO=m
+CONFIG_SGI_GRU=m
+# CONFIG_SGI_GRU_DEBUG is not set
+CONFIG_APDS9802ALS=m
+CONFIG_ISL29003=m
+CONFIG_ISL29020=m
+CONFIG_SENSORS_TSL2550=m
+CONFIG_SENSORS_BH1770=m
+CONFIG_SENSORS_APDS990X=m
+# CONFIG_HMC6352 is not set
+# CONFIG_DS1682 is not set
+CONFIG_VMWARE_BALLOON=m
+# CONFIG_SRAM is not set
+# CONFIG_PCI_ENDPOINT_TEST is not set
+# CONFIG_XILINX_SDFEC is not set
+CONFIG_PVPANIC=m
+# CONFIG_C2PORT is not set
+
+#
+# EEPROM support
+#
+CONFIG_EEPROM_AT24=m
+CONFIG_EEPROM_LEGACY=m
+CONFIG_EEPROM_MAX6875=m
+CONFIG_EEPROM_93CX6=m
+# CONFIG_EEPROM_IDT_89HPESX is not set
+# CONFIG_EEPROM_EE1004 is not set
+# end of EEPROM support
+
+CONFIG_CB710_CORE=m
+# CONFIG_CB710_DEBUG is not set
+CONFIG_CB710_DEBUG_ASSUMPTIONS=y
+
+#
+# Texas Instruments shared transport line discipline
+#
+# CONFIG_TI_ST is not set
+# end of Texas Instruments shared transport line discipline
+
+CONFIG_SENSORS_LIS3_I2C=m
+CONFIG_ALTERA_STAPL=m
+CONFIG_INTEL_MEI=m
+CONFIG_INTEL_MEI_ME=m
+CONFIG_INTEL_MEI_TXE=m
+CONFIG_VMWARE_VMCI=m
+
+#
+# Intel MIC & related support
+#
+CONFIG_INTEL_MIC_BUS=m
+# CONFIG_SCIF_BUS is not set
+# CONFIG_VOP_BUS is not set
+# end of Intel MIC & related support
+
+# CONFIG_GENWQE is not set
+# CONFIG_ECHO is not set
+# CONFIG_MISC_ALCOR_PCI is not set
+# CONFIG_MISC_RTSX_PCI is not set
+# CONFIG_MISC_RTSX_USB is not set
+# CONFIG_HABANA_AI is not set
+# CONFIG_UACCE is not set
+# end of Misc devices
+
+CONFIG_HAVE_IDE=y
+# CONFIG_IDE is not set
+
+#
+# SCSI device support
+#
+CONFIG_SCSI_MOD=y
+CONFIG_RAID_ATTRS=m
+CONFIG_SCSI=y
+CONFIG_SCSI_DMA=y
+CONFIG_SCSI_NETLINK=y
+CONFIG_SCSI_PROC_FS=y
+
+#
+# SCSI support type (disk, tape, CD-ROM)
+#
+CONFIG_BLK_DEV_SD=y
+CONFIG_CHR_DEV_ST=m
+CONFIG_BLK_DEV_SR=y
+CONFIG_CHR_DEV_SG=y
+CONFIG_CHR_DEV_SCH=m
+CONFIG_SCSI_ENCLOSURE=m
+CONFIG_SCSI_CONSTANTS=y
+CONFIG_SCSI_LOGGING=y
+CONFIG_SCSI_SCAN_ASYNC=y
+
+#
+# SCSI Transports
+#
+CONFIG_SCSI_SPI_ATTRS=m
+CONFIG_SCSI_FC_ATTRS=m
+CONFIG_SCSI_ISCSI_ATTRS=m
+CONFIG_SCSI_SAS_ATTRS=m
+CONFIG_SCSI_SAS_LIBSAS=m
+CONFIG_SCSI_SAS_ATA=y
+CONFIG_SCSI_SAS_HOST_SMP=y
+CONFIG_SCSI_SRP_ATTRS=m
+# end of SCSI Transports
+
+CONFIG_SCSI_LOWLEVEL=y
+CONFIG_ISCSI_TCP=m
+CONFIG_ISCSI_BOOT_SYSFS=m
+CONFIG_SCSI_CXGB3_ISCSI=m
+CONFIG_SCSI_CXGB4_ISCSI=m
+CONFIG_SCSI_BNX2_ISCSI=m
+CONFIG_SCSI_BNX2X_FCOE=m
+CONFIG_BE2ISCSI=m
+CONFIG_BLK_DEV_3W_XXXX_RAID=m
+CONFIG_SCSI_HPSA=m
+CONFIG_SCSI_3W_9XXX=m
+CONFIG_SCSI_3W_SAS=m
+CONFIG_SCSI_ACARD=m
+CONFIG_SCSI_AACRAID=m
+CONFIG_SCSI_AIC7XXX=m
+CONFIG_AIC7XXX_CMDS_PER_DEVICE=4
+CONFIG_AIC7XXX_RESET_DELAY_MS=15000
+# CONFIG_AIC7XXX_DEBUG_ENABLE is not set
+CONFIG_AIC7XXX_DEBUG_MASK=0
+# CONFIG_AIC7XXX_REG_PRETTY_PRINT is not set
+CONFIG_SCSI_AIC79XX=m
+CONFIG_AIC79XX_CMDS_PER_DEVICE=4
+CONFIG_AIC79XX_RESET_DELAY_MS=15000
+# CONFIG_AIC79XX_DEBUG_ENABLE is not set
+CONFIG_AIC79XX_DEBUG_MASK=0
+# CONFIG_AIC79XX_REG_PRETTY_PRINT is not set
+# CONFIG_SCSI_AIC94XX is not set
+CONFIG_SCSI_MVSAS=m
+# CONFIG_SCSI_MVSAS_DEBUG is not set
+CONFIG_SCSI_MVSAS_TASKLET=y
+CONFIG_SCSI_MVUMI=m
+# CONFIG_SCSI_DPT_I2O is not set
+CONFIG_SCSI_ADVANSYS=m
+CONFIG_SCSI_ARCMSR=m
+CONFIG_SCSI_ESAS2R=m
+CONFIG_MEGARAID_NEWGEN=y
+CONFIG_MEGARAID_MM=m
+CONFIG_MEGARAID_MAILBOX=m
+CONFIG_MEGARAID_LEGACY=m
+CONFIG_MEGARAID_SAS=m
+CONFIG_SCSI_MPT3SAS=m
+CONFIG_SCSI_MPT2SAS_MAX_SGE=128
+CONFIG_SCSI_MPT3SAS_MAX_SGE=128
+CONFIG_SCSI_MPT2SAS=m
+# CONFIG_SCSI_SMARTPQI is not set
+CONFIG_SCSI_UFSHCD=m
+CONFIG_SCSI_UFSHCD_PCI=m
+# CONFIG_SCSI_UFS_DWC_TC_PCI is not set
+# CONFIG_SCSI_UFSHCD_PLATFORM is not set
+# CONFIG_SCSI_UFS_BSG is not set
+CONFIG_SCSI_HPTIOP=m
+CONFIG_SCSI_BUSLOGIC=m
+CONFIG_SCSI_FLASHPOINT=y
+# CONFIG_SCSI_MYRB is not set
+# CONFIG_SCSI_MYRS is not set
+CONFIG_VMWARE_PVSCSI=m
+# CONFIG_XEN_SCSI_FRONTEND is not set
+CONFIG_HYPERV_STORAGE=m
+CONFIG_LIBFC=m
+CONFIG_LIBFCOE=m
+CONFIG_FCOE=m
+CONFIG_FCOE_FNIC=m
+# CONFIG_SCSI_SNIC is not set
+CONFIG_SCSI_DMX3191D=m
+# CONFIG_SCSI_FDOMAIN_PCI is not set
+CONFIG_SCSI_GDTH=m
+CONFIG_SCSI_ISCI=m
+CONFIG_SCSI_IPS=m
+CONFIG_SCSI_INITIO=m
+CONFIG_SCSI_INIA100=m
+# CONFIG_SCSI_PPA is not set
+# CONFIG_SCSI_IMM is not set
+CONFIG_SCSI_STEX=m
+CONFIG_SCSI_SYM53C8XX_2=m
+CONFIG_SCSI_SYM53C8XX_DMA_ADDRESSING_MODE=1
+CONFIG_SCSI_SYM53C8XX_DEFAULT_TAGS=16
+CONFIG_SCSI_SYM53C8XX_MAX_TAGS=64
+CONFIG_SCSI_SYM53C8XX_MMIO=y
+CONFIG_SCSI_IPR=m
+CONFIG_SCSI_IPR_TRACE=y
+CONFIG_SCSI_IPR_DUMP=y
+CONFIG_SCSI_QLOGIC_1280=m
+CONFIG_SCSI_QLA_FC=m
+CONFIG_TCM_QLA2XXX=m
+# CONFIG_TCM_QLA2XXX_DEBUG is not set
+CONFIG_SCSI_QLA_ISCSI=m
+CONFIG_SCSI_LPFC=m
+# CONFIG_SCSI_LPFC_DEBUG_FS is not set
+CONFIG_SCSI_DC395x=m
+CONFIG_SCSI_AM53C974=m
+CONFIG_SCSI_WD719X=m
+CONFIG_SCSI_DEBUG=m
+CONFIG_SCSI_PMCRAID=m
+CONFIG_SCSI_PM8001=m
+CONFIG_SCSI_BFA_FC=m
+CONFIG_SCSI_VIRTIO=m
+CONFIG_SCSI_CHELSIO_FCOE=m
+# CONFIG_SCSI_LOWLEVEL_PCMCIA is not set
+CONFIG_SCSI_DH=y
+CONFIG_SCSI_DH_RDAC=m
+CONFIG_SCSI_DH_HP_SW=m
+CONFIG_SCSI_DH_EMC=m
+CONFIG_SCSI_DH_ALUA=m
+# end of SCSI device support
+
+CONFIG_ATA=y
+CONFIG_SATA_HOST=y
+CONFIG_PATA_TIMINGS=y
+CONFIG_ATA_VERBOSE_ERROR=y
+CONFIG_ATA_FORCE=y
+CONFIG_ATA_ACPI=y
+# CONFIG_SATA_ZPODD is not set
+CONFIG_SATA_PMP=y
+
+#
+# Controllers with non-SFF native interface
+#
+CONFIG_SATA_AHCI=y
+CONFIG_SATA_MOBILE_LPM_POLICY=0
+CONFIG_SATA_AHCI_PLATFORM=m
+CONFIG_SATA_INIC162X=m
+CONFIG_SATA_ACARD_AHCI=m
+CONFIG_SATA_SIL24=m
+CONFIG_ATA_SFF=y
+
+#
+# SFF controllers with custom DMA interface
+#
+CONFIG_PDC_ADMA=m
+CONFIG_SATA_QSTOR=m
+CONFIG_SATA_SX4=m
+CONFIG_ATA_BMDMA=y
+
+#
+# SATA SFF controllers with BMDMA
+#
+CONFIG_ATA_PIIX=y
+# CONFIG_SATA_DWC is not set
+CONFIG_SATA_MV=m
+CONFIG_SATA_NV=m
+CONFIG_SATA_PROMISE=m
+CONFIG_SATA_SIL=m
+CONFIG_SATA_SIS=m
+CONFIG_SATA_SVW=m
+CONFIG_SATA_ULI=m
+CONFIG_SATA_VIA=m
+CONFIG_SATA_VITESSE=m
+
+#
+# PATA SFF controllers with BMDMA
+#
+CONFIG_PATA_ALI=m
+CONFIG_PATA_AMD=m
+CONFIG_PATA_ARTOP=m
+CONFIG_PATA_ATIIXP=m
+CONFIG_PATA_ATP867X=m
+CONFIG_PATA_CMD64X=m
+CONFIG_PATA_CYPRESS=m
+CONFIG_PATA_EFAR=m
+CONFIG_PATA_HPT366=m
+CONFIG_PATA_HPT37X=m
+CONFIG_PATA_HPT3X2N=m
+CONFIG_PATA_HPT3X3=m
+# CONFIG_PATA_HPT3X3_DMA is not set
+CONFIG_PATA_IT8213=m
+CONFIG_PATA_IT821X=m
+CONFIG_PATA_JMICRON=m
+CONFIG_PATA_MARVELL=m
+CONFIG_PATA_NETCELL=m
+CONFIG_PATA_NINJA32=m
+CONFIG_PATA_NS87415=m
+CONFIG_PATA_OLDPIIX=m
+CONFIG_PATA_OPTIDMA=m
+CONFIG_PATA_PDC2027X=m
+CONFIG_PATA_PDC_OLD=m
+# CONFIG_PATA_RADISYS is not set
+CONFIG_PATA_RDC=m
+CONFIG_PATA_SCH=m
+CONFIG_PATA_SERVERWORKS=m
+CONFIG_PATA_SIL680=m
+CONFIG_PATA_SIS=m
+CONFIG_PATA_TOSHIBA=m
+CONFIG_PATA_TRIFLEX=m
+CONFIG_PATA_VIA=m
+CONFIG_PATA_WINBOND=m
+
+#
+# PIO-only SFF controllers
+#
+CONFIG_PATA_CMD640_PCI=m
+CONFIG_PATA_MPIIX=m
+CONFIG_PATA_NS87410=m
+CONFIG_PATA_OPTI=m
+CONFIG_PATA_PCMCIA=m
+# CONFIG_PATA_RZ1000 is not set
+
+#
+# Generic fallback / legacy drivers
+#
+CONFIG_PATA_ACPI=m
+CONFIG_ATA_GENERIC=m
+# CONFIG_PATA_LEGACY is not set
+CONFIG_MD=y
+CONFIG_BLK_DEV_MD=y
+CONFIG_MD_AUTODETECT=y
+CONFIG_MD_LINEAR=m
+CONFIG_MD_RAID0=m
+CONFIG_MD_RAID1=m
+CONFIG_MD_RAID10=m
+CONFIG_MD_RAID456=m
+CONFIG_MD_MULTIPATH=m
+CONFIG_MD_FAULTY=m
+# CONFIG_MD_CLUSTER is not set
+CONFIG_BCACHE=m
+# CONFIG_BCACHE_DEBUG is not set
+# CONFIG_BCACHE_CLOSURES_DEBUG is not set
+CONFIG_BLK_DEV_DM_BUILTIN=y
+CONFIG_BLK_DEV_DM=y
+CONFIG_DM_DEBUG=y
+CONFIG_DM_BUFIO=y
+# CONFIG_DM_DEBUG_BLOCK_MANAGER_LOCKING is not set
+CONFIG_DM_BIO_PRISON=m
+CONFIG_DM_PERSISTENT_DATA=m
+# CONFIG_DM_UNSTRIPED is not set
+CONFIG_DM_CRYPT=m
+CONFIG_DM_SNAPSHOT=y
+CONFIG_DM_THIN_PROVISIONING=m
+CONFIG_DM_CACHE=m
+CONFIG_DM_CACHE_SMQ=m
+# CONFIG_DM_WRITECACHE is not set
+# CONFIG_DM_ERA is not set
+# CONFIG_DM_CLONE is not set
+CONFIG_DM_MIRROR=y
+CONFIG_DM_LOG_USERSPACE=m
+CONFIG_DM_RAID=m
+CONFIG_DM_ZERO=y
+CONFIG_DM_MULTIPATH=m
+CONFIG_DM_MULTIPATH_QL=m
+CONFIG_DM_MULTIPATH_ST=m
+CONFIG_DM_DELAY=m
+# CONFIG_DM_DUST is not set
+# CONFIG_DM_INIT is not set
+CONFIG_DM_UEVENT=y
+CONFIG_DM_FLAKEY=m
+CONFIG_DM_VERITY=m
+# CONFIG_DM_VERITY_VERIFY_ROOTHASH_SIG is not set
+# CONFIG_DM_VERITY_FEC is not set
+CONFIG_DM_SWITCH=m
+# CONFIG_DM_LOG_WRITES is not set
+# CONFIG_DM_INTEGRITY is not set
+CONFIG_TARGET_CORE=m
+CONFIG_TCM_IBLOCK=m
+CONFIG_TCM_FILEIO=m
+CONFIG_TCM_PSCSI=m
+CONFIG_TCM_USER2=m
+CONFIG_LOOPBACK_TARGET=m
+CONFIG_TCM_FC=m
+CONFIG_ISCSI_TARGET=m
+# CONFIG_ISCSI_TARGET_CXGB4 is not set
+CONFIG_FUSION=y
+CONFIG_FUSION_SPI=m
+CONFIG_FUSION_FC=m
+CONFIG_FUSION_SAS=m
+CONFIG_FUSION_MAX_SGE=40
+CONFIG_FUSION_CTL=m
+CONFIG_FUSION_LAN=m
+CONFIG_FUSION_LOGGING=y
+
+#
+# IEEE 1394 (FireWire) support
+#
+# CONFIG_FIREWIRE is not set
+# CONFIG_FIREWIRE_NOSY is not set
+# end of IEEE 1394 (FireWire) support
+
+# CONFIG_MACINTOSH_DRIVERS is not set
+CONFIG_NETDEVICES=y
+CONFIG_MII=m
+CONFIG_NET_CORE=y
+CONFIG_BONDING=m
+CONFIG_DUMMY=m
+# CONFIG_WIREGUARD is not set
+CONFIG_EQUALIZER=m
+CONFIG_NET_FC=y
+CONFIG_IFB=m
+CONFIG_NET_TEAM=m
+CONFIG_NET_TEAM_MODE_BROADCAST=m
+CONFIG_NET_TEAM_MODE_ROUNDROBIN=m
+CONFIG_NET_TEAM_MODE_RANDOM=m
+CONFIG_NET_TEAM_MODE_ACTIVEBACKUP=m
+CONFIG_NET_TEAM_MODE_LOADBALANCE=m
+CONFIG_MACVLAN=m
+CONFIG_MACVTAP=m
+CONFIG_IPVLAN_L3S=y
+CONFIG_IPVLAN=m
+# CONFIG_IPVTAP is not set
+CONFIG_VXLAN=m
+CONFIG_GENEVE=m
+# CONFIG_BAREUDP is not set
+# CONFIG_GTP is not set
+# CONFIG_MACSEC is not set
+CONFIG_NETCONSOLE=m
+CONFIG_NETCONSOLE_DYNAMIC=y
+CONFIG_NETPOLL=y
+CONFIG_NET_POLL_CONTROLLER=y
+CONFIG_TUN=m
+CONFIG_TAP=m
+# CONFIG_TUN_VNET_CROSS_LE is not set
+CONFIG_VETH=m
+CONFIG_VIRTIO_NET=m
+CONFIG_NLMON=m
+# CONFIG_NET_VRF is not set
+CONFIG_SUNGEM_PHY=m
+# CONFIG_ARCNET is not set
+
+#
+# Distributed Switch Architecture drivers
+#
+CONFIG_B53=m
+# CONFIG_B53_MDIO_DRIVER is not set
+# CONFIG_B53_MMAP_DRIVER is not set
+# CONFIG_B53_SRAB_DRIVER is not set
+# CONFIG_B53_SERDES is not set
+CONFIG_NET_DSA_BCM_SF2=m
+# CONFIG_NET_DSA_LOOP is not set
+# CONFIG_NET_DSA_LANTIQ_GSWIP is not set
+# CONFIG_NET_DSA_MT7530 is not set
+CONFIG_NET_DSA_MV88E6060=m
+# CONFIG_NET_DSA_MICROCHIP_KSZ9477 is not set
+# CONFIG_NET_DSA_MICROCHIP_KSZ8795 is not set
+CONFIG_NET_DSA_MV88E6XXX=m
+CONFIG_NET_DSA_MV88E6XXX_GLOBAL2=y
+# CONFIG_NET_DSA_MV88E6XXX_PTP is not set
+# CONFIG_NET_DSA_AR9331 is not set
+# CONFIG_NET_DSA_QCA8K is not set
+# CONFIG_NET_DSA_REALTEK_SMI is not set
+# CONFIG_NET_DSA_SMSC_LAN9303_I2C is not set
+# CONFIG_NET_DSA_SMSC_LAN9303_MDIO is not set
+# CONFIG_NET_DSA_VITESSE_VSC73XX_PLATFORM is not set
+# end of Distributed Switch Architecture drivers
+
+CONFIG_ETHERNET=y
+CONFIG_MDIO=m
+CONFIG_NET_VENDOR_3COM=y
+CONFIG_PCMCIA_3C574=m
+CONFIG_PCMCIA_3C589=m
+CONFIG_VORTEX=m
+CONFIG_TYPHOON=m
+CONFIG_NET_VENDOR_ADAPTEC=y
+CONFIG_ADAPTEC_STARFIRE=m
+CONFIG_NET_VENDOR_AGERE=y
+CONFIG_ET131X=m
+CONFIG_NET_VENDOR_ALACRITECH=y
+# CONFIG_SLICOSS is not set
+CONFIG_NET_VENDOR_ALTEON=y
+CONFIG_ACENIC=m
+# CONFIG_ACENIC_OMIT_TIGON_I is not set
+CONFIG_ALTERA_TSE=m
+CONFIG_NET_VENDOR_AMAZON=y
+# CONFIG_ENA_ETHERNET is not set
+CONFIG_NET_VENDOR_AMD=y
+CONFIG_AMD8111_ETH=m
+CONFIG_PCNET32=m
+CONFIG_PCMCIA_NMCLAN=m
+# CONFIG_AMD_XGBE is not set
+CONFIG_NET_VENDOR_AQUANTIA=y
+# CONFIG_AQTION is not set
+CONFIG_NET_VENDOR_ARC=y
+CONFIG_NET_VENDOR_ATHEROS=y
+CONFIG_ATL2=m
+CONFIG_ATL1=m
+CONFIG_ATL1E=m
+CONFIG_ATL1C=m
+CONFIG_ALX=m
+CONFIG_NET_VENDOR_AURORA=y
+# CONFIG_AURORA_NB8800 is not set
+CONFIG_NET_VENDOR_BROADCOM=y
+CONFIG_B44=m
+CONFIG_B44_PCI_AUTOSELECT=y
+CONFIG_B44_PCICORE_AUTOSELECT=y
+CONFIG_B44_PCI=y
+CONFIG_BCMGENET=m
+CONFIG_BNX2=m
+CONFIG_CNIC=m
+CONFIG_TIGON3=m
+CONFIG_TIGON3_HWMON=y
+CONFIG_BNX2X=m
+CONFIG_BNX2X_SRIOV=y
+# CONFIG_SYSTEMPORT is not set
+# CONFIG_BNXT is not set
+CONFIG_NET_VENDOR_BROCADE=y
+CONFIG_BNA=m
+CONFIG_NET_VENDOR_CADENCE=y
+# CONFIG_MACB is not set
+CONFIG_NET_VENDOR_CAVIUM=y
+# CONFIG_THUNDER_NIC_PF is not set
+# CONFIG_THUNDER_NIC_VF is not set
+# CONFIG_THUNDER_NIC_BGX is not set
+# CONFIG_THUNDER_NIC_RGX is not set
+# CONFIG_CAVIUM_PTP is not set
+# CONFIG_LIQUIDIO is not set
+# CONFIG_LIQUIDIO_VF is not set
+CONFIG_NET_VENDOR_CHELSIO=y
+CONFIG_CHELSIO_T1=m
+CONFIG_CHELSIO_T1_1G=y
+CONFIG_CHELSIO_T3=m
+CONFIG_CHELSIO_T4=m
+# CONFIG_CHELSIO_T4_DCB is not set
+CONFIG_CHELSIO_T4VF=m
+CONFIG_CHELSIO_LIB=m
+CONFIG_NET_VENDOR_CISCO=y
+CONFIG_ENIC=m
+CONFIG_NET_VENDOR_CORTINA=y
+# CONFIG_CX_ECAT is not set
+CONFIG_DNET=m
+CONFIG_NET_VENDOR_DEC=y
+CONFIG_NET_TULIP=y
+CONFIG_DE2104X=m
+CONFIG_DE2104X_DSL=0
+CONFIG_TULIP=m
+# CONFIG_TULIP_MWI is not set
+CONFIG_TULIP_MMIO=y
+# CONFIG_TULIP_NAPI is not set
+CONFIG_DE4X5=m
+CONFIG_WINBOND_840=m
+CONFIG_DM9102=m
+CONFIG_ULI526X=m
+CONFIG_PCMCIA_XIRCOM=m
+CONFIG_NET_VENDOR_DLINK=y
+CONFIG_DL2K=m
+CONFIG_SUNDANCE=m
+# CONFIG_SUNDANCE_MMIO is not set
+CONFIG_NET_VENDOR_EMULEX=y
+CONFIG_BE2NET=m
+CONFIG_BE2NET_HWMON=y
+CONFIG_BE2NET_BE2=y
+CONFIG_BE2NET_BE3=y
+CONFIG_BE2NET_LANCER=y
+CONFIG_BE2NET_SKYHAWK=y
+CONFIG_NET_VENDOR_EZCHIP=y
+# CONFIG_NET_VENDOR_FUJITSU is not set
+CONFIG_NET_VENDOR_GOOGLE=y
+# CONFIG_GVE is not set
+CONFIG_NET_VENDOR_HUAWEI=y
+# CONFIG_HINIC is not set
+# CONFIG_NET_VENDOR_I825XX is not set
+CONFIG_NET_VENDOR_INTEL=y
+CONFIG_E100=m
+CONFIG_E1000=m
+CONFIG_E1000E=m
+CONFIG_E1000E_HWTS=y
+CONFIG_IGB=m
+CONFIG_IGB_HWMON=y
+CONFIG_IGB_DCA=y
+CONFIG_IGBVF=m
+CONFIG_IXGB=m
+CONFIG_IXGBE=m
+CONFIG_IXGBE_HWMON=y
+CONFIG_IXGBE_DCA=y
+CONFIG_IXGBE_DCB=y
+CONFIG_IXGBEVF=m
+CONFIG_I40E=m
+# CONFIG_I40E_DCB is not set
+CONFIG_IAVF=m
+CONFIG_I40EVF=m
+# CONFIG_ICE is not set
+CONFIG_FM10K=m
+# CONFIG_IGC is not set
+CONFIG_JME=m
+CONFIG_NET_VENDOR_MARVELL=y
+CONFIG_MVMDIO=m
+CONFIG_SKGE=m
+# CONFIG_SKGE_DEBUG is not set
+CONFIG_SKGE_GENESIS=y
+CONFIG_SKY2=m
+# CONFIG_SKY2_DEBUG is not set
+CONFIG_NET_VENDOR_MELLANOX=y
+CONFIG_MLX4_EN=m
+CONFIG_MLX4_EN_DCB=y
+CONFIG_MLX4_CORE=m
+CONFIG_MLX4_DEBUG=y
+CONFIG_MLX4_CORE_GEN2=y
+CONFIG_MLX5_CORE=m
+# CONFIG_MLX5_FPGA is not set
+# CONFIG_MLX5_CORE_EN is not set
+# CONFIG_MLXSW_CORE is not set
+# CONFIG_MLXFW is not set
+CONFIG_NET_VENDOR_MICREL=y
+# CONFIG_KS8842 is not set
+# CONFIG_KS8851_MLL is not set
+CONFIG_KSZ884X_PCI=m
+CONFIG_NET_VENDOR_MICROCHIP=y
+# CONFIG_LAN743X is not set
+CONFIG_NET_VENDOR_MICROSEMI=y
+# CONFIG_MSCC_OCELOT_SWITCH is not set
+CONFIG_NET_VENDOR_MYRI=y
+CONFIG_MYRI10GE=m
+CONFIG_MYRI10GE_DCA=y
+CONFIG_FEALNX=m
+CONFIG_NET_VENDOR_NATSEMI=y
+CONFIG_NATSEMI=m
+CONFIG_NS83820=m
+CONFIG_NET_VENDOR_NETERION=y
+CONFIG_S2IO=m
+CONFIG_VXGE=m
+# CONFIG_VXGE_DEBUG_TRACE_ALL is not set
+CONFIG_NET_VENDOR_NETRONOME=y
+# CONFIG_NFP is not set
+CONFIG_NET_VENDOR_NI=y
+# CONFIG_NI_XGE_MANAGEMENT_ENET is not set
+CONFIG_NET_VENDOR_8390=y
+CONFIG_PCMCIA_AXNET=m
+CONFIG_NE2K_PCI=m
+CONFIG_PCMCIA_PCNET=m
+CONFIG_NET_VENDOR_NVIDIA=y
+CONFIG_FORCEDETH=m
+CONFIG_NET_VENDOR_OKI=y
+CONFIG_ETHOC=m
+CONFIG_NET_VENDOR_PACKET_ENGINES=y
+CONFIG_HAMACHI=m
+CONFIG_YELLOWFIN=m
+CONFIG_NET_VENDOR_PENSANDO=y
+# CONFIG_IONIC is not set
+CONFIG_NET_VENDOR_QLOGIC=y
+CONFIG_QLA3XXX=m
+CONFIG_QLCNIC=m
+CONFIG_QLCNIC_SRIOV=y
+CONFIG_QLCNIC_DCB=y
+CONFIG_QLCNIC_HWMON=y
+CONFIG_NETXEN_NIC=m
+# CONFIG_QED is not set
+# CONFIG_NET_VENDOR_QUALCOMM is not set
+CONFIG_NET_VENDOR_RDC=y
+CONFIG_R6040=m
+CONFIG_NET_VENDOR_REALTEK=y
+CONFIG_ATP=m
+CONFIG_8139CP=m
+CONFIG_8139TOO=m
+# CONFIG_8139TOO_PIO is not set
+# CONFIG_8139TOO_TUNE_TWISTER is not set
+CONFIG_8139TOO_8129=y
+# CONFIG_8139_OLD_RX_RESET is not set
+CONFIG_R8169=m
+CONFIG_NET_VENDOR_RENESAS=y
+CONFIG_NET_VENDOR_ROCKER=y
+CONFIG_ROCKER=m
+# CONFIG_NET_VENDOR_SAMSUNG is not set
+# CONFIG_NET_VENDOR_SEEQ is not set
+CONFIG_NET_VENDOR_SOLARFLARE=y
+CONFIG_SFC=m
+CONFIG_SFC_MCDI_MON=y
+CONFIG_SFC_SRIOV=y
+CONFIG_SFC_MCDI_LOGGING=y
+# CONFIG_SFC_FALCON is not set
+CONFIG_NET_VENDOR_SILAN=y
+CONFIG_SC92031=m
+CONFIG_NET_VENDOR_SIS=y
+CONFIG_SIS900=m
+CONFIG_SIS190=m
+CONFIG_NET_VENDOR_SMSC=y
+CONFIG_PCMCIA_SMC91C92=m
+CONFIG_EPIC100=m
+CONFIG_SMSC911X=m
+CONFIG_SMSC9420=m
+CONFIG_NET_VENDOR_SOCIONEXT=y
+CONFIG_NET_VENDOR_STMICRO=y
+CONFIG_STMMAC_ETH=m
+# CONFIG_STMMAC_SELFTESTS is not set
+# CONFIG_STMMAC_PLATFORM is not set
+CONFIG_DWMAC_INTEL=m
+# CONFIG_STMMAC_PCI is not set
+CONFIG_NET_VENDOR_SUN=y
+CONFIG_HAPPYMEAL=m
+CONFIG_SUNGEM=m
+CONFIG_CASSINI=m
+CONFIG_NIU=m
+CONFIG_NET_VENDOR_SYNOPSYS=y
+# CONFIG_DWC_XLGMAC is not set
+CONFIG_NET_VENDOR_TEHUTI=y
+CONFIG_TEHUTI=m
+CONFIG_NET_VENDOR_TI=y
+# CONFIG_TI_CPSW_PHY_SEL is not set
+CONFIG_TLAN=m
+CONFIG_NET_VENDOR_VIA=y
+CONFIG_VIA_RHINE=m
+CONFIG_VIA_RHINE_MMIO=y
+CONFIG_VIA_VELOCITY=m
+CONFIG_NET_VENDOR_WIZNET=y
+CONFIG_WIZNET_W5100=m
+CONFIG_WIZNET_W5300=m
+# CONFIG_WIZNET_BUS_DIRECT is not set
+# CONFIG_WIZNET_BUS_INDIRECT is not set
+CONFIG_WIZNET_BUS_ANY=y
+CONFIG_NET_VENDOR_XILINX=y
+# CONFIG_XILINX_AXI_EMAC is not set
+# CONFIG_XILINX_LL_TEMAC is not set
+CONFIG_NET_VENDOR_XIRCOM=y
+CONFIG_PCMCIA_XIRC2PS=m
+# CONFIG_FDDI is not set
+# CONFIG_HIPPI is not set
+# CONFIG_NET_SB1000 is not set
+CONFIG_MDIO_DEVICE=y
+CONFIG_MDIO_BUS=y
+CONFIG_MDIO_BCM_UNIMAC=m
+CONFIG_MDIO_BITBANG=m
+# CONFIG_MDIO_GPIO is not set
+# CONFIG_MDIO_MSCC_MIIM is not set
+# CONFIG_MDIO_MVUSB is not set
+# CONFIG_MDIO_THUNDER is not set
+CONFIG_MDIO_XPCS=m
+CONFIG_PHYLINK=m
+CONFIG_PHYLIB=y
+CONFIG_SWPHY=y
+# CONFIG_LED_TRIGGER_PHY is not set
+
+#
+# MII PHY device drivers
+#
+# CONFIG_SFP is not set
+# CONFIG_ADIN_PHY is not set
+CONFIG_AMD_PHY=m
+# CONFIG_AQUANTIA_PHY is not set
+# CONFIG_AX88796B_PHY is not set
+CONFIG_BCM7XXX_PHY=m
+CONFIG_BCM87XX_PHY=m
+CONFIG_BCM_NET_PHYLIB=m
+CONFIG_BROADCOM_PHY=m
+# CONFIG_BCM84881_PHY is not set
+CONFIG_CICADA_PHY=m
+# CONFIG_CORTINA_PHY is not set
+CONFIG_DAVICOM_PHY=m
+# CONFIG_DP83822_PHY is not set
+# CONFIG_DP83TC811_PHY is not set
+# CONFIG_DP83848_PHY is not set
+# CONFIG_DP83867_PHY is not set
+# CONFIG_DP83869_PHY is not set
+CONFIG_FIXED_PHY=y
+CONFIG_ICPLUS_PHY=m
+# CONFIG_INTEL_XWAY_PHY is not set
+CONFIG_LSI_ET1011C_PHY=m
+CONFIG_LXT_PHY=m
+CONFIG_MARVELL_PHY=m
+# CONFIG_MARVELL_10G_PHY is not set
+CONFIG_MICREL_PHY=m
+# CONFIG_MICROCHIP_PHY is not set
+# CONFIG_MICROCHIP_T1_PHY is not set
+# CONFIG_MICROSEMI_PHY is not set
+CONFIG_NATIONAL_PHY=m
+# CONFIG_NXP_TJA11XX_PHY is not set
+CONFIG_QSEMI_PHY=m
+CONFIG_REALTEK_PHY=m
+# CONFIG_RENESAS_PHY is not set
+# CONFIG_ROCKCHIP_PHY is not set
+CONFIG_SMSC_PHY=m
+CONFIG_STE10XP=m
+# CONFIG_TERANETICS_PHY is not set
+CONFIG_VITESSE_PHY=m
+# CONFIG_XILINX_GMII2RGMII is not set
+# CONFIG_PLIP is not set
+CONFIG_PPP=m
+CONFIG_PPP_BSDCOMP=m
+CONFIG_PPP_DEFLATE=m
+CONFIG_PPP_FILTER=y
+CONFIG_PPP_MPPE=m
+CONFIG_PPP_MULTILINK=y
+CONFIG_PPPOE=m
+CONFIG_PPTP=m
+CONFIG_PPPOL2TP=m
+CONFIG_PPP_ASYNC=m
+CONFIG_PPP_SYNC_TTY=m
+CONFIG_SLIP=m
+CONFIG_SLHC=m
+CONFIG_SLIP_COMPRESSED=y
+CONFIG_SLIP_SMART=y
+# CONFIG_SLIP_MODE_SLIP6 is not set
+# CONFIG_USB_NET_DRIVERS is not set
+# CONFIG_WLAN is not set
+
+#
+# Enable WiMAX (Networking options) to see the WiMAX drivers
+#
+# CONFIG_WAN is not set
+CONFIG_IEEE802154_DRIVERS=m
+CONFIG_IEEE802154_FAKELB=m
+# CONFIG_IEEE802154_ATUSB is not set
+# CONFIG_IEEE802154_HWSIM is not set
+CONFIG_XEN_NETDEV_FRONTEND=m
+CONFIG_XEN_NETDEV_BACKEND=m
+CONFIG_VMXNET3=m
+# CONFIG_FUJITSU_ES is not set
+CONFIG_HYPERV_NET=m
+# CONFIG_NETDEVSIM is not set
+CONFIG_NET_FAILOVER=m
+# CONFIG_ISDN is not set
+# CONFIG_NVM is not set
+
+#
+# Input device support
+#
+CONFIG_INPUT=y
+CONFIG_INPUT_LEDS=y
+CONFIG_INPUT_FF_MEMLESS=y
+CONFIG_INPUT_POLLDEV=m
+CONFIG_INPUT_SPARSEKMAP=m
+# CONFIG_INPUT_MATRIXKMAP is not set
+
+#
+# Userland interfaces
+#
+CONFIG_INPUT_MOUSEDEV=y
+# CONFIG_INPUT_MOUSEDEV_PSAUX is not set
+CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
+CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
+CONFIG_INPUT_JOYDEV=m
+CONFIG_INPUT_EVDEV=y
+# CONFIG_INPUT_EVBUG is not set
+
+#
+# Input Device Drivers
+#
+CONFIG_INPUT_KEYBOARD=y
+# CONFIG_KEYBOARD_ADP5588 is not set
+# CONFIG_KEYBOARD_ADP5589 is not set
+CONFIG_KEYBOARD_ATKBD=y
+# CONFIG_KEYBOARD_QT1050 is not set
+# CONFIG_KEYBOARD_QT1070 is not set
+# CONFIG_KEYBOARD_QT2160 is not set
+# CONFIG_KEYBOARD_DLINK_DIR685 is not set
+# CONFIG_KEYBOARD_LKKBD is not set
+CONFIG_KEYBOARD_GPIO=m
+# CONFIG_KEYBOARD_GPIO_POLLED is not set
+# CONFIG_KEYBOARD_TCA6416 is not set
+# CONFIG_KEYBOARD_TCA8418 is not set
+# CONFIG_KEYBOARD_MATRIX is not set
+# CONFIG_KEYBOARD_LM8323 is not set
+# CONFIG_KEYBOARD_LM8333 is not set
+# CONFIG_KEYBOARD_MAX7359 is not set
+# CONFIG_KEYBOARD_MCS is not set
+# CONFIG_KEYBOARD_MPR121 is not set
+# CONFIG_KEYBOARD_NEWTON is not set
+# CONFIG_KEYBOARD_OPENCORES is not set
+# CONFIG_KEYBOARD_SAMSUNG is not set
+# CONFIG_KEYBOARD_STOWAWAY is not set
+# CONFIG_KEYBOARD_SUNKBD is not set
+# CONFIG_KEYBOARD_TM2_TOUCHKEY is not set
+# CONFIG_KEYBOARD_XTKBD is not set
+CONFIG_INPUT_MOUSE=y
+CONFIG_MOUSE_PS2=y
+CONFIG_MOUSE_PS2_ALPS=y
+CONFIG_MOUSE_PS2_BYD=y
+CONFIG_MOUSE_PS2_LOGIPS2PP=y
+CONFIG_MOUSE_PS2_SYNAPTICS=y
+CONFIG_MOUSE_PS2_SYNAPTICS_SMBUS=y
+CONFIG_MOUSE_PS2_CYPRESS=y
+CONFIG_MOUSE_PS2_LIFEBOOK=y
+CONFIG_MOUSE_PS2_TRACKPOINT=y
+CONFIG_MOUSE_PS2_ELANTECH=y
+CONFIG_MOUSE_PS2_ELANTECH_SMBUS=y
+CONFIG_MOUSE_PS2_SENTELIC=y
+# CONFIG_MOUSE_PS2_TOUCHKIT is not set
+CONFIG_MOUSE_PS2_FOCALTECH=y
+# CONFIG_MOUSE_PS2_VMMOUSE is not set
+CONFIG_MOUSE_PS2_SMBUS=y
+CONFIG_MOUSE_SERIAL=m
+CONFIG_MOUSE_APPLETOUCH=m
+CONFIG_MOUSE_BCM5974=m
+CONFIG_MOUSE_CYAPA=m
+CONFIG_MOUSE_ELAN_I2C=m
+CONFIG_MOUSE_ELAN_I2C_I2C=y
+CONFIG_MOUSE_ELAN_I2C_SMBUS=y
+CONFIG_MOUSE_VSXXXAA=m
+# CONFIG_MOUSE_GPIO is not set
+CONFIG_MOUSE_SYNAPTICS_I2C=m
+CONFIG_MOUSE_SYNAPTICS_USB=m
+# CONFIG_INPUT_JOYSTICK is not set
+# CONFIG_INPUT_TABLET is not set
+# CONFIG_INPUT_TOUCHSCREEN is not set
+# CONFIG_INPUT_MISC is not set
+CONFIG_RMI4_CORE=m
+# CONFIG_RMI4_I2C is not set
+# CONFIG_RMI4_SMB is not set
+CONFIG_RMI4_F03=y
+CONFIG_RMI4_F03_SERIO=m
+CONFIG_RMI4_2D_SENSOR=y
+CONFIG_RMI4_F11=y
+CONFIG_RMI4_F12=y
+CONFIG_RMI4_F30=y
+# CONFIG_RMI4_F34 is not set
+# CONFIG_RMI4_F55 is not set
+
+#
+# Hardware I/O ports
+#
+CONFIG_SERIO=y
+CONFIG_ARCH_MIGHT_HAVE_PC_SERIO=y
+CONFIG_SERIO_I8042=y
+CONFIG_SERIO_SERPORT=y
+# CONFIG_SERIO_CT82C710 is not set
+# CONFIG_SERIO_PARKBD is not set
+# CONFIG_SERIO_PCIPS2 is not set
+CONFIG_SERIO_LIBPS2=y
+CONFIG_SERIO_RAW=m
+CONFIG_SERIO_ALTERA_PS2=m
+# CONFIG_SERIO_PS2MULT is not set
+CONFIG_SERIO_ARC_PS2=m
+CONFIG_HYPERV_KEYBOARD=m
+# CONFIG_SERIO_GPIO_PS2 is not set
+# CONFIG_USERIO is not set
+CONFIG_GAMEPORT=m
+CONFIG_GAMEPORT_NS558=m
+CONFIG_GAMEPORT_L4=m
+CONFIG_GAMEPORT_EMU10K1=m
+CONFIG_GAMEPORT_FM801=m
+# end of Hardware I/O ports
+# end of Input device support
+
+#
+# Character devices
+#
+CONFIG_TTY=y
+CONFIG_VT=y
+CONFIG_CONSOLE_TRANSLATIONS=y
+CONFIG_VT_CONSOLE=y
+CONFIG_VT_CONSOLE_SLEEP=y
+CONFIG_HW_CONSOLE=y
+CONFIG_VT_HW_CONSOLE_BINDING=y
+CONFIG_UNIX98_PTYS=y
+# CONFIG_LEGACY_PTYS is not set
+CONFIG_LDISC_AUTOLOAD=y
+
+#
+# Serial drivers
+#
+CONFIG_SERIAL_EARLYCON=y
+CONFIG_SERIAL_8250=y
+# CONFIG_SERIAL_8250_DEPRECATED_OPTIONS is not set
+CONFIG_SERIAL_8250_PNP=y
+# CONFIG_SERIAL_8250_16550A_VARIANTS is not set
+# CONFIG_SERIAL_8250_FINTEK is not set
+CONFIG_SERIAL_8250_CONSOLE=y
+CONFIG_SERIAL_8250_DMA=y
+CONFIG_SERIAL_8250_PCI=y
+CONFIG_SERIAL_8250_EXAR=y
+CONFIG_SERIAL_8250_CS=m
+CONFIG_SERIAL_8250_NR_UARTS=32
+CONFIG_SERIAL_8250_RUNTIME_UARTS=4
+CONFIG_SERIAL_8250_EXTENDED=y
+CONFIG_SERIAL_8250_MANY_PORTS=y
+CONFIG_SERIAL_8250_SHARE_IRQ=y
+# CONFIG_SERIAL_8250_DETECT_IRQ is not set
+CONFIG_SERIAL_8250_RSA=y
+CONFIG_SERIAL_8250_DWLIB=y
+# CONFIG_SERIAL_8250_DW is not set
+# CONFIG_SERIAL_8250_RT288X is not set
+CONFIG_SERIAL_8250_LPSS=y
+CONFIG_SERIAL_8250_MID=y
+
+#
+# Non-8250 serial port support
+#
+# CONFIG_SERIAL_KGDB_NMI is not set
+# CONFIG_SERIAL_UARTLITE is not set
+CONFIG_SERIAL_CORE=y
+CONFIG_SERIAL_CORE_CONSOLE=y
+CONFIG_CONSOLE_POLL=y
+CONFIG_SERIAL_JSM=m
+# CONFIG_SERIAL_SCCNXP is not set
+# CONFIG_SERIAL_SC16IS7XX is not set
+# CONFIG_SERIAL_ALTERA_JTAGUART is not set
+# CONFIG_SERIAL_ALTERA_UART is not set
+CONFIG_SERIAL_ARC=m
+CONFIG_SERIAL_ARC_NR_PORTS=1
+# CONFIG_SERIAL_RP2 is not set
+# CONFIG_SERIAL_FSL_LPUART is not set
+# CONFIG_SERIAL_FSL_LINFLEXUART is not set
+# CONFIG_SERIAL_SPRD is not set
+# end of Serial drivers
+
+CONFIG_SERIAL_MCTRL_GPIO=y
+CONFIG_SERIAL_NONSTANDARD=y
+CONFIG_ROCKETPORT=m
+CONFIG_CYCLADES=m
+# CONFIG_CYZ_INTR is not set
+# CONFIG_MOXA_INTELLIO is not set
+# CONFIG_MOXA_SMARTIO is not set
+CONFIG_SYNCLINK=m
+CONFIG_SYNCLINKMP=m
+CONFIG_SYNCLINK_GT=m
+# CONFIG_ISI is not set
+CONFIG_N_HDLC=m
+CONFIG_N_GSM=m
+CONFIG_NOZOMI=m
+# CONFIG_NULL_TTY is not set
+# CONFIG_TRACE_SINK is not set
+CONFIG_HVC_DRIVER=y
+CONFIG_HVC_IRQ=y
+CONFIG_HVC_XEN=y
+CONFIG_HVC_XEN_FRONTEND=y
+# CONFIG_SERIAL_DEV_BUS is not set
+CONFIG_PRINTER=m
+CONFIG_LP_CONSOLE=y
+CONFIG_PPDEV=m
+CONFIG_VIRTIO_CONSOLE=m
+CONFIG_IPMI_HANDLER=m
+CONFIG_IPMI_DMI_DECODE=y
+CONFIG_IPMI_PLAT_DATA=y
+# CONFIG_IPMI_PANIC_EVENT is not set
+CONFIG_IPMI_DEVICE_INTERFACE=m
+CONFIG_IPMI_SI=m
+CONFIG_IPMI_SSIF=m
+CONFIG_IPMI_WATCHDOG=m
+CONFIG_IPMI_POWEROFF=m
+# CONFIG_IPMB_DEVICE_INTERFACE is not set
+CONFIG_HW_RANDOM=y
+CONFIG_HW_RANDOM_TIMERIOMEM=m
+CONFIG_HW_RANDOM_INTEL=m
+CONFIG_HW_RANDOM_AMD=m
+CONFIG_HW_RANDOM_VIA=m
+CONFIG_HW_RANDOM_VIRTIO=m
+# CONFIG_APPLICOM is not set
+
+#
+# PCMCIA character devices
+#
+# CONFIG_SYNCLINK_CS is not set
+CONFIG_CARDMAN_4000=m
+CONFIG_CARDMAN_4040=m
+# CONFIG_SCR24X is not set
+CONFIG_IPWIRELESS=m
+# end of PCMCIA character devices
+
+CONFIG_MWAVE=m
+CONFIG_DEVMEM=y
+# CONFIG_DEVKMEM is not set
+CONFIG_NVRAM=y
+CONFIG_RAW_DRIVER=y
+CONFIG_MAX_RAW_DEVS=8192
+CONFIG_DEVPORT=y
+CONFIG_HPET=y
+# CONFIG_HPET_MMAP is not set
+CONFIG_HANGCHECK_TIMER=m
+CONFIG_UV_MMTIMER=m
+CONFIG_TCG_TPM=m
+CONFIG_HW_RANDOM_TPM=y
+CONFIG_TCG_TIS_CORE=m
+CONFIG_TCG_TIS=m
+# CONFIG_TCG_TIS_I2C_ATMEL is not set
+# CONFIG_TCG_TIS_I2C_INFINEON is not set
+# CONFIG_TCG_TIS_I2C_NUVOTON is not set
+CONFIG_TCG_NSC=m
+CONFIG_TCG_ATMEL=m
+CONFIG_TCG_INFINEON=m
+# CONFIG_TCG_XEN is not set
+# CONFIG_TCG_CRB is not set
+# CONFIG_TCG_VTPM_PROXY is not set
+# CONFIG_TCG_TIS_ST33ZP24_I2C is not set
+CONFIG_TELCLOCK=m
+# CONFIG_XILLYBUS is not set
+# end of Character devices
+
+# CONFIG_RANDOM_TRUST_CPU is not set
+# CONFIG_RANDOM_TRUST_BOOTLOADER is not set
+
+#
+# I2C support
+#
+CONFIG_I2C=y
+CONFIG_ACPI_I2C_OPREGION=y
+CONFIG_I2C_BOARDINFO=y
+CONFIG_I2C_COMPAT=y
+CONFIG_I2C_CHARDEV=m
+CONFIG_I2C_MUX=m
+
+#
+# Multiplexer I2C Chip support
+#
+# CONFIG_I2C_MUX_GPIO is not set
+# CONFIG_I2C_MUX_LTC4306 is not set
+# CONFIG_I2C_MUX_PCA9541 is not set
+# CONFIG_I2C_MUX_PCA954x is not set
+# CONFIG_I2C_MUX_REG is not set
+# CONFIG_I2C_MUX_MLXCPLD is not set
+# end of Multiplexer I2C Chip support
+
+CONFIG_I2C_HELPER_AUTO=y
+CONFIG_I2C_SMBUS=m
+CONFIG_I2C_ALGOBIT=m
+CONFIG_I2C_ALGOPCA=m
+
+#
+# I2C Hardware Bus support
+#
+
+#
+# PC SMBus host controller drivers
+#
+# CONFIG_I2C_ALI1535 is not set
+# CONFIG_I2C_ALI1563 is not set
+# CONFIG_I2C_ALI15X3 is not set
+CONFIG_I2C_AMD756=m
+CONFIG_I2C_AMD756_S4882=m
+CONFIG_I2C_AMD8111=m
+# CONFIG_I2C_AMD_MP2 is not set
+CONFIG_I2C_I801=m
+CONFIG_I2C_ISCH=m
+CONFIG_I2C_ISMT=m
+CONFIG_I2C_PIIX4=m
+CONFIG_I2C_NFORCE2=m
+CONFIG_I2C_NFORCE2_S4985=m
+# CONFIG_I2C_NVIDIA_GPU is not set
+# CONFIG_I2C_SIS5595 is not set
+# CONFIG_I2C_SIS630 is not set
+CONFIG_I2C_SIS96X=m
+CONFIG_I2C_VIA=m
+CONFIG_I2C_VIAPRO=m
+
+#
+# ACPI drivers
+#
+CONFIG_I2C_SCMI=m
+
+#
+# I2C system bus drivers (mostly embedded / system-on-chip)
+#
+# CONFIG_I2C_CBUS_GPIO is not set
+CONFIG_I2C_DESIGNWARE_CORE=m
+CONFIG_I2C_DESIGNWARE_PLATFORM=m
+# CONFIG_I2C_DESIGNWARE_SLAVE is not set
+CONFIG_I2C_DESIGNWARE_PCI=m
+# CONFIG_I2C_DESIGNWARE_BAYTRAIL is not set
+# CONFIG_I2C_EMEV2 is not set
+# CONFIG_I2C_GPIO is not set
+# CONFIG_I2C_OCORES is not set
+CONFIG_I2C_PCA_PLATFORM=m
+CONFIG_I2C_SIMTEC=m
+# CONFIG_I2C_XILINX is not set
+
+#
+# External I2C/SMBus adapter drivers
+#
+CONFIG_I2C_DIOLAN_U2C=m
+CONFIG_I2C_PARPORT=m
+# CONFIG_I2C_ROBOTFUZZ_OSIF is not set
+# CONFIG_I2C_TAOS_EVM is not set
+CONFIG_I2C_TINY_USB=m
+CONFIG_I2C_VIPERBOARD=m
+
+#
+# Other I2C/SMBus bus drivers
+#
+# CONFIG_I2C_MLXCPLD is not set
+# end of I2C Hardware Bus support
+
+CONFIG_I2C_STUB=m
+CONFIG_I2C_SLAVE=y
+CONFIG_I2C_SLAVE_EEPROM=m
+# CONFIG_I2C_DEBUG_CORE is not set
+# CONFIG_I2C_DEBUG_ALGO is not set
+# CONFIG_I2C_DEBUG_BUS is not set
+# end of I2C support
+
+# CONFIG_I3C is not set
+# CONFIG_SPI is not set
+# CONFIG_SPMI is not set
+# CONFIG_HSI is not set
+CONFIG_PPS=m
+# CONFIG_PPS_DEBUG is not set
+
+#
+# PPS clients support
+#
+# CONFIG_PPS_CLIENT_KTIMER is not set
+CONFIG_PPS_CLIENT_LDISC=m
+CONFIG_PPS_CLIENT_PARPORT=m
+CONFIG_PPS_CLIENT_GPIO=m
+
+#
+# PPS generators support
+#
+
+#
+# PTP clock support
+#
+CONFIG_PTP_1588_CLOCK=m
+CONFIG_DP83640_PHY=m
+# CONFIG_PTP_1588_CLOCK_INES is not set
+CONFIG_PTP_1588_CLOCK_KVM=m
+# CONFIG_PTP_1588_CLOCK_IDT82P33 is not set
+# CONFIG_PTP_1588_CLOCK_IDTCM is not set
+# CONFIG_PTP_1588_CLOCK_VMW is not set
+# end of PTP clock support
+
+CONFIG_PINCTRL=y
+CONFIG_PINMUX=y
+CONFIG_PINCONF=y
+CONFIG_GENERIC_PINCONF=y
+# CONFIG_DEBUG_PINCTRL is not set
+# CONFIG_PINCTRL_AMD is not set
+# CONFIG_PINCTRL_MCP23S08 is not set
+# CONFIG_PINCTRL_SX150X is not set
+CONFIG_PINCTRL_BAYTRAIL=y
+CONFIG_PINCTRL_CHERRYVIEW=m
+# CONFIG_PINCTRL_LYNXPOINT is not set
+# CONFIG_PINCTRL_BROXTON is not set
+# CONFIG_PINCTRL_CANNONLAKE is not set
+# CONFIG_PINCTRL_CEDARFORK is not set
+# CONFIG_PINCTRL_DENVERTON is not set
+# CONFIG_PINCTRL_GEMINILAKE is not set
+# CONFIG_PINCTRL_ICELAKE is not set
+# CONFIG_PINCTRL_LEWISBURG is not set
+# CONFIG_PINCTRL_SUNRISEPOINT is not set
+# CONFIG_PINCTRL_TIGERLAKE is not set
+CONFIG_GPIOLIB=y
+CONFIG_GPIOLIB_FASTPATH_LIMIT=512
+CONFIG_GPIO_ACPI=y
+CONFIG_GPIOLIB_IRQCHIP=y
+# CONFIG_DEBUG_GPIO is not set
+CONFIG_GPIO_SYSFS=y
+
+#
+# Memory mapped GPIO drivers
+#
+# CONFIG_GPIO_AMDPT is not set
+# CONFIG_GPIO_DWAPB is not set
+# CONFIG_GPIO_EXAR is not set
+# CONFIG_GPIO_GENERIC_PLATFORM is not set
+CONFIG_GPIO_ICH=m
+# CONFIG_GPIO_MB86S7X is not set
+# CONFIG_GPIO_VX855 is not set
+# CONFIG_GPIO_XILINX is not set
+# CONFIG_GPIO_AMD_FCH is not set
+# end of Memory mapped GPIO drivers
+
+#
+# Port-mapped I/O GPIO drivers
+#
+# CONFIG_GPIO_F7188X is not set
+# CONFIG_GPIO_IT87 is not set
+# CONFIG_GPIO_SCH is not set
+# CONFIG_GPIO_SCH311X is not set
+# CONFIG_GPIO_WINBOND is not set
+# CONFIG_GPIO_WS16C48 is not set
+# end of Port-mapped I/O GPIO drivers
+
+#
+# I2C GPIO expanders
+#
+# CONFIG_GPIO_ADP5588 is not set
+# CONFIG_GPIO_MAX7300 is not set
+# CONFIG_GPIO_MAX732X is not set
+# CONFIG_GPIO_PCA953X is not set
+# CONFIG_GPIO_PCF857X is not set
+# CONFIG_GPIO_TPIC2810 is not set
+# end of I2C GPIO expanders
+
+#
+# MFD GPIO expanders
+#
+# end of MFD GPIO expanders
+
+#
+# PCI GPIO expanders
+#
+# CONFIG_GPIO_AMD8111 is not set
+# CONFIG_GPIO_BT8XX is not set
+# CONFIG_GPIO_ML_IOH is not set
+# CONFIG_GPIO_PCI_IDIO_16 is not set
+# CONFIG_GPIO_PCIE_IDIO_24 is not set
+# CONFIG_GPIO_RDC321X is not set
+# end of PCI GPIO expanders
+
+#
+# USB GPIO expanders
+#
+CONFIG_GPIO_VIPERBOARD=m
+# end of USB GPIO expanders
+
+# CONFIG_GPIO_MOCKUP is not set
+CONFIG_W1=m
+CONFIG_W1_CON=y
+
+#
+# 1-wire Bus Masters
+#
+# CONFIG_W1_MASTER_MATROX is not set
+CONFIG_W1_MASTER_DS2490=m
+CONFIG_W1_MASTER_DS2482=m
+CONFIG_W1_MASTER_DS1WM=m
+# CONFIG_W1_MASTER_GPIO is not set
+# CONFIG_W1_MASTER_SGI is not set
+# end of 1-wire Bus Masters
+
+#
+# 1-wire Slaves
+#
+CONFIG_W1_SLAVE_THERM=m
+CONFIG_W1_SLAVE_SMEM=m
+# CONFIG_W1_SLAVE_DS2405 is not set
+CONFIG_W1_SLAVE_DS2408=m
+# CONFIG_W1_SLAVE_DS2408_READBACK is not set
+CONFIG_W1_SLAVE_DS2413=m
+CONFIG_W1_SLAVE_DS2406=m
+CONFIG_W1_SLAVE_DS2423=m
+# CONFIG_W1_SLAVE_DS2805 is not set
+# CONFIG_W1_SLAVE_DS2430 is not set
+CONFIG_W1_SLAVE_DS2431=m
+CONFIG_W1_SLAVE_DS2433=m
+CONFIG_W1_SLAVE_DS2433_CRC=y
+# CONFIG_W1_SLAVE_DS2438 is not set
+# CONFIG_W1_SLAVE_DS250X is not set
+CONFIG_W1_SLAVE_DS2780=m
+CONFIG_W1_SLAVE_DS2781=m
+CONFIG_W1_SLAVE_DS28E04=m
+# CONFIG_W1_SLAVE_DS28E17 is not set
+# end of 1-wire Slaves
+
+# CONFIG_POWER_AVS is not set
+CONFIG_POWER_RESET=y
+# CONFIG_POWER_RESET_RESTART is not set
+CONFIG_POWER_SUPPLY=y
+# CONFIG_POWER_SUPPLY_DEBUG is not set
+CONFIG_POWER_SUPPLY_HWMON=y
+# CONFIG_PDA_POWER is not set
+# CONFIG_TEST_POWER is not set
+# CONFIG_CHARGER_ADP5061 is not set
+# CONFIG_BATTERY_DS2760 is not set
+# CONFIG_BATTERY_DS2780 is not set
+# CONFIG_BATTERY_DS2781 is not set
+# CONFIG_BATTERY_DS2782 is not set
+# CONFIG_BATTERY_SBS is not set
+# CONFIG_CHARGER_SBS is not set
+# CONFIG_MANAGER_SBS is not set
+# CONFIG_BATTERY_BQ27XXX is not set
+# CONFIG_BATTERY_MAX17040 is not set
+# CONFIG_BATTERY_MAX17042 is not set
+# CONFIG_BATTERY_MAX1721X is not set
+# CONFIG_CHARGER_ISP1704 is not set
+# CONFIG_CHARGER_MAX8903 is not set
+# CONFIG_CHARGER_LP8727 is not set
+# CONFIG_CHARGER_GPIO is not set
+# CONFIG_CHARGER_LT3651 is not set
+# CONFIG_CHARGER_BQ2415X is not set
+# CONFIG_CHARGER_BQ24190 is not set
+# CONFIG_CHARGER_BQ24257 is not set
+# CONFIG_CHARGER_BQ24735 is not set
+# CONFIG_CHARGER_BQ25890 is not set
+CONFIG_CHARGER_SMB347=m
+# CONFIG_BATTERY_GAUGE_LTC2941 is not set
+# CONFIG_CHARGER_RT9455 is not set
+CONFIG_HWMON=y
+CONFIG_HWMON_VID=m
+# CONFIG_HWMON_DEBUG_CHIP is not set
+
+#
+# Native drivers
+#
+CONFIG_SENSORS_ABITUGURU=m
+CONFIG_SENSORS_ABITUGURU3=m
+CONFIG_SENSORS_AD7414=m
+CONFIG_SENSORS_AD7418=m
+CONFIG_SENSORS_ADM1021=m
+CONFIG_SENSORS_ADM1025=m
+CONFIG_SENSORS_ADM1026=m
+CONFIG_SENSORS_ADM1029=m
+CONFIG_SENSORS_ADM1031=m
+# CONFIG_SENSORS_ADM1177 is not set
+CONFIG_SENSORS_ADM9240=m
+CONFIG_SENSORS_ADT7X10=m
+CONFIG_SENSORS_ADT7410=m
+CONFIG_SENSORS_ADT7411=m
+CONFIG_SENSORS_ADT7462=m
+CONFIG_SENSORS_ADT7470=m
+CONFIG_SENSORS_ADT7475=m
+# CONFIG_SENSORS_AS370 is not set
+CONFIG_SENSORS_ASC7621=m
+# CONFIG_SENSORS_AXI_FAN_CONTROL is not set
+CONFIG_SENSORS_K8TEMP=m
+CONFIG_SENSORS_K10TEMP=m
+CONFIG_SENSORS_FAM15H_POWER=m
+CONFIG_SENSORS_APPLESMC=m
+CONFIG_SENSORS_ASB100=m
+# CONFIG_SENSORS_ASPEED is not set
+CONFIG_SENSORS_ATXP1=m
+# CONFIG_SENSORS_DRIVETEMP is not set
+CONFIG_SENSORS_DS620=m
+CONFIG_SENSORS_DS1621=m
+CONFIG_SENSORS_DELL_SMM=m
+CONFIG_SENSORS_I5K_AMB=m
+CONFIG_SENSORS_F71805F=m
+CONFIG_SENSORS_F71882FG=m
+CONFIG_SENSORS_F75375S=m
+CONFIG_SENSORS_FSCHMD=m
+# CONFIG_SENSORS_FTSTEUTATES is not set
+CONFIG_SENSORS_GL518SM=m
+CONFIG_SENSORS_GL520SM=m
+CONFIG_SENSORS_G760A=m
+CONFIG_SENSORS_G762=m
+# CONFIG_SENSORS_HIH6130 is not set
+CONFIG_SENSORS_IBMAEM=m
+CONFIG_SENSORS_IBMPEX=m
+CONFIG_SENSORS_I5500=m
+CONFIG_SENSORS_CORETEMP=m
+CONFIG_SENSORS_IT87=m
+# CONFIG_SENSORS_JC42 is not set
+CONFIG_SENSORS_POWR1220=m
+CONFIG_SENSORS_LINEAGE=m
+CONFIG_SENSORS_LTC2945=m
+# CONFIG_SENSORS_LTC2947_I2C is not set
+# CONFIG_SENSORS_LTC2990 is not set
+CONFIG_SENSORS_LTC4151=m
+CONFIG_SENSORS_LTC4215=m
+CONFIG_SENSORS_LTC4222=m
+CONFIG_SENSORS_LTC4245=m
+CONFIG_SENSORS_LTC4260=m
+CONFIG_SENSORS_LTC4261=m
+CONFIG_SENSORS_MAX16065=m
+CONFIG_SENSORS_MAX1619=m
+CONFIG_SENSORS_MAX1668=m
+CONFIG_SENSORS_MAX197=m
+# CONFIG_SENSORS_MAX31730 is not set
+# CONFIG_SENSORS_MAX6621 is not set
+CONFIG_SENSORS_MAX6639=m
+CONFIG_SENSORS_MAX6642=m
+CONFIG_SENSORS_MAX6650=m
+CONFIG_SENSORS_MAX6697=m
+# CONFIG_SENSORS_MAX31790 is not set
+CONFIG_SENSORS_MCP3021=m
+# CONFIG_SENSORS_TC654 is not set
+CONFIG_SENSORS_LM63=m
+CONFIG_SENSORS_LM73=m
+CONFIG_SENSORS_LM75=m
+CONFIG_SENSORS_LM77=m
+CONFIG_SENSORS_LM78=m
+CONFIG_SENSORS_LM80=m
+CONFIG_SENSORS_LM83=m
+CONFIG_SENSORS_LM85=m
+CONFIG_SENSORS_LM87=m
+CONFIG_SENSORS_LM90=m
+CONFIG_SENSORS_LM92=m
+CONFIG_SENSORS_LM93=m
+CONFIG_SENSORS_LM95234=m
+CONFIG_SENSORS_LM95241=m
+CONFIG_SENSORS_LM95245=m
+CONFIG_SENSORS_PC87360=m
+CONFIG_SENSORS_PC87427=m
+CONFIG_SENSORS_NTC_THERMISTOR=m
+CONFIG_SENSORS_NCT6683=m
+CONFIG_SENSORS_NCT6775=m
+CONFIG_SENSORS_NCT7802=m
+# CONFIG_SENSORS_NCT7904 is not set
+# CONFIG_SENSORS_NPCM7XX is not set
+CONFIG_SENSORS_PCF8591=m
+CONFIG_PMBUS=m
+CONFIG_SENSORS_PMBUS=m
+CONFIG_SENSORS_ADM1275=m
+# CONFIG_SENSORS_BEL_PFE is not set
+# CONFIG_SENSORS_IBM_CFFPS is not set
+# CONFIG_SENSORS_INSPUR_IPSPS is not set
+# CONFIG_SENSORS_IR35221 is not set
+# CONFIG_SENSORS_IR38064 is not set
+# CONFIG_SENSORS_IRPS5401 is not set
+# CONFIG_SENSORS_ISL68137 is not set
+CONFIG_SENSORS_LM25066=m
+CONFIG_SENSORS_LTC2978=m
+# CONFIG_SENSORS_LTC3815 is not set
+CONFIG_SENSORS_MAX16064=m
+# CONFIG_SENSORS_MAX20730 is not set
+# CONFIG_SENSORS_MAX20751 is not set
+# CONFIG_SENSORS_MAX31785 is not set
+CONFIG_SENSORS_MAX34440=m
+CONFIG_SENSORS_MAX8688=m
+# CONFIG_SENSORS_PXE1610 is not set
+CONFIG_SENSORS_TPS40422=m
+# CONFIG_SENSORS_TPS53679 is not set
+CONFIG_SENSORS_UCD9000=m
+CONFIG_SENSORS_UCD9200=m
+# CONFIG_SENSORS_XDPE122 is not set
+CONFIG_SENSORS_ZL6100=m
+CONFIG_SENSORS_SHT15=m
+CONFIG_SENSORS_SHT21=m
+# CONFIG_SENSORS_SHT3x is not set
+CONFIG_SENSORS_SHTC1=m
+CONFIG_SENSORS_SIS5595=m
+CONFIG_SENSORS_DME1737=m
+CONFIG_SENSORS_EMC1403=m
+# CONFIG_SENSORS_EMC2103 is not set
+CONFIG_SENSORS_EMC6W201=m
+CONFIG_SENSORS_SMSC47M1=m
+CONFIG_SENSORS_SMSC47M192=m
+CONFIG_SENSORS_SMSC47B397=m
+CONFIG_SENSORS_SCH56XX_COMMON=m
+CONFIG_SENSORS_SCH5627=m
+CONFIG_SENSORS_SCH5636=m
+# CONFIG_SENSORS_STTS751 is not set
+# CONFIG_SENSORS_SMM665 is not set
+CONFIG_SENSORS_ADC128D818=m
+CONFIG_SENSORS_ADS7828=m
+CONFIG_SENSORS_AMC6821=m
+CONFIG_SENSORS_INA209=m
+CONFIG_SENSORS_INA2XX=m
+# CONFIG_SENSORS_INA3221 is not set
+# CONFIG_SENSORS_TC74 is not set
+CONFIG_SENSORS_THMC50=m
+CONFIG_SENSORS_TMP102=m
+CONFIG_SENSORS_TMP103=m
+# CONFIG_SENSORS_TMP108 is not set
+CONFIG_SENSORS_TMP401=m
+CONFIG_SENSORS_TMP421=m
+# CONFIG_SENSORS_TMP513 is not set
+CONFIG_SENSORS_VIA_CPUTEMP=m
+CONFIG_SENSORS_VIA686A=m
+CONFIG_SENSORS_VT1211=m
+CONFIG_SENSORS_VT8231=m
+# CONFIG_SENSORS_W83773G is not set
+CONFIG_SENSORS_W83781D=m
+CONFIG_SENSORS_W83791D=m
+CONFIG_SENSORS_W83792D=m
+CONFIG_SENSORS_W83793=m
+CONFIG_SENSORS_W83795=m
+# CONFIG_SENSORS_W83795_FANCTRL is not set
+CONFIG_SENSORS_W83L785TS=m
+CONFIG_SENSORS_W83L786NG=m
+CONFIG_SENSORS_W83627HF=m
+CONFIG_SENSORS_W83627EHF=m
+# CONFIG_SENSORS_XGENE is not set
+
+#
+# ACPI drivers
+#
+CONFIG_SENSORS_ACPI_POWER=m
+CONFIG_SENSORS_ATK0110=m
+CONFIG_THERMAL=y
+# CONFIG_THERMAL_STATISTICS is not set
+CONFIG_THERMAL_EMERGENCY_POWEROFF_DELAY_MS=0
+CONFIG_THERMAL_HWMON=y
+CONFIG_THERMAL_WRITABLE_TRIPS=y
+CONFIG_THERMAL_DEFAULT_GOV_STEP_WISE=y
+# CONFIG_THERMAL_DEFAULT_GOV_FAIR_SHARE is not set
+# CONFIG_THERMAL_DEFAULT_GOV_USER_SPACE is not set
+CONFIG_THERMAL_GOV_FAIR_SHARE=y
+CONFIG_THERMAL_GOV_STEP_WISE=y
+CONFIG_THERMAL_GOV_BANG_BANG=y
+CONFIG_THERMAL_GOV_USER_SPACE=y
+# CONFIG_CLOCK_THERMAL is not set
+# CONFIG_DEVFREQ_THERMAL is not set
+# CONFIG_THERMAL_EMULATION is not set
+
+#
+# Intel thermal drivers
+#
+# CONFIG_INTEL_POWERCLAMP is not set
+CONFIG_X86_PKG_TEMP_THERMAL=m
+CONFIG_INTEL_SOC_DTS_IOSF_CORE=m
+CONFIG_INTEL_SOC_DTS_THERMAL=m
+
+#
+# ACPI INT340X thermal drivers
+#
+CONFIG_INT340X_THERMAL=m
+CONFIG_ACPI_THERMAL_REL=m
+# CONFIG_INT3406_THERMAL is not set
+CONFIG_PROC_THERMAL_MMIO_RAPL=y
+# end of ACPI INT340X thermal drivers
+
+# CONFIG_INTEL_PCH_THERMAL is not set
+# end of Intel thermal drivers
+
+CONFIG_WATCHDOG=y
+CONFIG_WATCHDOG_CORE=y
+# CONFIG_WATCHDOG_NOWAYOUT is not set
+CONFIG_WATCHDOG_HANDLE_BOOT_ENABLED=y
+CONFIG_WATCHDOG_OPEN_TIMEOUT=0
+# CONFIG_WATCHDOG_SYSFS is not set
+
+#
+# Watchdog Pretimeout Governors
+#
+# CONFIG_WATCHDOG_PRETIMEOUT_GOV is not set
+
+#
+# Watchdog Device Drivers
+#
+CONFIG_SOFT_WATCHDOG=m
+# CONFIG_WDAT_WDT is not set
+# CONFIG_XILINX_WATCHDOG is not set
+# CONFIG_ZIIRAVE_WATCHDOG is not set
+# CONFIG_CADENCE_WATCHDOG is not set
+# CONFIG_DW_WATCHDOG is not set
+# CONFIG_MAX63XX_WATCHDOG is not set
+# CONFIG_ACQUIRE_WDT is not set
+# CONFIG_ADVANTECH_WDT is not set
+CONFIG_ALIM1535_WDT=m
+CONFIG_ALIM7101_WDT=m
+# CONFIG_EBC_C384_WDT is not set
+CONFIG_F71808E_WDT=m
+CONFIG_SP5100_TCO=m
+CONFIG_SBC_FITPC2_WATCHDOG=m
+# CONFIG_EUROTECH_WDT is not set
+CONFIG_IB700_WDT=m
+CONFIG_IBMASR=m
+# CONFIG_WAFER_WDT is not set
+CONFIG_I6300ESB_WDT=m
+CONFIG_IE6XX_WDT=m
+CONFIG_ITCO_WDT=m
+CONFIG_ITCO_VENDOR_SUPPORT=y
+CONFIG_IT8712F_WDT=m
+CONFIG_IT87_WDT=m
+CONFIG_HP_WATCHDOG=m
+CONFIG_HPWDT_NMI_DECODING=y
+# CONFIG_SC1200_WDT is not set
+# CONFIG_PC87413_WDT is not set
+CONFIG_NV_TCO=m
+# CONFIG_60XX_WDT is not set
+# CONFIG_CPU5_WDT is not set
+CONFIG_SMSC_SCH311X_WDT=m
+# CONFIG_SMSC37B787_WDT is not set
+# CONFIG_TQMX86_WDT is not set
+CONFIG_VIA_WDT=m
+CONFIG_W83627HF_WDT=m
+CONFIG_W83877F_WDT=m
+CONFIG_W83977F_WDT=m
+CONFIG_MACHZ_WDT=m
+# CONFIG_SBC_EPX_C3_WATCHDOG is not set
+# CONFIG_INTEL_MEI_WDT is not set
+# CONFIG_NI903X_WDT is not set
+# CONFIG_NIC7018_WDT is not set
+# CONFIG_MEN_A21_WDT is not set
+CONFIG_XEN_WDT=m
+
+#
+# PCI-based Watchdog Cards
+#
+CONFIG_PCIPCWATCHDOG=m
+CONFIG_WDTPCI=m
+
+#
+# USB-based Watchdog Cards
+#
+CONFIG_USBPCWATCHDOG=m
+CONFIG_SSB_POSSIBLE=y
+CONFIG_SSB=m
+CONFIG_SSB_SPROM=y
+CONFIG_SSB_PCIHOST_POSSIBLE=y
+CONFIG_SSB_PCIHOST=y
+CONFIG_SSB_PCMCIAHOST_POSSIBLE=y
+CONFIG_SSB_PCMCIAHOST=y
+CONFIG_SSB_DRIVER_PCICORE_POSSIBLE=y
+CONFIG_SSB_DRIVER_PCICORE=y
+CONFIG_SSB_DRIVER_GPIO=y
+CONFIG_BCMA_POSSIBLE=y
+CONFIG_BCMA=m
+CONFIG_BCMA_HOST_PCI_POSSIBLE=y
+CONFIG_BCMA_HOST_PCI=y
+# CONFIG_BCMA_HOST_SOC is not set
+CONFIG_BCMA_DRIVER_PCI=y
+CONFIG_BCMA_DRIVER_GMAC_CMN=y
+CONFIG_BCMA_DRIVER_GPIO=y
+# CONFIG_BCMA_DEBUG is not set
+
+#
+# Multifunction device drivers
+#
+CONFIG_MFD_CORE=m
+# CONFIG_MFD_AS3711 is not set
+# CONFIG_PMIC_ADP5520 is not set
+# CONFIG_MFD_AAT2870_CORE is not set
+# CONFIG_MFD_BCM590XX is not set
+# CONFIG_MFD_BD9571MWV is not set
+# CONFIG_MFD_AXP20X_I2C is not set
+# CONFIG_MFD_MADERA is not set
+# CONFIG_PMIC_DA903X is not set
+# CONFIG_MFD_DA9052_I2C is not set
+# CONFIG_MFD_DA9055 is not set
+# CONFIG_MFD_DA9062 is not set
+# CONFIG_MFD_DA9063 is not set
+# CONFIG_MFD_DA9150 is not set
+# CONFIG_MFD_DLN2 is not set
+# CONFIG_MFD_MC13XXX_I2C is not set
+# CONFIG_HTC_PASIC3 is not set
+# CONFIG_HTC_I2CPLD is not set
+# CONFIG_MFD_INTEL_QUARK_I2C_GPIO is not set
+CONFIG_LPC_ICH=m
+CONFIG_LPC_SCH=m
+# CONFIG_INTEL_SOC_PMIC_CHTDC_TI is not set
+# CONFIG_MFD_INTEL_LPSS_ACPI is not set
+# CONFIG_MFD_INTEL_LPSS_PCI is not set
+# CONFIG_MFD_IQS62X is not set
+# CONFIG_MFD_JANZ_CMODIO is not set
+# CONFIG_MFD_KEMPLD is not set
+# CONFIG_MFD_88PM800 is not set
+# CONFIG_MFD_88PM805 is not set
+# CONFIG_MFD_88PM860X is not set
+# CONFIG_MFD_MAX14577 is not set
+# CONFIG_MFD_MAX77693 is not set
+# CONFIG_MFD_MAX77843 is not set
+# CONFIG_MFD_MAX8907 is not set
+# CONFIG_MFD_MAX8925 is not set
+# CONFIG_MFD_MAX8997 is not set
+# CONFIG_MFD_MAX8998 is not set
+# CONFIG_MFD_MT6397 is not set
+# CONFIG_MFD_MENF21BMC is not set
+CONFIG_MFD_VIPERBOARD=m
+# CONFIG_MFD_RETU is not set
+# CONFIG_MFD_PCF50633 is not set
+# CONFIG_MFD_RDC321X is not set
+# CONFIG_MFD_RT5033 is not set
+# CONFIG_MFD_RC5T583 is not set
+# CONFIG_MFD_SEC_CORE is not set
+# CONFIG_MFD_SI476X_CORE is not set
+CONFIG_MFD_SM501=m
+CONFIG_MFD_SM501_GPIO=y
+# CONFIG_MFD_SKY81452 is not set
+# CONFIG_MFD_SMSC is not set
+# CONFIG_ABX500_CORE is not set
+# CONFIG_MFD_SYSCON is not set
+# CONFIG_MFD_TI_AM335X_TSCADC is not set
+# CONFIG_MFD_LP3943 is not set
+# CONFIG_MFD_LP8788 is not set
+# CONFIG_MFD_TI_LMU is not set
+# CONFIG_MFD_PALMAS is not set
+# CONFIG_TPS6105X is not set
+# CONFIG_TPS65010 is not set
+# CONFIG_TPS6507X is not set
+# CONFIG_MFD_TPS65086 is not set
+# CONFIG_MFD_TPS65090 is not set
+# CONFIG_MFD_TI_LP873X is not set
+# CONFIG_MFD_TPS6586X is not set
+# CONFIG_MFD_TPS65910 is not set
+# CONFIG_MFD_TPS65912_I2C is not set
+# CONFIG_MFD_TPS80031 is not set
+# CONFIG_TWL4030_CORE is not set
+# CONFIG_TWL6040_CORE is not set
+CONFIG_MFD_WL1273_CORE=m
+# CONFIG_MFD_LM3533 is not set
+# CONFIG_MFD_TQMX86 is not set
+CONFIG_MFD_VX855=m
+# CONFIG_MFD_ARIZONA_I2C is not set
+# CONFIG_MFD_WM8400 is not set
+# CONFIG_MFD_WM831X_I2C is not set
+# CONFIG_MFD_WM8350_I2C is not set
+# CONFIG_MFD_WM8994 is not set
+# end of Multifunction device drivers
+
+# CONFIG_REGULATOR is not set
+# CONFIG_RC_CORE is not set
+# CONFIG_MEDIA_SUPPORT is not set
+
+#
+# Graphics support
+#
+# CONFIG_AGP is not set
+CONFIG_VGA_ARB=y
+CONFIG_VGA_ARB_MAX_GPUS=16
+# CONFIG_VGA_SWITCHEROO is not set
+# CONFIG_DRM is not set
+
+#
+# ARM devices
+#
+# end of ARM devices
+
+# CONFIG_DRM_XEN is not set
+
+#
+# Frame buffer Devices
+#
+# CONFIG_FB is not set
+# end of Frame buffer Devices
+
+#
+# Backlight & LCD device support
+#
+CONFIG_LCD_CLASS_DEVICE=m
+CONFIG_LCD_PLATFORM=m
+CONFIG_BACKLIGHT_CLASS_DEVICE=y
+# CONFIG_BACKLIGHT_GENERIC is not set
+CONFIG_BACKLIGHT_APPLE=m
+# CONFIG_BACKLIGHT_QCOM_WLED is not set
+# CONFIG_BACKLIGHT_SAHARA is not set
+# CONFIG_BACKLIGHT_ADP8860 is not set
+# CONFIG_BACKLIGHT_ADP8870 is not set
+# CONFIG_BACKLIGHT_LM3639 is not set
+# CONFIG_BACKLIGHT_GPIO is not set
+# CONFIG_BACKLIGHT_LV5207LP is not set
+# CONFIG_BACKLIGHT_BD6107 is not set
+# CONFIG_BACKLIGHT_ARCXCNN is not set
+# end of Backlight & LCD device support
+
+#
+# Console display driver support
+#
+CONFIG_VGA_CONSOLE=y
+CONFIG_VGACON_SOFT_SCROLLBACK=y
+CONFIG_VGACON_SOFT_SCROLLBACK_SIZE=64
+# CONFIG_VGACON_SOFT_SCROLLBACK_PERSISTENT_ENABLE_BY_DEFAULT is not set
+CONFIG_DUMMY_CONSOLE=y
+CONFIG_DUMMY_CONSOLE_COLUMNS=80
+CONFIG_DUMMY_CONSOLE_ROWS=25
+# end of Console display driver support
+# end of Graphics support
+
+# CONFIG_SOUND is not set
+
+#
+# HID support
+#
+CONFIG_HID=y
+CONFIG_HID_BATTERY_STRENGTH=y
+CONFIG_HIDRAW=y
+CONFIG_UHID=m
+CONFIG_HID_GENERIC=y
+
+#
+# Special HID drivers
+#
+CONFIG_HID_A4TECH=y
+# CONFIG_HID_ACCUTOUCH is not set
+CONFIG_HID_ACRUX=m
+CONFIG_HID_ACRUX_FF=y
+CONFIG_HID_APPLE=y
+CONFIG_HID_APPLEIR=m
+# CONFIG_HID_ASUS is not set
+CONFIG_HID_AUREAL=m
+CONFIG_HID_BELKIN=y
+# CONFIG_HID_BETOP_FF is not set
+# CONFIG_HID_BIGBEN_FF is not set
+CONFIG_HID_CHERRY=y
+CONFIG_HID_CHICONY=y
+# CONFIG_HID_CORSAIR is not set
+# CONFIG_HID_COUGAR is not set
+# CONFIG_HID_MACALLY is not set
+# CONFIG_HID_CMEDIA is not set
+# CONFIG_HID_CP2112 is not set
+# CONFIG_HID_CREATIVE_SB0540 is not set
+CONFIG_HID_CYPRESS=y
+CONFIG_HID_DRAGONRISE=m
+CONFIG_DRAGONRISE_FF=y
+CONFIG_HID_EMS_FF=m
+# CONFIG_HID_ELAN is not set
+CONFIG_HID_ELECOM=m
+CONFIG_HID_ELO=m
+CONFIG_HID_EZKEY=y
+# CONFIG_HID_GEMBIRD is not set
+# CONFIG_HID_GFRM is not set
+# CONFIG_HID_GLORIOUS is not set
+CONFIG_HID_HOLTEK=m
+CONFIG_HOLTEK_FF=y
+CONFIG_HID_GT683R=m
+CONFIG_HID_KEYTOUCH=m
+CONFIG_HID_KYE=m
+CONFIG_HID_UCLOGIC=m
+CONFIG_HID_WALTOP=m
+# CONFIG_HID_VIEWSONIC is not set
+CONFIG_HID_GYRATION=m
+CONFIG_HID_ICADE=m
+CONFIG_HID_ITE=y
+# CONFIG_HID_JABRA is not set
+CONFIG_HID_TWINHAN=m
+CONFIG_HID_KENSINGTON=y
+CONFIG_HID_LCPOWER=m
+CONFIG_HID_LED=m
+CONFIG_HID_LENOVO=m
+CONFIG_HID_LOGITECH=y
+CONFIG_HID_LOGITECH_DJ=m
+CONFIG_HID_LOGITECH_HIDPP=m
+CONFIG_LOGITECH_FF=y
+CONFIG_LOGIRUMBLEPAD2_FF=y
+CONFIG_LOGIG940_FF=y
+CONFIG_LOGIWHEELS_FF=y
+CONFIG_HID_MAGICMOUSE=y
+# CONFIG_HID_MALTRON is not set
+# CONFIG_HID_MAYFLASH is not set
+CONFIG_HID_REDRAGON=y
+CONFIG_HID_MICROSOFT=y
+CONFIG_HID_MONTEREY=y
+CONFIG_HID_MULTITOUCH=m
+# CONFIG_HID_NTI is not set
+CONFIG_HID_NTRIG=y
+CONFIG_HID_ORTEK=m
+CONFIG_HID_PANTHERLORD=m
+CONFIG_PANTHERLORD_FF=y
+CONFIG_HID_PENMOUNT=m
+CONFIG_HID_PETALYNX=m
+CONFIG_HID_PICOLCD=m
+CONFIG_HID_PICOLCD_BACKLIGHT=y
+CONFIG_HID_PICOLCD_LCD=y
+CONFIG_HID_PICOLCD_LEDS=y
+CONFIG_HID_PLANTRONICS=m
+CONFIG_HID_PRIMAX=m
+# CONFIG_HID_RETRODE is not set
+CONFIG_HID_ROCCAT=m
+CONFIG_HID_SAITEK=m
+CONFIG_HID_SAMSUNG=m
+CONFIG_HID_SONY=m
+CONFIG_SONY_FF=y
+CONFIG_HID_SPEEDLINK=m
+# CONFIG_HID_STEAM is not set
+CONFIG_HID_STEELSERIES=m
+CONFIG_HID_SUNPLUS=m
+CONFIG_HID_RMI=m
+CONFIG_HID_GREENASIA=m
+CONFIG_GREENASIA_FF=y
+CONFIG_HID_HYPERV_MOUSE=m
+CONFIG_HID_SMARTJOYPLUS=m
+CONFIG_SMARTJOYPLUS_FF=y
+CONFIG_HID_TIVO=m
+CONFIG_HID_TOPSEED=m
+CONFIG_HID_THINGM=m
+CONFIG_HID_THRUSTMASTER=m
+CONFIG_THRUSTMASTER_FF=y
+# CONFIG_HID_UDRAW_PS3 is not set
+# CONFIG_HID_U2FZERO is not set
+CONFIG_HID_WACOM=m
+CONFIG_HID_WIIMOTE=m
+CONFIG_HID_XINMO=m
+CONFIG_HID_ZEROPLUS=m
+CONFIG_ZEROPLUS_FF=y
+CONFIG_HID_ZYDACRON=m
+CONFIG_HID_SENSOR_HUB=m
+# CONFIG_HID_SENSOR_CUSTOM_SENSOR is not set
+# CONFIG_HID_ALPS is not set
+# CONFIG_HID_MCP2221 is not set
+# end of Special HID drivers
+
+#
+# USB HID support
+#
+CONFIG_USB_HID=y
+CONFIG_HID_PID=y
+CONFIG_USB_HIDDEV=y
+# end of USB HID support
+
+#
+# I2C HID support
+#
+CONFIG_I2C_HID=m
+# end of I2C HID support
+
+#
+# Intel ISH HID support
+#
+# CONFIG_INTEL_ISH_HID is not set
+# end of Intel ISH HID support
+# end of HID support
+
+CONFIG_USB_OHCI_LITTLE_ENDIAN=y
+CONFIG_USB_SUPPORT=y
+CONFIG_USB_COMMON=y
+CONFIG_USB_LED_TRIG=y
+# CONFIG_USB_ULPI_BUS is not set
+# CONFIG_USB_CONN_GPIO is not set
+CONFIG_USB_ARCH_HAS_HCD=y
+CONFIG_USB=y
+CONFIG_USB_PCI=y
+CONFIG_USB_ANNOUNCE_NEW_DEVICES=y
+
+#
+# Miscellaneous USB options
+#
+CONFIG_USB_DEFAULT_PERSIST=y
+# CONFIG_USB_DYNAMIC_MINORS is not set
+# CONFIG_USB_OTG is not set
+# CONFIG_USB_OTG_WHITELIST is not set
+# CONFIG_USB_LEDS_TRIGGER_USBPORT is not set
+CONFIG_USB_AUTOSUSPEND_DELAY=2
+CONFIG_USB_MON=y
+
+#
+# USB Host Controller Drivers
+#
+# CONFIG_USB_C67X00_HCD is not set
+CONFIG_USB_XHCI_HCD=y
+# CONFIG_USB_XHCI_DBGCAP is not set
+CONFIG_USB_XHCI_PCI=y
+# CONFIG_USB_XHCI_PLATFORM is not set
+CONFIG_USB_EHCI_HCD=y
+CONFIG_USB_EHCI_ROOT_HUB_TT=y
+CONFIG_USB_EHCI_TT_NEWSCHED=y
+CONFIG_USB_EHCI_PCI=y
+# CONFIG_USB_EHCI_FSL is not set
+# CONFIG_USB_EHCI_HCD_PLATFORM is not set
+# CONFIG_USB_OXU210HP_HCD is not set
+# CONFIG_USB_ISP116X_HCD is not set
+# CONFIG_USB_FOTG210_HCD is not set
+CONFIG_USB_OHCI_HCD=y
+CONFIG_USB_OHCI_HCD_PCI=y
+# CONFIG_USB_OHCI_HCD_PLATFORM is not set
+CONFIG_USB_UHCI_HCD=y
+CONFIG_USB_U132_HCD=m
+CONFIG_USB_SL811_HCD=m
+CONFIG_USB_SL811_HCD_ISO=y
+# CONFIG_USB_SL811_CS is not set
+# CONFIG_USB_R8A66597_HCD is not set
+# CONFIG_USB_HCD_BCMA is not set
+# CONFIG_USB_HCD_SSB is not set
+# CONFIG_USB_HCD_TEST_MODE is not set
+
+#
+# USB Device Class drivers
+#
+CONFIG_USB_ACM=m
+CONFIG_USB_PRINTER=m
+CONFIG_USB_WDM=m
+CONFIG_USB_TMC=m
+
+#
+# NOTE: USB_STORAGE depends on SCSI but BLK_DEV_SD may
+#
+
+#
+# also be needed; see USB_STORAGE Help for more info
+#
+CONFIG_USB_STORAGE=m
+# CONFIG_USB_STORAGE_DEBUG is not set
+CONFIG_USB_STORAGE_REALTEK=m
+CONFIG_REALTEK_AUTOPM=y
+CONFIG_USB_STORAGE_DATAFAB=m
+CONFIG_USB_STORAGE_FREECOM=m
+CONFIG_USB_STORAGE_ISD200=m
+CONFIG_USB_STORAGE_USBAT=m
+CONFIG_USB_STORAGE_SDDR09=m
+CONFIG_USB_STORAGE_SDDR55=m
+CONFIG_USB_STORAGE_JUMPSHOT=m
+CONFIG_USB_STORAGE_ALAUDA=m
+CONFIG_USB_STORAGE_ONETOUCH=m
+CONFIG_USB_STORAGE_KARMA=m
+CONFIG_USB_STORAGE_CYPRESS_ATACB=m
+CONFIG_USB_STORAGE_ENE_UB6250=m
+CONFIG_USB_UAS=m
+
+#
+# USB Imaging devices
+#
+CONFIG_USB_MDC800=m
+CONFIG_USB_MICROTEK=m
+CONFIG_USBIP_CORE=m
+CONFIG_USBIP_VHCI_HCD=m
+CONFIG_USBIP_VHCI_HC_PORTS=8
+CONFIG_USBIP_VHCI_NR_HCS=1
+CONFIG_USBIP_HOST=m
+# CONFIG_USBIP_DEBUG is not set
+# CONFIG_USB_CDNS3 is not set
+# CONFIG_USB_MUSB_HDRC is not set
+# CONFIG_USB_DWC3 is not set
+# CONFIG_USB_DWC2 is not set
+# CONFIG_USB_CHIPIDEA is not set
+# CONFIG_USB_ISP1760 is not set
+
+#
+# USB port drivers
+#
+CONFIG_USB_USS720=m
+CONFIG_USB_SERIAL=y
+CONFIG_USB_SERIAL_CONSOLE=y
+CONFIG_USB_SERIAL_GENERIC=y
+CONFIG_USB_SERIAL_SIMPLE=m
+CONFIG_USB_SERIAL_AIRCABLE=m
+CONFIG_USB_SERIAL_ARK3116=m
+CONFIG_USB_SERIAL_BELKIN=m
+CONFIG_USB_SERIAL_CH341=m
+CONFIG_USB_SERIAL_WHITEHEAT=m
+CONFIG_USB_SERIAL_DIGI_ACCELEPORT=m
+CONFIG_USB_SERIAL_CP210X=m
+CONFIG_USB_SERIAL_CYPRESS_M8=m
+CONFIG_USB_SERIAL_EMPEG=m
+CONFIG_USB_SERIAL_FTDI_SIO=m
+CONFIG_USB_SERIAL_VISOR=m
+CONFIG_USB_SERIAL_IPAQ=m
+CONFIG_USB_SERIAL_IR=m
+CONFIG_USB_SERIAL_EDGEPORT=m
+CONFIG_USB_SERIAL_EDGEPORT_TI=m
+# CONFIG_USB_SERIAL_F81232 is not set
+# CONFIG_USB_SERIAL_F8153X is not set
+CONFIG_USB_SERIAL_GARMIN=m
+CONFIG_USB_SERIAL_IPW=m
+CONFIG_USB_SERIAL_IUU=m
+CONFIG_USB_SERIAL_KEYSPAN_PDA=m
+CONFIG_USB_SERIAL_KEYSPAN=m
+CONFIG_USB_SERIAL_KLSI=m
+CONFIG_USB_SERIAL_KOBIL_SCT=m
+CONFIG_USB_SERIAL_MCT_U232=m
+# CONFIG_USB_SERIAL_METRO is not set
+CONFIG_USB_SERIAL_MOS7720=m
+CONFIG_USB_SERIAL_MOS7715_PARPORT=y
+CONFIG_USB_SERIAL_MOS7840=m
+# CONFIG_USB_SERIAL_MXUPORT is not set
+CONFIG_USB_SERIAL_NAVMAN=m
+CONFIG_USB_SERIAL_PL2303=m
+CONFIG_USB_SERIAL_OTI6858=m
+CONFIG_USB_SERIAL_QCAUX=m
+CONFIG_USB_SERIAL_QUALCOMM=m
+CONFIG_USB_SERIAL_SPCP8X5=m
+CONFIG_USB_SERIAL_SAFE=m
+CONFIG_USB_SERIAL_SAFE_PADDED=y
+CONFIG_USB_SERIAL_SIERRAWIRELESS=m
+CONFIG_USB_SERIAL_SYMBOL=m
+CONFIG_USB_SERIAL_TI=m
+CONFIG_USB_SERIAL_CYBERJACK=m
+CONFIG_USB_SERIAL_XIRCOM=m
+CONFIG_USB_SERIAL_WWAN=m
+CONFIG_USB_SERIAL_OPTION=m
+CONFIG_USB_SERIAL_OMNINET=m
+CONFIG_USB_SERIAL_OPTICON=m
+CONFIG_USB_SERIAL_XSENS_MT=m
+# CONFIG_USB_SERIAL_WISHBONE is not set
+CONFIG_USB_SERIAL_SSU100=m
+CONFIG_USB_SERIAL_QT2=m
+# CONFIG_USB_SERIAL_UPD78F0730 is not set
+CONFIG_USB_SERIAL_DEBUG=m
+
+#
+# USB Miscellaneous drivers
+#
+CONFIG_USB_EMI62=m
+CONFIG_USB_EMI26=m
+CONFIG_USB_ADUTUX=m
+CONFIG_USB_SEVSEG=m
+CONFIG_USB_LEGOTOWER=m
+CONFIG_USB_LCD=m
+# CONFIG_USB_CYPRESS_CY7C63 is not set
+# CONFIG_USB_CYTHERM is not set
+CONFIG_USB_IDMOUSE=m
+CONFIG_USB_FTDI_ELAN=m
+CONFIG_USB_APPLEDISPLAY=m
+# CONFIG_APPLE_MFI_FASTCHARGE is not set
+CONFIG_USB_SISUSBVGA=m
+CONFIG_USB_SISUSBVGA_CON=y
+CONFIG_USB_LD=m
+CONFIG_USB_TRANCEVIBRATOR=m
+CONFIG_USB_IOWARRIOR=m
+# CONFIG_USB_TEST is not set
+# CONFIG_USB_EHSET_TEST_FIXTURE is not set
+CONFIG_USB_ISIGHTFW=m
+CONFIG_USB_YUREX=m
+CONFIG_USB_EZUSB_FX2=m
+# CONFIG_USB_HUB_USB251XB is not set
+CONFIG_USB_HSIC_USB3503=m
+# CONFIG_USB_HSIC_USB4604 is not set
+# CONFIG_USB_LINK_LAYER_TEST is not set
+# CONFIG_USB_CHAOSKEY is not set
+
+#
+# USB Physical Layer drivers
+#
+CONFIG_USB_PHY=y
+CONFIG_NOP_USB_XCEIV=m
+# CONFIG_USB_GPIO_VBUS is not set
+# CONFIG_USB_ISP1301 is not set
+# end of USB Physical Layer drivers
+
+# CONFIG_USB_GADGET is not set
+# CONFIG_TYPEC is not set
+# CONFIG_USB_ROLE_SWITCH is not set
+# CONFIG_MMC is not set
+# CONFIG_MEMSTICK is not set
+CONFIG_NEW_LEDS=y
+CONFIG_LEDS_CLASS=y
+# CONFIG_LEDS_CLASS_FLASH is not set
+# CONFIG_LEDS_BRIGHTNESS_HW_CHANGED is not set
+
+#
+# LED drivers
+#
+# CONFIG_LEDS_APU is not set
+CONFIG_LEDS_LM3530=m
+# CONFIG_LEDS_LM3532 is not set
+# CONFIG_LEDS_LM3642 is not set
+# CONFIG_LEDS_PCA9532 is not set
+# CONFIG_LEDS_GPIO is not set
+CONFIG_LEDS_LP3944=m
+# CONFIG_LEDS_LP3952 is not set
+CONFIG_LEDS_LP55XX_COMMON=m
+CONFIG_LEDS_LP5521=m
+CONFIG_LEDS_LP5523=m
+CONFIG_LEDS_LP5562=m
+# CONFIG_LEDS_LP8501 is not set
+CONFIG_LEDS_CLEVO_MAIL=m
+# CONFIG_LEDS_PCA955X is not set
+# CONFIG_LEDS_PCA963X is not set
+# CONFIG_LEDS_BD2802 is not set
+CONFIG_LEDS_INTEL_SS4200=m
+# CONFIG_LEDS_TCA6507 is not set
+# CONFIG_LEDS_TLC591XX is not set
+# CONFIG_LEDS_LM355x is not set
+
+#
+# LED driver for blink(1) USB RGB LED is under Special HID drivers (HID_THINGM)
+#
+CONFIG_LEDS_BLINKM=m
+# CONFIG_LEDS_MLXCPLD is not set
+# CONFIG_LEDS_MLXREG is not set
+# CONFIG_LEDS_USER is not set
+# CONFIG_LEDS_NIC78BX is not set
+# CONFIG_LEDS_TI_LMU_COMMON is not set
+
+#
+# LED Triggers
+#
+CONFIG_LEDS_TRIGGERS=y
+CONFIG_LEDS_TRIGGER_TIMER=m
+CONFIG_LEDS_TRIGGER_ONESHOT=m
+# CONFIG_LEDS_TRIGGER_DISK is not set
+CONFIG_LEDS_TRIGGER_HEARTBEAT=m
+CONFIG_LEDS_TRIGGER_BACKLIGHT=m
+# CONFIG_LEDS_TRIGGER_CPU is not set
+# CONFIG_LEDS_TRIGGER_ACTIVITY is not set
+CONFIG_LEDS_TRIGGER_GPIO=m
+CONFIG_LEDS_TRIGGER_DEFAULT_ON=m
+
+#
+# iptables trigger is under Netfilter config (LED target)
+#
+CONFIG_LEDS_TRIGGER_TRANSIENT=m
+CONFIG_LEDS_TRIGGER_CAMERA=m
+# CONFIG_LEDS_TRIGGER_PANIC is not set
+# CONFIG_LEDS_TRIGGER_NETDEV is not set
+# CONFIG_LEDS_TRIGGER_PATTERN is not set
+CONFIG_LEDS_TRIGGER_AUDIO=m
+CONFIG_ACCESSIBILITY=y
+CONFIG_A11Y_BRAILLE_CONSOLE=y
+CONFIG_INFINIBAND=m
+CONFIG_INFINIBAND_USER_MAD=m
+CONFIG_INFINIBAND_USER_ACCESS=m
+# CONFIG_INFINIBAND_EXP_LEGACY_VERBS_NEW_UAPI is not set
+CONFIG_INFINIBAND_USER_MEM=y
+CONFIG_INFINIBAND_ON_DEMAND_PAGING=y
+CONFIG_INFINIBAND_ADDR_TRANS=y
+CONFIG_INFINIBAND_ADDR_TRANS_CONFIGFS=y
+CONFIG_INFINIBAND_MTHCA=m
+CONFIG_INFINIBAND_MTHCA_DEBUG=y
+CONFIG_INFINIBAND_CXGB4=m
+# CONFIG_INFINIBAND_EFA is not set
+# CONFIG_INFINIBAND_I40IW is not set
+CONFIG_MLX4_INFINIBAND=m
+CONFIG_MLX5_INFINIBAND=m
+# CONFIG_INFINIBAND_OCRDMA is not set
+# CONFIG_INFINIBAND_VMWARE_PVRDMA is not set
+# CONFIG_INFINIBAND_USNIC is not set
+# CONFIG_INFINIBAND_BNXT_RE is not set
+# CONFIG_INFINIBAND_RDMAVT is not set
+# CONFIG_RDMA_RXE is not set
+# CONFIG_RDMA_SIW is not set
+CONFIG_INFINIBAND_IPOIB=m
+CONFIG_INFINIBAND_IPOIB_CM=y
+CONFIG_INFINIBAND_IPOIB_DEBUG=y
+CONFIG_INFINIBAND_IPOIB_DEBUG_DATA=y
+CONFIG_INFINIBAND_SRP=m
+CONFIG_INFINIBAND_SRPT=m
+CONFIG_INFINIBAND_ISER=m
+CONFIG_INFINIBAND_ISERT=m
+# CONFIG_INFINIBAND_OPA_VNIC is not set
+CONFIG_EDAC_ATOMIC_SCRUB=y
+CONFIG_EDAC_SUPPORT=y
+CONFIG_EDAC=y
+CONFIG_EDAC_LEGACY_SYSFS=y
+# CONFIG_EDAC_DEBUG is not set
+CONFIG_EDAC_DECODE_MCE=m
+# CONFIG_EDAC_GHES is not set
+CONFIG_EDAC_AMD64=m
+# CONFIG_EDAC_AMD64_ERROR_INJECTION is not set
+CONFIG_EDAC_E752X=m
+CONFIG_EDAC_I82975X=m
+CONFIG_EDAC_I3000=m
+CONFIG_EDAC_I3200=m
+CONFIG_EDAC_IE31200=m
+CONFIG_EDAC_X38=m
+CONFIG_EDAC_I5400=m
+CONFIG_EDAC_I7CORE=m
+CONFIG_EDAC_I5000=m
+CONFIG_EDAC_I5100=m
+CONFIG_EDAC_I7300=m
+CONFIG_EDAC_SBRIDGE=m
+# CONFIG_EDAC_SKX is not set
+# CONFIG_EDAC_I10NM is not set
+# CONFIG_EDAC_PND2 is not set
+CONFIG_RTC_LIB=y
+CONFIG_RTC_MC146818_LIB=y
+CONFIG_RTC_CLASS=y
+CONFIG_RTC_HCTOSYS=y
+CONFIG_RTC_HCTOSYS_DEVICE="rtc0"
+# CONFIG_RTC_SYSTOHC is not set
+# CONFIG_RTC_DEBUG is not set
+CONFIG_RTC_NVMEM=y
+
+#
+# RTC interfaces
+#
+CONFIG_RTC_INTF_SYSFS=y
+CONFIG_RTC_INTF_PROC=y
+CONFIG_RTC_INTF_DEV=y
+# CONFIG_RTC_INTF_DEV_UIE_EMUL is not set
+# CONFIG_RTC_DRV_TEST is not set
+
+#
+# I2C RTC drivers
+#
+# CONFIG_RTC_DRV_ABB5ZES3 is not set
+# CONFIG_RTC_DRV_ABEOZ9 is not set
+# CONFIG_RTC_DRV_ABX80X is not set
+CONFIG_RTC_DRV_DS1307=m
+# CONFIG_RTC_DRV_DS1307_CENTURY is not set
+CONFIG_RTC_DRV_DS1374=m
+CONFIG_RTC_DRV_DS1374_WDT=y
+CONFIG_RTC_DRV_DS1672=m
+CONFIG_RTC_DRV_MAX6900=m
+CONFIG_RTC_DRV_RS5C372=m
+CONFIG_RTC_DRV_ISL1208=m
+CONFIG_RTC_DRV_ISL12022=m
+CONFIG_RTC_DRV_X1205=m
+CONFIG_RTC_DRV_PCF8523=m
+CONFIG_RTC_DRV_PCF85063=m
+# CONFIG_RTC_DRV_PCF85363 is not set
+CONFIG_RTC_DRV_PCF8563=m
+CONFIG_RTC_DRV_PCF8583=m
+CONFIG_RTC_DRV_M41T80=m
+CONFIG_RTC_DRV_M41T80_WDT=y
+CONFIG_RTC_DRV_BQ32K=m
+# CONFIG_RTC_DRV_S35390A is not set
+CONFIG_RTC_DRV_FM3130=m
+# CONFIG_RTC_DRV_RX8010 is not set
+CONFIG_RTC_DRV_RX8581=m
+CONFIG_RTC_DRV_RX8025=m
+CONFIG_RTC_DRV_EM3027=m
+# CONFIG_RTC_DRV_RV3028 is not set
+# CONFIG_RTC_DRV_RV8803 is not set
+# CONFIG_RTC_DRV_SD3078 is not set
+
+#
+# SPI RTC drivers
+#
+CONFIG_RTC_I2C_AND_SPI=y
+
+#
+# SPI and I2C RTC drivers
+#
+CONFIG_RTC_DRV_DS3232=m
+CONFIG_RTC_DRV_DS3232_HWMON=y
+CONFIG_RTC_DRV_PCF2127=m
+CONFIG_RTC_DRV_RV3029C2=m
+CONFIG_RTC_DRV_RV3029_HWMON=y
+
+#
+# Platform RTC drivers
+#
+CONFIG_RTC_DRV_CMOS=y
+CONFIG_RTC_DRV_DS1286=m
+CONFIG_RTC_DRV_DS1511=m
+CONFIG_RTC_DRV_DS1553=m
+# CONFIG_RTC_DRV_DS1685_FAMILY is not set
+CONFIG_RTC_DRV_DS1742=m
+CONFIG_RTC_DRV_DS2404=m
+CONFIG_RTC_DRV_STK17TA8=m
+# CONFIG_RTC_DRV_M48T86 is not set
+CONFIG_RTC_DRV_M48T35=m
+CONFIG_RTC_DRV_M48T59=m
+CONFIG_RTC_DRV_MSM6242=m
+CONFIG_RTC_DRV_BQ4802=m
+CONFIG_RTC_DRV_RP5C01=m
+CONFIG_RTC_DRV_V3020=m
+
+#
+# on-CPU RTC drivers
+#
+# CONFIG_RTC_DRV_FTRTC010 is not set
+
+#
+# HID Sensor RTC drivers
+#
+CONFIG_DMADEVICES=y
+# CONFIG_DMADEVICES_DEBUG is not set
+
+#
+# DMA Devices
+#
+CONFIG_DMA_ENGINE=y
+CONFIG_DMA_VIRTUAL_CHANNELS=y
+CONFIG_DMA_ACPI=y
+# CONFIG_ALTERA_MSGDMA is not set
+# CONFIG_INTEL_IDMA64 is not set
+# CONFIG_INTEL_IDXD is not set
+CONFIG_INTEL_IOATDMA=m
+CONFIG_INTEL_MIC_X100_DMA=m
+# CONFIG_PLX_DMA is not set
+# CONFIG_QCOM_HIDMA_MGMT is not set
+# CONFIG_QCOM_HIDMA is not set
+CONFIG_DW_DMAC_CORE=y
+CONFIG_DW_DMAC=m
+CONFIG_DW_DMAC_PCI=y
+# CONFIG_DW_EDMA is not set
+# CONFIG_DW_EDMA_PCIE is not set
+CONFIG_HSU_DMA=y
+# CONFIG_SF_PDMA is not set
+
+#
+# DMA Clients
+#
+CONFIG_ASYNC_TX_DMA=y
+# CONFIG_DMATEST is not set
+CONFIG_DMA_ENGINE_RAID=y
+
+#
+# DMABUF options
+#
+# CONFIG_SYNC_FILE is not set
+# CONFIG_DMABUF_MOVE_NOTIFY is not set
+# CONFIG_DMABUF_HEAPS is not set
+# end of DMABUF options
+
+CONFIG_DCA=m
+CONFIG_AUXDISPLAY=y
+# CONFIG_HD44780 is not set
+CONFIG_KS0108=m
+CONFIG_KS0108_PORT=0x378
+CONFIG_KS0108_DELAY=2
+# CONFIG_IMG_ASCII_LCD is not set
+# CONFIG_PARPORT_PANEL is not set
+# CONFIG_CHARLCD_BL_OFF is not set
+# CONFIG_CHARLCD_BL_ON is not set
+CONFIG_CHARLCD_BL_FLASH=y
+# CONFIG_PANEL is not set
+CONFIG_UIO=m
+CONFIG_UIO_CIF=m
+# CONFIG_UIO_PDRV_GENIRQ is not set
+# CONFIG_UIO_DMEM_GENIRQ is not set
+CONFIG_UIO_AEC=m
+CONFIG_UIO_SERCOS3=m
+CONFIG_UIO_PCI_GENERIC=m
+# CONFIG_UIO_NETX is not set
+# CONFIG_UIO_PRUSS is not set
+# CONFIG_UIO_MF624 is not set
+# CONFIG_UIO_HV_GENERIC is not set
+CONFIG_VFIO_IOMMU_TYPE1=m
+CONFIG_VFIO_VIRQFD=m
+CONFIG_VFIO=m
+# CONFIG_VFIO_NOIOMMU is not set
+CONFIG_VFIO_PCI=m
+CONFIG_VFIO_PCI_VGA=y
+CONFIG_VFIO_PCI_MMAP=y
+CONFIG_VFIO_PCI_INTX=y
+CONFIG_VFIO_PCI_IGD=y
+# CONFIG_VFIO_MDEV is not set
+CONFIG_IRQ_BYPASS_MANAGER=m
+# CONFIG_VIRT_DRIVERS is not set
+CONFIG_VIRTIO=m
+CONFIG_VIRTIO_MENU=y
+CONFIG_VIRTIO_PCI=m
+CONFIG_VIRTIO_PCI_LEGACY=y
+CONFIG_VIRTIO_BALLOON=m
+# CONFIG_VIRTIO_INPUT is not set
+CONFIG_VIRTIO_MMIO=m
+# CONFIG_VIRTIO_MMIO_CMDLINE_DEVICES is not set
+# CONFIG_VDPA is not set
+CONFIG_VHOST_IOTLB=m
+CONFIG_VHOST_DPN=y
+CONFIG_VHOST=m
+CONFIG_VHOST_MENU=y
+CONFIG_VHOST_NET=m
+CONFIG_VHOST_SCSI=m
+# CONFIG_VHOST_VSOCK is not set
+# CONFIG_VHOST_CROSS_ENDIAN_LEGACY is not set
+
+#
+# Microsoft Hyper-V guest support
+#
+CONFIG_HYPERV=m
+CONFIG_HYPERV_TIMER=y
+CONFIG_HYPERV_UTILS=m
+CONFIG_HYPERV_BALLOON=m
+# end of Microsoft Hyper-V guest support
+
+#
+# Xen driver support
+#
+CONFIG_XEN_BALLOON=y
+# CONFIG_XEN_BALLOON_MEMORY_HOTPLUG is not set
+CONFIG_XEN_SCRUB_PAGES_DEFAULT=y
+CONFIG_XEN_DEV_EVTCHN=m
+CONFIG_XEN_BACKEND=y
+CONFIG_XENFS=m
+CONFIG_XEN_COMPAT_XENFS=y
+CONFIG_XEN_SYS_HYPERVISOR=y
+CONFIG_XEN_XENBUS_FRONTEND=y
+CONFIG_XEN_GNTDEV=m
+CONFIG_XEN_GRANT_DEV_ALLOC=m
+# CONFIG_XEN_GRANT_DMA_ALLOC is not set
+CONFIG_SWIOTLB_XEN=y
+CONFIG_XEN_PCIDEV_BACKEND=m
+# CONFIG_XEN_PVCALLS_FRONTEND is not set
+# CONFIG_XEN_PVCALLS_BACKEND is not set
+# CONFIG_XEN_SCSI_BACKEND is not set
+CONFIG_XEN_PRIVCMD=m
+CONFIG_XEN_ACPI_PROCESSOR=m
+# CONFIG_XEN_MCE_LOG is not set
+CONFIG_XEN_HAVE_PVMMU=y
+CONFIG_XEN_EFI=y
+CONFIG_XEN_AUTO_XLATE=y
+CONFIG_XEN_ACPI=y
+CONFIG_XEN_SYMS=y
+CONFIG_XEN_HAVE_VPMU=y
+# end of Xen driver support
+
+# CONFIG_GREYBUS is not set
+# CONFIG_STAGING is not set
+CONFIG_X86_PLATFORM_DEVICES=y
+CONFIG_ACPI_WMI=m
+CONFIG_WMI_BMOF=m
+CONFIG_ALIENWARE_WMI=m
+# CONFIG_HUAWEI_WMI is not set
+# CONFIG_INTEL_WMI_THUNDERBOLT is not set
+CONFIG_MXM_WMI=m
+# CONFIG_PEAQ_WMI is not set
+# CONFIG_XIAOMI_WMI is not set
+CONFIG_ACERHDF=m
+# CONFIG_ACER_WIRELESS is not set
+CONFIG_ACER_WMI=m
+CONFIG_APPLE_GMUX=m
+CONFIG_ASUS_LAPTOP=m
+# CONFIG_ASUS_WIRELESS is not set
+CONFIG_ASUS_WMI=m
+CONFIG_ASUS_NB_WMI=m
+CONFIG_EEEPC_LAPTOP=m
+CONFIG_EEEPC_WMI=m
+CONFIG_DCDBAS=m
+# CONFIG_DELL_SMBIOS is not set
+# CONFIG_DELL_RBU is not set
+CONFIG_DELL_SMO8800=m
+CONFIG_DELL_WMI_AIO=m
+# CONFIG_DELL_WMI_LED is not set
+CONFIG_FUJITSU_LAPTOP=m
+CONFIG_FUJITSU_TABLET=m
+# CONFIG_GPD_POCKET_FAN is not set
+CONFIG_HP_ACCEL=m
+CONFIG_HP_WIRELESS=m
+CONFIG_HP_WMI=m
+# CONFIG_IBM_RTL is not set
+CONFIG_SENSORS_HDAPS=m
+CONFIG_THINKPAD_ACPI=m
+# CONFIG_THINKPAD_ACPI_DEBUGFACILITIES is not set
+# CONFIG_THINKPAD_ACPI_DEBUG is not set
+# CONFIG_THINKPAD_ACPI_UNSAFE_LEDS is not set
+CONFIG_THINKPAD_ACPI_VIDEO=y
+CONFIG_THINKPAD_ACPI_HOTKEY_POLL=y
+# CONFIG_INTEL_ATOMISP2_PM is not set
+# CONFIG_INTEL_HID_EVENT is not set
+# CONFIG_INTEL_INT0002_VGPIO is not set
+# CONFIG_INTEL_MENLOW is not set
+# CONFIG_INTEL_VBTN is not set
+# CONFIG_SURFACE_3_BUTTON is not set
+# CONFIG_SURFACE_3_POWER_OPREGION is not set
+# CONFIG_SURFACE_PRO3_BUTTON is not set
+CONFIG_MSI_WMI=m
+# CONFIG_PCENGINES_APU2 is not set
+CONFIG_SAMSUNG_LAPTOP=m
+CONFIG_SAMSUNG_Q10=m
+CONFIG_TOSHIBA_BT_RFKILL=m
+CONFIG_TOSHIBA_HAPS=m
+# CONFIG_TOSHIBA_WMI is not set
+CONFIG_ACPI_CMPC=m
+# CONFIG_LG_LAPTOP is not set
+CONFIG_PANASONIC_LAPTOP=m
+# CONFIG_SYSTEM76_ACPI is not set
+CONFIG_TOPSTAR_LAPTOP=m
+# CONFIG_I2C_MULTI_INSTANTIATE is not set
+# CONFIG_MLX_PLATFORM is not set
+CONFIG_INTEL_IPS=m
+CONFIG_INTEL_RST=m
+CONFIG_INTEL_SMARTCONNECT=y
+
+#
+# Intel Speed Select Technology interface support
+#
+# CONFIG_INTEL_SPEED_SELECT_INTERFACE is not set
+# end of Intel Speed Select Technology interface support
+
+# CONFIG_INTEL_TURBO_MAX_3 is not set
+# CONFIG_INTEL_UNCORE_FREQ_CONTROL is not set
+# CONFIG_INTEL_PMC_CORE is not set
+# CONFIG_INTEL_PMC_IPC is not set
+# CONFIG_INTEL_PUNIT_IPC is not set
+CONFIG_PMC_ATOM=y
+# CONFIG_MFD_CROS_EC is not set
+# CONFIG_CHROME_PLATFORMS is not set
+# CONFIG_MELLANOX_PLATFORM is not set
+CONFIG_CLKDEV_LOOKUP=y
+CONFIG_HAVE_CLK_PREPARE=y
+CONFIG_COMMON_CLK=y
+
+#
+# Common Clock Framework
+#
+# CONFIG_COMMON_CLK_MAX9485 is not set
+# CONFIG_COMMON_CLK_SI5341 is not set
+# CONFIG_COMMON_CLK_SI5351 is not set
+# CONFIG_COMMON_CLK_SI544 is not set
+# CONFIG_COMMON_CLK_CDCE706 is not set
+# CONFIG_COMMON_CLK_CS2000_CP is not set
+# end of Common Clock Framework
+
+# CONFIG_HWSPINLOCK is not set
+
+#
+# Clock Source drivers
+#
+CONFIG_CLKEVT_I8253=y
+CONFIG_I8253_LOCK=y
+CONFIG_CLKBLD_I8253=y
+# end of Clock Source drivers
+
+CONFIG_MAILBOX=y
+CONFIG_PCC=y
+# CONFIG_ALTERA_MBOX is not set
+CONFIG_IOMMU_IOVA=y
+CONFIG_IOASID=y
+CONFIG_IOMMU_API=y
+CONFIG_IOMMU_SUPPORT=y
+
+#
+# Generic IOMMU Pagetable Support
+#
+# end of Generic IOMMU Pagetable Support
+
+# CONFIG_IOMMU_DEBUGFS is not set
+# CONFIG_IOMMU_DEFAULT_PASSTHROUGH is not set
+CONFIG_IOMMU_DMA=y
+CONFIG_AMD_IOMMU=y
+CONFIG_AMD_IOMMU_V2=m
+CONFIG_DMAR_TABLE=y
+CONFIG_INTEL_IOMMU=y
+# CONFIG_INTEL_IOMMU_SVM is not set
+# CONFIG_INTEL_IOMMU_DEFAULT_ON is not set
+CONFIG_INTEL_IOMMU_FLOPPY_WA=y
+# CONFIG_INTEL_IOMMU_SCALABLE_MODE_DEFAULT_ON is not set
+CONFIG_IRQ_REMAP=y
+CONFIG_HYPERV_IOMMU=y
+
+#
+# Remoteproc drivers
+#
+# CONFIG_REMOTEPROC is not set
+# end of Remoteproc drivers
+
+#
+# Rpmsg drivers
+#
+# CONFIG_RPMSG_QCOM_GLINK_RPM is not set
+# CONFIG_RPMSG_VIRTIO is not set
+# end of Rpmsg drivers
+
+# CONFIG_SOUNDWIRE is not set
+
+#
+# SOC (System On Chip) specific Drivers
+#
+
+#
+# Amlogic SoC drivers
+#
+# end of Amlogic SoC drivers
+
+#
+# Aspeed SoC drivers
+#
+# end of Aspeed SoC drivers
+
+#
+# Broadcom SoC drivers
+#
+# end of Broadcom SoC drivers
+
+#
+# NXP/Freescale QorIQ SoC drivers
+#
+# end of NXP/Freescale QorIQ SoC drivers
+
+#
+# i.MX SoC drivers
+#
+# end of i.MX SoC drivers
+
+#
+# Qualcomm SoC drivers
+#
+# end of Qualcomm SoC drivers
+
+# CONFIG_SOC_TI is not set
+
+#
+# Xilinx SoC drivers
+#
+# CONFIG_XILINX_VCU is not set
+# end of Xilinx SoC drivers
+# end of SOC (System On Chip) specific Drivers
+
+CONFIG_PM_DEVFREQ=y
+
+#
+# DEVFREQ Governors
+#
+CONFIG_DEVFREQ_GOV_SIMPLE_ONDEMAND=m
+# CONFIG_DEVFREQ_GOV_PERFORMANCE is not set
+# CONFIG_DEVFREQ_GOV_POWERSAVE is not set
+# CONFIG_DEVFREQ_GOV_USERSPACE is not set
+# CONFIG_DEVFREQ_GOV_PASSIVE is not set
+
+#
+# DEVFREQ Drivers
+#
+# CONFIG_PM_DEVFREQ_EVENT is not set
+CONFIG_EXTCON=y
+
+#
+# Extcon Device Drivers
+#
+# CONFIG_EXTCON_FSA9480 is not set
+# CONFIG_EXTCON_GPIO is not set
+# CONFIG_EXTCON_INTEL_INT3496 is not set
+# CONFIG_EXTCON_MAX3355 is not set
+# CONFIG_EXTCON_PTN5150 is not set
+# CONFIG_EXTCON_RT8973A is not set
+# CONFIG_EXTCON_SM5502 is not set
+# CONFIG_EXTCON_USB_GPIO is not set
+# CONFIG_MEMORY is not set
+# CONFIG_IIO is not set
+CONFIG_NTB=m
+# CONFIG_NTB_MSI is not set
+# CONFIG_NTB_AMD is not set
+# CONFIG_NTB_IDT is not set
+# CONFIG_NTB_INTEL is not set
+# CONFIG_NTB_SWITCHTEC is not set
+# CONFIG_NTB_PINGPONG is not set
+# CONFIG_NTB_TOOL is not set
+# CONFIG_NTB_PERF is not set
+# CONFIG_NTB_TRANSPORT is not set
+# CONFIG_VME_BUS is not set
+# CONFIG_PWM is not set
+
+#
+# IRQ chip support
+#
+# end of IRQ chip support
+
+# CONFIG_IPACK_BUS is not set
+CONFIG_RESET_CONTROLLER=y
+# CONFIG_RESET_BRCMSTB_RESCAL is not set
+# CONFIG_RESET_TI_SYSCON is not set
+
+#
+# PHY Subsystem
+#
+CONFIG_GENERIC_PHY=y
+# CONFIG_BCM_KONA_USB2_PHY is not set
+# CONFIG_PHY_PXA_28NM_HSIC is not set
+# CONFIG_PHY_PXA_28NM_USB2 is not set
+# CONFIG_PHY_INTEL_EMMC is not set
+# end of PHY Subsystem
+
+CONFIG_POWERCAP=y
+CONFIG_INTEL_RAPL_CORE=m
+CONFIG_INTEL_RAPL=m
+# CONFIG_IDLE_INJECT is not set
+# CONFIG_MCB is not set
+
+#
+# Performance monitor support
+#
+# end of Performance monitor support
+
+CONFIG_RAS=y
+# CONFIG_RAS_CEC is not set
+# CONFIG_USB4 is not set
+
+#
+# Android
+#
+# CONFIG_ANDROID is not set
+# end of Android
+
+# CONFIG_LIBNVDIMM is not set
+# CONFIG_DAX is not set
+CONFIG_NVMEM=y
+CONFIG_NVMEM_SYSFS=y
+
+#
+# HW tracing support
+#
+# CONFIG_STM is not set
+# CONFIG_INTEL_TH is not set
+# end of HW tracing support
+
+# CONFIG_FPGA is not set
+# CONFIG_TEE is not set
+CONFIG_PM_OPP=y
+# CONFIG_UNISYS_VISORBUS is not set
+# CONFIG_SIOX is not set
+# CONFIG_SLIMBUS is not set
+# CONFIG_INTERCONNECT is not set
+# CONFIG_COUNTER is not set
+# CONFIG_MOST is not set
+# end of Device Drivers
+
+#
+# File systems
+#
+CONFIG_DCACHE_WORD_ACCESS=y
+# CONFIG_VALIDATE_FS_PARSER is not set
+CONFIG_FS_IOMAP=y
+# CONFIG_EXT2_FS is not set
+# CONFIG_EXT3_FS is not set
+CONFIG_EXT4_FS=y
+CONFIG_EXT4_USE_FOR_EXT2=y
+CONFIG_EXT4_FS_POSIX_ACL=y
+CONFIG_EXT4_FS_SECURITY=y
+# CONFIG_EXT4_DEBUG is not set
+CONFIG_JBD2=y
+# CONFIG_JBD2_DEBUG is not set
+CONFIG_FS_MBCACHE=y
+CONFIG_REISERFS_FS=m
+# CONFIG_REISERFS_CHECK is not set
+CONFIG_REISERFS_PROC_INFO=y
+CONFIG_REISERFS_FS_XATTR=y
+CONFIG_REISERFS_FS_POSIX_ACL=y
+CONFIG_REISERFS_FS_SECURITY=y
+CONFIG_JFS_FS=m
+CONFIG_JFS_POSIX_ACL=y
+CONFIG_JFS_SECURITY=y
+# CONFIG_JFS_DEBUG is not set
+# CONFIG_JFS_STATISTICS is not set
+CONFIG_XFS_FS=m
+CONFIG_XFS_QUOTA=y
+CONFIG_XFS_POSIX_ACL=y
+# CONFIG_XFS_RT is not set
+# CONFIG_XFS_ONLINE_SCRUB is not set
+# CONFIG_XFS_WARN is not set
+# CONFIG_XFS_DEBUG is not set
+CONFIG_GFS2_FS=m
+CONFIG_GFS2_FS_LOCKING_DLM=y
+CONFIG_OCFS2_FS=m
+CONFIG_OCFS2_FS_O2CB=m
+CONFIG_OCFS2_FS_USERSPACE_CLUSTER=m
+# CONFIG_OCFS2_FS_STATS is not set
+# CONFIG_OCFS2_DEBUG_MASKLOG is not set
+# CONFIG_OCFS2_DEBUG_FS is not set
+CONFIG_BTRFS_FS=m
+CONFIG_BTRFS_FS_POSIX_ACL=y
+# CONFIG_BTRFS_FS_CHECK_INTEGRITY is not set
+# CONFIG_BTRFS_FS_RUN_SANITY_TESTS is not set
+# CONFIG_BTRFS_DEBUG is not set
+# CONFIG_BTRFS_ASSERT is not set
+# CONFIG_BTRFS_FS_REF_VERIFY is not set
+CONFIG_NILFS2_FS=m
+CONFIG_F2FS_FS=m
+CONFIG_F2FS_STAT_FS=y
+CONFIG_F2FS_FS_XATTR=y
+CONFIG_F2FS_FS_POSIX_ACL=y
+CONFIG_F2FS_FS_SECURITY=y
+# CONFIG_F2FS_CHECK_FS is not set
+# CONFIG_F2FS_IO_TRACE is not set
+# CONFIG_F2FS_FAULT_INJECTION is not set
+# CONFIG_F2FS_FS_COMPRESSION is not set
+# CONFIG_FS_DAX is not set
+CONFIG_FS_POSIX_ACL=y
+CONFIG_EXPORTFS=y
+# CONFIG_EXPORTFS_BLOCK_OPS is not set
+CONFIG_FILE_LOCKING=y
+CONFIG_MANDATORY_FILE_LOCKING=y
+CONFIG_FS_ENCRYPTION=y
+CONFIG_FS_ENCRYPTION_ALGS=m
+# CONFIG_FS_VERITY is not set
+CONFIG_FSNOTIFY=y
+CONFIG_DNOTIFY=y
+CONFIG_INOTIFY_USER=y
+CONFIG_FANOTIFY=y
+CONFIG_FANOTIFY_ACCESS_PERMISSIONS=y
+CONFIG_QUOTA=y
+CONFIG_QUOTA_NETLINK_INTERFACE=y
+# CONFIG_PRINT_QUOTA_WARNING is not set
+# CONFIG_QUOTA_DEBUG is not set
+CONFIG_QUOTA_TREE=y
+# CONFIG_QFMT_V1 is not set
+CONFIG_QFMT_V2=y
+CONFIG_QUOTACTL=y
+CONFIG_QUOTACTL_COMPAT=y
+CONFIG_AUTOFS4_FS=y
+CONFIG_AUTOFS_FS=y
+CONFIG_FUSE_FS=m
+CONFIG_CUSE=m
+# CONFIG_VIRTIO_FS is not set
+CONFIG_OVERLAY_FS=m
+# CONFIG_OVERLAY_FS_REDIRECT_DIR is not set
+CONFIG_OVERLAY_FS_REDIRECT_ALWAYS_FOLLOW=y
+# CONFIG_OVERLAY_FS_INDEX is not set
+# CONFIG_OVERLAY_FS_XINO_AUTO is not set
+# CONFIG_OVERLAY_FS_METACOPY is not set
+
+#
+# Caches
+#
+CONFIG_FSCACHE=m
+CONFIG_FSCACHE_STATS=y
+# CONFIG_FSCACHE_HISTOGRAM is not set
+# CONFIG_FSCACHE_DEBUG is not set
+CONFIG_FSCACHE_OBJECT_LIST=y
+CONFIG_CACHEFILES=m
+# CONFIG_CACHEFILES_DEBUG is not set
+# CONFIG_CACHEFILES_HISTOGRAM is not set
+# end of Caches
+
+#
+# CD-ROM/DVD Filesystems
+#
+CONFIG_ISO9660_FS=m
+CONFIG_JOLIET=y
+CONFIG_ZISOFS=y
+CONFIG_UDF_FS=m
+# end of CD-ROM/DVD Filesystems
+
+#
+# DOS/FAT/EXFAT/NT Filesystems
+#
+CONFIG_FAT_FS=m
+CONFIG_MSDOS_FS=m
+CONFIG_VFAT_FS=m
+CONFIG_FAT_DEFAULT_CODEPAGE=437
+CONFIG_FAT_DEFAULT_IOCHARSET="ascii"
+# CONFIG_FAT_DEFAULT_UTF8 is not set
+# CONFIG_EXFAT_FS is not set
+# CONFIG_NTFS_FS is not set
+# end of DOS/FAT/EXFAT/NT Filesystems
+
+#
+# Pseudo filesystems
+#
+CONFIG_PROC_FS=y
+CONFIG_PROC_KCORE=y
+CONFIG_PROC_VMCORE=y
+# CONFIG_PROC_VMCORE_DEVICE_DUMP is not set
+CONFIG_PROC_SYSCTL=y
+CONFIG_PROC_PAGE_MONITOR=y
+# CONFIG_PROC_CHILDREN is not set
+CONFIG_PROC_PID_ARCH_STATUS=y
+CONFIG_KERNFS=y
+CONFIG_SYSFS=y
+CONFIG_TMPFS=y
+CONFIG_TMPFS_POSIX_ACL=y
+CONFIG_TMPFS_XATTR=y
+CONFIG_HUGETLBFS=y
+CONFIG_HUGETLB_PAGE=y
+CONFIG_MEMFD_CREATE=y
+CONFIG_ARCH_HAS_GIGANTIC_PAGE=y
+CONFIG_CONFIGFS_FS=y
+CONFIG_EFIVAR_FS=y
+# end of Pseudo filesystems
+
+CONFIG_MISC_FILESYSTEMS=y
+# CONFIG_ORANGEFS_FS is not set
+# CONFIG_ADFS_FS is not set
+CONFIG_AFFS_FS=m
+CONFIG_ECRYPT_FS=m
+# CONFIG_ECRYPT_FS_MESSAGING is not set
+CONFIG_HFS_FS=m
+CONFIG_HFSPLUS_FS=m
+CONFIG_BEFS_FS=m
+# CONFIG_BEFS_DEBUG is not set
+# CONFIG_BFS_FS is not set
+# CONFIG_EFS_FS is not set
+CONFIG_CRAMFS=m
+CONFIG_CRAMFS_BLOCKDEV=y
+CONFIG_SQUASHFS=m
+CONFIG_SQUASHFS_FILE_CACHE=y
+# CONFIG_SQUASHFS_FILE_DIRECT is not set
+CONFIG_SQUASHFS_DECOMP_SINGLE=y
+# CONFIG_SQUASHFS_DECOMP_MULTI is not set
+# CONFIG_SQUASHFS_DECOMP_MULTI_PERCPU is not set
+CONFIG_SQUASHFS_XATTR=y
+CONFIG_SQUASHFS_ZLIB=y
+CONFIG_SQUASHFS_LZ4=y
+CONFIG_SQUASHFS_LZO=y
+CONFIG_SQUASHFS_XZ=y
+# CONFIG_SQUASHFS_ZSTD is not set
+# CONFIG_SQUASHFS_4K_DEVBLK_SIZE is not set
+# CONFIG_SQUASHFS_EMBEDDED is not set
+CONFIG_SQUASHFS_FRAGMENT_CACHE_SIZE=3
+# CONFIG_VXFS_FS is not set
+CONFIG_MINIX_FS=m
+# CONFIG_OMFS_FS is not set
+# CONFIG_HPFS_FS is not set
+# CONFIG_QNX4FS_FS is not set
+# CONFIG_QNX6FS_FS is not set
+CONFIG_ROMFS_FS=m
+CONFIG_ROMFS_BACKED_BY_BLOCK=y
+CONFIG_ROMFS_ON_BLOCK=y
+CONFIG_PSTORE=y
+CONFIG_PSTORE_DEFLATE_COMPRESS=y
+# CONFIG_PSTORE_LZO_COMPRESS is not set
+# CONFIG_PSTORE_LZ4_COMPRESS is not set
+# CONFIG_PSTORE_LZ4HC_COMPRESS is not set
+# CONFIG_PSTORE_842_COMPRESS is not set
+# CONFIG_PSTORE_ZSTD_COMPRESS is not set
+CONFIG_PSTORE_COMPRESS=y
+CONFIG_PSTORE_DEFLATE_COMPRESS_DEFAULT=y
+CONFIG_PSTORE_COMPRESS_DEFAULT="deflate"
+# CONFIG_PSTORE_CONSOLE is not set
+# CONFIG_PSTORE_PMSG is not set
+# CONFIG_PSTORE_FTRACE is not set
+CONFIG_PSTORE_RAM=m
+CONFIG_SYSV_FS=m
+CONFIG_UFS_FS=m
+# CONFIG_UFS_FS_WRITE is not set
+# CONFIG_UFS_DEBUG is not set
+# CONFIG_EROFS_FS is not set
+CONFIG_NETWORK_FILESYSTEMS=y
+CONFIG_NFS_FS=m
+# CONFIG_NFS_V2 is not set
+CONFIG_NFS_V3=m
+CONFIG_NFS_V3_ACL=y
+CONFIG_NFS_V4=m
+CONFIG_NFS_SWAP=y
+CONFIG_NFS_V4_1=y
+CONFIG_NFS_V4_2=y
+CONFIG_PNFS_FILE_LAYOUT=m
+CONFIG_PNFS_BLOCK=m
+CONFIG_PNFS_FLEXFILE_LAYOUT=m
+CONFIG_NFS_V4_1_IMPLEMENTATION_ID_DOMAIN="kernel.org"
+# CONFIG_NFS_V4_1_MIGRATION is not set
+CONFIG_NFS_V4_SECURITY_LABEL=y
+CONFIG_NFS_FSCACHE=y
+# CONFIG_NFS_USE_LEGACY_DNS is not set
+CONFIG_NFS_USE_KERNEL_DNS=y
+CONFIG_NFS_DEBUG=y
+CONFIG_NFS_DISABLE_UDP_SUPPORT=y
+CONFIG_NFSD=m
+CONFIG_NFSD_V2_ACL=y
+CONFIG_NFSD_V3=y
+CONFIG_NFSD_V3_ACL=y
+CONFIG_NFSD_V4=y
+# CONFIG_NFSD_BLOCKLAYOUT is not set
+# CONFIG_NFSD_SCSILAYOUT is not set
+# CONFIG_NFSD_FLEXFILELAYOUT is not set
+CONFIG_NFSD_V4_SECURITY_LABEL=y
+CONFIG_GRACE_PERIOD=m
+CONFIG_LOCKD=m
+CONFIG_LOCKD_V4=y
+CONFIG_NFS_ACL_SUPPORT=m
+CONFIG_NFS_COMMON=y
+CONFIG_SUNRPC=m
+CONFIG_SUNRPC_GSS=m
+CONFIG_SUNRPC_BACKCHANNEL=y
+CONFIG_SUNRPC_SWAP=y
+CONFIG_RPCSEC_GSS_KRB5=m
+# CONFIG_SUNRPC_DISABLE_INSECURE_ENCTYPES is not set
+CONFIG_SUNRPC_DEBUG=y
+CONFIG_SUNRPC_XPRT_RDMA=m
+CONFIG_CEPH_FS=m
+CONFIG_CEPH_FSCACHE=y
+CONFIG_CEPH_FS_POSIX_ACL=y
+CONFIG_CEPH_FS_SECURITY_LABEL=y
+CONFIG_CIFS=m
+# CONFIG_CIFS_STATS2 is not set
+CONFIG_CIFS_ALLOW_INSECURE_LEGACY=y
+CONFIG_CIFS_WEAK_PW_HASH=y
+CONFIG_CIFS_UPCALL=y
+CONFIG_CIFS_XATTR=y
+CONFIG_CIFS_POSIX=y
+CONFIG_CIFS_DEBUG=y
+# CONFIG_CIFS_DEBUG2 is not set
+# CONFIG_CIFS_DEBUG_DUMP_KEYS is not set
+CONFIG_CIFS_DFS_UPCALL=y
+# CONFIG_CIFS_SMB_DIRECT is not set
+CONFIG_CIFS_FSCACHE=y
+CONFIG_CODA_FS=m
+# CONFIG_AFS_FS is not set
+CONFIG_9P_FS=m
+CONFIG_9P_FSCACHE=y
+CONFIG_9P_FS_POSIX_ACL=y
+CONFIG_9P_FS_SECURITY=y
+CONFIG_NLS=y
+CONFIG_NLS_DEFAULT="utf8"
+CONFIG_NLS_CODEPAGE_437=y
+CONFIG_NLS_CODEPAGE_737=m
+CONFIG_NLS_CODEPAGE_775=m
+CONFIG_NLS_CODEPAGE_850=m
+CONFIG_NLS_CODEPAGE_852=m
+CONFIG_NLS_CODEPAGE_855=m
+CONFIG_NLS_CODEPAGE_857=m
+CONFIG_NLS_CODEPAGE_860=m
+CONFIG_NLS_CODEPAGE_861=m
+CONFIG_NLS_CODEPAGE_862=m
+CONFIG_NLS_CODEPAGE_863=m
+CONFIG_NLS_CODEPAGE_864=m
+CONFIG_NLS_CODEPAGE_865=m
+CONFIG_NLS_CODEPAGE_866=m
+CONFIG_NLS_CODEPAGE_869=m
+CONFIG_NLS_CODEPAGE_936=m
+CONFIG_NLS_CODEPAGE_950=m
+CONFIG_NLS_CODEPAGE_932=m
+CONFIG_NLS_CODEPAGE_949=m
+CONFIG_NLS_CODEPAGE_874=m
+CONFIG_NLS_ISO8859_8=m
+CONFIG_NLS_CODEPAGE_1250=m
+CONFIG_NLS_CODEPAGE_1251=m
+CONFIG_NLS_ASCII=y
+CONFIG_NLS_ISO8859_1=m
+CONFIG_NLS_ISO8859_2=m
+CONFIG_NLS_ISO8859_3=m
+CONFIG_NLS_ISO8859_4=m
+CONFIG_NLS_ISO8859_5=m
+CONFIG_NLS_ISO8859_6=m
+CONFIG_NLS_ISO8859_7=m
+CONFIG_NLS_ISO8859_9=m
+CONFIG_NLS_ISO8859_13=m
+CONFIG_NLS_ISO8859_14=m
+CONFIG_NLS_ISO8859_15=m
+CONFIG_NLS_KOI8_R=m
+CONFIG_NLS_KOI8_U=m
+CONFIG_NLS_MAC_ROMAN=m
+CONFIG_NLS_MAC_CELTIC=m
+CONFIG_NLS_MAC_CENTEURO=m
+CONFIG_NLS_MAC_CROATIAN=m
+CONFIG_NLS_MAC_CYRILLIC=m
+CONFIG_NLS_MAC_GAELIC=m
+CONFIG_NLS_MAC_GREEK=m
+CONFIG_NLS_MAC_ICELAND=m
+CONFIG_NLS_MAC_INUIT=m
+CONFIG_NLS_MAC_ROMANIAN=m
+CONFIG_NLS_MAC_TURKISH=m
+CONFIG_NLS_UTF8=m
+CONFIG_DLM=m
+CONFIG_DLM_DEBUG=y
+# CONFIG_UNICODE is not set
+CONFIG_IO_WQ=y
+# end of File systems
+
+#
+# Security options
+#
+CONFIG_KEYS=y
+# CONFIG_KEYS_REQUEST_CACHE is not set
+CONFIG_PERSISTENT_KEYRINGS=y
+CONFIG_BIG_KEYS=y
+CONFIG_TRUSTED_KEYS=m
+CONFIG_ENCRYPTED_KEYS=m
+# CONFIG_KEY_DH_OPERATIONS is not set
+# CONFIG_SECURITY_DMESG_RESTRICT is not set
+CONFIG_SECURITY=y
+CONFIG_SECURITY_WRITABLE_HOOKS=y
+CONFIG_SECURITYFS=y
+CONFIG_SECURITY_NETWORK=y
+CONFIG_PAGE_TABLE_ISOLATION=y
+# CONFIG_SECURITY_INFINIBAND is not set
+CONFIG_SECURITY_NETWORK_XFRM=y
+# CONFIG_SECURITY_PATH is not set
+CONFIG_INTEL_TXT=y
+CONFIG_LSM_MMAP_MIN_ADDR=65536
+CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR=y
+CONFIG_HARDENED_USERCOPY=y
+CONFIG_HARDENED_USERCOPY_FALLBACK=y
+# CONFIG_FORTIFY_SOURCE is not set
+# CONFIG_STATIC_USERMODEHELPER is not set
+CONFIG_SECURITY_SELINUX=y
+CONFIG_SECURITY_SELINUX_BOOTPARAM=y
+CONFIG_SECURITY_SELINUX_DISABLE=y
+CONFIG_SECURITY_SELINUX_DEVELOP=y
+CONFIG_SECURITY_SELINUX_AVC_STATS=y
+CONFIG_SECURITY_SELINUX_CHECKREQPROT_VALUE=1
+CONFIG_SECURITY_SELINUX_SIDTAB_HASH_BITS=9
+CONFIG_SECURITY_SELINUX_SID2STR_CACHE_SIZE=256
+# CONFIG_SECURITY_SMACK is not set
+# CONFIG_SECURITY_TOMOYO is not set
+# CONFIG_SECURITY_APPARMOR is not set
+# CONFIG_SECURITY_LOADPIN is not set
+# CONFIG_SECURITY_YAMA is not set
+# CONFIG_SECURITY_SAFESETID is not set
+# CONFIG_SECURITY_LOCKDOWN_LSM is not set
+# CONFIG_INTEGRITY is not set
+# CONFIG_IMA_SECURE_AND_OR_TRUSTED_BOOT is not set
+CONFIG_DEFAULT_SECURITY_SELINUX=y
+# CONFIG_DEFAULT_SECURITY_DAC is not set
+CONFIG_LSM="lockdown,yama,loadpin,safesetid,integrity,selinux,smack,tomoyo,apparmor,bpf"
+
+#
+# Kernel hardening options
+#
+
+#
+# Memory initialization
+#
+CONFIG_INIT_STACK_NONE=y
+# CONFIG_INIT_ON_ALLOC_DEFAULT_ON is not set
+# CONFIG_INIT_ON_FREE_DEFAULT_ON is not set
+# end of Memory initialization
+# end of Kernel hardening options
+# end of Security options
+
+CONFIG_XOR_BLOCKS=m
+CONFIG_ASYNC_CORE=m
+CONFIG_ASYNC_MEMCPY=m
+CONFIG_ASYNC_XOR=m
+CONFIG_ASYNC_PQ=m
+CONFIG_ASYNC_RAID6_RECOV=m
+CONFIG_CRYPTO=y
+
+#
+# Crypto core or helper
+#
+CONFIG_CRYPTO_FIPS=y
+CONFIG_CRYPTO_ALGAPI=y
+CONFIG_CRYPTO_ALGAPI2=y
+CONFIG_CRYPTO_AEAD=y
+CONFIG_CRYPTO_AEAD2=y
+CONFIG_CRYPTO_SKCIPHER=y
+CONFIG_CRYPTO_SKCIPHER2=y
+CONFIG_CRYPTO_HASH=y
+CONFIG_CRYPTO_HASH2=y
+CONFIG_CRYPTO_RNG=y
+CONFIG_CRYPTO_RNG2=y
+CONFIG_CRYPTO_RNG_DEFAULT=y
+CONFIG_CRYPTO_AKCIPHER2=y
+CONFIG_CRYPTO_AKCIPHER=y
+CONFIG_CRYPTO_KPP2=y
+CONFIG_CRYPTO_KPP=m
+CONFIG_CRYPTO_ACOMP2=y
+CONFIG_CRYPTO_MANAGER=y
+CONFIG_CRYPTO_MANAGER2=y
+CONFIG_CRYPTO_USER=m
+# CONFIG_CRYPTO_MANAGER_DISABLE_TESTS is not set
+# CONFIG_CRYPTO_MANAGER_EXTRA_TESTS is not set
+CONFIG_CRYPTO_GF128MUL=y
+CONFIG_CRYPTO_NULL=y
+CONFIG_CRYPTO_NULL2=y
+CONFIG_CRYPTO_PCRYPT=m
+CONFIG_CRYPTO_CRYPTD=y
+CONFIG_CRYPTO_AUTHENC=m
+CONFIG_CRYPTO_TEST=m
+CONFIG_CRYPTO_SIMD=y
+CONFIG_CRYPTO_GLUE_HELPER_X86=y
+CONFIG_CRYPTO_ENGINE=m
+
+#
+# Public-key cryptography
+#
+CONFIG_CRYPTO_RSA=y
+CONFIG_CRYPTO_DH=m
+# CONFIG_CRYPTO_ECDH is not set
+# CONFIG_CRYPTO_ECRDSA is not set
+# CONFIG_CRYPTO_CURVE25519 is not set
+# CONFIG_CRYPTO_CURVE25519_X86 is not set
+
+#
+# Authenticated Encryption with Associated Data
+#
+CONFIG_CRYPTO_CCM=m
+CONFIG_CRYPTO_GCM=y
+# CONFIG_CRYPTO_CHACHA20POLY1305 is not set
+# CONFIG_CRYPTO_AEGIS128 is not set
+# CONFIG_CRYPTO_AEGIS128_AESNI_SSE2 is not set
+CONFIG_CRYPTO_SEQIV=y
+CONFIG_CRYPTO_ECHAINIV=m
+
+#
+# Block modes
+#
+CONFIG_CRYPTO_CBC=y
+# CONFIG_CRYPTO_CFB is not set
+CONFIG_CRYPTO_CTR=y
+CONFIG_CRYPTO_CTS=m
+CONFIG_CRYPTO_ECB=y
+CONFIG_CRYPTO_LRW=y
+# CONFIG_CRYPTO_OFB is not set
+CONFIG_CRYPTO_PCBC=m
+CONFIG_CRYPTO_XTS=y
+# CONFIG_CRYPTO_KEYWRAP is not set
+# CONFIG_CRYPTO_NHPOLY1305_SSE2 is not set
+# CONFIG_CRYPTO_NHPOLY1305_AVX2 is not set
+# CONFIG_CRYPTO_ADIANTUM is not set
+CONFIG_CRYPTO_ESSIV=m
+
+#
+# Hash modes
+#
+CONFIG_CRYPTO_CMAC=m
+CONFIG_CRYPTO_HMAC=y
+CONFIG_CRYPTO_XCBC=m
+CONFIG_CRYPTO_VMAC=m
+
+#
+# Digest
+#
+CONFIG_CRYPTO_CRC32C=y
+CONFIG_CRYPTO_CRC32C_INTEL=m
+CONFIG_CRYPTO_CRC32=m
+CONFIG_CRYPTO_CRC32_PCLMUL=m
+CONFIG_CRYPTO_XXHASH=m
+CONFIG_CRYPTO_BLAKE2B=m
+# CONFIG_CRYPTO_BLAKE2S is not set
+# CONFIG_CRYPTO_BLAKE2S_X86 is not set
+CONFIG_CRYPTO_CRCT10DIF=y
+CONFIG_CRYPTO_CRCT10DIF_PCLMUL=m
+CONFIG_CRYPTO_GHASH=y
+# CONFIG_CRYPTO_POLY1305 is not set
+# CONFIG_CRYPTO_POLY1305_X86_64 is not set
+CONFIG_CRYPTO_MD4=m
+CONFIG_CRYPTO_MD5=y
+CONFIG_CRYPTO_MICHAEL_MIC=m
+CONFIG_CRYPTO_RMD128=m
+CONFIG_CRYPTO_RMD160=m
+CONFIG_CRYPTO_RMD256=m
+CONFIG_CRYPTO_RMD320=m
+CONFIG_CRYPTO_SHA1=y
+CONFIG_CRYPTO_SHA1_SSSE3=m
+CONFIG_CRYPTO_SHA256_SSSE3=m
+CONFIG_CRYPTO_SHA512_SSSE3=m
+CONFIG_CRYPTO_SHA256=y
+CONFIG_CRYPTO_SHA512=m
+# CONFIG_CRYPTO_SHA3 is not set
+# CONFIG_CRYPTO_SM3 is not set
+# CONFIG_CRYPTO_STREEBOG is not set
+CONFIG_CRYPTO_TGR192=m
+CONFIG_CRYPTO_WP512=m
+CONFIG_CRYPTO_GHASH_CLMUL_NI_INTEL=m
+
+#
+# Ciphers
+#
+CONFIG_CRYPTO_AES=y
+# CONFIG_CRYPTO_AES_TI is not set
+CONFIG_CRYPTO_AES_NI_INTEL=y
+CONFIG_CRYPTO_ANUBIS=m
+CONFIG_CRYPTO_ARC4=m
+CONFIG_CRYPTO_BLOWFISH=m
+CONFIG_CRYPTO_BLOWFISH_COMMON=m
+CONFIG_CRYPTO_BLOWFISH_X86_64=m
+CONFIG_CRYPTO_CAMELLIA=m
+CONFIG_CRYPTO_CAMELLIA_X86_64=m
+CONFIG_CRYPTO_CAMELLIA_AESNI_AVX_X86_64=m
+CONFIG_CRYPTO_CAMELLIA_AESNI_AVX2_X86_64=m
+CONFIG_CRYPTO_CAST_COMMON=m
+CONFIG_CRYPTO_CAST5=m
+CONFIG_CRYPTO_CAST5_AVX_X86_64=m
+CONFIG_CRYPTO_CAST6=m
+CONFIG_CRYPTO_CAST6_AVX_X86_64=m
+CONFIG_CRYPTO_DES=m
+CONFIG_CRYPTO_DES3_EDE_X86_64=m
+CONFIG_CRYPTO_FCRYPT=m
+CONFIG_CRYPTO_KHAZAD=m
+CONFIG_CRYPTO_SALSA20=m
+# CONFIG_CRYPTO_CHACHA20 is not set
+# CONFIG_CRYPTO_CHACHA20_X86_64 is not set
+CONFIG_CRYPTO_SEED=m
+CONFIG_CRYPTO_SERPENT=m
+CONFIG_CRYPTO_SERPENT_SSE2_X86_64=m
+CONFIG_CRYPTO_SERPENT_AVX_X86_64=m
+CONFIG_CRYPTO_SERPENT_AVX2_X86_64=m
+# CONFIG_CRYPTO_SM4 is not set
+CONFIG_CRYPTO_TEA=m
+CONFIG_CRYPTO_TWOFISH=m
+CONFIG_CRYPTO_TWOFISH_COMMON=m
+CONFIG_CRYPTO_TWOFISH_X86_64=m
+CONFIG_CRYPTO_TWOFISH_X86_64_3WAY=m
+CONFIG_CRYPTO_TWOFISH_AVX_X86_64=m
+
+#
+# Compression
+#
+CONFIG_CRYPTO_DEFLATE=y
+CONFIG_CRYPTO_LZO=y
+# CONFIG_CRYPTO_842 is not set
+CONFIG_CRYPTO_LZ4=m
+CONFIG_CRYPTO_LZ4HC=m
+# CONFIG_CRYPTO_ZSTD is not set
+
+#
+# Random Number Generation
+#
+CONFIG_CRYPTO_ANSI_CPRNG=m
+CONFIG_CRYPTO_DRBG_MENU=y
+CONFIG_CRYPTO_DRBG_HMAC=y
+# CONFIG_CRYPTO_DRBG_HASH is not set
+# CONFIG_CRYPTO_DRBG_CTR is not set
+CONFIG_CRYPTO_DRBG=y
+CONFIG_CRYPTO_JITTERENTROPY=y
+CONFIG_CRYPTO_USER_API=y
+CONFIG_CRYPTO_USER_API_HASH=y
+CONFIG_CRYPTO_USER_API_SKCIPHER=y
+# CONFIG_CRYPTO_USER_API_RNG is not set
+# CONFIG_CRYPTO_USER_API_AEAD is not set
+# CONFIG_CRYPTO_STATS is not set
+CONFIG_CRYPTO_HASH_INFO=y
+
+#
+# Crypto library routines
+#
+CONFIG_CRYPTO_LIB_AES=y
+CONFIG_CRYPTO_LIB_ARC4=m
+# CONFIG_CRYPTO_LIB_BLAKE2S is not set
+# CONFIG_CRYPTO_LIB_CHACHA is not set
+# CONFIG_CRYPTO_LIB_CURVE25519 is not set
+CONFIG_CRYPTO_LIB_DES=m
+CONFIG_CRYPTO_LIB_POLY1305_RSIZE=11
+# CONFIG_CRYPTO_LIB_POLY1305 is not set
+# CONFIG_CRYPTO_LIB_CHACHA20POLY1305 is not set
+CONFIG_CRYPTO_LIB_SHA256=y
+CONFIG_CRYPTO_HW=y
+CONFIG_CRYPTO_DEV_PADLOCK=m
+CONFIG_CRYPTO_DEV_PADLOCK_AES=m
+CONFIG_CRYPTO_DEV_PADLOCK_SHA=m
+# CONFIG_CRYPTO_DEV_ATMEL_ECC is not set
+# CONFIG_CRYPTO_DEV_ATMEL_SHA204A is not set
+CONFIG_CRYPTO_DEV_CCP=y
+CONFIG_CRYPTO_DEV_CCP_DD=m
+CONFIG_CRYPTO_DEV_SP_CCP=y
+CONFIG_CRYPTO_DEV_CCP_CRYPTO=m
+CONFIG_CRYPTO_DEV_SP_PSP=y
+# CONFIG_CRYPTO_DEV_CCP_DEBUGFS is not set
+CONFIG_CRYPTO_DEV_QAT=m
+CONFIG_CRYPTO_DEV_QAT_DH895xCC=m
+# CONFIG_CRYPTO_DEV_QAT_C3XXX is not set
+# CONFIG_CRYPTO_DEV_QAT_C62X is not set
+# CONFIG_CRYPTO_DEV_QAT_DH895xCCVF is not set
+# CONFIG_CRYPTO_DEV_QAT_C3XXXVF is not set
+# CONFIG_CRYPTO_DEV_QAT_C62XVF is not set
+# CONFIG_CRYPTO_DEV_NITROX_CNN55XX is not set
+# CONFIG_CRYPTO_DEV_CHELSIO is not set
+CONFIG_CRYPTO_DEV_VIRTIO=m
+# CONFIG_CRYPTO_DEV_SAFEXCEL is not set
+# CONFIG_CRYPTO_DEV_AMLOGIC_GXL is not set
+CONFIG_ASYMMETRIC_KEY_TYPE=y
+CONFIG_ASYMMETRIC_PUBLIC_KEY_SUBTYPE=y
+# CONFIG_ASYMMETRIC_TPM_KEY_SUBTYPE is not set
+CONFIG_X509_CERTIFICATE_PARSER=y
+# CONFIG_PKCS8_PRIVATE_KEY_PARSER is not set
+CONFIG_PKCS7_MESSAGE_PARSER=y
+# CONFIG_PKCS7_TEST_KEY is not set
+CONFIG_SIGNED_PE_FILE_VERIFICATION=y
+
+#
+# Certificates for signature checking
+#
+CONFIG_MODULE_SIG_KEY="certs/signing_key.pem"
+CONFIG_SYSTEM_TRUSTED_KEYRING=y
+CONFIG_SYSTEM_TRUSTED_KEYS=""
+# CONFIG_SYSTEM_EXTRA_CERTIFICATE is not set
+# CONFIG_SECONDARY_TRUSTED_KEYRING is not set
+# CONFIG_SYSTEM_BLACKLIST_KEYRING is not set
+# end of Certificates for signature checking
+
+CONFIG_BINARY_PRINTF=y
+
+#
+# Library routines
+#
+CONFIG_RAID6_PQ=m
+CONFIG_RAID6_PQ_BENCHMARK=y
+# CONFIG_PACKING is not set
+CONFIG_BITREVERSE=y
+CONFIG_GENERIC_STRNCPY_FROM_USER=y
+CONFIG_GENERIC_STRNLEN_USER=y
+CONFIG_GENERIC_NET_UTILS=y
+CONFIG_GENERIC_FIND_FIRST_BIT=y
+CONFIG_CORDIC=m
+CONFIG_RATIONAL=y
+CONFIG_GENERIC_PCI_IOMAP=y
+CONFIG_GENERIC_IOMAP=y
+CONFIG_ARCH_USE_CMPXCHG_LOCKREF=y
+CONFIG_ARCH_HAS_FAST_MULTIPLIER=y
+CONFIG_CRC_CCITT=y
+CONFIG_CRC16=y
+CONFIG_CRC_T10DIF=y
+CONFIG_CRC_ITU_T=m
+CONFIG_CRC32=y
+# CONFIG_CRC32_SELFTEST is not set
+CONFIG_CRC32_SLICEBY8=y
+# CONFIG_CRC32_SLICEBY4 is not set
+# CONFIG_CRC32_SARWATE is not set
+# CONFIG_CRC32_BIT is not set
+CONFIG_CRC64=m
+# CONFIG_CRC4 is not set
+# CONFIG_CRC7 is not set
+CONFIG_LIBCRC32C=m
+CONFIG_CRC8=m
+CONFIG_XXHASH=y
+# CONFIG_RANDOM32_SELFTEST is not set
+CONFIG_ZLIB_INFLATE=y
+CONFIG_ZLIB_DEFLATE=y
+CONFIG_LZO_COMPRESS=y
+CONFIG_LZO_DECOMPRESS=y
+CONFIG_LZ4_COMPRESS=m
+CONFIG_LZ4HC_COMPRESS=m
+CONFIG_LZ4_DECOMPRESS=y
+CONFIG_ZSTD_COMPRESS=m
+CONFIG_ZSTD_DECOMPRESS=m
+CONFIG_XZ_DEC=y
+CONFIG_XZ_DEC_X86=y
+CONFIG_XZ_DEC_POWERPC=y
+CONFIG_XZ_DEC_IA64=y
+CONFIG_XZ_DEC_ARM=y
+CONFIG_XZ_DEC_ARMTHUMB=y
+CONFIG_XZ_DEC_SPARC=y
+CONFIG_XZ_DEC_BCJ=y
+# CONFIG_XZ_DEC_TEST is not set
+CONFIG_DECOMPRESS_GZIP=y
+CONFIG_DECOMPRESS_BZIP2=y
+CONFIG_DECOMPRESS_LZMA=y
+CONFIG_DECOMPRESS_XZ=y
+CONFIG_DECOMPRESS_LZO=y
+CONFIG_DECOMPRESS_LZ4=y
+CONFIG_GENERIC_ALLOCATOR=y
+CONFIG_REED_SOLOMON=m
+CONFIG_REED_SOLOMON_ENC8=y
+CONFIG_REED_SOLOMON_DEC8=y
+CONFIG_TEXTSEARCH=y
+CONFIG_TEXTSEARCH_KMP=m
+CONFIG_TEXTSEARCH_BM=m
+CONFIG_TEXTSEARCH_FSM=m
+CONFIG_BTREE=y
+CONFIG_INTERVAL_TREE=y
+CONFIG_XARRAY_MULTI=y
+CONFIG_ASSOCIATIVE_ARRAY=y
+CONFIG_HAS_IOMEM=y
+CONFIG_HAS_IOPORT_MAP=y
+CONFIG_HAS_DMA=y
+CONFIG_NEED_SG_DMA_LENGTH=y
+CONFIG_NEED_DMA_MAP_STATE=y
+CONFIG_ARCH_DMA_ADDR_T_64BIT=y
+CONFIG_SWIOTLB=y
+# CONFIG_DMA_API_DEBUG is not set
+CONFIG_SGL_ALLOC=y
+CONFIG_CHECK_SIGNATURE=y
+CONFIG_CPU_RMAP=y
+CONFIG_DQL=y
+CONFIG_GLOB=y
+# CONFIG_GLOB_SELFTEST is not set
+CONFIG_NLATTR=y
+CONFIG_LRU_CACHE=m
+CONFIG_CLZ_TAB=y
+CONFIG_IRQ_POLL=y
+CONFIG_MPILIB=y
+CONFIG_DIMLIB=y
+CONFIG_OID_REGISTRY=y
+CONFIG_UCS2_STRING=y
+CONFIG_HAVE_GENERIC_VDSO=y
+CONFIG_GENERIC_GETTIMEOFDAY=y
+CONFIG_GENERIC_VDSO_TIME_NS=y
+CONFIG_FONT_SUPPORT=y
+CONFIG_FONT_8x16=y
+CONFIG_FONT_AUTOSELECT=y
+CONFIG_SG_POOL=y
+CONFIG_ARCH_HAS_PMEM_API=y
+CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE=y
+CONFIG_ARCH_HAS_UACCESS_MCSAFE=y
+CONFIG_ARCH_STACKWALK=y
+CONFIG_SBITMAP=y
+# CONFIG_STRING_SELFTEST is not set
+# end of Library routines
+
+#
+# Kernel hacking
+#
+
+#
+# printk and dmesg options
+#
+CONFIG_PRINTK_TIME=y
+# CONFIG_PRINTK_CALLER is not set
+CONFIG_CONSOLE_LOGLEVEL_DEFAULT=7
+CONFIG_CONSOLE_LOGLEVEL_QUIET=4
+CONFIG_MESSAGE_LOGLEVEL_DEFAULT=4
+CONFIG_BOOT_PRINTK_DELAY=y
+CONFIG_DYNAMIC_DEBUG=y
+CONFIG_SYMBOLIC_ERRNAME=y
+CONFIG_DEBUG_BUGVERBOSE=y
+# end of printk and dmesg options
+
+#
+# Compile-time checks and compiler options
+#
+CONFIG_DEBUG_INFO=y
+CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y
+# CONFIG_DEBUG_INFO_REDUCED is not set
+# CONFIG_DEBUG_INFO_SPLIT is not set
+# CONFIG_DEBUG_INFO_DWARF4 is not set
+# CONFIG_DEBUG_INFO_BTF is not set
+# CONFIG_GDB_SCRIPTS is not set
+CONFIG_ENABLE_MUST_CHECK=y
+CONFIG_FRAME_WARN=2048
+# CONFIG_STRIP_ASM_SYMS is not set
+CONFIG_READABLE_ASM=y
+# CONFIG_HEADERS_INSTALL is not set
+# CONFIG_DEBUG_SECTION_MISMATCH is not set
+CONFIG_SECTION_MISMATCH_WARN_ONLY=y
+CONFIG_STACK_VALIDATION=y
+# CONFIG_DEBUG_FORCE_WEAK_PER_CPU is not set
+# end of Compile-time checks and compiler options
+
+#
+# Generic Kernel Debugging Instruments
+#
+CONFIG_MAGIC_SYSRQ=y
+CONFIG_MAGIC_SYSRQ_DEFAULT_ENABLE=0x0
+CONFIG_MAGIC_SYSRQ_SERIAL=y
+CONFIG_MAGIC_SYSRQ_SERIAL_SEQUENCE=""
+CONFIG_DEBUG_FS=y
+CONFIG_HAVE_ARCH_KGDB=y
+CONFIG_KGDB=y
+CONFIG_KGDB_SERIAL_CONSOLE=y
+CONFIG_KGDB_TESTS=y
+# CONFIG_KGDB_TESTS_ON_BOOT is not set
+CONFIG_KGDB_LOW_LEVEL_TRAP=y
+CONFIG_KGDB_KDB=y
+CONFIG_KDB_DEFAULT_ENABLE=0x1
+CONFIG_KDB_KEYBOARD=y
+CONFIG_KDB_CONTINUE_CATASTROPHIC=0
+CONFIG_ARCH_HAS_UBSAN_SANITIZE_ALL=y
+# CONFIG_UBSAN is not set
+# end of Generic Kernel Debugging Instruments
+
+CONFIG_DEBUG_KERNEL=y
+CONFIG_DEBUG_MISC=y
+
+#
+# Memory Debugging
+#
+# CONFIG_PAGE_EXTENSION is not set
+# CONFIG_DEBUG_PAGEALLOC is not set
+# CONFIG_PAGE_OWNER is not set
+# CONFIG_PAGE_POISONING is not set
+# CONFIG_DEBUG_PAGE_REF is not set
+CONFIG_DEBUG_RODATA_TEST=y
+CONFIG_GENERIC_PTDUMP=y
+# CONFIG_PTDUMP_DEBUGFS is not set
+# CONFIG_DEBUG_OBJECTS is not set
+# CONFIG_SLUB_DEBUG_ON is not set
+# CONFIG_SLUB_STATS is not set
+CONFIG_HAVE_DEBUG_KMEMLEAK=y
+# CONFIG_DEBUG_KMEMLEAK is not set
+# CONFIG_DEBUG_STACK_USAGE is not set
+CONFIG_SCHED_STACK_END_CHECK=y
+CONFIG_DEBUG_VM=y
+# CONFIG_DEBUG_VM_VMACACHE is not set
+# CONFIG_DEBUG_VM_RB is not set
+# CONFIG_DEBUG_VM_PGFLAGS is not set
+CONFIG_ARCH_HAS_DEBUG_VIRTUAL=y
+# CONFIG_DEBUG_VIRTUAL is not set
+CONFIG_DEBUG_MEMORY_INIT=y
+# CONFIG_DEBUG_PER_CPU_MAPS is not set
+CONFIG_HAVE_ARCH_KASAN=y
+CONFIG_HAVE_ARCH_KASAN_VMALLOC=y
+CONFIG_CC_HAS_KASAN_GENERIC=y
+# CONFIG_KASAN is not set
+CONFIG_KASAN_STACK=1
+# end of Memory Debugging
+
+CONFIG_DEBUG_SHIRQ=y
+
+#
+# Debug Oops, Lockups and Hangs
+#
+# CONFIG_PANIC_ON_OOPS is not set
+CONFIG_PANIC_ON_OOPS_VALUE=0
+CONFIG_PANIC_TIMEOUT=0
+CONFIG_LOCKUP_DETECTOR=y
+CONFIG_SOFTLOCKUP_DETECTOR=y
+# CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC is not set
+CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC_VALUE=0
+CONFIG_HARDLOCKUP_DETECTOR_PERF=y
+CONFIG_HARDLOCKUP_CHECK_TIMESTAMP=y
+CONFIG_HARDLOCKUP_DETECTOR=y
+# CONFIG_BOOTPARAM_HARDLOCKUP_PANIC is not set
+CONFIG_BOOTPARAM_HARDLOCKUP_PANIC_VALUE=0
+CONFIG_DETECT_HUNG_TASK=y
+CONFIG_DEFAULT_HUNG_TASK_TIMEOUT=120
+# CONFIG_BOOTPARAM_HUNG_TASK_PANIC is not set
+CONFIG_BOOTPARAM_HUNG_TASK_PANIC_VALUE=0
+# CONFIG_WQ_WATCHDOG is not set
+# CONFIG_TEST_LOCKUP is not set
+# end of Debug Oops, Lockups and Hangs
+
+#
+# Scheduler Debugging
+#
+CONFIG_SCHED_DEBUG=y
+CONFIG_SCHED_INFO=y
+CONFIG_SCHEDSTATS=y
+# end of Scheduler Debugging
+
+# CONFIG_DEBUG_TIMEKEEPING is not set
+
+#
+# Lock Debugging (spinlocks, mutexes, etc...)
+#
+CONFIG_LOCK_DEBUGGING_SUPPORT=y
+CONFIG_PROVE_LOCKING=y
+# CONFIG_PROVE_RAW_LOCK_NESTING is not set
+# CONFIG_LOCK_STAT is not set
+CONFIG_DEBUG_RT_MUTEXES=y
+CONFIG_DEBUG_SPINLOCK=y
+CONFIG_DEBUG_MUTEXES=y
+CONFIG_DEBUG_WW_MUTEX_SLOWPATH=y
+CONFIG_DEBUG_RWSEMS=y
+CONFIG_DEBUG_LOCK_ALLOC=y
+CONFIG_LOCKDEP=y
+# CONFIG_DEBUG_LOCKDEP is not set
+CONFIG_DEBUG_ATOMIC_SLEEP=y
+# CONFIG_DEBUG_LOCKING_API_SELFTESTS is not set
+# CONFIG_LOCK_TORTURE_TEST is not set
+# CONFIG_WW_MUTEX_SELFTEST is not set
+# end of Lock Debugging (spinlocks, mutexes, etc...)
+
+CONFIG_TRACE_IRQFLAGS=y
+CONFIG_STACKTRACE=y
+# CONFIG_WARN_ALL_UNSEEDED_RANDOM is not set
+# CONFIG_DEBUG_KOBJECT is not set
+
+#
+# Debug kernel data structures
+#
+CONFIG_DEBUG_LIST=y
+# CONFIG_DEBUG_PLIST is not set
+# CONFIG_DEBUG_SG is not set
+# CONFIG_DEBUG_NOTIFIERS is not set
+# CONFIG_BUG_ON_DATA_CORRUPTION is not set
+# end of Debug kernel data structures
+
+# CONFIG_DEBUG_CREDENTIALS is not set
+
+#
+# RCU Debugging
+#
+CONFIG_PROVE_RCU=y
+CONFIG_TORTURE_TEST=m
+# CONFIG_RCU_PERF_TEST is not set
+CONFIG_RCU_TORTURE_TEST=m
+CONFIG_RCU_CPU_STALL_TIMEOUT=60
+# CONFIG_RCU_TRACE is not set
+# CONFIG_RCU_EQS_DEBUG is not set
+# end of RCU Debugging
+
+# CONFIG_DEBUG_WQ_FORCE_RR_CPU is not set
+# CONFIG_DEBUG_BLOCK_EXT_DEVT is not set
+# CONFIG_CPU_HOTPLUG_STATE_CONTROL is not set
+CONFIG_LATENCYTOP=y
+CONFIG_USER_STACKTRACE_SUPPORT=y
+CONFIG_NOP_TRACER=y
+CONFIG_HAVE_FUNCTION_TRACER=y
+CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y
+CONFIG_HAVE_DYNAMIC_FTRACE=y
+CONFIG_HAVE_DYNAMIC_FTRACE_WITH_REGS=y
+CONFIG_HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS=y
+CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y
+CONFIG_HAVE_SYSCALL_TRACEPOINTS=y
+CONFIG_HAVE_FENTRY=y
+CONFIG_HAVE_C_RECORDMCOUNT=y
+CONFIG_TRACER_MAX_TRACE=y
+CONFIG_TRACE_CLOCK=y
+CONFIG_RING_BUFFER=y
+CONFIG_EVENT_TRACING=y
+CONFIG_CONTEXT_SWITCH_TRACER=y
+CONFIG_RING_BUFFER_ALLOW_SWAP=y
+CONFIG_PREEMPTIRQ_TRACEPOINTS=y
+CONFIG_TRACING=y
+CONFIG_GENERIC_TRACER=y
+CONFIG_TRACING_SUPPORT=y
+CONFIG_FTRACE=y
+# CONFIG_BOOTTIME_TRACING is not set
+CONFIG_FUNCTION_TRACER=y
+CONFIG_FUNCTION_GRAPH_TRACER=y
+CONFIG_DYNAMIC_FTRACE=y
+CONFIG_DYNAMIC_FTRACE_WITH_REGS=y
+CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS=y
+CONFIG_FUNCTION_PROFILER=y
+CONFIG_STACK_TRACER=y
+# CONFIG_PREEMPTIRQ_EVENTS is not set
+# CONFIG_IRQSOFF_TRACER is not set
+CONFIG_SCHED_TRACER=y
+# CONFIG_HWLAT_TRACER is not set
+# CONFIG_MMIOTRACE is not set
+CONFIG_FTRACE_SYSCALLS=y
+CONFIG_TRACER_SNAPSHOT=y
+# CONFIG_TRACER_SNAPSHOT_PER_CPU_SWAP is not set
+CONFIG_BRANCH_PROFILE_NONE=y
+# CONFIG_PROFILE_ANNOTATED_BRANCHES is not set
+# CONFIG_PROFILE_ALL_BRANCHES is not set
+CONFIG_BLK_DEV_IO_TRACE=y
+CONFIG_KPROBE_EVENTS=y
+# CONFIG_KPROBE_EVENTS_ON_NOTRACE is not set
+CONFIG_UPROBE_EVENTS=y
+CONFIG_DYNAMIC_EVENTS=y
+CONFIG_PROBE_EVENTS=y
+CONFIG_FTRACE_MCOUNT_RECORD=y
+# CONFIG_HIST_TRIGGERS is not set
+# CONFIG_TRACE_EVENT_INJECT is not set
+# CONFIG_TRACEPOINT_BENCHMARK is not set
+CONFIG_RING_BUFFER_BENCHMARK=m
+# CONFIG_TRACE_EVAL_MAP_FILE is not set
+# CONFIG_FTRACE_STARTUP_TEST is not set
+# CONFIG_RING_BUFFER_STARTUP_TEST is not set
+# CONFIG_PREEMPTIRQ_DELAY_TEST is not set
+# CONFIG_KPROBE_EVENT_GEN_TEST is not set
+# CONFIG_PROVIDE_OHCI1394_DMA_INIT is not set
+# CONFIG_SAMPLES is not set
+CONFIG_ARCH_HAS_DEVMEM_IS_ALLOWED=y
+CONFIG_STRICT_DEVMEM=y
+# CONFIG_IO_STRICT_DEVMEM is not set
+
+#
+# x86 Debugging
+#
+CONFIG_TRACE_IRQFLAGS_SUPPORT=y
+CONFIG_EARLY_PRINTK_USB=y
+# CONFIG_X86_VERBOSE_BOOTUP is not set
+CONFIG_EARLY_PRINTK=y
+CONFIG_EARLY_PRINTK_DBGP=y
+# CONFIG_EARLY_PRINTK_USB_XDBC is not set
+# CONFIG_EFI_PGT_DUMP is not set
+# CONFIG_DEBUG_WX is not set
+CONFIG_DOUBLEFAULT=y
+# CONFIG_DEBUG_TLBFLUSH is not set
+CONFIG_HAVE_MMIOTRACE_SUPPORT=y
+CONFIG_X86_DECODER_SELFTEST=y
+CONFIG_IO_DELAY_0X80=y
+# CONFIG_IO_DELAY_0XED is not set
+# CONFIG_IO_DELAY_UDELAY is not set
+# CONFIG_IO_DELAY_NONE is not set
+CONFIG_DEBUG_BOOT_PARAMS=y
+# CONFIG_CPA_DEBUG is not set
+# CONFIG_DEBUG_ENTRY is not set
+# CONFIG_DEBUG_NMI_SELFTEST is not set
+CONFIG_X86_DEBUG_FPU=y
+# CONFIG_PUNIT_ATOM_DEBUG is not set
+CONFIG_UNWINDER_ORC=y
+# CONFIG_UNWINDER_FRAME_POINTER is not set
+# end of x86 Debugging
+
+#
+# Kernel Testing and Coverage
+#
+# CONFIG_KUNIT is not set
+# CONFIG_NOTIFIER_ERROR_INJECTION is not set
+CONFIG_FUNCTION_ERROR_INJECTION=y
+# CONFIG_FAULT_INJECTION is not set
+CONFIG_ARCH_HAS_KCOV=y
+CONFIG_CC_HAS_SANCOV_TRACE_PC=y
+# CONFIG_KCOV is not set
+CONFIG_RUNTIME_TESTING_MENU=y
+# CONFIG_LKDTM is not set
+# CONFIG_TEST_LIST_SORT is not set
+# CONFIG_TEST_MIN_HEAP is not set
+# CONFIG_TEST_SORT is not set
+# CONFIG_KPROBES_SANITY_TEST is not set
+# CONFIG_BACKTRACE_SELF_TEST is not set
+# CONFIG_RBTREE_TEST is not set
+# CONFIG_REED_SOLOMON_TEST is not set
+# CONFIG_INTERVAL_TREE_TEST is not set
+# CONFIG_PERCPU_TEST is not set
+CONFIG_ATOMIC64_SELFTEST=y
+CONFIG_ASYNC_RAID6_TEST=m
+# CONFIG_TEST_HEXDUMP is not set
+# CONFIG_TEST_STRING_HELPERS is not set
+# CONFIG_TEST_STRSCPY is not set
+CONFIG_TEST_KSTRTOX=y
+# CONFIG_TEST_PRINTF is not set
+# CONFIG_TEST_BITMAP is not set
+# CONFIG_TEST_BITFIELD is not set
+# CONFIG_TEST_UUID is not set
+# CONFIG_TEST_XARRAY is not set
+# CONFIG_TEST_OVERFLOW is not set
+# CONFIG_TEST_RHASHTABLE is not set
+# CONFIG_TEST_HASH is not set
+# CONFIG_TEST_IDA is not set
+# CONFIG_TEST_LKM is not set
+# CONFIG_TEST_VMALLOC is not set
+# CONFIG_TEST_USER_COPY is not set
+# CONFIG_TEST_BPF is not set
+# CONFIG_TEST_BLACKHOLE_DEV is not set
+# CONFIG_FIND_BIT_BENCHMARK is not set
+# CONFIG_TEST_FIRMWARE is not set
+# CONFIG_TEST_SYSCTL is not set
+# CONFIG_TEST_UDELAY is not set
+# CONFIG_TEST_STATIC_KEYS is not set
+# CONFIG_TEST_KMOD is not set
+# CONFIG_TEST_MEMCAT_P is not set
+# CONFIG_TEST_STACKINIT is not set
+# CONFIG_TEST_MEMINIT is not set
+# CONFIG_MEMTEST is not set
+# CONFIG_HYPERV_TESTING is not set
+# end of Kernel Testing and Coverage
+# end of Kernel hacking
+EOF
--- /dev/null
+#!/bin/bash
+set -ex
+
+# Prep the config
+echo "Updating the kernel config"
+make olddefconfig
+
+# This way we get the commit hash embedded into the package metadata
+# and 'uname -r' in all cases - whether it's some random commit or an
+# annotated tag. Example package names:
+# uname -r: 4.9.0-rc4-ceph-g156db39ecfbd
+# deb: linux-image-4.9.0-rc4-ceph-g156db39ecfbd_4.9.0-rc4-ceph-g156db39ecfbd-1_amd64.deb
+# rpm: kernel-4.9.0_rc4_ceph_g156db39ecfbd-2.x86_64.rpm
+if ! grep -q "^CONFIG_LOCALVERSION_AUTO=y" .config; then
+ echo "CONFIG_LOCALVERSION_AUTO is not set, check kernel-config-*.sh"
+ exit 1
+fi
+printf -- '-ceph-g%s' "${GIT_COMMIT:0:12}" > .scmversion
+
+kernelrelease=$(make -s kernelrelease)
--- /dev/null
+#!/bin/bash
+#
+# Ceph distributed storage system
+#
+# Copyright (C) 2016 Red Hat <contact@redhat.com>
+#
+# Author: Boris Ranto <branto@redhat.com>
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+set -ex
+HOST=$(hostname --short)
+echo "Building on $(hostname)"
+echo " DIST=${DIST}"
+echo " BPTAG=${BPTAG}"
+echo " KEYID=${KEYID}"
+echo " WS=$WORKSPACE"
+echo " PWD=$(pwd)"
+echo " BUILD SOURCE=$COPYARTIFACT_BUILD_NUMBER_CEPH_SETUP"
+echo "*****"
+env
+echo "*****"
+
+# ---- Pin target distro from Jenkins env (not host) ----
+pick_first() { printf "%s\n" "$1" | tr ' ,;' '\n' | sed -n '1p'; }
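+# Illustration (hypothetical values): pick_first splits on spaces, commas,
+# and semicolons and keeps only the first token, e.g.
+#   pick_first "centos9,focal jammy"   # -> centos9
+#   pick_first "rocky10"               # -> rocky10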
+
+TARGET_DIST_RAW="${DIST:-}"
+if [[ -z "$TARGET_DIST_RAW" && -n "${DISTROS:-}" ]]; then
+ TARGET_DIST_RAW="$(pick_first "$DISTROS")"
+fi
+if [[ -z "$TARGET_DIST_RAW" ]]; then
+ echo "ERROR: Neither DIST nor DISTROS set; cannot determine target distro." >&2
+ exit 1
+fi
+
+case "$TARGET_DIST_RAW" in
+ rocky10)
+ export DIST="rocky10"
+ export DISTRO="rocky"
+ export RELEASE="10"
+ export NORMAL_DISTRO="rocky"
+ export NORMAL_DISTRO_VERSION="10"
+ export BUILD_IN_CONTAINER=1 # hint for later script
+ ;;
+ el9|centos9|c9s|centos-stream9)
+ export DIST="el9"
+ export DISTRO="centos"
+ export RELEASE="9"
+ export NORMAL_DISTRO="centos"
+ export NORMAL_DISTRO_VERSION="9"
+ export BUILD_IN_CONTAINER= # host build OK
+ ;;
+ *)
+ # Fallback: if you have more targets, add them here. Last resort: host detect.
+ echo "WARN: Unrecognized target '$TARGET_DIST_RAW' — falling back to host detection for NORMAL_DISTRO only."
+ # We still avoid mutating DIST/DISTRO/RELEASE unless you want to.
+ ;;
+esac
+
+echo "Pinned target: DIST=${DIST} DISTRO=${DISTRO} RELEASE=${RELEASE}"
+echo "Shaman will report NORMAL_DISTRO=${NORMAL_DISTRO} NORMAL_DISTRO_VERSION=${NORMAL_DISTRO_VERSION}"
+
+if test $(id -u) != 0 ; then
+ SUDO=sudo
+fi
+export LC_ALL=C # the following is vulnerable to i18n
+
+if test -f /etc/redhat-release ; then
+ $SUDO yum install -y bc
+ $SUDO yum install -y elfutils-libelf-devel # for ORC unwinder
+ $SUDO yum install -y flex bison # for Kconfig
+ $SUDO yum install -y dwarves
+ $SUDO yum install -y elfutils-devel # for dwarf.h
+fi
+
+if which apt-get > /dev/null ; then
+ $SUDO apt-get install -y bc
+ $SUDO apt-get install -y lsb-release
+ $SUDO apt-get install -y libelf-dev # for ORC unwinder
+ $SUDO apt-get install -y flex bison # for Kconfig
+ $SUDO apt-get install -y dwarves
+ $SUDO apt-get install -y libdw-dev # for dwarf.h
+fi
+
+case $DISTRO in
+rhel|centos|rocky|fedora|sles|opensuse-leap)
+ case $DISTRO in
+    opensuse-leap)
+        $SUDO zypper install -y yum-utils
+ ;;
+ *)
+ $SUDO yum install -y yum-utils mock
+ ;;
+ esac
+ ;;
+*)
+ echo "$DISTRO is unknown, dependencies will have to be installed manually."
+ ;;
+esac
+
+pkgs=( "chacractl>=0.0.21" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+# ask shaman which chacra instance to use
+chacra_url=$(curl -u "$SHAMAN_API_USER:$SHAMAN_API_KEY" https://shaman.ceph.com/api/nodes/next/)
+make_chacractl_config $chacra_url
+
+BRANCH=$(branch_slash_filter $BRANCH)
+
+# Make sure we execute at the top level directory
+cd "$WORKSPACE"
+
+# Clean the git repo
+git clean -fxd
+
+# Export the SHA1 so links work here: https://shaman.ceph.com/builds/kernel/
+# This gets sent to update_build_status which calls submit_build_status in build_utils.sh.
+export SHA1="${GIT_COMMIT}"
+
+# create build status in shaman
+update_build_status "started" "kernel" $NORMAL_DISTRO $NORMAL_DISTRO_VERSION $ARCH
--- /dev/null
+#!/bin/bash
+set -ex
+
+# Only do actual work when we are a DEB distro
+if test -f /etc/redhat-release ; then
+ exit 0
+fi
--- /dev/null
+#!/bin/bash
+set -ex
+
+# Only do actual work when we are an RPM distro
+if [[ ! -f /etc/redhat-release && ! -f /usr/bin/zypper ]] ; then
+ exit 0
+fi
--- /dev/null
+- job:
+ name: kernel
+ project-type: matrix
+ defaults: global
+ display-name: 'kernel'
+ block-downstream: false
+ block-upstream: false
+ properties:
+ - github:
+ url: https://github.com/ceph/ceph-client
+ concurrent: true
+ parameters:
+ - string:
+ name: BRANCH
+ description: "The git branch (or tag) to build"
+
+ - string:
+ name: DISTROS
+ description: "A list of distros to build for. Available options are: centos9, rocky10, noble, jammy and focal"
+ default: "centos9 focal jammy noble rocky10"
+
+ - string:
+ name: ARCHS
+ description: "A list of architectures to build for. Available options are: x86_64"
+ default: "x86_64"
+
+ - bool:
+ name: THROWAWAY
+ description: "
+Default: False. When True it will not POST binaries to chacra. Artifacts will not be around for long. Useful to test builds."
+ default: false
+
+ - bool:
+ name: FORCE
+ description: "
+If this is unchecked, then nothing is built or pushed if the binaries already exist in chacra. This is the default.
+
+If this is checked, then the binaries will be built and pushed to chacra even if they already exist in chacra."
+
+ - string:
+ name: BUILD_VIRTUALENV
+ description: "Base parent path for virtualenv locations, set to avoid issues with extremely long paths that are incompatible with tools like pip. Defaults to '/tmp/' (note the trailing slash, which is required)."
+ default: "/tmp/"
+
+ execution-strategy:
+ combination-filter: DIST==AVAILABLE_DIST && ARCH==AVAILABLE_ARCH
+ axes:
+ - axis:
+ type: label-expression
+ name: MACHINE_SIZE
+ values:
+ - huge
+ - axis:
+ type: label-expression
+ name: AVAILABLE_ARCH
+ values:
+ - x86_64
+ - axis:
+ type: label-expression
+ name: AVAILABLE_DIST
+ values:
+ - centos9
+ - rocky10
+ - focal
+ - jammy
+ - noble
+ - axis:
+ type: dynamic
+ name: DIST
+ values:
+ - DISTROS
+ - axis:
+ type: dynamic
+ name: ARCH
+ values:
+ - ARCHS
+
+ scm:
+ - raw:
+ xml: |
+ <scm class="hudson.plugins.git.GitSCM">
+ <configVersion>2</configVersion>
+ <userRemoteConfigs>
+ <hudson.plugins.git.UserRemoteConfig>
+ <name>origin</name>
+ <refspec>+refs/heads/*:refs/remotes/origin/*</refspec>
+ <url>https://github.com/ceph/ceph-client.git</url>
+ </hudson.plugins.git.UserRemoteConfig>
+ </userRemoteConfigs>
+ <branches>
+ <hudson.plugins.git.BranchSpec>
+ <name>$BRANCH</name>
+ </hudson.plugins.git.BranchSpec>
+ </branches>
+ <disableSubmodules>false</disableSubmodules>
+ <recursiveSubmodules>false</recursiveSubmodules>
+ <doGenerateSubmoduleConfigurations>false</doGenerateSubmoduleConfigurations>
+ <remotePoll>false</remotePoll>
+ <gitTool>Default</gitTool>
+ <submoduleCfg class="list"/>
+ <reference/>
+ <gitConfigName/>
+ <gitConfigEmail/>
+ <extensions>
+ <hudson.plugins.git.extensions.impl.CloneOption>
+ <shallow>true</shallow>
+ <noTags>true</noTags>
+ <timeout>20</timeout>
+ </hudson.plugins.git.extensions.impl.CloneOption>
+ <hudson.plugins.git.extensions.impl.CheckoutOption>
+ <timeout>20</timeout>
+ </hudson.plugins.git.extensions.impl.CheckoutOption>
+ <hudson.plugins.git.extensions.impl.WipeWorkspace/>
+ </extensions>
+ </scm>
+
+ builders:
+ - shell: |
+ echo "Cleaning up top-level workarea (shared among workspaces)"
+ rm -rf dist
+ rm -rf venv
+ rm -rf release
+ # debian build scripts
+ - shell:
+ !include-raw-verbatim:
+ - ../../build/validate_deb
+ - ../../../scripts/build_utils.sh
+ - ../../build/setup
+ - ../../build/kernel-config-deb.sh
+ - ../../build/prepare_config
+ - ../../build/build_deb
+ # rpm build scripts
+ - shell:
+ !include-raw-verbatim:
+ - ../../build/validate_rpm
+ - ../../../scripts/build_utils.sh
+ - ../../build/setup
+ - ../../build/kernel-config-rpm.sh
+ - ../../build/prepare_config
+ - ../../build/build_rpm
+
+ publishers:
+ - postbuildscript:
+ builders:
+ - role: SLAVE
+ build-on:
+ - FAILURE
+ - ABORTED
+ build-steps:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/failure
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - text:
+ credential-id: chacractl-key
+ variable: CHACRACTL_KEY
+ - credentials-binding:
+ - text:
+ credential-id: shaman-api-key
+ variable: SHAMAN_API_KEY
--- /dev/null
+#!/bin/bash
+
+set -e
+set -x
+
+function main() {
+ # install some of our dependencies
+ pushd "$WORKSPACE"
+ git clone git@github.com:ceph/teuthology
+ pushd "$WORKSPACE/teuthology"
+ git remote -v
+ ./bootstrap
+ curl -XGET -L paddles.front.sepia.ceph.com/nodes | jq '[.[] | select(.description == null or .description == "None") | select(.locked == true)] | group_by(.locked_by) | .[] | {locked_by: .[0].locked_by, name: [ .[].name | tostring] | join(" ")} | select(.locked_by | tostring| test("scheduled")|not)'
+ popd
+ exit $?
+}
+
+main "$@"
--- /dev/null
+- job:
+ name: lab-cop
+ node: small && xenial
+ defaults: global
+ display-name: 'lab-cop'
+ properties:
+ - build-discarder:
+ days-to-keep: 15
+ num-to-keep: 30
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+
+ builders:
+ - shell:
+ !include-raw-verbatim:
+ - ../../build/build
+
--- /dev/null
+#!/bin/bash
+
+set -ex
+
+# the following two methods exist in scripts/build_utils.sh
+pkgs=( "tox" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+# run tox by recreating the environment and in verbose mode
+# by default this will run all environments defined
+$VENV/tox -rv
--- /dev/null
+- job:
+ name: merfi-pull-requests
+ project-type: freestyle
+ defaults: global
+ display-name: 'Merfi: Pull Requests'
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ retry-count: 3
+ properties:
+ - build-discarder:
+ days-to-keep: 15
+ num-to-keep: 30
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ - github:
+ url: https://github.com/alfredodeza/merfi/
+
+ parameters:
+ - string:
+ name: sha1
+ description: "A pull request ID, like 'origin/pr/72/head'"
+
+ triggers:
+ - github-pull-request:
+ admin-list:
+ - alfredodeza
+ - ktdreyer
+ - zmc
+ - andrewschoen
+ - dmick
+ org-list:
+ - ceph
+ white-list:
+ - jcsp
+ - gregsfortytwo
+ - GregMeno
+ - dzafman
+ - dillaman
+ - dachary
+ - liewegas
+ - idryomov
+ - vasukulkarni
+ only-trigger-phrase: false
+ github-hooks: true
+ permit-all: false
+ auto-close-on-fail: false
+
+ scm:
+ - git:
+ url: https://github.com/alfredodeza/merfi.git
+ branches:
+ - ${{sha1}}
+ refspec: +refs/pull/*:refs/remotes/origin/pr/*
+ browser: auto
+ timeout: 20
+ skip-tag: true
+ wipe-workspace: true
+
+ builders:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/build
--- /dev/null
+#!/bin/bash
+
+set -ex
+
+# the following two methods exist in scripts/build_utils.sh
+pkgs=( "ansible" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+cd "$WORKSPACE/deploy/playbooks/"
+$VENV/ansible-playbook -i "localhost," -c local local_deploy.yml --extra-vars="branch=$BRANCH jenkins_prado_token=$JENKINS_PRADO_TOKEN prado_token=$PRADO_TOKEN"
--- /dev/null
+- scm:
+ name: mita
+ scm:
+ - git:
+ url: https://github.com/ceph/mita.git
+ branches:
+ - main
+ browser: auto
+ timeout: 20
+ skip-tag: true
+ wipe-workspace: true
+
+- job:
+ name: mita-deploy
+ node: built-in
+ description: "This job clones mita and deploys it to its production server based on the BRANCH value"
+ display-name: 'mita-deploy'
+ block-downstream: false
+ block-upstream: false
+ properties:
+ - build-discarder:
+ days-to-keep: -1
+ num-to-keep: 25
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ - github:
+ url: https://github.com/ceph/mita
+
+ parameters:
+ - string:
+ name: BRANCH
+ description: "The git branch (or tag) to build, defaults to 'main'"
+ default: "main"
+ scm:
+ - mita
+
+ triggers:
+ - github
+
+ builders:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/build
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
--- /dev/null
+#! /usr/bin/bash
+set -ex
+
+# Only do actual work when we are a DEB distro
+if test "$DISTRO" != "debian" -a "$DISTRO" != "ubuntu"; then
+ exit 0
+fi
+
+
+cd $WORKSPACE/ntirpc
+NTIRPC_VERSION=`git describe --long | sed -e 's/v//1;'`
+
+rm -rf .git
+
+cd $WORKSPACE
+
+## Build the source tarball
+NTIRPC_ORIG_TAR_GZ="libntirpc_${NTIRPC_VERSION}.orig.tar.gz"
+tar czf ${NTIRPC_ORIG_TAR_GZ} ntirpc
+
+cd $WORKSPACE/nfs-ganesha-debian
+git checkout ${NTIRPC_DEBIAN_BRANCH}
+cd $WORKSPACE/ntirpc
+
+# add debian directory next to src
+cp -r $WORKSPACE/nfs-ganesha-debian/debian $WORKSPACE/ntirpc/
+
+## Prepare the debian files
+# Bump the changelog
+dch -v "$NTIRPC_VERSION-1${DIST}" "$NTIRPC_VERSION for download.ceph.com"
+
+# Create the .dsc file and source tarball; we don't care about signing the changes or the source package
+sudo dpkg-buildpackage -S -us -uc -d
+
+## Setup the pbuilder
+setup_pbuilder use_gcc
+PBUILDDIR="/srv/debian-base"
+
+## Build with pbuilder
+echo "Building ntirpc debs"
+
+sudo pbuilder --clean \
+ --distribution $DIST \
+ --basetgz $PBUILDDIR/$DIST.tgz
+
+# add missing packages and components to pbuilder
+sudo pbuilder update \
+ --distribution $DIST \
+ --basetgz $PBUILDDIR/$DIST.tgz \
+ --extrapackages "cmake libkrb5-dev libjemalloc-dev debhelper apt-transport-https apt-utils ca-certificates" \
+ --components "main restricted universe multiverse" \
+ --override-config
+
+sudo pbuilder build \
+ --distribution $DIST \
+ --basetgz $PBUILDDIR/$DIST.tgz \
+ --buildresult $WORKSPACE/dist/ntirpc/deb/ \
+ $WORKSPACE/libntirpc_${NTIRPC_VERSION}-1${DIST}.dsc
+
+sudo chown -R jenkins-build:jenkins-build $WORKSPACE/dist/ntirpc/deb
+cd $WORKSPACE/dist/ntirpc/deb
+apt-ftparchive packages . > Packages
+
+# for debugging
+cat Packages
+
+cd $WORKSPACE
+
+REPO_URL="https://shaman.ceph.com/api/repos/ceph/$CEPH_BRANCH/$CEPH_SHA1/$DISTRO/$DIST/repo"
+TIME_LIMIT=1200
+INTERVAL=30
+REPO_FOUND=0
+
+# poll shaman for up to 20 minutes
+while [ "$SECONDS" -le "$TIME_LIMIT" ]
+do
+ SHAMAN_MIRROR=`curl --fail -L ${REPO_URL} || true`
+ if [[ ${SHAMAN_MIRROR} ]]; then
+ echo "Ceph debian lib repo exists in shaman"
+ REPO_FOUND=1
+ break
+ else
+ sleep $INTERVAL
+ fi
+done
+
+if [[ "$REPO_FOUND" -eq 0 ]]; then
+ echo "Ceph debian lib repo does NOT exist in shaman"
+ exit 1
+fi
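The polling loop relies on bash's `SECONDS` builtin, which counts seconds since the shell started, so the 1200-second limit is measured from script start rather than from the first poll. The pattern in isolation:

```shell
# SECONDS is maintained by bash itself; resetting it (here in a subshell so
# the outer script's counter is untouched) restarts the count.
( SECONDS=0; sleep 1; test "$SECONDS" -ge 1 ) && echo "SECONDS advanced"
```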
+
+# make sure any shaman list file is removed. At some point if all nodes
+# are clean this will not be needed.
+sudo rm -f /etc/apt/sources.list.d/shaman*
+
+# We need this for the system and to run cmake
+sudo apt-get update
+
+# Normalize variables across rpm/deb builds
+NORMAL_DISTRO=$DISTRO
+NORMAL_DISTRO_VERSION=$DIST
+NORMAL_ARCH=$ARCH
+
+# create build status in shaman
+update_build_status "started" "nfs-ganesha-stable" $NORMAL_DISTRO $NORMAL_DISTRO_VERSION $NORMAL_ARCH
+
+cd $WORKSPACE/nfs-ganesha-debian
+git checkout ${NFS_GANESHA_DEBIAN_BRANCH}
+
+cd $WORKSPACE/nfs-ganesha
+
+PACKAGE_MANAGER_VERSION="`git describe --long | sed 's/V//1'`-1${DIST}"
+
+# Version is in format X.XdevX-X-SHA1
+VERSION=`git describe --long | sed -e 's/V//1'`
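Both `git describe` pipelines above strip the tag's leading `V`; only what they append differs. A quick check of the sed on a hypothetical tag:

```shell
# Example describe output for a "V"-prefixed tag (hypothetical values):
sample_describe="V2.7.6-2-gabcdef0"
echo "$sample_describe" | sed -e 's/V//1'   # prints 2.7.6-2-gabcdef0
```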
+
+rm -rf .git
+
+cd $WORKSPACE
+
+## Build the source tarball
+NFS_GANESHA_ORIG_TAR_GZ="nfs-ganesha_${VERSION}.orig.tar.gz"
+tar czf ${NFS_GANESHA_ORIG_TAR_GZ} nfs-ganesha/src
+
+# remove old version
+rm -rf $WORKSPACE/nfs-ganesha
+
+# unpack just the src
+tar xzf ${NFS_GANESHA_ORIG_TAR_GZ}
+
+cd $WORKSPACE/nfs-ganesha
+
+# add debian directory next to src
+cp -r $WORKSPACE/nfs-ganesha-debian/debian $WORKSPACE/nfs-ganesha/
+
+## Get some basic information about the system and the repository
+DEB_ARCH=$(dpkg-architecture -qDEB_BUILD_ARCH)
+
+## Prepare the debian files
+# Bump the changelog
+dch -v "$VERSION-1${DIST}" "$VERSION for download.ceph.com"
+
+# Create the .dsc file and source tarball; we don't care about signing the changes or the source package
+sudo dpkg-buildpackage -S -us -uc -d
+
+## Build with pbuilder
+echo "Building nfs-ganesha debs"
+
+sudo pbuilder --clean \
+ --distribution $DIST \
+ --basetgz $PBUILDDIR/$DIST.tgz
+
+mkdir -p $WORKSPACE/dist/deb
+
+# add missing packages and components to pbuilder
+sudo pbuilder update \
+ --distribution $DIST \
+ --basetgz $PBUILDDIR/$DIST.tgz \
+ --extrapackages "apt-transport-https apt-utils ca-certificates debhelper python-all liblttng-ust0 liblttng-ust-dev liblttng-ctl-dev pkgconf quilt" \
+ --components "main restricted universe multiverse" \
+ --override-config
+
+sudo pbuilder update \
+ --distribution $DIST \
+ --basetgz $PBUILDDIR/$DIST.tgz \
+ --removepackages "librados2 libcephfs2 librgw2 librados-dev libcephfs-dev librgw-dev libntirpc-dev" \
+ --override-config
+
+sudo pbuilder update \
+ --distribution $DIST \
+ --basetgz $PBUILDDIR/$DIST.tgz \
+ --extrapackages "libntirpc-dev" \
+ --othermirror "deb [trusted=yes] file://$WORKSPACE/dist/ntirpc/deb ./" \
+ --bindmounts "$WORKSPACE/dist/ntirpc/deb" \
+ --override-config
+
+# use libcephfs and librgw from shaman
+sudo pbuilder update \
+ --distribution $DIST \
+ --basetgz $PBUILDDIR/$DIST.tgz \
+ --extrapackages "librados2 libcephfs2 librgw2 librados-dev libcephfs-dev librgw-dev" \
+ --othermirror "${SHAMAN_MIRROR}" \
+ --override-config
+
+echo "Building debs for $DIST"
+sudo pbuilder build \
+ --distribution $DIST \
+ --basetgz $PBUILDDIR/$DIST.tgz \
+ --buildresult $WORKSPACE/dist/nfs-ganesha/deb/ \
+ --debbuildopts "-j`grep -c processor /proc/cpuinfo`" \
+ $WORKSPACE/nfs-ganesha_${VERSION}-1${DIST}.dsc
+
+
+## Upload the created debs to chacra
+chacra_endpoint="nfs-ganesha-stable/${NFS_GANESHA_BRANCH}/${GIT_COMMIT}/${DISTRO}/${DIST}"
+chacra_repo_endpoint="${chacra_endpoint}/flavors/${FLAVOR}"
+
+[ "$FORCE" = true ] && chacra_flags="--force" || chacra_flags=""
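The one-liner above uses the `[ cond ] && a || b` shortcut. Note that `b` would also run if `a` failed; that is harmless here only because a plain assignment cannot fail. The pattern on throwaway variables:

```shell
# Illustrative only; force_demo/demo_flags are not used by the build.
force_demo=true
[ "$force_demo" = true ] && demo_flags="--force" || demo_flags=""
echo "$demo_flags"   # prints --force
```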
+
+if [ "$THROWAWAY" = false ] ; then
+ # push binaries to chacra
+ find $WORKSPACE/dist/nfs-ganesha/deb | egrep "\.(changes|deb|dsc|gz)$" | egrep -v "(Packages|Sources|Contents)" | $VENV/chacractl binary ${chacra_flags} create ${chacra_endpoint}/${ARCH}/flavors/${FLAVOR}
+ find $WORKSPACE/dist/ntirpc/deb | egrep "\.(changes|deb|dsc|gz)$" | egrep -v "(Packages|Sources|Contents)" | $VENV/chacractl binary ${chacra_flags} create ${chacra_endpoint}/${ARCH}/flavors/${FLAVOR}
+ # write json file with build info
+ # version and package_manager version are needed for teuthology
+ cat > $WORKSPACE/repo-extra.json << EOF
+{
+ "version":"$VERSION",
+ "package_manager_version":"$PACKAGE_MANAGER_VERSION",
+ "build_url":"$BUILD_URL",
+ "root_build_cause":"$ROOT_BUILD_CAUSE",
+ "node_name":"$NODE_NAME",
+ "job_name":"$JOB_NAME"
+}
+EOF
+ # post the json to repo-extra json to chacra
+ curl -X POST -H "Content-Type:application/json" --data "@$WORKSPACE/repo-extra.json" -u $CHACRACTL_USER:$CHACRACTL_KEY ${chacra_url}repos/${chacra_repo_endpoint}/extra/
+ # start repo creation
+ $VENV/chacractl repo update ${chacra_repo_endpoint}
+fi
+
+echo "Check the status of the repo at: https://shaman.ceph.com/api/repos/${chacra_repo_endpoint}"
+
+# update shaman with the completed build status
+SHA1=${GIT_COMMIT}
+update_build_status "completed" "nfs-ganesha-stable" $NORMAL_DISTRO $NORMAL_DISTRO_VERSION $NORMAL_ARCH
+
+sudo rm -rf $WORKSPACE/dist
+
+# this job adds custom shaman repositories which can cause issues at build time
+# for other jobs so they need to be properly removed
+sudo rm -f /etc/yum.repos.d/shaman*
+sudo rm -f /etc/apt/sources.list.d/shaman*
--- /dev/null
+#! /usr/bin/bash
+set -ex
+
+# Only do actual work when we are an RPM distro
+if test "$DISTRO" != "fedora" -a "$DISTRO" != "centos" -a "$DISTRO" != "rhel"; then
+ exit 0
+fi
+
+get_rpm_dist
+
+# Make sure old rpms are not leftover on the system
+sudo yum remove -y librados2 librgw2 libcephfs2 librados-devel librgw-devel libcephfs-devel
+
+# Make sure old repo metadata are not leftover on the system
+sudo yum clean all
+
+# Get .repo file from appropriate shaman build
+REPO_URL="https://shaman.ceph.com/api/repos/ceph/$CEPH_BRANCH/$CEPH_SHA1/$DISTRO/$RELEASE/flavors/default/repo"
+TIME_LIMIT=1200
+INTERVAL=30
+REPO_FOUND=0
+
+# poll shaman for up to 20 minutes
+while [ "$SECONDS" -le "$TIME_LIMIT" ]
+do
+ if curl --fail -L $REPO_URL > $WORKSPACE/shaman.repo; then
+ echo "Ceph repo file has been added from shaman"
+ REPO_FOUND=1
+ break
+ else
+ sleep $INTERVAL
+ fi
+done
+
+if [[ "$REPO_FOUND" -eq 0 ]]; then
+ echo "Ceph lib repo does NOT exist in shaman"
+ exit 1
+fi
+
+# add shaman repos to /etc/yum.repos.d/ to install ceph libraries, enabling
+# FSAL_CEPH (enabled by default) and FSAL_RGW in the .spec file when the cmake command runs
+sudo cp $WORKSPACE/shaman.repo /etc/yum.repos.d/
+# for debugging
+cat /etc/yum.repos.d/shaman.repo
+xargs sudo yum install -y <<< "
+dbus-devel
+libacl-devel
+libblkid-devel
+libcap-devel
+libnfsidmap-devel
+libwbclient-devel
+krb5-devel
+librados-devel-${CEPH_VERSION}
+librgw-devel-${CEPH_VERSION}
+libcephfs-devel-${CEPH_VERSION}
+lttng-ust-devel
+"
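The here-string feeds the newline-separated package list to `xargs`, which collapses it into a single `yum install` invocation. Reduced to its essentials, with `echo` standing in for yum:

```shell
# xargs joins stdin tokens into one command line.
xargs echo install <<< "pkg-a
pkg-b"   # prints: install pkg-a pkg-b
```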
+
+# Removed "lttng-tools-devel" from above xargs list because it isn't available in el8
+if [ $DIST = centos7 ]
+then
+ sudo yum install -y lttng-tools-devel
+fi
+
+# The libnsl2-devel package is needed on el8 builds, but not el7
+if [ $DIST = centos8 ]
+then
+ sudo yum -y install libnsl2-devel
+fi
+
+sudo yum install -y mock
+
+# Normalize variables across rpm/deb builds
+NORMAL_DISTRO=$DISTRO
+NORMAL_DISTRO_VERSION=$RELEASE
+NORMAL_ARCH=$ARCH
+
+# create build status in shaman
+update_build_status "started" "nfs-ganesha-stable" $NORMAL_DISTRO $NORMAL_DISTRO_VERSION $NORMAL_ARCH
+
+cd $WORKSPACE/nfs-ganesha
+
+git submodule update --init || git submodule sync
+echo "LS0tIGEvc3JjL0NNYWtlTGlzdHMudHh0CisrKyBiL3NyYy9DTWFrZUxpc3RzLnR4dApAQCAtMTAx
+MywxMSArMTAxMywxNCBAQCBlbHNlIChVU0VfU1lTVEVNX05USVJQQykKICAgc2V0KFVTRV9HU1Mg
+JHtVU0VfR1NTfSBDQUNIRSBCT09MICJVc2UgR1NTIikKICAgc2V0KENNQUtFX01PRFVMRV9QQVRI
+ICR7Q01BS0VfTU9EVUxFX1BBVEh9CiAJICAiJHtDTUFLRV9TT1VSQ0VfRElSfS9saWJudGlycGMv
+Y21ha2UvbW9kdWxlcy8iKQorICBzZXQoU0FWRV9MVFRORyAke1VTRV9MVFROR30pCisgIHNldChV
+U0VfTFRUTkcgT0ZGKQogICBhZGRfc3ViZGlyZWN0b3J5KGxpYm50aXJwYykKICAgc2V0KE5USVJQ
+Q19MSUJSQVJZIG50aXJwYykKICAgaWYgKFVTRV9MVFRORykKICAgICBzZXQoTlRJUlBDX0xJQlJB
+UlkgJHtOVElSUENfTElCUkFSWX0gbnRpcnBjX2x0dG5nKQogICBlbmRpZiAoVVNFX0xUVE5HKQor
+ICBzZXQoVVNFX0xUVE5HICR7U0FWRV9MVFROR30pCiAgIHNldChOVElSUENfSU5DTFVERV9ESVIg
+IiR7UFJPSkVDVF9TT1VSQ0VfRElSfS9saWJudGlycGMvbnRpcnBjLyIpCiAgIG1lc3NhZ2UoU1RB
+VFVTICJVc2luZyBudGlycGMgc3VibW9kdWxlIikKIGVuZGlmIChVU0VfU1lTVEVNX05USVJQQykK"| base64 -d > lttng-fix.patch
+
+patch -p1 < lttng-fix.patch
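Embedding the patch as base64 keeps the whitespace-sensitive diff intact when the script is inlined by Jenkins Job Builder; applying it is just a decode away. The round trip in miniature:

```shell
# base64 encode/decode is lossless, so the patch bytes arrive exactly as
# they were embedded:
printf 'demo' | base64 | base64 -d   # prints demo
```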
+
+mkdir build
+cd build
+
+# generate .spec file, edit .spec file for correct versions of libs and make source tarball
+if [ $DIST = centos7 ]
+then
+ cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo -DSTRICT_PACKAGE=ON -DUSE_FSAL_ZFS=OFF -DUSE_FSAL_GLUSTER=OFF -DUSE_FSAL_CEPH=ON -DUSE_FSAL_RGW=ON -DRADOS_URLS=ON -DUSE_RADOS_RECOV=ON -DUSE_LTTNG=ON -DUSE_ADMIN_TOOLS=ON $WORKSPACE/nfs-ganesha/src && make dist || exit 1
+else
+ # Don't enable LTTNG for el8 builds - the "lttng-tools-devel" package isn't available
+ cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo -DSTRICT_PACKAGE=ON -DUSE_FSAL_ZFS=OFF -DUSE_FSAL_GLUSTER=OFF -DUSE_FSAL_CEPH=ON -DUSE_FSAL_RGW=ON -DRADOS_URLS=ON -DUSE_RADOS_RECOV=ON -DUSE_ADMIN_TOOLS=ON $WORKSPACE/nfs-ganesha/src && make dist || exit 1
+fi
+
+sed -i 's/libcephfs1-devel/libcephfs-devel/' $WORKSPACE/nfs-ganesha/src/nfs-ganesha.spec
+sed -i 's/librgw2-devel/librgw-devel/' $WORKSPACE/nfs-ganesha/src/nfs-ganesha.spec
+sed -i 's/CMAKE_BUILD_TYPE=Debug/CMAKE_BUILD_TYPE=RelWithDebInfo/' $WORKSPACE/nfs-ganesha/src/nfs-ganesha.spec
+
+## Create the source rpm
+echo "Building SRPM"
+rpmbuild \
+ --define "_sourcedir ." \
+ --define "_specdir $WORKSPACE/dist" \
+ --define "_builddir $WORKSPACE/dist" \
+ --define "_srcrpmdir $WORKSPACE/dist/SRPMS" \
+ --define "_rpmdir $WORKSPACE/dist/RPMS" \
+ --nodeps -bs $WORKSPACE/nfs-ganesha/src/nfs-ganesha.spec
+SRPM=$(readlink -f $WORKSPACE/dist/SRPMS/*.src.rpm)
+
+# Add repo file to mock config. The new version of mock uses templates.
+if [ $DIST = centos7 ]
+then
+ sudo cat /etc/mock/${MOCK_TARGET}-${RELEASE}-${ARCH}.cfg /etc/mock/templates/epel-${RELEASE}.tpl > nfs-ganesha-mock.temp
+else
+ sudo cat /etc/mock/${MOCK_TARGET}-${RELEASE}-${ARCH}.cfg /etc/mock/templates/centos-${RELEASE}.tpl /etc/mock/templates/epel-${RELEASE}.tpl > nfs-ganesha-mock.temp
+fi
+sudo head -n -1 nfs-ganesha-mock.temp > nfs-ganesha.cfg
+sudo cat $WORKSPACE/shaman.repo >> nfs-ganesha.cfg
+sudo echo "\"\"\"" >> nfs-ganesha.cfg
+# for debugging
+cat nfs-ganesha.cfg
+
+## Build the binaries with mock
+echo "Building RPMs"
+sudo mock --verbose -r nfs-ganesha.cfg --scrub=all
+sudo mock --verbose -r nfs-ganesha.cfg --define "dist .el${RELEASE}" --resultdir=$WORKSPACE/dist/RPMS/ ${SRPM} || ( tail -n +1 $WORKSPACE/dist/RPMS/{root,build}.log && exit 1 )
+
+VERSION=`grep -R "#define GANESHA_VERSION \"" $WORKSPACE/nfs-ganesha/build/include/config.h | sed -e 's/#define GANESHA_VERSION "//1; s/"//1;'`
+chacra_endpoint="nfs-ganesha-stable/${NFS_GANESHA_BRANCH}/${GIT_COMMIT}/${DISTRO}/${RELEASE}"
+chacra_repo_endpoint="${chacra_endpoint}/flavors/${FLAVOR}"
+RPM_RELEASE=`grep Release $WORKSPACE/nfs-ganesha/src/nfs-ganesha.spec | sed 's/Release:[ \t]*//g' | cut -d '%' -f 1`
+RPM_VERSION=`grep Version $WORKSPACE/nfs-ganesha/src/nfs-ganesha.spec | sed 's/Version:[ \t]*//g'`
+PACKAGE_MANAGER_VERSION="$RPM_VERSION-$RPM_RELEASE"
+
+# check to make sure nfs-ganesha-ceph package built
+if ! ls $WORKSPACE/dist/RPMS/nfs-ganesha-ceph-*.rpm > /dev/null 2>&1; then
+ echo "nfs-ganesha-ceph rpm not built!"
+ exit 1
+fi
+
+# check to make sure nfs-ganesha-rgw package built
+if ! ls $WORKSPACE/dist/RPMS/nfs-ganesha-rgw-*.rpm > /dev/null 2>&1; then
+ echo "nfs-ganesha-rgw rpm not built!"
+ exit 1
+fi
+
+[ "$FORCE" = true ] && chacra_flags="--force" || chacra_flags=""
+
+if [ "$THROWAWAY" = false ] ; then
+ # push binaries to chacra
+ find $WORKSPACE/dist/SRPMS | grep rpm | $VENV/chacractl binary ${chacra_flags} create ${chacra_endpoint}/source/flavors/${FLAVOR}
+ find $WORKSPACE/dist/RPMS/ | grep rpm | $VENV/chacractl binary ${chacra_flags} create ${chacra_endpoint}/${ARCH}/flavors/${FLAVOR}
+ # write json file with build info
+ # version and package_manager version are needed for teuthology
+ cat > $WORKSPACE/repo-extra.json << EOF
+{
+ "version":"$VERSION",
+ "package_manager_version":"$PACKAGE_MANAGER_VERSION",
+ "build_url":"$BUILD_URL",
+ "root_build_cause":"$ROOT_BUILD_CAUSE",
+ "node_name":"$NODE_NAME",
+ "job_name":"$JOB_NAME"
+}
+EOF
+ # post the json to repo-extra json to chacra
+ curl -X POST -H "Content-Type:application/json" --data "@$WORKSPACE/repo-extra.json" -u $CHACRACTL_USER:$CHACRACTL_KEY ${chacra_url}repos/${chacra_repo_endpoint}/extra/
+ # start repo creation
+ $VENV/chacractl repo update ${chacra_repo_endpoint}
+fi
+
+echo "Check the status of the repo at: https://shaman.ceph.com/api/repos/${chacra_repo_endpoint}"
+
+# update shaman with the completed build status
+SHA1=${GIT_COMMIT}
+update_build_status "completed" "nfs-ganesha-stable" $NORMAL_DISTRO $NORMAL_DISTRO_VERSION $NORMAL_ARCH
+
+sudo rm -rf $WORKSPACE/dist
+sudo rm -f /etc/yum.repos.d/shaman*
+sudo rm -f /etc/apt/sources.list.d/shaman*
--- /dev/null
+#!/bin/bash -ex
+
+# this job adds custom shaman repositories which can cause issues at build time
+# for other jobs so they need to be properly removed
+sudo rm -f /etc/yum.repos.d/shaman*
+sudo rm -f /etc/apt/sources.list.d/shaman*
+
+
+# note: the failed_build_status call relies on normalized variable names that
+# are inferred by the builds themselves. If the build fails before these are
+# set, they will be posted with empty values.
+NFS_GANESHA_BRANCH=`branch_slash_filter $NFS_GANESHA_BRANCH`
+
+# update shaman with the failed build status
+failed_build_status "nfs-ganesha-stable" $NORMAL_DISTRO $NORMAL_DISTRO_VERSION $NORMAL_ARCH
--- /dev/null
+#! /usr/bin/bash
+#
+# Ceph distributed storage system
+#
+# Copyright (C) 2016 Red Hat <contact@redhat.com>
+#
+# Author: Boris Ranto <branto@redhat.com>
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+
+set -ex
+
+# Make sure we execute at the top level directory before we do anything
+cd $WORKSPACE
+
+# This will set the DISTRO and MOCK_TARGET variables
+get_distro_and_target
+
+# Perform a clean-up
+cd $WORKSPACE/nfs-ganesha
+git clean -fxd
+
+cd $WORKSPACE/nfs-ganesha-debian
+git clean -fxd
+
+cd $WORKSPACE/ntirpc
+git clean -fxd
+
+# Make sure the dist directory is clean
+cd $WORKSPACE
+rm -rf dist
+mkdir -p dist
+
+# Print some basic system info
+HOST=$(hostname --short)
+echo "Building on $(hostname) with the following env"
+echo "*****"
+env
+echo "*****"
+
+export LC_ALL=C # the following is vulnerable to i18n
+
+pkgs=( "chacractl>=0.0.21" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+NFS_GANESHA_BRANCH=`branch_slash_filter $NFS_GANESHA_BRANCH`
+CEPH_BRANCH=`branch_slash_filter $CEPH_BRANCH`
+BRANCH=${NFS_GANESHA_BRANCH}
+# set flavor as ceph branch libs are coming from
+FLAVOR="ceph_${CEPH_BRANCH}"
+
+# ask shaman which chacra instance to use
+chacra_url="https://chacra.ceph.com/"
+# create the .chacractl config file using global variables
+make_chacractl_config $chacra_url
--- /dev/null
+#!/bin/bash
+set -ex
+
+# Only do actual work when we are a DEB distro
+if test -f /etc/redhat-release ; then
+ exit 0
+fi
--- /dev/null
+#!/bin/bash
+set -ex
+
+# Only do actual work when we are an RPM distro
+if [[ ! -f /etc/redhat-release && ! -f /usr/bin/zypper ]] ; then
+ exit 0
+fi
--- /dev/null
+- scm:
+ name: nfs-ganesha
+ scm:
+ - git:
+ url: https://github.com/nfs-ganesha/nfs-ganesha.git
+ branches:
+ - $NFS_GANESHA_BRANCH
+ skip-tag: true
+ wipe-workspace: true
+ basedir: "nfs-ganesha"
+
+- scm:
+ name: nfs-ganesha-debian
+ scm:
+ - git:
+ url: https://github.com/nfs-ganesha/nfs-ganesha-debian.git
+ branches:
+ - $NFS_GANESHA_DEBIAN_BRANCH
+ skip-tag: true
+ wipe-workspace: true
+ basedir: "nfs-ganesha-debian"
+
+- scm:
+ name: ntirpc
+ scm:
+ - git:
+ url: https://github.com/nfs-ganesha/ntirpc.git
+ branches:
+ - $NTIRPC_BRANCH
+ skip-tag: true
+ wipe-workspace: true
+ basedir: "ntirpc"
+
+- job:
+ name: nfs-ganesha-stable
+ project-type: matrix
+ defaults: global
+ display-name: 'nfs-ganesha-stable'
+ block-downstream: false
+ block-upstream: false
+ properties:
+ - github:
+ url: https://github.com/nfs-ganesha/nfs-ganesha
+ concurrent: true
+ parameters:
+ - string:
+ name: NFS_GANESHA_BRANCH
+ description: "The git branch (or tag) to build"
+ default: "V2.7-stable"
+
+ - string:
+ name: NTIRPC_BRANCH
+ description: "The git branch (or tag) to build"
+ default: "v1.7.3"
+
+ - string:
+ name: NTIRPC_DEBIAN_BRANCH
+ description: "The git branch (or tag) for debian build scripts for ntirpc"
+ default: "xenial-libntirpc-1.7"
+
+ - string:
+ name: NFS_GANESHA_DEBIAN_BRANCH
+ description: "The git branch (or tag) for debian build scripts for nfs-ganesha"
+ default: "xenial-nfs-ganesha-download-dot-ceph-dot-com"
+
+ - string:
+ name: CEPH_SHA1
+ description: "The SHA1 of the ceph branch"
+ default: "3a54b2b6d167d4a2a19e003a705696d4fe619afc"
+
+ - string:
+ name: CEPH_BRANCH
+ description: "The branch of Ceph to get the repo file of for libcephfs"
+ default: "nautilus"
+
+ - string:
+ name: CEPH_VERSION
+ description: "The version of Ceph to specify for installing ceph libraries"
+ default: "14.2.0"
+
+ - string:
+ name: DISTROS
+ description: "A list of distros to build for. Available options are: bionic, xenial, centos7, centos8"
+ default: "centos7 centos8 xenial bionic"
+
+ - string:
+ name: ARCHS
+ description: "A list of architectures to build for. Available options are: x86_64 and arm64"
+ default: "x86_64"
+
+ - bool:
+ name: THROWAWAY
+ description: "
+Default: False. When True it will not POST binaries to chacra. Artifacts will not be around for long. Useful to test builds."
+ default: false
+
+ - bool:
+ name: FORCE
+ description: "
+If this is unchecked, then nothing is built or pushed if the binaries already exist in chacra. This is the default.
+
+If this is checked, then the binaries will be built and pushed to chacra even if they already exist in chacra."
+ default: true
+
+ - string:
+ name: BUILD_VIRTUALENV
+ description: "Base parent path for virtualenv locations, set to avoid issues with extremely long paths that are incompatible with tools like pip. Defaults to '/tmp/' (note the trailing slash, which is required)."
+ default: "/tmp/"
+
+ execution-strategy:
+ combination-filter: DIST==AVAILABLE_DIST && ARCH==AVAILABLE_ARCH && (ARCH=="x86_64" || (ARCH == "arm64" && (DIST == "xenial" || DIST == "centos7")))
+ axes:
+ - axis:
+ type: label-expression
+ name: MACHINE_SIZE
+ values:
+ - huge
+ - axis:
+ type: label-expression
+ name: AVAILABLE_ARCH
+ values:
+ - x86_64
+ - arm64
+ - axis:
+ type: label-expression
+ name: AVAILABLE_DIST
+ values:
+ - centos7
+ - centos8
+ - xenial
+ - bionic
+ - axis:
+ type: dynamic
+ name: DIST
+ values:
+ - DISTROS
+ - axis:
+ type: dynamic
+ name: ARCH
+ values:
+ - ARCHS
+ triggers:
+ - github
+
+ scm:
+ - nfs-ganesha
+ - nfs-ganesha-debian
+ - ntirpc
+
+ builders:
+ - shell: |
+ echo "Cleaning up top-level workarea (shared among workspaces)"
+ sudo rm -rf dist
+ sudo rm -rf venv
+ sudo rm -rf release
+ # debian build scripts
+ - shell:
+ !include-raw-verbatim:
+ - ../../build/validate_deb
+ - ../../../scripts/build_utils.sh
+ - ../../build/setup
+ - ../../build/build_deb
+ # rpm build scripts
+ - shell:
+ !include-raw-verbatim:
+ - ../../build/validate_rpm
+ - ../../../scripts/build_utils.sh
+ - ../../build/setup
+ - ../../build/build_rpm
+
+ publishers:
+ - postbuildscript:
+ builders:
+ - role: SLAVE
+ build-on:
+ - FAILURE
+ - ABORTED
+ build-steps:
+ - inject:
+ properties-file: ${{WORKSPACE}}/build_info
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/failure
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - text:
+ credential-id: chacractl-key
+ variable: CHACRACTL_KEY
+ - text:
+ credential-id: shaman-api-key
+ variable: SHAMAN_API_KEY
--- /dev/null
+#! /usr/bin/bash
+set -ex
+
+# Only do actual work when we are a DEB distro
+if test "$DISTRO" != "debian" -a "$DISTRO" != "ubuntu"; then
+ exit 0
+fi
+
+REPO_URL=$(curl -s "https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=${DISTRO}%2F${DIST}%2F${ARCH}&ref=${CEPH_BRANCH}" | jq -a ".[0] | .chacra_url" | tr -d '"')repo
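The shaman search call returns a JSON list of ready builds; `jq -a '.[0] | .chacra_url'` picks the first one's repo URL (still JSON-quoted), and `tr -d '"'` strips the quotes, much as `jq -r` would. The quote-stripping step in isolation (the URL is a hypothetical example):

```shell
# tr deletes every double quote left in jq's JSON-encoded output:
echo '"https://chacra.ceph.com/r/ceph/main/abc/x86_64/"' | tr -d '"'
# prints https://chacra.ceph.com/r/ceph/main/abc/x86_64/
```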
+TIME_LIMIT=1200
+INTERVAL=30
+REPO_FOUND=0
+
+# poll shaman for up to 20 minutes
+while [ "$SECONDS" -le "$TIME_LIMIT" ]
+do
+ SHAMAN_MIRROR=`curl --fail -L ${REPO_URL} || true`
+ if [[ ${SHAMAN_MIRROR} ]]; then
+ echo "Ceph debian lib repo exists in shaman"
+ REPO_FOUND=1
+ break
+ else
+ sleep $INTERVAL
+ fi
+done
+
+if [[ "$REPO_FOUND" -eq 0 ]]; then
+ echo "Ceph debian lib repo does NOT exist in shaman"
+ exit 1
+fi
+
+# We need this for system and to run the cmake
+sudo apt-get -y update -o Acquire::Languages=none || true
+
+# Normalize variables across rpm/deb builds
+NORMAL_DISTRO=$DISTRO
+NORMAL_DISTRO_VERSION=$DIST
+NORMAL_ARCH=$ARCH
+
+# create build status in shaman
+update_build_status "started" "nfs-ganesha" $NORMAL_DISTRO $NORMAL_DISTRO_VERSION $NORMAL_ARCH
+
+## Setup the pbuilder
+setup_pbuilder use_gcc
+
+cd $WORKSPACE/nfs-ganesha
+git submodule update --init || git submodule sync
+
+PACKAGE_MANAGER_VERSION="`git describe --long | sed 's/V//1'`-1${DIST}"
+
+# Version is in format X.XdevX-X-SHA1
+VERSION=`git describe --long | sed -e 's/V//1'`
+
+# create and apply a patch file to turn off USE_LTTNG in the libntirpc submodule
+echo "LS0tIGEvc3JjL0NNYWtlTGlzdHMudHh0CisrKyBiL3NyYy9DTWFrZUxpc3RzLnR4dApAQCAtMTA4
+OSwxMSArMTA4OSwxNCBAQAogICBzZXQoVVNFX0dTUyAke1VTRV9HU1N9IENBQ0hFIEJPT0wgIlVz
+ZSBHU1MiKQogICBzZXQoQ01BS0VfTU9EVUxFX1BBVEggJHtDTUFLRV9NT0RVTEVfUEFUSH0KIAkg
+ICIke0dBTkVTSEFfVE9QX0NNQUtFX0RJUn0vbGlibnRpcnBjL2NtYWtlL21vZHVsZXMvIikKKyAg
+c2V0KFNBVkVfTFRUTkcgJHtVU0VfTFRUTkd9KQorICBzZXQoVVNFX0xUVE5HIE9GRikKICAgYWRk
+X3N1YmRpcmVjdG9yeShsaWJudGlycGMpCiAgIHNldChOVElSUENfTElCUkFSWSBudGlycGMpCiAg
+IGlmIChVU0VfTFRUTkcpCiAgICAgc2V0KE5USVJQQ19MSUJSQVJZICR7TlRJUlBDX0xJQlJBUll9
+IG50aXJwY19sdHRuZykKICAgZW5kaWYgKFVTRV9MVFRORykKKyAgc2V0KFVTRV9MVFRORyAke1NB
+VkVfTFRUTkd9KQogICBzZXQoTlRJUlBDX0lOQ0xVREVfRElSICIke1BST0pFQ1RfU09VUkNFX0RJ
+Un0vbGlibnRpcnBjL250aXJwYy8iKQogICBtZXNzYWdlKFNUQVRVUyAiVXNpbmcgbnRpcnBjIHN1
+Ym1vZHVsZSIpCiBlbmRpZiAoVVNFX1NZU1RFTV9OVElSUEMpCg==" | base64 -d > lttng-fix.patch
+
+patch -p1 < lttng-fix.patch
+
+rm -rf .git
+
+cd $WORKSPACE
+
+## Build the source tarball
+NFS_GANESHA_ORIG_TAR_GZ="nfs-ganesha_${VERSION}.orig.tar.gz"
+tar czf ${NFS_GANESHA_ORIG_TAR_GZ} nfs-ganesha/src
+
+# remove old version
+rm -rf $WORKSPACE/nfs-ganesha
+
+# unpack just the src
+tar xzf ${NFS_GANESHA_ORIG_TAR_GZ}
+
+cd $WORKSPACE/nfs-ganesha
+
+# add debian directory next to src
+mv $WORKSPACE/nfs-ganesha-debian/debian $WORKSPACE/nfs-ganesha/
+
+# disable the LizardFS FSAL; see
+# https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/475437
+sed -i '/-DUSE_FSAL_RGW=/a\ -DUSE_FSAL_LIZARDFS=NO \\' debian/rules
+# use tcmalloc allocator
+sed -i '/-DUSE_FSAL_RGW=/a\ -DALLOCATOR=tcmalloc \\' debian/rules
+
+## Get some basic information about the system and the repository
+DEB_ARCH=$(dpkg-architecture -qDEB_BUILD_ARCH)
+
+## Prepare the debian files
+# Bump the changelog
+dch -v "$VERSION-1${DIST}" "$VERSION for Shaman"
+
+# Create the .dsc and source tarball; we don't sign the changes or the source package
+sudo dpkg-buildpackage -S -us -uc -d
+
+## Build with pbuilder
+echo "Building debs"
+
+PBUILDDIR="/srv/debian-base"
+
+sudo pbuilder --clean
+
+mkdir -p $WORKSPACE/dist/deb
+
+# add missing packages and components to pbuilder
+sudo pbuilder update \
+ --basetgz $PBUILDDIR/$DIST.tgz \
+ --distribution $DIST \
+ --extrapackages "apt-transport-https apt-utils ca-certificates debhelper python-all liblttng-ust0 liblttng-ust-dev liblttng-ctl-dev pkgconf quilt libgoogle-perftools-dev" \
+ --components "main restricted universe multiverse"
+
+# make sure no ceph packages are left over in pbuilder env
+sudo pbuilder update \
+ --basetgz $PBUILDDIR/$DIST.tgz \
+ --distribution $DIST \
+ --removepackages "librados2 libcephfs2 librgw2 librados-dev libcephfs-dev librgw-dev" \
+ --override-config
+
+# add other mirror to pbuilder
+sudo pbuilder update \
+ --basetgz $PBUILDDIR/$DIST.tgz \
+ --distribution $DIST \
+ --othermirror "${SHAMAN_MIRROR}" \
+ --override-config
+
+# use libcephfs and librgw from shaman
+sudo pbuilder update \
+ --basetgz $PBUILDDIR/$DIST.tgz \
+ --distribution $DIST \
+ --extrapackages "librados-dev libcephfs-dev librgw-dev"
+
+echo "Building debs for $DIST"
+sudo pbuilder build \
+ --distribution $DIST \
+ --basetgz $PBUILDDIR/$DIST.tgz \
+ --buildresult $WORKSPACE/dist/deb/ \
+ --debbuildopts "-j`grep -c processor /proc/cpuinfo`" \
+ $WORKSPACE/nfs-ganesha_${VERSION}-1${DIST}.dsc
+
+## Upload the created debs to chacra
+chacra_endpoint="nfs-ganesha/${NFS_GANESHA_BRANCH}/${GIT_COMMIT}/${DISTRO}/${DIST}"
+chacra_repo_endpoint="${chacra_endpoint}/flavors/${FLAVOR}"
+
+[ "$FORCE" = true ] && chacra_flags="--force" || chacra_flags=""
+
+if [ "$THROWAWAY" = false ] ; then
+ # push binaries to chacra
+ find $WORKSPACE/dist/deb | egrep "\.(changes|deb|dsc|gz)$" | egrep -v "(Packages|Sources|Contents)" | $VENV/chacractl binary ${chacra_flags} create ${chacra_endpoint}/${ARCH}/flavors/${FLAVOR}
+ # write json file with build info
+ # version and package_manager version are needed for teuthology
+ cat > $WORKSPACE/repo-extra.json << EOF
+{
+ "version":"$VERSION",
+ "package_manager_version":"$PACKAGE_MANAGER_VERSION",
+ "build_url":"$BUILD_URL",
+ "root_build_cause":"$ROOT_BUILD_CAUSE",
+ "node_name":"$NODE_NAME",
+ "job_name":"$JOB_NAME"
+}
+EOF
+ # post the repo-extra json to chacra
+ curl -X POST -H "Content-Type:application/json" --data "@$WORKSPACE/repo-extra.json" -u $CHACRACTL_USER:$CHACRACTL_KEY ${chacra_url}repos/${chacra_repo_endpoint}/extra/
+ # start repo creation
+ $VENV/chacractl repo update ${chacra_repo_endpoint}
+fi
+
+echo "Check the status of the repo at: https://shaman.ceph.com/api/repos/${chacra_repo_endpoint}"
+
+# update shaman with the completed build status
+SHA1=${GIT_COMMIT}
+update_build_status "completed" "nfs-ganesha" $NORMAL_DISTRO $NORMAL_DISTRO_VERSION $NORMAL_ARCH
+
+sudo rm -rf $WORKSPACE/dist
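The repo-discovery step at the top of this script amounts to building a shaman search URL and appending ``repo`` to the first result's ``chacra_url``. A minimal Python sketch of the query construction (the helper name is illustrative, not part of the build scripts):

```python
from urllib.parse import urlencode

SHAMAN_SEARCH = "https://shaman.ceph.com/api/search/"

def shaman_search_url(distro, dist, arch, ref):
    # Same query string the build script passes to curl; urlencode
    # percent-encodes the slashes in the distros value (%2F).
    params = {
        "status": "ready",
        "project": "ceph",
        "flavor": "default",
        "distros": f"{distro}/{dist}/{arch}",
        "ref": ref,
    }
    return SHAMAN_SEARCH + "?" + urlencode(params)

url = shaman_search_url("ubuntu", "focal", "x86_64", "main")
```

The build script then takes ``.[0].chacra_url`` from the JSON response and appends the literal string ``repo`` to get the repo file URL.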
--- /dev/null
+#! /usr/bin/bash
+set -ex
+
+# Only do actual work when we are an RPM distro
+if test "$DISTRO" != "fedora" -a "$DISTRO" != "centos" -a "$DISTRO" != "rhel"; then
+ exit 0
+fi
+
+get_rpm_dist
+
+# Disable the google-chrome repo on el7, which is not needed (el8 doesn't have this repo).
+if [ $DIST = centos7 ]
+then
+ sudo yum-config-manager --disable google-chrome
+fi
+
+# Some -devel packages are only available in the CentOS 8 PowerTools repo. Enable it:
+if [ $DIST = centos8 ]
+then
+ sudo yum-config-manager --enable PowerTools || sudo yum-config-manager --enable powertools
+fi
+
+# Clean up Jenkins builder before each build
+sudo rm -rf /var/cache/yum/*
+sudo yum -y clean all
+sudo yum -y remove librgw-devel librgw2 librados-devel librados3 libcephfs-devel libcephfs2
+sudo yum -y autoremove
+
+# Get .repo file from appropriate shaman build
+REPO_URL=$(curl -s "https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=${DISTRO}%2F${RELEASE}%2F${ARCH}&ref=${CEPH_BRANCH}" | jq -a ".[0] | .chacra_url" | tr -d '"')repo
+TIME_LIMIT=1200
+INTERVAL=30
+REPO_FOUND=0
+
+# poll shaman for up to 20 minutes (TIME_LIMIT seconds)
+while [ "$SECONDS" -le "$TIME_LIMIT" ]
+do
+ if curl --fail -L $REPO_URL > $WORKSPACE/shaman.repo; then
+ echo "Ceph repo file has been added from shaman"
+ REPO_FOUND=1
+ break
+ else
+ sleep $INTERVAL
+ fi
+done
+
+if [[ "$REPO_FOUND" -eq 0 ]]; then
+ echo "Ceph lib repo does NOT exist in shaman"
+ exit 1
+fi
+
+# add the shaman repo to /etc/yum.repos.d/ so the ceph libraries can be installed,
+# enabling FSAL_CEPH (on by default) and FSAL_RGW in the .spec file when cmake runs
+sudo cp $WORKSPACE/shaman.repo /etc/yum.repos.d/
+# for debugging
+cat /etc/yum.repos.d/shaman.repo
+xargs sudo yum install -y <<< "
+dbus-devel
+libacl-devel
+libblkid-devel
+libcap-devel
+libnfsidmap-devel
+libwbclient-devel
+krb5-devel
+librados-devel
+librgw-devel
+libcephfs-devel
+lttng-ust-devel
+gperftools-devel
+"
+# Removed "lttng-tools-devel" from above xargs list because it isn't available in el8
+if [ $DIST = centos7 ]
+then
+ sudo yum install -y lttng-tools-devel
+fi
+
+# The libnsl2-devel package is needed on el8 builds, but not el7
+if [ $DIST = centos8 ]
+then
+ sudo yum -y install libnsl2-devel
+fi
+
+sudo yum install -y mock
+
+# Normalize variables across rpm/deb builds
+NORMAL_DISTRO=$DISTRO
+NORMAL_DISTRO_VERSION=$RELEASE
+NORMAL_ARCH=$ARCH
+
+# create build status in shaman
+update_build_status "started" "nfs-ganesha" $NORMAL_DISTRO $NORMAL_DISTRO_VERSION $NORMAL_ARCH
+
+cd $WORKSPACE/nfs-ganesha
+
+git submodule update --init || git submodule sync
+
+# create and apply a patch file to turn off USE_LTTNG in the libntirpc submodule
+echo "LS0tIGEvc3JjL0NNYWtlTGlzdHMudHh0CisrKyBiL3NyYy9DTWFrZUxpc3RzLnR4dApAQCAtMTA4
+OSwxMSArMTA4OSwxNCBAQAogICBzZXQoVVNFX0dTUyAke1VTRV9HU1N9IENBQ0hFIEJPT0wgIlVz
+ZSBHU1MiKQogICBzZXQoQ01BS0VfTU9EVUxFX1BBVEggJHtDTUFLRV9NT0RVTEVfUEFUSH0KIAkg
+ICIke0dBTkVTSEFfVE9QX0NNQUtFX0RJUn0vbGlibnRpcnBjL2NtYWtlL21vZHVsZXMvIikKKyAg
+c2V0KFNBVkVfTFRUTkcgJHtVU0VfTFRUTkd9KQorICBzZXQoVVNFX0xUVE5HIE9GRikKICAgYWRk
+X3N1YmRpcmVjdG9yeShsaWJudGlycGMpCiAgIHNldChOVElSUENfTElCUkFSWSBudGlycGMpCiAg
+IGlmIChVU0VfTFRUTkcpCiAgICAgc2V0KE5USVJQQ19MSUJSQVJZICR7TlRJUlBDX0xJQlJBUll9
+IG50aXJwY19sdHRuZykKICAgZW5kaWYgKFVTRV9MVFRORykKKyAgc2V0KFVTRV9MVFRORyAke1NB
+VkVfTFRUTkd9KQogICBzZXQoTlRJUlBDX0lOQ0xVREVfRElSICIke1BST0pFQ1RfU09VUkNFX0RJ
+Un0vbGlibnRpcnBjL250aXJwYy8iKQogICBtZXNzYWdlKFNUQVRVUyAiVXNpbmcgbnRpcnBjIHN1
+Ym1vZHVsZSIpCiBlbmRpZiAoVVNFX1NZU1RFTV9OVElSUEMpCg==" | base64 -d > lttng-fix.patch
+
+patch -p1 < lttng-fix.patch
+
+mkdir build
+cd build
+
+# generate the .spec file, edit it to use the correct lib versions, and make the source tarball
+if [ $DIST = centos7 ]
+then
+ cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo -DSTRICT_PACKAGE=ON -DUSE_FSAL_ZFS=OFF -DUSE_FSAL_GLUSTER=OFF -DUSE_FSAL_CEPH=ON -DUSE_FSAL_RGW=ON -DUSE_FSAL_LIZARDFS=OFF -DRADOS_URLS=ON -DUSE_RADOS_RECOV=ON -DUSE_LTTNG=ON -DUSE_ADMIN_TOOLS=ON $WORKSPACE/nfs-ganesha/src && make dist || exit 1
+else
+ # Don't enable LTTNG for el8 builds - the "lttng-tools-devel" package isn't available
+ cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo -DSTRICT_PACKAGE=ON -DUSE_FSAL_ZFS=OFF -DUSE_FSAL_GLUSTER=OFF -DUSE_FSAL_CEPH=ON -DUSE_FSAL_RGW=ON -DUSE_FSAL_LIZARDFS=OFF -DRADOS_URLS=ON -DUSE_RADOS_RECOV=ON -DUSE_ADMIN_TOOLS=ON -DALLOCATOR=tcmalloc $WORKSPACE/nfs-ganesha/src && make dist || exit 1
+fi
+
+sed -i 's/libcephfs1-devel/libcephfs-devel/' $WORKSPACE/nfs-ganesha/src/nfs-ganesha.spec
+sed -i 's/librgw2-devel/librgw-devel/' $WORKSPACE/nfs-ganesha/src/nfs-ganesha.spec
+sed -i 's/CMAKE_BUILD_TYPE=Debug/CMAKE_BUILD_TYPE=RelWithDebInfo/' $WORKSPACE/nfs-ganesha/src/nfs-ganesha.spec
+
+## Create the source rpm
+echo "Building SRPM"
+rpmbuild \
+ --define "_sourcedir ." \
+ --define "_specdir $WORKSPACE/dist" \
+ --define "_builddir $WORKSPACE/dist" \
+ --define "_srcrpmdir $WORKSPACE/dist/SRPMS" \
+ --define "_rpmdir $WORKSPACE/dist/RPMS" \
+ --nodeps -bs $WORKSPACE/nfs-ganesha/src/nfs-ganesha.spec
+SRPM=$(readlink -f $WORKSPACE/dist/SRPMS/*.src.rpm)
+
+MOCKARCH=$(arch)
+# Add repo file to mock config. The new version of mock uses templates.
+if [ $DIST = centos7 ]
+then
+ sudo cat /etc/mock/${MOCK_TARGET}-${RELEASE}-${MOCKARCH}.cfg /etc/mock/templates/epel-${RELEASE}.tpl > nfs-ganesha-mock.temp
+else
+ sudo cat /etc/mock/${MOCK_TARGET}-${RELEASE}-${MOCKARCH}.cfg /etc/mock/templates/centos-stream-${RELEASE}.tpl /etc/mock/templates/epel-${RELEASE}.tpl > nfs-ganesha-mock.temp
+fi
+sudo head -n -1 nfs-ganesha-mock.temp > nfs-ganesha.cfg
+sudo cat $WORKSPACE/shaman.repo >> nfs-ganesha.cfg
+sudo echo "\"\"\"" >> nfs-ganesha.cfg
+# for debugging
+cat nfs-ganesha.cfg
+
+
+## Build the binaries with mock
+echo "Building RPMs"
+sudo mock --verbose -r nfs-ganesha.cfg --scrub=all
+sudo mock --verbose -r nfs-ganesha.cfg --define "dist .el${RELEASE}" --resultdir=$WORKSPACE/dist/RPMS/ ${SRPM} || ( tail -n +1 $WORKSPACE/dist/RPMS/{root,build}.log && exit 1 )
+
+VERSION=`grep -R "#define GANESHA_VERSION \"" $WORKSPACE/nfs-ganesha/build/include/config.h | sed -e 's/#define GANESHA_VERSION "//1; s/"//1;'`
+chacra_endpoint="nfs-ganesha/${NFS_GANESHA_BRANCH}/${GIT_COMMIT}/${DISTRO}/${RELEASE}"
+chacra_repo_endpoint="${chacra_endpoint}/flavors/${FLAVOR}"
+RPM_RELEASE=`grep Release $WORKSPACE/nfs-ganesha/src/nfs-ganesha.spec | sed 's/Release:[ \t]*//g' | cut -d '%' -f 1`
+RPM_VERSION=`grep Version $WORKSPACE/nfs-ganesha/src/nfs-ganesha.spec | sed 's/Version:[ \t]*//g'`
+PACKAGE_MANAGER_VERSION="$RPM_VERSION-$RPM_RELEASE"
+
+# check to make sure nfs-ganesha-ceph package built
+if [ ! -f $WORKSPACE/dist/RPMS/nfs-ganesha-ceph-*.rpm ]; then
+ echo "nfs-ganesha-ceph rpm not built!"
+ exit 1
+fi
+
+# check to make sure nfs-ganesha-rgw package built
+if [ ! -f $WORKSPACE/dist/RPMS/nfs-ganesha-rgw-*.rpm ]; then
+ echo "nfs-ganesha-rgw rpm not built!"
+ exit 1
+fi
+
+
+[ "$FORCE" = true ] && chacra_flags="--force" || chacra_flags=""
+
+if [ "$THROWAWAY" = false ] ; then
+ # push binaries to chacra
+ find $WORKSPACE/dist/SRPMS | grep rpm | $VENV/chacractl binary ${chacra_flags} create ${chacra_endpoint}/source/flavors/${FLAVOR}
+ find $WORKSPACE/dist/RPMS/ | grep rpm | $VENV/chacractl binary ${chacra_flags} create ${chacra_endpoint}/${ARCH}/flavors/${FLAVOR}
+ # write json file with build info
+ # version and package_manager version are needed for teuthology
+ cat > $WORKSPACE/repo-extra.json << EOF
+{
+ "version":"$VERSION",
+ "package_manager_version":"$PACKAGE_MANAGER_VERSION",
+ "build_url":"$BUILD_URL",
+ "root_build_cause":"$ROOT_BUILD_CAUSE",
+ "node_name":"$NODE_NAME",
+ "job_name":"$JOB_NAME"
+}
+EOF
+ # post the repo-extra json to chacra
+ curl -X POST -H "Content-Type:application/json" --data "@$WORKSPACE/repo-extra.json" -u $CHACRACTL_USER:$CHACRACTL_KEY ${chacra_url}repos/${chacra_repo_endpoint}/extra/
+ # start repo creation
+ $VENV/chacractl repo update ${chacra_repo_endpoint}
+fi
+
+echo "Check the status of the repo at: https://shaman.ceph.com/api/repos/${chacra_repo_endpoint}"
+
+# update shaman with the completed build status
+SHA1=${GIT_COMMIT}
+update_build_status "completed" "nfs-ganesha" $NORMAL_DISTRO $NORMAL_DISTRO_VERSION $NORMAL_ARCH
+
+sudo rm -rf $WORKSPACE/dist
+sudo rm -f /etc/yum.repos.d/shaman*
+sudo rm -f /etc/apt/sources.list.d/shaman*
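Both build scripts carry the libntirpc LTTNG patch as an inline base64 blob so it survives being pasted through YAML and shell quoting; ``base64 -d`` recovers it at build time. The round trip, sketched in Python with a stand-in patch body (not the real patch):

```python
import base64

# stand-in patch text; the real scripts embed a larger CMakeLists.txt patch
patch = """--- a/src/CMakeLists.txt
+++ b/src/CMakeLists.txt
@@ -1 +1 @@
-set(USE_LTTNG ON)
+set(USE_LTTNG OFF)
"""

# what gets pasted into the build script
blob = base64.b64encode(patch.encode()).decode()

# what `base64 -d` reconstructs before `patch -p1` applies it
decoded = base64.b64decode(blob).decode()
```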
--- /dev/null
+#!/bin/bash -ex
+
+# note: the failed_build_status call relies on normalized variable names that
+# are inferred by the builds themselves. If the build fails before these are
+# set, they will be posted with empty values
+NFS_GANESHA_BRANCH=`branch_slash_filter $NFS_GANESHA_BRANCH`
+
+# update shaman with the failed build status
+failed_build_status "nfs-ganesha" $NORMAL_DISTRO $NORMAL_DISTRO_VERSION $NORMAL_ARCH
--- /dev/null
+#! /usr/bin/bash
+#
+# Ceph distributed storage system
+#
+# Copyright (C) 2016 Red Hat <contact@redhat.com>
+#
+# Author: Boris Ranto <branto@redhat.com>
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+
+set -ex
+
+# Make sure we execute at the top level directory before we do anything
+cd $WORKSPACE
+
+# This will set the DISTRO and MOCK_TARGET variables
+get_distro_and_target
+
+# Perform a clean-up
+cd $WORKSPACE/nfs-ganesha
+git clean -fxd
+
+cd $WORKSPACE/nfs-ganesha-debian
+git clean -fxd
+
+# Make sure the dist directory is clean
+cd $WORKSPACE
+rm -rf dist
+mkdir -p dist
+
+# Print some basic system info
+HOST=$(hostname --short)
+echo "Building on $(hostname) with the following env"
+echo "*****"
+env
+echo "*****"
+
+export LC_ALL=C # the following is vulnerable to i18n
+
+pkgs=( "chacractl>=0.0.21" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+NFS_GANESHA_BRANCH=`branch_slash_filter $NFS_GANESHA_BRANCH`
+CEPH_BRANCH=`branch_slash_filter $CEPH_BRANCH`
+BRANCH=${NFS_GANESHA_BRANCH}
+# set the flavor to the ceph branch the libs come from
+FLAVOR="ceph_${CEPH_BRANCH}"
+
+# ask shaman which chacra instance to use
+chacra_url=`curl -f -u $SHAMAN_API_USER:$SHAMAN_API_KEY https://shaman.ceph.com/api/nodes/next/`
+# create the .chacractl config file using global variables
+make_chacractl_config $chacra_url
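``branch_slash_filter`` lives in ``scripts/build_utils.sh`` and its exact behavior isn't shown here, but chacra endpoints embed the branch in a URL path, so a ref like ``wip/foo`` needs its slashes replaced. A hypothetical Python equivalent, purely for illustration:

```python
def branch_slash_filter(branch: str) -> str:
    # Hypothetical stand-in for the shell helper: make a git ref safe
    # to embed in a chacra endpoint path by swapping slashes for dashes.
    return branch.replace("/", "-")

# FLAVOR ties the nfs-ganesha build to the ceph branch its libs come from
flavor = "ceph_" + branch_slash_filter("wip/lttng-fix")
```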
--- /dev/null
+#!/bin/bash
+set -ex
+
+# Only do actual work when we are a DEB distro
+if test -f /etc/redhat-release ; then
+ exit 0
+fi
--- /dev/null
+#!/bin/bash
+set -ex
+
+# only do work if we are an RPM distro
+if [[ ! -f /etc/redhat-release && ! -f /usr/bin/zypper ]] ; then
+ exit 0
+fi
--- /dev/null
+- scm:
+ name: nfs-ganesha
+ scm:
+ - git:
+ url: https://github.com/nfs-ganesha/nfs-ganesha.git
+ branches:
+ - $NFS_GANESHA_BRANCH
+ skip-tag: true
+ wipe-workspace: true
+ basedir: "nfs-ganesha"
+
+- scm:
+ name: nfs-ganesha-debian
+ scm:
+ - git:
+ url: https://github.com/nfs-ganesha/nfs-ganesha-debian.git
+ branches:
+ - $NFS_GANESHA_DEBIAN_BRANCH
+ skip-tag: true
+ wipe-workspace: true
+ basedir: "nfs-ganesha-debian"
+- job:
+ name: nfs-ganesha
+ project-type: matrix
+ defaults: global
+ display-name: 'nfs-ganesha'
+ block-downstream: false
+ block-upstream: false
+ properties:
+ - github:
+ url: https://github.com/nfs-ganesha/nfs-ganesha
+ - build-discarder:
+ days-to-keep: 90
+ artifact-days-to-keep: 90
+
+ concurrent: true
+ parameters:
+ - string:
+ name: NFS_GANESHA_BRANCH
+ description: "The git branch (or tag) to build"
+ default: "V5.7"
+
+ - string:
+ name: NFS_GANESHA_DEBIAN_BRANCH
+ description: "The git branch (or tag) for debian build scripts for nfs-ganesha"
+ default: "xenial-nfs-ganesha-shaman"
+
+ - string:
+ name: CEPH_BRANCH
+ description: "The branch of Ceph to get the repo file of for libcephfs"
+ default: main
+
+ - string:
+ name: DISTROS
+ description: "A list of distros to build for. Available options are: bionic"
+ default: "bionic"
+
+ - string:
+ name: ARCHS
+ description: "A list of architectures to build for. Available options are: x86_64 and arm64"
+ default: "x86_64 arm64"
+
+ - bool:
+ name: THROWAWAY
+ description: "
+Default: False. When True it will not POST binaries to chacra. Artifacts will not be around for long. Useful to test builds."
+ default: false
+
+ - bool:
+ name: FORCE
+ description: "
+If this is unchecked, then nothing is built or pushed if they already exist in chacra. This is the default.
+
+If this is checked, then the binaries will be built and pushed to chacra even if they already exist in chacra."
+
+ - string:
+ name: BUILD_VIRTUALENV
+ description: "Base parent path for virtualenv locations, set to avoid issues with extremely long paths that are incompatible with tools like pip. Defaults to '/tmp/' (note the trailing slash, which is required)."
+ default: "/tmp/"
+
+ execution-strategy:
+ combination-filter: DIST==AVAILABLE_DIST && ARCH==AVAILABLE_ARCH && (ARCH=="x86_64" || (ARCH == "arm64" && DIST == "centos8"))
+ axes:
+ - axis:
+ type: label-expression
+ name: MACHINE_SIZE
+ values:
+ - huge
+ - axis:
+ type: label-expression
+ name: AVAILABLE_ARCH
+ values:
+ - x86_64
+ - arm64
+ - axis:
+ type: label-expression
+ name: AVAILABLE_DIST
+ values:
+ - focal
+ - axis:
+ type: dynamic
+ name: DIST
+ values:
+ - DISTROS
+ - axis:
+ type: dynamic
+ name: ARCH
+ values:
+ - ARCHS
+ triggers:
+ - github
+ - timed: "0 1,13 * * *"
+
+ scm:
+ - nfs-ganesha
+ - nfs-ganesha-debian
+
+ builders:
+ - shell: |
+ echo "Cleaning up top-level workarea (shared among workspaces)"
+ rm -rf dist
+ rm -rf venv
+ rm -rf release
+ # debian build scripts
+ - shell:
+ !include-raw-verbatim:
+ - ../../build/validate_deb
+ - ../../../scripts/build_utils.sh
+ - ../../build/setup
+ - ../../build/build_deb
+ # rpm build scripts
+ - shell:
+ !include-raw-verbatim:
+ - ../../build/validate_rpm
+ - ../../../scripts/build_utils.sh
+ - ../../build/setup
+ - ../../build/build_rpm
+ - shell: if [ -f /etc/yum.repos.d/shaman.repo ]; then sudo rm -f /etc/yum.repos.d/shaman.repo; fi
+
+ publishers:
+ - postbuildscript:
+ builders:
+ - role: SLAVE
+ build-on:
+ - FAILURE
+ - ABORTED
+ build-steps:
+ - inject:
+ properties-file: ${{WORKSPACE}}/build_info
+
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/failure
+ - shell: if [ -f /etc/yum.repos.d/shaman.repo ]; then sudo rm -f /etc/yum.repos.d/shaman.repo; fi
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - text:
+ credential-id: chacractl-key
+ variable: CHACRACTL_KEY
+ - text:
+ credential-id: shaman-api-key
+ variable: SHAMAN_API_KEY
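The matrix cells above only build when the ``combination-filter`` passes; re-expressing that Groovy predicate in Python makes it easy to see which (DIST, ARCH) cells can ever run:

```python
def cell_runs(dist, arch, available_dist, available_arch):
    # mirrors the job's combination-filter: the dynamic axes must line
    # up with the label-expression axes, and arm64 builds only centos8
    return (
        dist == available_dist
        and arch == available_arch
        and (arch == "x86_64" or (arch == "arm64" and dist == "centos8"))
    )
```

Note that with the defaults above (``DISTROS: bionic``, ``AVAILABLE_DIST`` axis value ``focal``) no cell passes the filter, which suggests the parameter defaults and the axes have drifted apart.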
--- /dev/null
+#!/bin/bash
+
+# the following two methods exist in scripts/build_utils.sh
+pkgs=( "tox" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+sudo yum install -y epel-release
+sudo yum --enablerepo epel install -y python36
+
+cd "$WORKSPACE/paddles"
+
+$VENV/tox -rv
--- /dev/null
+- scm:
+ name: paddles
+ scm:
+ - git:
+ url: https://github.com/ceph/paddles
+ branches:
+ - ${{sha1}}
+ refspec: +refs/pull/*:refs/remotes/origin/pr/*
+ browser: auto
+ timeout: 20
+ basedir: "paddles"
+ skip-tag: true
+ wipe-workspace: true
+
+
+- job:
+ name: paddles-pull-requests
+ description: Runs tox tests for paddles on each GitHub PR
+ project-type: freestyle
+ node: python3 && centos7
+ block-downstream: false
+ block-upstream: false
+ defaults: global
+ display-name: 'paddles: Pull Requests'
+ quiet-period: 5
+ retry-count: 3
+
+
+ properties:
+ - build-discarder:
+ days-to-keep: 15
+ num-to-keep: 30
+ artifact-days-to-keep: 15
+ artifact-num-to-keep: 15
+ - github:
+ url: https://github.com/ceph/paddles/
+
+ parameters:
+ - string:
+ name: sha1
+ description: "A pull request ID, like 'origin/pr/72/head'"
+
+ triggers:
+ - github-pull-request:
+ org-list:
+ - ceph
+ only-trigger-phrase: false
+ github-hooks: true
+ permit-all: true
+ auto-close-on-fail: false
+
+ scm:
+ - paddles
+
+ builders:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/build
--- /dev/null
+#!/bin/bash -ex
+virtualenv -p python3 ./v
+./v/bin/pip install requests
+./v/bin/python3 ./ceph-build/quay-pruner/build/prune-quay.py -v
+rm -rf ./v
+
--- /dev/null
+#!/usr/bin/env python3
+
+import argparse
+import functools
+import os
+import re
+import requests
+import sys
+
+QUAYBASE = "https://quay.ceph.io/api/v1"
+REPO = "ceph-ci/ceph"
+
+# cache shaman search results so we only have to ask once
+sha1_cache = set()
+
+# quay page ranges to fetch; hackable for testing
+start_page = 1
+page_limit = 100000
+
+NAME_RE = re.compile(
+ r'(.*)-([0-9a-f]{7})-centos-.*([0-9]+)-(x86_64|aarch64)-devel'
+)
+SHA1_RE = re.compile(r'([0-9a-f]{40})(-crimson-debug|-crimson-release|-aarch64)*')
+
+
+def get_all_quay_tags(quaytoken):
+ page = start_page
+ has_additional = True
+ ret = list()
+
+ while has_additional and page < page_limit:
+ try:
+ response = requests.get(
+ '/'.join((QUAYBASE, 'repository', REPO, 'tag')),
+ params={'page': page, 'limit': 100, 'onlyActiveTags': 'true'},
+ headers={'Authorization': 'Bearer %s' % quaytoken},
+ timeout=30,
+ )
+ response.raise_for_status()
+ except requests.exceptions.RequestException as e:
+ print(
+ 'Quay.io request',
+ response.url,
+ 'failed:',
+ e,
+ response.reason,
+ file=sys.stderr
+ )
+ break
+ response = response.json()
+ ret.extend(response['tags'])
+ page += 1
+ has_additional = response.get('has_additional')
+ return ret
+
+
+def parse_quay_tag(tag):
+
+ mo = NAME_RE.match(tag)
+ if mo is None:
+ return None, None, None, None
+ ref = mo.group(1)
+ short_sha1 = mo.group(2)
+ el = mo.group(3)
+ arch = mo.group(4)
+ return ref, short_sha1, el, arch
+
+
+@functools.cache
+def shaman_data():
+ print('Getting repo data from shaman for ceph builds', file=sys.stderr)
+ shaman_result = None
+ params = {
+ 'project': 'ceph',
+ 'flavor': 'default',
+ 'status': 'ready',
+ }
+ try:
+ response = requests.get(
+ 'https://shaman.ceph.com/api/search/',
+ params=params,
+ timeout=30
+ )
+ response.raise_for_status()
+ shaman_result = response.json()
+ except requests.exceptions.RequestException as e:
+ print(
+ 'Shaman request',
+ response.url,
+ 'failed:',
+ e,
+ response.reason,
+ file=sys.stderr
+ )
+ return shaman_result
+
+
+def query_shaman(ref, sha1, el):
+ '''
+ filter shaman data by given criteria.
+
+ returns (error, filtered_data)
+ error is True if no data could be retrieved
+ '''
+
+ filtered = shaman_data()
+ if not filtered:
+ return True, None
+
+ if el:
+ filterlist = [el]
+ else:
+ filterlist = ['7', '8', '9']
+ filtered = [
+ rec for rec in filtered if
+ rec['distro'] == 'centos' and
+ rec['distro_version'] in filterlist
+ ]
+
+ if ref:
+ filtered = [rec for rec in filtered if rec['ref'] == ref]
+ if sha1:
+ filtered = [rec for rec in filtered if rec['sha1'] == sha1]
+ return False, filtered
+
+
+def ref_present_in_shaman(ref, short_sha1, el, arch, verbose):
+
+ if ref is None:
+ return False
+
+ error, matches = query_shaman(ref, None, el)
+ if error:
+ print('Shaman request failed')
+ # don't cache, but claim present:
+ # avoid deletion in case of transient shaman failure
+ if verbose:
+ print('Found %s (assumed because shaman request failed)' % ref)
+ return True
+
+ for match in matches:
+ if match['sha1'][0:7] == short_sha1:
+ if verbose:
+ print('Found %s in shaman: sha1 %s' % (ref, match['sha1']))
+ return True
+ return False
+
+
+def sha1_present_in_shaman(sha1, verbose):
+
+ if sha1 in sha1_cache:
+ if verbose:
+ print('Found %s in shaman sha1_cache' % sha1)
+ return True
+
+ error, matches = query_shaman(None, sha1, None)
+ if error:
+ print('Shaman request failed')
+ # don't cache, but claim present
+ # to avoid deleting on transient shaman failure
+ if verbose:
+ print('Found %s (assuming because shaman request failed)' % sha1)
+ return True
+
+ for match in matches:
+ if match['sha1'] == sha1:
+ if verbose:
+ print('Found %s in shaman' % sha1)
+ sha1_cache.add(sha1)
+ return True
+ return False
+
+
+def delete_from_quay(tagname, quaytoken, dryrun):
+ if dryrun:
+ print('Would delete from quay:', tagname)
+ return
+
+ try:
+ response = requests.delete(
+ '/'.join((QUAYBASE, 'repository', REPO, 'tag', tagname)),
+ headers={'Authorization': 'Bearer %s' % quaytoken},
+ timeout=30,
+ )
+ response.raise_for_status()
+ print('Deleted', tagname)
+ except requests.exceptions.RequestException as e:
+ print(
+ 'Problem deleting tag:',
+ tagname,
+ e,
+ response.reason,
+ file=sys.stderr
+ )
+
+
+def parse_args():
+ parser = argparse.ArgumentParser()
+ parser.add_argument('-d', '--dryrun', action='store_true',
+ help="don't actually delete")
+ parser.add_argument('-v', '--verbose', action='store_true',
+ help="say more")
+ return parser.parse_args()
+
+
+def main():
+ args = parse_args()
+
+ quaytoken = None
+ if not args.dryrun:
+ if 'QUAYTOKEN' in os.environ:
+ quaytoken = os.environ['QUAYTOKEN']
+ else:
+ quaytoken = open(
+ os.path.join(os.environ['HOME'], '.quaytoken'),
+ 'rb'
+ ).read().strip().decode()
+
+ print('Getting ceph-ci container tags from quay.ceph.io', file=sys.stderr)
+ quaytags = get_all_quay_tags(quaytoken)
+
+ # build a map of digest to name(s) for detecting "same image"
+ digest_map = dict()
+ for tag in quaytags:
+ digest = tag['manifest_digest']
+ if digest in digest_map:
+ digest_map[digest].add(tag['name'])
+ else:
+ digest_map[digest] = set((tag['name'],))
+ if args.verbose:
+ for d, l in digest_map.items():
+ print(f'{d}: {l}')
+
+ # find all full tags to delete, put them and ref tag on list
+ tags_to_delete = set()
+ for tag in quaytags:
+ name = tag['name']
+ if 'expiration' in tag or 'end_ts' in tag:
+ if args.verbose:
+ print('Skipping deleted-or-overwritten tag %s' % name)
+ continue
+
+ ref, short_sha1, el, arch = parse_quay_tag(name)
+ if ref is None:
+ if args.verbose:
+ print(
+ 'Skipping %s, not in ref-shortsha1-el-arch form' % name
+ )
+ continue
+
+ if ref_present_in_shaman(ref, short_sha1, el, arch, args.verbose):
+ if args.verbose:
+ print('Skipping %s, present in shaman' % name)
+ continue
+
+ # accumulate full and ref tags to delete; keep list of short_sha1s
+
+ tags_to_delete.add(name)
+ if args.verbose:
+ print('Marking %s for deletion' % name)
+
+ # the ref tag may already have been overwritten by a new
+ # build of the same ref, but a different sha1, so rather than
+ # deleting the ref tag, delete any tags that refer to the same
+ # image as the full tag we have in hand
+ digest = tag['manifest_digest']
+ if digest in digest_map:
+ # remove full tag name; no point in marking for delete twice
+ # (set.add would be safe, but only report if there are new marks)
+ digest_map[digest].discard(name)
+ if digest_map[digest]:
+ tags_to_delete.update(digest_map[digest])
+ if args.verbose:
+ print(f'Also marking {digest_map[digest]}, same digest')
+
+ # now find all the full-sha1 tags to delete by making a second
+ # pass and seeing if the tagname matches SHA1_RE but is gone from
+ # shaman
+ for tag in quaytags:
+
+ name = tag['name']
+ if 'expiration' in tag or 'end_ts' in tag:
+ continue
+
+ match = SHA1_RE.match(name)
+ if match:
+ sha1 = match[1]
+ if sha1_present_in_shaman(sha1, args.verbose):
+ if args.verbose:
+ print('Skipping %s, present in shaman' % name)
+ continue
+ # <sha1>-crimson tags don't have full or ref tags to go with.
+ # Delete them iff the default <sha1> tag is to be deleted
+ if match[2] in ('-crimson', '-crimson-debug', '-crimson-release') and sha1 in tags_to_delete:
+ if args.verbose:
+ print(
+ 'Marking %s for deletion: orphaned sha1 tag' % name
+ )
+ tags_to_delete.add(name)
+
+ if args.verbose:
+ print('\nDeleting tags:', sorted(tags_to_delete))
+
+ # and now delete all the ones we found
+ for tagname in tags_to_delete:
+ delete_from_quay(tagname, quaytoken, args.dryrun)
+
+
+if __name__ == "__main__":
+ sys.exit(main())
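Everything the pruner decides hinges on ``NAME_RE`` splitting a devel tag into ``(ref, short_sha1, el, arch)``. A standalone demonstration with a made-up tag name:

```python
import re

# same pattern as in prune-quay.py
NAME_RE = re.compile(
    r'(.*)-([0-9a-f]{7})-centos-.*([0-9]+)-(x86_64|aarch64)-devel'
)

# a made-up tag in the expected ref-shortsha1-el-arch form
mo = NAME_RE.match('wip-feature-abc1234-centos-8-x86_64-devel')
ref, short_sha1, el, arch = mo.groups()
```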
--- /dev/null
+- scm:
+ name: ceph-build
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-build.git
+ branches:
+ - origin/main
+ browser-url: https://github.com/ceph/ceph-build
+ timeout: 20
+ basedir: "ceph-build"
+
+
+- job:
+ name: quay-pruner
+ node: small
+ project-type: freestyle
+ defaults: global
+ display-name: 'Quay: prune container images'
+ concurrent: true
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ retry-count: 3
+ properties:
+ - build-discarder:
+ days-to-keep: 15
+ artifact-days-to-keep: 15
+
+ triggers:
+ - timed: '@daily'
+
+ scm:
+ - ceph-build
+
+
+ builders:
+ - shell:
+ !include-raw-verbatim:
+ - ../../build/build
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - text:
+ credential-id: quay-dot-ceph-dot-io-pruner-token
+ variable: QUAYTOKEN
--- /dev/null
+#!/bin/bash
+
+# the following two methods exist in scripts/build_utils.sh
+pkgs=( "ansible" "tox" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+
+# run ansible to bring this host up to our requirements, specifying a local
+# connection and 'localhost' as the host to execute on. This might look odd
+# because we are using ceph-deploy playbooks, but the job-specific
+# requirements are the same: install different versions of Python (including
+# 2.6 and 2.7)
+#
+# These job-specific requirements aren't met by the services in charge of
+# creating Jenkins builders (mainly prado.ceph.com) because those slaves have "generic"
+# requirements and usually do not care about specific needs like Python 2.6
+
+cd "$WORKSPACE/ceph-build/ceph-deploy-pull-requests/setup/playbooks"
+$VENV/ansible-playbook -i "localhost," -c local setup.yml
+
+
+# create the build with tox
+cd $WORKSPACE/radosgw-agent
+$VENV/tox -rv
--- /dev/null
+# multiple scm requires definition of each scm with `name` so that they can be
+# referenced later in `job`
+# reference: http://docs.openstack.org/infra/jenkins-job-builder/scm.html
+- scm:
+ name: radosgw-agent
+ scm:
+ - git:
+ url: https://github.com/ceph/radosgw-agent.git
+ branches:
+ - ${{sha1}}
+ refspec: +refs/pull/*:refs/remotes/origin/pr/*
+ browser: auto
+ timeout: 20
+ skip-tag: true
+ wipe-workspace: false
+ basedir: "radosgw-agent"
+
+- scm:
+ name: ceph-build
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-build.git
+ browser-url: https://github.com/ceph/ceph-build
+ timeout: 20
+ skip-tag: true
+ wipe-workspace: false
+ basedir: "ceph-build"
+
+
+- job:
+ name: radosgw-agent-pull-requests
+ node: trusty
+ project-type: freestyle
+ defaults: global
+ display-name: 'radosgw-agent: Pull Requests'
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ retry-count: 3
+ properties:
+ - build-discarder:
+ days-to-keep: 15
+ num-to-keep: 30
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ - github:
+ url: https://github.com/ceph/radosgw-agent
+
+ parameters:
+ - string:
+ name: sha1
+ description: "A pull request ID, like 'origin/pr/72/head'"
+
+ triggers:
+ - github-pull-request:
+ org-list:
+ - ceph
+ trigger-phrase: ''
+ only-trigger-phrase: false
+ github-hooks: true
+ permit-all: false
+ auto-close-on-fail: false
+
+ scm:
+ - radosgw-agent
+ - ceph-build
+
+ builders:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/build
--- /dev/null
+#!/bin/bash
+
+set -ex
+
+HOST=$(hostname --short)
+echo "Building on ${HOST}"
+echo " DIST=${DIST}"
+echo " BPTAG=${BPTAG}"
+echo " WS=$WORKSPACE"
+echo " PWD=$(pwd)"
+echo " BRANCH=$BRANCH"
+echo " SHA1=$GIT_COMMIT"
+ls -l
+
+
+# the following two methods exist in scripts/build_utils.sh
+pkgs=( "chacractl>=0.0.21" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+# create the .chacractl config file using global variables
+make_chacractl_config
+
+# set chacra variables
+[ "$FORCE" = true ] && chacra_flags="--force" || chacra_flags=""
+[ "$TEST" = true ] && chacra_ref="test" || chacra_ref="$BRANCH"
+
+if [[ -f /etc/redhat-release || -f /usr/bin/zypper ]] ; then
+ rm -rf ./dist # Remove any previous artifacts
+ mkdir -p $WORKSPACE/dist/noarch
+ mkdir -p $WORKSPACE/dist/SRPMS
+ mkdir -p $WORKSPACE/dist/SPECS
+ mkdir -p $WORKSPACE/dist/SOURCES
+
+ # create the DISTRO and DISTRO_VERSION variables
+ # get_rpm_dist is located in scripts/build_utils.sh
+ get_rpm_dist
+
+ chacra_endpoint="radosgw-agent/${chacra_ref}/${GIT_COMMIT}/${DISTRO}/${DISTRO_VERSION}"
+
+ check_binary_existence $VENV $chacra_endpoint/noarch
+
+ suse=$(uname -n | grep -ic -e suse -e sles || true)
+ if [ $suse -gt 0 ]
+ then
+ python setup.py clean
+ python setup.py bdist_rpm
+ if [ $? -eq 0 ]
+ then
+ find dist -name "*.noarch.rpm" | $VENV/chacractl binary ${chacra_flags} create ${chacra_endpoint}/noarch
+ find dist -name "*.src.rpm" | $VENV/chacractl binary ${chacra_flags} create ${chacra_endpoint}/source
+ fi
+ else
+ python setup.py clean
+ python setup.py sdist --formats=gztar
+ rpmdev-setuptree
+ rpmdev-wipetree
+ cp -avf ./dist/*.gz $HOME/rpmbuild/SOURCES
+ cp -avf radosgw-agent.spec $WORKSPACE/dist/SPECS
+ rpmbuild -ba $WORKSPACE/dist/SPECS/radosgw-agent.spec --target noarch
+ if [ $? -ne 0 ] ; then
+ rm -Rvf $WORKSPACE/dist/${DIST}/
+ else
+ find $HOME/rpmbuild -name "*.noarch.rpm" | $VENV/chacractl binary ${chacra_flags} create ${chacra_endpoint}/noarch
+ find $HOME/rpmbuild -name "*.src.rpm" | $VENV/chacractl binary ${chacra_flags} create ${chacra_endpoint}/source
+ fi
+ fi
+else
+ # XXX MAGICAL, Fix this
+ DEB_VERSION=$(dpkg-parsechangelog | sed -rne 's,^Version: (.*),\1, p')
+ BP_VERSION=${DEB_VERSION}${BPTAG}
+ DEBEMAIL="adeza@redhat.com" dch -D $DIST --force-distribution -b -v "$BP_VERSION" "$comment"
+
+ DEB_BUILD=$(lsb_release -s -c)
+  DISTRO=$(python -c "import platform; print platform.linux_distribution()[0].lower()")
+
+ chacra_endpoint="radosgw-agent/${chacra_ref}/${GIT_COMMIT}/${DISTRO}/${DEB_BUILD}/noarch"
+
+ check_binary_existence $VENV $chacra_endpoint
+
+ dpkg-source -b .
+ # we no longer sign the .dsc or .changes files (done by default with
+ # the `-k$KEYID` flag), so explicitly tell the tool not to sign them
+ dpkg-buildpackage -uc -us
+ RC=$?
+ if [ $RC -eq 0 ] ; then
+ cd $WORKSPACE
+
+ # push binaries to chacra
+ # the binaries are created in one directory up from $WORKSPACE
+ find ../ | egrep "*\.(changes|deb|dsc|gz)$" | egrep -v "(Packages|Sources|Contents)" | $VENV/chacractl binary ${chacra_flags} create ${chacra_endpoint}
+ fi
+fi
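As a quick illustration of the backport versioning step in the Debian branch above, the following sketch shows how the version string is composed. The values here are hypothetical; in the real job `DEB_VERSION` comes from `dpkg-parsechangelog` and `BPTAG` is supplied by the Jenkins matrix.

```shell
#!/bin/bash
# Sketch of the backported Debian version string composition.
# DEB_VERSION and BPTAG below are placeholder values.
DEB_VERSION="1.2.7-1"
BPTAG="~bpo80+1"
BP_VERSION="${DEB_VERSION}${BPTAG}"
echo "$BP_VERSION"   # prints: 1.2.7-1~bpo80+1
```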
--- /dev/null
+- job:
+ name: radosgw-agent
+ node: small && trusty
+ project-type: matrix
+ defaults: global
+ display-name: 'radosgw-agent'
+ concurrent: true
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ retry-count: 3
+
+ parameters:
+ - string:
+ name: BRANCH
+ description: "The git branch or tag to build"
+ default: main
+
+ - bool:
+ name: RELEASE
+ description: "If checked, it will use the key for releases, otherwise it will use the autosign one."
+ default: true
+
+ - bool:
+ name: TEST
+ description: "
+If this is unchecked, then the builds will be pushed to chacra with the correct ref. This is the default.
+
+If this is checked, then the builds will be pushed to chacra under the 'test' ref."
+
+ - bool:
+ name: FORCE
+ description: "
+If this is unchecked, then nothing is built or pushed if the binaries already exist in chacra. This is the default.
+
+If this is checked, then the binaries will be built and pushed to chacra even if they already exist in chacra."
+ default: true
+
+ scm:
+ - git:
+ skip-tag: true
+ url: https://github.com/ceph/radosgw-agent.git
+ branches:
+ - $BRANCH
+ browser: auto
+ timeout: 20
+
+ axes:
+ - axis:
+ type: label-expression
+ name: ARCH
+ values:
+ - x86_64
+ - axis:
+ type: label-expression
+ name: DIST
+ values:
+ - wheezy
+ - precise
+ - trusty
+ - jessie
+ - centos6
+ - centos7
+
+ builders:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/build
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
--- /dev/null
+- job:
+ name: rtslib-fb-trigger
+ node: built-in
+ project-type: freestyle
+ defaults: global
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ properties:
+ - build-discarder:
+ days-to-keep: 1
+ num-to-keep: 10
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ - github:
+ url: https://github.com/ceph/rtslib-fb
+ discard-old-builds: true
+
+ triggers:
+ - github
+
+ scm:
+ - git:
+ url: https://github.com/ceph/rtslib-fb.git
+ branches:
+ - 'origin/main*'
+ - 'origin/testing*'
+ - 'origin/wip*'
+ skip-tag: true
+ timeout: 20
+ wipe-workspace: true
+
+ builders:
+ - trigger-builds:
+ - project: 'rtslib-fb'
+ predefined-parameters: |
+ BRANCH=${{GIT_BRANCH}}
+ FORCE=True
--- /dev/null
+#! /usr/bin/bash
+set -ex
+
+BRANCH=`branch_slash_filter $BRANCH`
+
+# Only do actual work when we are an RPM distro
+if test "$DISTRO" != "fedora" -a "$DISTRO" != "centos" -a "$DISTRO" != "rhel"; then
+ exit 0
+fi
+
+## Install any setup-time deps
+# We need these for the build
+sudo yum install -y python-devel epydoc python-setuptools systemd-units
+
+# We use fpm to create the package
+sudo yum install -y rubygems ruby-devel
+sudo gem install fpm
+
+
+## Get some basic information about the system and the repository
+# Get version
+get_rpm_dist
+VERSION="$(git describe --abbrev=0 --tags HEAD | sed -e 's/v//1;')"
+REVISION="$(git describe --tags HEAD | sed -e 's/v//1;' | cut -d - -f 2- | sed 's/-/./')"
+if [ "$VERSION" = "$REVISION" ]; then
+ REVISION="1"
+fi
+
+## Create the package
+# Make sure there are no other packages, first
+rm -f *.rpm
+
+# Adjust the version dependency on pyudev since EL7 doesn't have 0.16
+sed -i "s/'pyudev >=[^']*'/'pyudev >= 0.15'/" setup.py
+
+# Create the package
+fpm -s python -t rpm -n python-rtslib -v ${VERSION} --iteration ${REVISION} -d python-kmod -d python-six -d python-pyudev setup.py
+
+
+## Upload the created RPMs to chacra
+chacra_endpoint="python-rtslib/${BRANCH}/${GIT_COMMIT}/${DISTRO}/${RELEASE}"
+
+[ "$FORCE" = true ] && chacra_flags="--force" || chacra_flags=""
+
+# push binaries to chacra
+find *.rpm | $VENV/chacractl binary ${chacra_flags} create ${chacra_endpoint}/noarch/
+PACKAGE_MANAGER_VERSION=$(rpm --queryformat '%{VERSION}-%{RELEASE}\n' -qp $(find *.rpm | egrep "\.noarch\.rpm" | head -1))
+
+# write json file with build info
+cat > $WORKSPACE/repo-extra.json << EOF
+{
+ "version":"$VERSION",
+ "package_manager_version":"$PACKAGE_MANAGER_VERSION",
+ "build_url":"$BUILD_URL",
+ "root_build_cause":"$ROOT_BUILD_CAUSE",
+ "node_name":"$NODE_NAME",
+ "job_name":"$JOB_NAME"
+}
+EOF
+# post the json to repo-extra json to chacra
+curl -X POST -H "Content-Type:application/json" --data "@$WORKSPACE/repo-extra.json" -u $CHACRACTL_USER:$CHACRACTL_KEY ${chacra_url}repos/${chacra_endpoint}/extra/
+
+# start repo creation
+$VENV/chacractl repo update ${chacra_endpoint}
+
+echo Check the status of the repo at: https://shaman.ceph.com/api/repos/${chacra_endpoint}
--- /dev/null
+#! /usr/bin/bash
+#
+# Ceph distributed storage system
+#
+# Copyright (C) 2016 Red Hat <contact@redhat.com>
+#
+# Author: Boris Ranto <branto@redhat.com>
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+set -ex
+
+# Make sure we execute at the top level directory before we do anything
+cd $WORKSPACE
+
+# This will set the DISTRO and MOCK_TARGET variables
+get_distro_and_target
+
+# Perform a clean-up
+git clean -fxd
+
+# Make sure the dist directory is clean
+rm -rf dist
+mkdir -p dist
+
+# Print some basic system info
+HOST=$(hostname --short)
+echo "Building on $(hostname) with the following env"
+echo "*****"
+env
+echo "*****"
+
+export LC_ALL=C # the following is vulnerable to i18n
+
+pkgs=( "chacractl>=0.0.21" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+# ask shaman which chacra instance to use
+chacra_url=`curl -f -u $SHAMAN_API_USER:$SHAMAN_API_KEY https://shaman.ceph.com/api/nodes/next/`
+# create the .chacractl config file using global variables
+make_chacractl_config $chacra_url
--- /dev/null
+- job:
+ name: rtslib-fb
+ project-type: matrix
+ defaults: global
+ display-name: 'rtslib-fb'
+ block-downstream: false
+ block-upstream: false
+ concurrent: true
+ parameters:
+ - string:
+ name: BRANCH
+ description: "The git branch (or tag) to build"
+ default: main
+
+ - string:
+ name: DISTROS
+ description: "A list of distros to build for. Available options are: xenial, centos7, centos6, trusty-pbuilder, precise, wheezy, and jessie"
+ default: "centos7"
+
+ - string:
+ name: ARCHS
+ description: "A list of architectures to build for. Available options are: x86_64"
+ default: "x86_64"
+
+ - bool:
+ name: FORCE
+ description: "
+If this is unchecked, then nothing is built or pushed if they already exist in chacra. This is the default.
+
+If this is checked, then the binaries will be built and pushed to chacra even if they already exist in chacra."
+
+ - string:
+ name: BUILD_VIRTUALENV
+ description: "Base parent path for virtualenv locations, set to avoid issues with extremely long paths that are incompatible with tools like pip. Defaults to '/tmp/' (note the trailing slash, which is required)."
+ default: "/tmp/"
+
+ execution-strategy:
+ combination-filter: DIST==AVAILABLE_DIST && ARCH==AVAILABLE_ARCH && (ARCH=="x86_64" || (ARCH == "arm64" && (DIST == "xenial" || DIST == "centos7")))
+ axes:
+ - axis:
+ type: label-expression
+ name: MACHINE_SIZE
+ values:
+ - small
+ - axis:
+ type: label-expression
+ name: AVAILABLE_ARCH
+ values:
+ - x86_64
+ - axis:
+ type: label-expression
+ name: AVAILABLE_DIST
+ values:
+ - centos7
+ - axis:
+ type: dynamic
+ name: DIST
+ values:
+ - DISTROS
+ - axis:
+ type: dynamic
+ name: ARCH
+ values:
+ - ARCHS
+
+ scm:
+ - git:
+ url: git@github.com:ceph/rtslib-fb.git
+ # Use the SSH key attached to the ceph-jenkins GitHub account.
+ credentials-id: 'jenkins-build'
+ branches:
+ - $BRANCH
+ skip-tag: true
+ wipe-workspace: true
+
+ builders:
+ - shell: |
+ echo "Cleaning up top-level workarea (shared among workspaces)"
+ rm -rf dist
+ rm -rf venv
+ rm -rf release
+ # rpm build scripts
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/setup
+ - ../../build/build_rpm
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - text:
+ credential-id: chacractl-key
+ variable: CHACRACTL_KEY
+ - credentials-binding:
+ - text:
+ credential-id: shaman-api-key
+ variable: SHAMAN_API_KEY
--- /dev/null
+- job:
+ name: samba-trigger
+ node: built-in
+ project-type: freestyle
+ defaults: global
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ properties:
+ - build-discarder:
+ days-to-keep: 1
+ num-to-keep: 10
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ - github:
+ url: https://github.com/ceph/samba
+ discard-old-builds: true
+
+ triggers:
+ - github
+
+ scm:
+ - git:
+ url: https://github.com/ceph/samba.git
+ branches:
+ - 'origin/ceph*'
+ - 'origin/main*'
+ - 'origin/wip-*'
+ skip-tag: true
+ timeout: 20
+ wipe-workspace: true
+
+ builders:
+ - trigger-builds:
+ - project: 'samba'
+ predefined-parameters: |
+ BRANCH=${{GIT_BRANCH}}
+ FORCE=True
--- /dev/null
+#! /usr/bin/bash
+set -ex
+
+# Only do actual work when we are a DEB distro
+if test "$DISTRO" != "debian" -a "$DISTRO" != "ubuntu"; then
+ exit 0
+fi
+
+## Get the desired CEPH_BRANCH/CEPH_SHA1 ceph repo
+# Get repo file from appropriate shaman build
+REPO_URL="https://shaman.ceph.com/api/repos/ceph/$CEPH_BRANCH/$CEPH_SHA1/$DISTRO/$DIST/repo"
+TIME_LIMIT=1200
+INTERVAL=30
+REPO_FOUND=0
+
+# poll shaman for up to 20 minutes
+while [ "$SECONDS" -le "$TIME_LIMIT" ]
+do
+  if curl --fail -L ${REPO_URL} > $WORKSPACE/shaman.list; then
+    echo "Ceph debian lib repo exists in shaman"
+    REPO_FOUND=1
+    break
+  else
+    sleep $INTERVAL
+  fi
+done
+
+if [[ "$REPO_FOUND" -eq 0 ]]; then
+ echo "Ceph debian lib repo does NOT exist in shaman"
+ exit 1
+fi
+
+# Copy the repo
+sudo cp $WORKSPACE/shaman.list /etc/apt/sources.list.d/
+
+## Install any setup-time deps
+# Make sure we use the latest ceph versions, remove any old bits
+sudo apt-get remove -y libcephfs-dev || true
+sudo apt-get remove -y libcephfs2 || true
+sudo apt-get remove -y libcephfs1 || true
+sudo apt-get remove -y librados2 || true
+sudo apt-get remove -y librbd1 || true
+
+# We need these for the build
+sudo apt-get update && sudo apt-get install -y build-essential equivs libgnutls-dev libacl1-dev libldap2-dev ruby ruby-dev libcephfs-dev libpam-dev
+
+# We use fpm to create the deb package
+sudo gem install fpm
+
+
+## Do the actual build
+# Prepare the build
+DESTDIR="install.tmp"
+install -d -m0755 -- "$DESTDIR"
+./configure --without-lttng
+
+# Perform the build and install the files to DESTDIR
+NCPU=$(grep -c processor /proc/cpuinfo)
+make -j$NCPU
+make -j$NCPU install DESTDIR=${DESTDIR}
+
+
+## Get some basic information about the system and the repository
+# Get version
+export LD_LIBRARY_PATH=${DESTDIR}/usr/local/samba/lib/:${DESTDIR}/usr/local/samba/lib/private/
+DEB_ARCH=$(dpkg-architecture -qDEB_BUILD_ARCH)
+VERSION=$(${DESTDIR}/usr/local/samba/sbin/smbd --version | sed -e "s|Version ||")
+REVISION="$(git rev-parse HEAD)"
+
+
+## Create the deb package
+# Make sure there are no other deb packages, first
+rm -f *.deb
+
+# Create the deb package
+fpm -s dir -t deb -n samba -v ${VERSION} -C ${DESTDIR} -d krb5-user usr
+
+
+## Upload the created DEB to chacra
+chacra_endpoint="samba/${BRANCH}/${GIT_COMMIT}/${DISTRO}/${DIST}"
+
+[ "$FORCE" = true ] && chacra_flags="--force" || chacra_flags=""
+
+# push binaries to chacra
+find *.deb | $VENV/chacractl binary ${chacra_flags} create ${chacra_endpoint}/${ARCH}/
+PACKAGE_MANAGER_VERSION=$(dpkg-deb -f $(find *"$DEB_ARCH".deb | head -1) Version)
+
+# write json file with build info
+cat > $WORKSPACE/repo-extra.json << EOF
+{
+ "version":"$VERSION",
+ "package_manager_version":"$PACKAGE_MANAGER_VERSION",
+ "build_url":"$BUILD_URL",
+ "root_build_cause":"$ROOT_BUILD_CAUSE",
+ "node_name":"$NODE_NAME",
+ "job_name":"$JOB_NAME"
+}
+EOF
+# post the json to repo-extra json to chacra
+curl -X POST -H "Content-Type:application/json" --data "@$WORKSPACE/repo-extra.json" -u $CHACRACTL_USER:$CHACRACTL_KEY ${chacra_url}repos/${chacra_endpoint}/extra/
+
+# start repo creation
+$VENV/chacractl repo update ${chacra_endpoint}
+
+echo Check the status of the repo at: https://shaman.ceph.com/api/repos/${chacra_endpoint}
--- /dev/null
+#! /usr/bin/bash
+set -ex
+
+# Only do actual work when we are an RPM distro
+if test "$DISTRO" != "fedora" -a "$DISTRO" != "centos" -a "$DISTRO" != "rhel"; then
+ exit 0
+fi
+
+get_rpm_dist
+
+## Get the desired CEPH_BRANCH/CEPH_SHA1 ceph repo
+# Get .repo file from appropriate shaman build
+REPO_URL="https://shaman.ceph.com/api/repos/ceph/$CEPH_BRANCH/$CEPH_SHA1/$DISTRO/$RELEASE/flavors/default/repo"
+TIME_LIMIT=1200
+INTERVAL=30
+REPO_FOUND=0
+
+# poll shaman for up to 20 minutes
+while [ "$SECONDS" -le "$TIME_LIMIT" ]
+do
+  if curl --fail -L $REPO_URL > $WORKSPACE/shaman.repo; then
+    echo "Ceph repo file has been added from shaman"
+    REPO_FOUND=1
+    break
+  else
+    sleep $INTERVAL
+  fi
+done
+
+if [[ "$REPO_FOUND" -eq 0 ]]; then
+ echo "Ceph lib repo does NOT exist in shaman"
+ exit 1
+fi
+
+# Copy the repo
+sudo cp $WORKSPACE/shaman.repo /etc/yum.repos.d/
+
+## Install any setup-time deps
+# We modified the repos, clean cache first
+sudo yum clean all
+
+# We need these for the build
+sudo yum install -y gnutls-devel libacl-devel openldap-devel rubygems ruby-devel libcephfs-devel pam-devel
+
+# We use fpm to create the rpm package
+sudo gem install fpm
+
+
+## Do the actual build
+# Prepare the build
+DESTDIR="install.tmp"
+install -d -m0755 -- "$DESTDIR"
+./configure --without-lttng
+
+# Perform the build and install the files to DESTDIR
+NCPU=$(grep -c processor /proc/cpuinfo)
+make -j$NCPU
+make -j$NCPU install DESTDIR=${DESTDIR}
+
+
+## Get some basic information about the system and the repository
+# Get version
+export LD_LIBRARY_PATH=${DESTDIR}/usr/local/samba/lib/:${DESTDIR}/usr/local/samba/lib/private/
+VERSION=$(${DESTDIR}/usr/local/samba/sbin/smbd --version | sed -e "s|Version ||")
+REVISION="$(git rev-parse HEAD)"
+
+
+## Create the rpm package
+# Make sure there are no other rpm packages, first
+rm -f *.rpm
+
+# Create the rpm package
+fpm -s dir -t rpm -n samba -v ${VERSION} -C ${DESTDIR} -d krb5-user usr
+
+
+## Upload the created RPMs to chacra
+chacra_endpoint="samba/${BRANCH}/${GIT_COMMIT}/${DISTRO}/${RELEASE}"
+
+[ "$FORCE" = true ] && chacra_flags="--force" || chacra_flags=""
+
+# push binaries to chacra
+find *.rpm | $VENV/chacractl binary ${chacra_flags} create ${chacra_endpoint}/$ARCH/
+PACKAGE_MANAGER_VERSION=$(rpm --queryformat '%{VERSION}-%{RELEASE}\n' -qp $(find *.rpm | egrep "\.$ARCH\.rpm" | head -1))
+
+# write json file with build info
+cat > $WORKSPACE/repo-extra.json << EOF
+{
+ "version":"$VERSION",
+ "package_manager_version":"$PACKAGE_MANAGER_VERSION",
+ "build_url":"$BUILD_URL",
+ "root_build_cause":"$ROOT_BUILD_CAUSE",
+ "node_name":"$NODE_NAME",
+ "job_name":"$JOB_NAME"
+}
+EOF
+# post the json to repo-extra json to chacra
+curl -X POST -H "Content-Type:application/json" --data "@$WORKSPACE/repo-extra.json" -u $CHACRACTL_USER:$CHACRACTL_KEY ${chacra_url}repos/${chacra_endpoint}/extra/
+
+# start repo creation
+$VENV/chacractl repo update ${chacra_endpoint}
+
+echo Check the status of the repo at: https://shaman.ceph.com/api/repos/${chacra_endpoint}
--- /dev/null
+#! /usr/bin/bash
+#
+# Ceph distributed storage system
+#
+# Copyright (C) 2016 Red Hat <contact@redhat.com>
+#
+# Author: Boris Ranto <branto@redhat.com>
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+set -ex
+
+# Make sure we execute at the top level directory before we do anything
+cd $WORKSPACE
+
+# This will set the DISTRO and MOCK_TARGET variables
+get_distro_and_target
+
+# Perform a clean-up
+git clean -fxd
+
+# Make sure the dist directory is clean
+rm -rf dist
+mkdir -p dist
+
+# Print some basic system info
+HOST=$(hostname --short)
+echo "Building on $(hostname) with the following env"
+echo "*****"
+env
+echo "*****"
+
+export LC_ALL=C # the following is vulnerable to i18n
+
+pkgs=( "chacractl>=0.0.21" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+SAMBA_BRANCH=$(branch_slash_filter $SAMBA_BRANCH)
+CEPH_BRANCH=$(branch_slash_filter $CEPH_BRANCH)
+BRANCH=${SAMBA_BRANCH}
+
+# ask shaman which chacra instance to use
+chacra_url=`curl -f -u $SHAMAN_API_USER:$SHAMAN_API_KEY https://shaman.ceph.com/api/nodes/next/`
+# create the .chacractl config file using global variables
+make_chacractl_config $chacra_url
--- /dev/null
+- job:
+ name: samba
+ project-type: matrix
+ defaults: global
+ display-name: 'samba'
+ block-downstream: false
+ block-upstream: false
+ concurrent: true
+ parameters:
+ - string:
+ name: SAMBA_BRANCH
+ description: "The git branch (or tag) to build"
+ default: main
+
+ - string:
+ name: CEPH_SHA1
+ description: "The SHA1 of the ceph branch"
+ default: "latest"
+
+ - string:
+ name: CEPH_BRANCH
+ description: "The branch of Ceph to get the repo file of for libcephfs"
+ default: main
+
+ - string:
+ name: DISTROS
+ description: "A list of distros to build for. Available options are: centos7, centos6, bionic, xenial, trusty-pbuilder, precise, wheezy, and jessie"
+ default: "centos7 xenial bionic"
+
+ - string:
+ name: ARCHS
+ description: "A list of architectures to build for. Available options are: x86_64, and arm64"
+ default: "x86_64"
+
+ - bool:
+ name: FORCE
+ description: "
+If this is unchecked, then nothing is built or pushed if they already exist in chacra. This is the default.
+
+If this is checked, then the binaries will be built and pushed to chacra even if they already exist in chacra."
+
+ - string:
+ name: BUILD_VIRTUALENV
+ description: "Base parent path for virtualenv locations, set to avoid issues with extremely long paths that are incompatible with tools like pip. Defaults to '/tmp/' (note the trailing slash, which is required)."
+ default: "/tmp/"
+
+ execution-strategy:
+ combination-filter: DIST==AVAILABLE_DIST && ARCH==AVAILABLE_ARCH && (ARCH=="x86_64" || (ARCH == "arm64" && (DIST == "xenial" || DIST == "centos7")))
+ axes:
+ - axis:
+ type: label-expression
+ name: MACHINE_SIZE
+ values:
+ - huge
+ - axis:
+ type: label-expression
+ name: AVAILABLE_ARCH
+ values:
+ - x86_64
+ - arm64
+ - axis:
+ type: label-expression
+ name: AVAILABLE_DIST
+ values:
+ - centos7
+ - trusty-pbuilder
+ - xenial
+ - jessie
+ - axis:
+ type: dynamic
+ name: DIST
+ values:
+ - DISTROS
+ - axis:
+ type: dynamic
+ name: ARCH
+ values:
+ - ARCHS
+
+ scm:
+ - git:
+ url: git@github.com:ceph/samba.git
+ # Use the SSH key attached to the ceph-jenkins GitHub account.
+ credentials-id: 'jenkins-build'
+ branches:
+ - $SAMBA_BRANCH
+ skip-tag: true
+ wipe-workspace: true
+
+ builders:
+ - shell: |
+ echo "Cleaning up top-level workarea (shared among workspaces)"
+ rm -rf dist
+ rm -rf venv
+ rm -rf release
+ # debian build scripts
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/setup
+ - ../../build/build_deb
+ # rpm build scripts
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/setup
+ - ../../build/build_rpm
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - text:
+ credential-id: chacractl-key
+ variable: CHACRACTL_KEY
+ - credentials-binding:
+ - text:
+ credential-id: shaman-api-key
+ variable: SHAMAN_API_KEY
+
+ publishers:
+ - postbuildscript:
+ builders:
+ - role: SLAVE
+ build-on:
+ - SUCCESS
+ - UNSTABLE
+ - FAILURE
+ - ABORTED
+ - NOT_BUILT
+ build-steps:
+ - shell: "sudo rm -f /etc/apt/sources.list.d/shaman.list /etc/yum.repos.d/shaman.repo"
--- /dev/null
+#!/bin/bash
+# vim: ts=4 sw=4 expandtab
+set -ex
+PS4="\$(date --rfc-3339=seconds) + "
+
+# XXX perhaps use job parameters instead of literals; then
+# later stages can also use them to compare etc.
+# build container image that supports building crimson-osd
+if [[ $CI_CONTAINER == "true" && $DISTRO =~ centos|rocky|alma && "$RELEASE" =~ 8|9|10 ]] ; then
+ podman login -u $CONTAINER_REPO_USERNAME -p $CONTAINER_REPO_PASSWORD $CONTAINER_REPO_HOSTNAME/$CONTAINER_REPO_ORGANIZATION
+ loop=0
+ ready=false
+ while ((loop < 15)); do
+ curl -s "https://shaman.ceph.com/api/search/?project=ceph&distros=${DISTRO}/${RELEASE}/${ARCH}&sha1=${SHA1}&ref=${BRANCH}&flavor=${FLAVOR}" > shaman.status
+ if [[ ($(jq -r '.[0].status' < shaman.status) == 'ready') ]]; then
+ # If we skipped compilation, we will not have generated a shaman build,
+ # so skip validating against extra.build_url
+ if [[ ${CI_COMPILE:-true} == "false" ]]; then
+ ready=true
+ break
+ elif [[ ($(jq -r '.[0].extra.build_url' < shaman.status) == ${BUILD_URL}) ]]; then
+ ready=true
+ break
+ fi
+ fi
+ ((loop = loop + 1))
+ sleep 60
+ done
+
+ if [[ "$ready" == "false" ]] ; then
+ chacra_endpoint="ceph/${BRANCH}/${SHA1}/${DISTRO}/${RELEASE}"
+ echo "FAIL: timed out waiting for shaman repo to be built: https://shaman.ceph.com/api/repos/${chacra_endpoint}/flavors/${FLAVOR}/"
+ # don't fail the build here on purpose
+ # update_build_status "failed" "ceph" $NORMAL_DISTRO $NORMAL_DISTRO_VERSION $NORMAL_ARCH
+ # exit 1
+ fi
+ cd ${WORKSPACE}
+ # older jobs used a versioned directory; ceph-dev-pipeline uses an unversioned dir.
+ [[ -d ./dist/ceph/container ]] && cd ./dist/ceph/container || cd dist/ceph-${cephver}/container
+ from_image_spec="${DISTRO}:${RELEASE}"
+ case "${from_image_spec}" in
+ "centos:9"|"centos:stream9") FROM_IMAGE=quay.io/centos/centos:stream9 ;;
+ "rocky:10") FROM_IMAGE=docker.io/rockylinux/rockylinux:10 ;;
+ *) echo "Don't know requested FROM image ${from_image_spec}"; exit 1 ;;
+ esac
+ FROM_IMAGE=${FROM_IMAGE} CEPH_SHA1=${SHA1} ./build.sh
+fi
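The waiting logic above follows a simple poll-until-ready pattern. The sketch below reproduces it with a stub status check in place of the real shaman API call (which in the script uses curl and jq against shaman.ceph.com):

```shell
#!/bin/bash
# Minimal sketch of the poll-until-ready loop; the status check is a
# stub that reports "ready" on the third attempt.
ready=false
loop=0
while ((loop < 5)); do
    # stub in place of the curl/jq query against shaman
    status=$([ "$loop" -ge 2 ] && echo ready || echo pending)
    if [ "$status" = "ready" ]; then
        ready=true
        break
    fi
    ((loop = loop + 1))
done
echo "$ready"   # prints: true
```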
--- /dev/null
+#!/bin/bash
+# vim: ts=4 sw=4 expandtab
+
+set -ex
+
+function create_venv_dir() {
+ local venv_dir
+ venv_dir=$(mktemp -td venv.XXXXXXXXXX)
+ trap "rm -rf ${venv_dir}" EXIT
+ echo "${venv_dir}"
+}
+
+function release_from_version() {
+ local ver=$1
+ case $ver in
+ 20.*)
+ rel="tentacle"
+ ;;
+ 19.*)
+ rel="squid"
+ ;;
+ 18.*)
+ rel="reef"
+ ;;
+ 17.*)
+ rel="quincy"
+ ;;
+ 16.*)
+ rel="pacific"
+ ;;
+ 15.*)
+ rel="octopus"
+ ;;
+ 14.*)
+ rel="nautilus"
+ ;;
+ 13.*)
+ rel="mimic"
+ ;;
+ 12.*)
+ rel="luminous"
+ ;;
+ 11.*)
+ rel="kraken"
+ ;;
+ 10.*)
+ rel="jewel"
+ ;;
+ 9.*)
+ rel="infernalis"
+ ;;
+ 0.94.*)
+ rel="hammer"
+ ;;
+ 0.87.*)
+ rel="giant"
+ ;;
+ 0.80.*)
+ rel="firefly"
+ ;;
+ 0.72.*)
+ rel="emperor"
+ ;;
+ 0.67.*)
+ rel="dumpling"
+ ;;
+ *)
+ rel="unknown"
+            echo "ERROR: Unknown release for version '$ver'" >&2
+ echo $rel
+ exit 1
+ ;;
+ esac
+ echo $rel
+}
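The function above is a bash `case` statement matching version prefixes with glob patterns. A standalone sketch of the same technique, reproducing only a few of the releases:

```shell
#!/bin/bash
# Sketch of the version-to-release mapping technique: a bash case
# statement with glob patterns on the major version prefix.
version_to_release() {
    case $1 in
        19.*)   echo "squid" ;;
        18.*)   echo "reef" ;;
        0.94.*) echo "hammer" ;;
        *)      echo "unknown" ;;
    esac
}
version_to_release "18.2.4"   # prints: reef
```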
+
+
+branch_slash_filter() {
+ # The build system relies on an HTTP binary store that uses branches/refs
+ # as URL parts. A literal extra slash in the branch name is considered
+ # illegal, so this function performs a check *and* prunes the common
+ # `origin/branch-name` scenario (which is OK to have).
+ RAW_BRANCH=$1
+ branch_slashes=$(grep -o "/" <<< ${RAW_BRANCH} | wc -l)
+ FILTERED_BRANCH=`echo ${RAW_BRANCH} | rev | cut -d '/' -f 1 | rev`
+
+ # Prevent building branches that have slashes in their name
+ if [ "$((branch_slashes))" -gt 1 ] ; then
+ echo "Will refuse to build branch: ${RAW_BRANCH}"
+ echo "Invalid branch name (contains slashes): ${FILTERED_BRANCH}"
+ exit 1
+ fi
+ echo $FILTERED_BRANCH
+}
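A standalone sketch of the pruning that branch_slash_filter performs: keep only the last path component of names like `origin/main`, and refuse names with more than one slash. The `${raw##*/}` expansion is used here as a simpler equivalent of the `rev | cut | rev` pipeline:

```shell
#!/bin/bash
# Keep the last path component of a branch name; reject names
# containing more than one slash.
prune_branch() {
    local raw=$1
    local slashes
    slashes=$(grep -o "/" <<< "$raw" | wc -l)
    if [ "$slashes" -gt 1 ]; then
        echo "Will refuse to build branch: $raw" >&2
        return 1
    fi
    echo "${raw##*/}"
}
prune_branch "origin/main"   # prints: main
```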
+
+pip_download() {
+ local venv=$1
+ shift
+ local package=$1
+ shift
+ local options=$@
+ if ! $venv/pip download $options --dest="$PIP_SDIST_INDEX" $package; then
+ # pip <8.0.0 does not have "download" command
+ $venv/pip install $options \
+ --upgrade --exists-action=i --cache-dir="$PIP_SDIST_INDEX" \
+ $package
+ fi
+}
+
+create_virtualenv () {
+ local path=$1
+ if [ -d $path ]; then
+ echo "Will reuse existing virtual env: $path"
+ else
+ if command -v python3 > /dev/null; then
+ python3 -m venv $path
+ elif command -v python2.7 > /dev/null; then
+ virtualenv -p python2.7 $path
+ else
+ virtualenv -p python $path
+ fi
+ fi
+}
+
+install_python_packages_no_binary () {
+ local venv_dir=$1
+ shift
+ local venv="$venv_dir/bin"
+ # Use this function to create a virtualenv and install python packages
+ # without compiling (or using wheels). Pass a list of package names. If
+ # the virtualenv exists it will get re-used since this function can be used
+ # along with install_python_packages
+ #
+ # Usage (with pip==24.0 [the default]):
+ #
+ # to_install=( "ansible" "chacractl>=0.0.21" )
+ # install_python_packages_no_binary $TEMPVENV "to_install[@]"
+ #
+ # Usage (with pip<X.X.X [can also do ==X.X.X or !=X.X.X]):
+ #
+ # to_install=( "ansible" "chacractl>=0.0.21" )
+ # install_python_packages_no_binary $TEMPVENV "to_install[@]" "pip<X.X.X"
+ #
+ # Usage (with latest pip):
+ #
+ # to_install=( "ansible" "chacractl>=0.0.21" )
+ # install_python_packages_no_binary $TEMPVENV "to_install[@]" latest
+
+ create_virtualenv $venv_dir
+
+ # Define and ensure the PIP cache
+ PIP_SDIST_INDEX="$HOME/.cache/pip"
+ mkdir -p $PIP_SDIST_INDEX
+
+ if [ "$2" == "latest" ]; then
+ echo "Ensuring latest pip is installed"
+ $venv/pip install -U pip
+ elif [[ -n $2 && "$2" != "latest" ]]; then
+ echo "Installing $2"
+ $venv/pip install "$2"
+ else
+ # This is the default for most jobs.
+ # See ff01d2c5 and fea10f52
+ echo "Installing pip 24.0"
+ $venv/pip install "pip==24.0"
+ fi
+
+ echo "Ensuring latest wheel is installed"
+ $venv/pip install -U wheel
+
+ echo "Updating setuptools"
+    pip_download $venv setuptools
+
+ pkgs=("${!1}")
+ for package in "${pkgs[@]}"; do
+ echo $package
+ # download packages to the local pip cache
+        pip_download $venv $package --no-binary=:all:
+ # install packages from the local pip cache, ignoring pypi
+ $venv/pip install --no-binary=:all: --upgrade --exists-action=i --find-links="file://$PIP_SDIST_INDEX" --no-index $package
+ done
+}
+
+
+install_python_packages () {
+ local venv_dir=$1
+ shift
+ local venv="$venv_dir/bin"
+ # Use this function to create a virtualenv and install
+ # python packages. Pass a list of package names.
+ #
+ # Usage (with pip 24.0 [the default]):
+ #
+ # to_install=( "ansible" "chacractl>=0.0.21" )
+ # install_python_packages $TEMPVENV "to_install[@]"
+ #
+ # Usage (with pip<X.X.X [can also do ==X.X.X or !=X.X.X]):
+ #
+ # to_install=( "ansible" "chacractl>=0.0.21" )
+    #   install_python_packages $TEMPVENV "to_install[@]" "pip<X.X.X"
+ #
+ # Usage (with latest pip):
+ #
+ # to_install=( "ansible" "chacractl>=0.0.21" )
+ # install_python_packages $TEMPVENV "to_install[@]" latest
+
+ create_virtualenv $venv_dir
+
+ # Define and ensure the PIP cache
+ PIP_SDIST_INDEX="$HOME/.cache/pip"
+ mkdir -p $PIP_SDIST_INDEX
+
+ # Avoid UnicodeErrors when installing packages.
+ # See https://github.com/ceph/ceph/pull/42811
+ export LC_ALL=en_US.UTF-8
+
+ if [ "$2" == "latest" ]; then
+ echo "Ensuring latest pip is installed"
+ $venv/pip install -U pip
+ elif [[ -n $2 && "$2" != "latest" ]]; then
+ echo "Installing $2"
+ $venv/pip install "$2"
+ else
+ # This is the default for most jobs.
+ # See ff01d2c5 and fea10f52
+ echo "Installing pip 24.0"
+ $venv/pip install "pip==24.0"
+ fi
+
+ echo "Ensuring latest wheel is installed"
+ $venv/pip install -U wheel
+
+ echo "Updating setuptools"
+ pip_download $venv setuptools
+
+ pkgs=("${!1}")
+ for package in "${pkgs[@]}"; do
+ echo $package
+ # download packages to the local pip cache
+ pip_download $venv $package
+ # install packages from the local pip cache, ignoring pypi
+ $venv/pip install --upgrade --exists-action=i --find-links="file://$PIP_SDIST_INDEX" --no-index $package
+ done
+
+ # See https://tracker.ceph.com/issues/59652
+ echo "Pinning urllib3 and requests"
+ $venv/pip install "urllib3<2.0.0"
+ $venv/pip install "requests<2.30.0"
+}
+
+make_chacractl_config () {
+ # create the .chacractl config file
+ if [ -z "$1" ] # Is parameter #1 zero length?
+ then
+ url=$CHACRACTL_URL
+ else
+ url=$1
+ fi
+ cat > $HOME/.chacractl << EOF
+url = "$url"
+user = "$CHACRACTL_USER"
+key = "$CHACRACTL_KEY"
+ssl_verify = True
+EOF
+}
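For illustration, the sketch below writes the same config layout that make_chacractl_config produces, using placeholder credentials (the real values come from Jenkins credentials bindings) and a local path instead of `$HOME/.chacractl`:

```shell
#!/bin/bash
# Sketch of the chacractl config file layout, with placeholder values
# and a local example path.
url="https://chacra.example.com/"
cat > ./chacractl.example << EOF
url = "$url"
user = "example-user"
key = "example-key"
ssl_verify = True
EOF
grep -c '=' ./chacractl.example   # prints: 4
```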
+
+get_rpm_dist() {
+ # creates a DISTRO_VERSION and DISTRO global variable for
+ # use in constructing chacra urls for rpm distros
+
+ source /etc/os-release
+
+ RELEASE=$VERSION_ID
+ DISTRO=$ID
+
+ case $ID in
+ rhel)
+ DIST=rhel$RELEASE
+ ;;
+ centos)
+ DIST=el$RELEASE
+ ;;
+ fedora)
+ DIST=fc$RELEASE
+ ;;
+ sles)
+ DIST=sles$RELEASE
+ ;;
+ opensuse-leap)
+ DIST=opensuse$RELEASE
+ DISTRO=opensuse
+ ;;
+ *)
+ DIST=unknown
+ DISTRO=unknown
+ ;;
+ esac
+ DISTRO_VERSION=$RELEASE
+ echo $DIST
+}
+
+check_binary_existence () {
+ local venv=$1
+ shift
+ local url=$1
+ shift
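+
+ # Usage (illustrative; the endpoint path here is hypothetical, and
+ # $chacra_endpoint and $FORCE are expected to be set by the calling job):
+ #
+ # check_binary_existence "$venv_dir/bin" "ceph/${BRANCH}/${SHA1}/ubuntu/focal/x86_64"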
+
+ # we have to use ! here so that -e will ignore the error code for the command;
+ # because of this, the exit code is also reversed
+ ! $venv/chacractl exists binaries/${url} ; exists=$?
+
+ # if the binary already exists in chacra, do not rebuild
+ if [ $exists -eq 1 ] && [ "$FORCE" = false ] ; then
+ echo "The endpoint at ${chacra_endpoint} already exists and FORCE was not set. Exiting..."
+ exit 0
+ fi
+
+}
+
+
+submit_build_status() {
+ # A helper function to submit the status of a build to shaman using the
+ # given HTTP method
+ # 'state' can be 'started', 'completed', or 'failed'
+ # 'project' is used to post to the right url in shaman
+ http_method=$1
+ state=$2
+ project=$3
+ distro=$4
+ distro_version=$5
+ distro_arch=$6
+ cat > $WORKSPACE/build_status.json << EOF
+{
+ "extra":{
+ "version":"$vers",
+ "root_build_cause":"$ROOT_BUILD_CAUSE",
+ "node_name":"$NODE_NAME",
+ "job_name":"$JOB_NAME",
+ "build_user":"$BUILD_USER"
+ },
+ "url":"$BUILD_URL",
+ "log_url":"$BUILD_URL/consoleFull",
+ "status":"$state",
+ "distro":"$distro",
+ "distro_version":"$distro_version",
+ "distro_arch":"$distro_arch",
+ "ref":"$BRANCH",
+ "sha1":"$SHA1",
+ "flavor":"$FLAVOR"
+}
+EOF
+
+ # these variables are saved in this jenkins
+ # properties file so that other scripts
+ # in the same job can inject them
+ cat > $WORKSPACE/build_info << EOF
+NORMAL_DISTRO=$distro
+NORMAL_DISTRO_VERSION=$distro_version
+NORMAL_ARCH=$distro_arch
+SHA1=$SHA1
+EOF
+
+ SHAMAN_URL="https://shaman.ceph.com/api/builds/$project/"
+ # post the build information as JSON to shaman
+ curl -X $http_method -H "Content-Type:application/json" --data "@$WORKSPACE/build_status.json" -u $SHAMAN_API_USER:$SHAMAN_API_KEY ${SHAMAN_URL}
+
+
+}
+
+
+update_build_status() {
+ # A proxy function to POST (create or update) the status of a build
+ # in shaman for a normal/initial build
+ # 'state' can be either of: 'started', 'completed', or 'failed'
+ # 'project' is used to post to the right url in shaman
+
+ # required
+ state=$1
+ project=$2
+
+ # optional
+ distro=$3
+ distro_version=$4
+ distro_arch=$5
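+
+ # Usage (illustrative):
+ #
+ # update_build_status "started" "ceph" "ubuntu" "jammy" "x86_64"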
+
+ submit_build_status "POST" $state $project $distro $distro_version $distro_arch
+}
+
+
+failed_build_status() {
+ # A helper function to POST (create) the status of a build in shaman as
+ # a failed build. The only required argument is the 'project', so that it
+ # can be used to post to the right url in shaman
+
+ # required
+ project=$1
+
+ state="failed"
+
+ # optional
+ distro=$2
+ distro_version=$3
+ distro_arch=$4
+
+ submit_build_status "POST" $state $project $distro $distro_version $distro_arch
+}
+
+
+get_distro_and_target() {
+ # Get distro from DIST for chacra uploads
+ DISTRO=""
+ case $DIST in
+ bookworm*)
+ DIST=bookworm
+ DISTRO="debian"
+ ;;
+ bullseye*)
+ DIST=bullseye
+ DISTRO="debian"
+ ;;
+ buster*)
+ DIST=buster
+ DISTRO="debian"
+ ;;
+ stretch*)
+ DIST=stretch
+ DISTRO="debian"
+ ;;
+ jessie*)
+ DIST=jessie
+ DISTRO="debian"
+ ;;
+ wheezy*)
+ DIST=wheezy
+ DISTRO="debian"
+ ;;
+ noble*)
+ DIST=noble
+ DISTRO="ubuntu"
+ ;;
+ jammy*)
+ DIST=jammy
+ DISTRO="ubuntu"
+ ;;
+ focal*)
+ DIST=focal
+ DISTRO="ubuntu"
+ ;;
+ bionic*)
+ DIST=bionic
+ DISTRO="ubuntu"
+ ;;
+ xenial*)
+ DIST=xenial
+ DISTRO="ubuntu"
+ ;;
+ precise*)
+ DIST=precise
+ DISTRO="ubuntu"
+ ;;
+ trusty*)
+ DIST=trusty
+ DISTRO="ubuntu"
+ ;;
+ centos*)
+ source /etc/os-release
+ if [ $VERSION -ge 8 ]; then
+ MOCK_TARGET="centos-stream+epel"
+ else
+ MOCK_TARGET="epel"
+ fi
+ DISTRO="centos"
+ ;;
+ rhel*)
+ DISTRO="rhel"
+ MOCK_TARGET="epel"
+ ;;
+ fedora*)
+ DISTRO="fedora"
+ MOCK_TARGET="fedora"
+ ;;
+ *)
+ DISTRO="unknown"
+ ;;
+ esac
+}
+
+setup_updates_repo() {
+ local hookdir=$1
+
+ if [[ $NORMAL_DISTRO != ubuntu ]]; then
+ return
+ fi
+ if [[ "$ARCH" == "x86_64" ]]; then
+ cat > $hookdir/D04install-updates-repo <<EOF
+echo "deb [arch=amd64] http://us.archive.ubuntu.com/ubuntu/ $DIST-updates main restricted universe multiverse" >> /etc/apt/sources.list
+echo "deb [arch=amd64] http://us.archive.ubuntu.com/ubuntu/ $DIST-backports main restricted universe multiverse" >> /etc/apt/sources.list
+echo "deb [arch=amd64] http://security.ubuntu.com/ubuntu $DIST-security main restricted universe multiverse" >> /etc/apt/sources.list
+env DEBIAN_FRONTEND=noninteractive apt-get update -y -o Acquire::Languages=none -o Acquire::Translation=none || true
+env DEBIAN_FRONTEND=noninteractive apt-get install -y gnupg
+EOF
+ elif [[ "$ARCH" == "arm64" ]]; then
+ cat > $hookdir/D04install-updates-repo <<EOF
+echo "deb [arch=arm64] http://ports.ubuntu.com/ubuntu-ports/ $DIST-updates main restricted universe multiverse" >> /etc/apt/sources.list
+echo "deb [arch=arm64] http://ports.ubuntu.com/ubuntu-ports/ $DIST-backports main restricted universe multiverse" >> /etc/apt/sources.list
+echo "deb [arch=arm64] http://ports.ubuntu.com/ubuntu-ports/ $DIST-security main restricted universe multiverse" >> /etc/apt/sources.list
+env DEBIAN_FRONTEND=noninteractive apt-get update -y -o Acquire::Languages=none -o Acquire::Translation=none || true
+env DEBIAN_FRONTEND=noninteractive apt-get install -y gnupg
+EOF
+ fi
+ chmod +x $hookdir/D04install-updates-repo
+}
+
+recreate_hookdir() {
+ local hookdir
+ if use_ppa; then
+ hookdir=$HOME/.pbuilder/hook.d
+ else
+ hookdir=$HOME/.pbuilder/hook-old-gcc.d
+ fi
+ rm -rf $hookdir
+ mkdir -p $hookdir
+ echo $hookdir
+}
+
+setup_pbuilder() {
+ local use_gcc=$1
+
+ # This function sets up the tgz images needed for pbuilder on a given host.
+ # It has some hard-coded values like `/srv/debian-base` because everything is
+ # rebuilt every time this file is executed - completely ephemeral. Any Debian
+ # host that uses pbuilder will need this. Since it is not idempotent it makes
+ # everything a bit slower. ## FIXME ##
+
+ basedir="/srv/debian-base"
+
+ # Ensure that the basedir directory exists
+ sudo mkdir -p "$basedir"
+
+ # This used to live in a *file* on /srv/ceph-build as
+ # /srv/ceph-build/update_pbuilder.sh. Now it lives here because it doesn't
+ # make sense to keep a file in /srv/ that we then concatenate just to get its
+ # contents.
+ # By using $DIST we are narrowing down to updating only the distro image we
+ # need, unlike before where we updated everything on every server on every
+ # build.
+
+ os="debian"
+ [ "$DIST" = "precise" ] && os="ubuntu"
+ [ "$DIST" = "saucy" ] && os="ubuntu"
+ [ "$DIST" = "trusty" ] && os="ubuntu"
+ [ "$DIST" = "xenial" ] && os="ubuntu"
+ [ "$DIST" = "bionic" ] && os="ubuntu"
+ [ "$DIST" = "focal" ] && os="ubuntu"
+ [ "$DIST" = "jammy" ] && os="ubuntu"
+ [ "$DIST" = "noble" ] && os="ubuntu"
+
+ if [ $os = "debian" ]; then
+ # this mirror seems to have been decommissioned.
+ # mirror="http://www.gtlib.gatech.edu/pub/debian"
+ mirror="http://ftp.us.debian.org/debian"
+ if [ "$DIST" = "jessie" ]; then
+ # despite the fact we're building for jessie, pbuilder was failing due to
+ # missing wheezy key 8B48AD6246925553. Pointing pbuilder at the archive
+ # keyring takes care of it.
+ debootstrapopts='DEBOOTSTRAPOPTS=( "--keyring" "/usr/share/keyrings/debian-archive-keyring.gpg" )'
+ else
+ # this assumes that newer Debian releases are being added to
+ # /etc/apt/trusted.gpg that is also the default location for Ubuntu trusted
+ # keys. The Jenkins builder should ensure that the needed keys are added accordingly
+ # to this location.
+ debootstrapopts='DEBOOTSTRAPOPTS=( "--keyring" "/etc/apt/trusted.gpg" )'
+ fi
+ components='COMPONENTS="main contrib"'
+ elif [ "$ARCH" = "arm64" ]; then
+ mirror="http://ports.ubuntu.com/ubuntu-ports"
+ debootstrapopts=""
+ components='COMPONENTS="main universe"'
+ else
+ mirror="http://us.archive.ubuntu.com/ubuntu"
+ debootstrapopts=""
+ components='COMPONENTS="main universe"'
+ fi
+
+ # ensure that the tgz is valid, otherwise remove it so that it can be recreated
+ # again
+ pbuild_tar="$basedir/$DIST.tgz"
+ # pass the path as an argument so the shell does not mangle the quoting
+ is_not_tar=$(python3 -c 'import sys, tarfile; print(int(not tarfile.is_tarfile(sys.argv[1])))' "$pbuild_tar" 2>/dev/null || echo 1)
+ file_size_kb=`test -f $pbuild_tar && du -k "$pbuild_tar" | cut -f1 || echo 0`
+
+ if [ "$is_not_tar" = "1" ]; then
+ sudo rm -f "$pbuild_tar"
+ fi
+
+ if [ $file_size_kb -lt 1 ]; then
+ sudo rm -f "$pbuild_tar"
+ fi
+
+ # Ordinarily pbuilder only pulls packages from "main". ceph depends on
+ # packages like python-virtualenv which are in "universe". We have to configure
+ # pbuilder to look in "universe". Otherwise the build would fail with a message similar
+ # to:
+ # The following packages have unmet dependencies:
+ # pbuilder-satisfydepends-dummy : Depends: python-virtualenv which is a virtual package.
+ # Depends: xmlstarlet which is a virtual package.
+ # Unable to resolve dependencies! Giving up...
+ echo "$components" > ~/.pbuilderrc
+ echo "$debootstrapopts" >> ~/.pbuilderrc
+
+ # gcc 7.4.0 comes from bionic-updates, so that repo needs to be added as well
+ if [ $DIST = "bionic" ]; then
+ other_mirror="OTHERMIRROR=\"deb $mirror $DIST-updates main universe\""
+ echo "$other_mirror" >> ~/.pbuilderrc
+ fi
+
+ # In case a repo needs to be updated/added, the following packages should be installed.
+ extrapackages="software-properties-common curl"
+ echo "EXTRAPACKAGES=\"${extrapackages}\"" >> ~/.pbuilderrc
+
+ local opts
+ opts+=" --basetgz $basedir/$DIST.tgz"
+ opts+=" --distribution $DIST"
+ if [ -n "$mirror" ] ; then
+ opts+=" --mirror $mirror"
+ fi
+
+ if [ -n "$use_gcc" ]; then
+ # Newer pbuilder versions set $HOME to /nonexistent which breaks all kinds of
+ # things that rely on a proper (writable) path. Setting this to the system user's $HOME is not enough
+ # because of how pbuilder uses a chroot environment for builds, using a temporary directory here ensures
+ # that writes will be successful.
+ echo "BUILD_HOME=`mktemp -d`" >> ~/.pbuilderrc
+ # Some Ceph components will want to use cached wheels that may have older versions of buggy executables
+ # like: /usr/share/python-wheels/pip-8.1.1-py2.py3-none-any.whl which causes errors that are already fixed
+ # in newer versions. This ticket solves the specific issue in 8.1.1 (which vendors urllib3):
+ # https://github.com/shazow/urllib3/issues/567
+ echo "USENETWORK=yes" >> ~/.pbuilderrc
+ local hookdir
+ hookdir=$(recreate_hookdir)
+ if [[ "$NORMAL_DISTRO" = ubuntu ]]; then
+ setup_updates_repo $hookdir
+ setup_pbuilder_for_ppa $hookdir
+ fi
+ echo "HOOKDIR=$hookdir" >> ~/.pbuilderrc
+ fi
+
+ # As of this writing, Ceph needs about 100GB of space to build the binaries.
+ # If the Jenkins builder has 150% that amount in RAM available, let's build
+ # on a tmpfs
+ available_mem=$(awk '/MemAvailable/ { print $2 }' /proc/meminfo)
+ if [ $available_mem -gt 157286400 ]; then
+ echo "Will be building in a tmpfs"
+ sudo mkdir -p /var/cache/pbuilder/build
+ sudo mount -t tmpfs -o size=150G tmpfs /var/cache/pbuilder/build
+ echo "APTCACHEHARDLINK=no" >> ~/.pbuilderrc
+ PBUILDER_IN_TMPFS=true
+ else
+ echo "$available_mem kB of available memory is not enough to build Ceph in a tmpfs. Will use the Jenkins workspace instead"
+ PBUILDER_IN_TMPFS=false
+ fi
+
+ sudo cp ~/.pbuilderrc /root/.pbuilderrc
+
+ if [ -e $basedir/$DIST.tgz ]; then
+ echo updating $DIST base.tgz
+ sudo pbuilder update $opts \
+ --override-config
+ else
+ echo building $DIST base.tgz
+ sudo pbuilder create $opts
+ fi
+}
+
+use_ppa() {
+ # only use ppa on focal for reef+
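+ #
+ # Example (illustrative): with vers=18.2.0 and DIST=focal this returns
+ # success (true); with vers=16.2.11 it returns failure (false).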
+ case $vers in
+ 15.*) # o
+ use_ppa=false;;
+ 16.*) # p
+ use_ppa=false;;
+ 17.2*) # q
+ use_ppa=false;;
+ *)
+ case $DIST in
+ focal)
+ use_ppa=true;;
+ *)
+ use_ppa=false;;
+ esac
+ ;;
+ esac
+ $use_ppa
+}
+
+setup_gcc_hook() {
+ new=$1
+ cat <<EOF
+old=\$(gcc -dumpversion)
+if dpkg --compare-versions \$old eq $new; then
+ return
+fi
+
+case \$old in
+ 4*)
+ old=4.8;;
+ 5*)
+ old=5;;
+ 7*)
+ old=7;;
+ 8*)
+ old=8;;
+esac
+
+update-alternatives --remove-all gcc
+
+update-alternatives \
+ --install /usr/bin/gcc gcc /usr/bin/gcc-${new} 20 \
+ --slave /usr/bin/g++ g++ /usr/bin/g++-${new}
+
+update-alternatives \
+ --install /usr/bin/gcc gcc /usr/bin/gcc-\${old} 10 \
+ --slave /usr/bin/g++ g++ /usr/bin/g++-\${old}
+
+update-alternatives --auto gcc
+
+# cmake uses the arch-prefixed gcc/g++ names by default
+ln -nsf /usr/bin/gcc /usr/bin/\$(arch)-linux-gnu-gcc
+ln -nsf /usr/bin/g++ /usr/bin/\$(arch)-linux-gnu-g++
+EOF
+}
+
+setup_pbuilder_for_new_gcc() {
+ # point gcc,g++ to the newly installed ones
+ local hookdir=$1
+ shift
+ local version=$1
+ shift
+
+ # We need to add the test repo and install the new gcc after
+ # `pbuilder create|update` finishes apt-get, instead of using "extrapackages".
+ # Otherwise installing gcc would leave us with a half-configured
+ # build-essential and gcc, and the `pbuilder` command would fail, because
+ # `build-essential` depends on a certain version of gcc that has already been
+ # upgraded by the one in the test repo.
+ if [ "$ARCH" = "arm64" ]; then
+ cat > $hookdir/D05install-new-gcc <<EOF
+echo "deb [lang=none] http://ppa.launchpad.net/ubuntu-toolchain-r/test/ubuntu $DIST main" >> \
+ /etc/apt/sources.list.d/ubuntu-toolchain-r.list
+echo "deb [lang=none] http://ports.ubuntu.com/ubuntu-ports $DIST-updates main" >> \
+ /etc/apt/sources.list.d/ubuntu-toolchain-r.list
+EOF
+ elif [ "$ARCH" = "x86_64" ]; then
+ cat > $hookdir/D05install-new-gcc <<EOF
+echo "deb [lang=none] http://security.ubuntu.com/ubuntu $DIST-security main" >> \
+ /etc/apt/sources.list.d/ubuntu-toolchain-r.list
+echo "deb [lang=none] http://ppa.launchpad.net/ubuntu-toolchain-r/test/ubuntu $DIST main" >> \
+ /etc/apt/sources.list.d/ubuntu-toolchain-r.list
+echo "deb [arch=amd64 lang=none] http://mirror.nullivex.com/ppa/ubuntu-toolchain-r-test $DIST main" >> \
+ /etc/apt/sources.list.d/ubuntu-toolchain-r.list
+echo "deb [arch=amd64 lang=none] http://deb.rug.nl/ppa/mirror/ppa.launchpad.net/ubuntu-toolchain-r/test/ubuntu $DIST main" >> \
+ /etc/apt/sources.list.d/ubuntu-toolchain-r.list
+EOF
+ else
+ echo "unsupported arch: $ARCH"
+ exit 1
+ fi
+cat >> $hookdir/D05install-new-gcc <<EOF
+env DEBIAN_FRONTEND=noninteractive apt-get install -y gnupg
+cat << ENDOFKEY | apt-key add -
+-----BEGIN PGP PUBLIC KEY BLOCK-----
+Version: SKS 1.1.6
+Comment: Hostname: keyserver.ubuntu.com
+
+mI0ESuBvRwEEAMi4cDba7xlKaaoXjO1n1HX8RKrkW+HEIl79nSOSJyvzysajs7zUow/OzCQp
+9NswqrDmNuH1+lPTTRNAGtK8r2ouq2rnXT1mTl23dpgHZ9spseR73s4ZBGw/ag4bpU5dNUSt
+vfmHhIjVCuiSpNn7cyy1JSSvSs3N2mxteKjXLBf7ABEBAAG0GkxhdW5jaHBhZCBUb29sY2hh
+aW4gYnVpbGRziLYEEwECACAFAkrgb0cCGwMGCwkIBwMCBBUCCAMEFgIDAQIeAQIXgAAKCRAe
+k3eiup7yfzGKA/4xzUqNACSlB+k+DxFFHqkwKa/ziFiAlkLQyyhm+iqz80htRZr7Ls/ZRYZl
+0aSU56/hLe0V+TviJ1s8qdN2lamkKdXIAFfavA04nOnTzyIBJ82EAUT3Nh45skMxo4z4iZMN
+msyaQpNl/m/lNtOLhR64v5ZybofB2EWkMxUzX8D/FQ==
+=LcUQ
+-----END PGP PUBLIC KEY BLOCK-----
+ENDOFKEY
+# import PPA's signing key into APT's keyring
+env DEBIAN_FRONTEND=noninteractive apt-get update -y -o Acquire::Languages=none -o Acquire::Translation=none || true
+env DEBIAN_FRONTEND=noninteractive apt-get install -y g++-$version
+EOF
+
+ chmod +x $hookdir/D05install-new-gcc
+
+ setup_gcc_hook $version > $hookdir/D10update-gcc-alternatives
+ chmod +x $hookdir/D10update-gcc-alternatives
+}
+
+setup_pbuilder_for_old_gcc() {
+ # point gcc,g++ to the ones shipped by distro
+ local hookdir=$1
+ case $DIST in
+ trusty)
+ old=4.8;;
+ xenial)
+ old=5;;
+ bionic)
+ old=8;;
+ focal)
+ old=9;;
+ jammy)
+ old=11;;
+ noble)
+ old=14;;
+ esac
+
+ # make sure the requested version is installed. this isn't the case by default on noble
+ cat > $hookdir/D05install-old-gcc <<EOF
+env DEBIAN_FRONTEND=noninteractive apt-get install -y g++-$old
+EOF
+ chmod +x $hookdir/D05install-old-gcc
+
+ setup_gcc_hook $old > $hookdir/D10update-gcc-alternatives
+ chmod +x $hookdir/D10update-gcc-alternatives
+}
+
+setup_pbuilder_for_ppa() {
+ local hookdir=$1
+ if use_ppa; then
+ local gcc_ver=11
+ setup_pbuilder_for_new_gcc $hookdir $gcc_ver
+ else
+ setup_pbuilder_for_old_gcc $hookdir
+ fi
+}
+
+get_bptag() {
+ dist=$1
+
+ [ "$dist" = "sid" ] && dver=""
+ [ "$dist" = "bookworm" ] && dver="~bpo12+1"
+ [ "$dist" = "bullseye" ] && dver="~bpo11+1"
+ [ "$dist" = "buster" ] && dver="~bpo10+1"
+ [ "$dist" = "stretch" ] && dver="~bpo90+1"
+ [ "$dist" = "jessie" ] && dver="~bpo80+1"
+ [ "$dist" = "wheezy" ] && dver="~bpo70+1"
+ [ "$dist" = "squeeze" ] && dver="~bpo60+1"
+ [ "$dist" = "lenny" ] && dver="~bpo50+1"
+ [ "$dist" = "noble" ] && dver="$dist"
+ [ "$dist" = "jammy" ] && dver="$dist"
+ [ "$dist" = "focal" ] && dver="$dist"
+ [ "$dist" = "bionic" ] && dver="$dist"
+ [ "$dist" = "xenial" ] && dver="$dist"
+ [ "$dist" = "trusty" ] && dver="$dist"
+ [ "$dist" = "saucy" ] && dver="$dist"
+ [ "$dist" = "precise" ] && dver="$dist"
+ [ "$dist" = "oneiric" ] && dver="$dist"
+ [ "$dist" = "natty" ] && dver="$dist"
+ [ "$dist" = "maverick" ] && dver="$dist"
+ [ "$dist" = "lucid" ] && dver="$dist"
+ [ "$dist" = "karmic" ] && dver="$dist"
+
+ echo $dver
+}
+
+gen_debian_version() {
+ local raw=$1
+ local dist=$2
+ local bptag
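+
+ # Example (illustrative):
+ # gen_debian_version "18.2.2-1" "bookworm" -> "18.2.2-1~bpo12+1"
+ # gen_debian_version "18.2.2-1" "focal" -> "18.2.2-1focal"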
+
+ bptag=$(get_bptag $dist)
+ echo "${raw}${bptag}"
+}
+
+# Flavor Builds support
+# - CEPH_EXTRA_RPMBUILD_ARGS is consumed by build_rpms()
+# - CEPH_EXTRA_CMAKE_ARGS is consumed by debian/rules and ceph.spec directly
+# - PROFILES is consumed by build_debs()
+ceph_build_args_from_flavor() {
+ local flavor=$1
+ shift
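+
+ # Example (illustrative): ceph_build_args_from_flavor "crimson-debug" sets
+ # CEPH_EXTRA_RPMBUILD_ARGS="--with crimson" and
+ # DEB_BUILD_PROFILES="pkg.ceph.crimson", and appends
+ # "-DCMAKE_BUILD_TYPE=Debug" to CEPH_EXTRA_CMAKE_ARGS.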
+
+ # shellcheck disable=SC2034
+ case "${flavor}" in
+ default)
+ CEPH_EXTRA_RPMBUILD_ARGS="--with tcmalloc"
+ CEPH_EXTRA_CMAKE_ARGS+=" -DALLOCATOR=tcmalloc"
+ # build boost with valgrind=on for https://tracker.ceph.com/issues/56500
+ CEPH_EXTRA_CMAKE_ARGS+=" -DWITH_SYSTEM_BOOST=OFF -DWITH_BOOST_VALGRIND=ON"
+ DEB_BUILD_PROFILES=""
+ ;;
+ crimson-debug)
+ CEPH_EXTRA_RPMBUILD_ARGS="--with crimson"
+ DEB_BUILD_PROFILES="pkg.ceph.crimson"
+ CEPH_EXTRA_CMAKE_ARGS+=" -DCMAKE_BUILD_TYPE=Debug"
+ ;;
+ crimson-release)
+ CEPH_EXTRA_RPMBUILD_ARGS="--with crimson"
+ DEB_BUILD_PROFILES="pkg.ceph.crimson"
+ ;;
+ *)
+ echo "unknown FLAVOR: ${FLAVOR}" >&2
+ exit 1
+ esac
+}
+
+write_dist_files()
+{
+ cat > dist/sha1 << EOF
+SHA1=${GIT_COMMIT}
+EOF
+
+ cat > dist/branch << EOF
+BRANCH=${BRANCH}
+EOF
+
+ # - CEPH_EXTRA_RPMBUILD_ARGS are consumed by build_rpm before
+ # the switch to cmake;
+ # - CEPH_EXTRA_CMAKE_ARGS is for after cmake
+ # - DEB_BUILD_PROFILES is consumed by build_debs()
+ cat > dist/other_envvars << EOF
+CEPH_EXTRA_RPMBUILD_ARGS=${CEPH_EXTRA_RPMBUILD_ARGS}
+CEPH_EXTRA_CMAKE_ARGS=${CEPH_EXTRA_CMAKE_ARGS}
+DEB_BUILD_PROFILES=${DEB_BUILD_PROFILES}
+EOF
+}
+
+build_debs() {
+ local venv=$1
+ shift
+ local vers=$1
+ shift
+ local debian_version=$1
+ shift
+
+ # create a release directory for ceph-build tools
+ mkdir -p release
+ cp -a dist release/${vers}
+ echo $DIST > release/${vers}/debian_dists
+ echo "${debian_version}" > release/${vers}/debian_version
+
+ cd release/$vers
+
+
+ # unpack sources
+ dpkg-source -x ceph_${vers}-1.dsc
+
+ ( cd ceph-${vers}
+ DEB_VERSION=$(dpkg-parsechangelog | sed -rne 's,^Version: (.*),\1, p')
+ BP_VERSION=${DEB_VERSION}${BPTAG}
+ dch -D $DIST --force-distribution -b -v "$BP_VERSION" "$comment"
+ )
+ dpkg-source -b ceph-${vers}
+
+ echo "Building Debian"
+ cd "$WORKSPACE"
+ # Before, at this point, this script called the contents below, which were
+ # part of /srv/ceph-build/build_debs.sh. Now everything is in here, in one
+ # place, no need to checkout/clone anything. WYSIWYG::
+ #
+ # sudo $bindir/build_debs.sh ./release /srv/debian-base $vers
+
+ releasedir="./release"
+ pbuilddir="/srv/debian-base"
+ cephver=$vers
+
+ if $PBUILDER_IN_TMPFS; then
+ pbuild_build_dir="/var/cache/pbuilder/build"
+ else
+ mkdir -p $WORKSPACE/build
+ pbuild_build_dir="$WORKSPACE/build"
+ fi
+
+ echo version $cephver
+
+ echo deb vers $bpvers
+
+ echo building debs for $DIST
+
+ CEPH_EXTRA_CMAKE_ARGS="$CEPH_EXTRA_CMAKE_ARGS $(extra_cmake_args)"
+ DEB_BUILD_OPTIONS="parallel=$(get_nr_build_jobs)"
+
+ # pass only those env vars specifically noted
+ sudo \
+ CEPH_EXTRA_CMAKE_ARGS="$CEPH_EXTRA_CMAKE_ARGS" \
+ DEB_BUILD_OPTIONS="$DEB_BUILD_OPTIONS" \
+ DEB_BUILD_PROFILES="$DEB_BUILD_PROFILES" \
+ pbuilder build \
+ --distribution $DIST \
+ --basetgz $pbuilddir/$DIST.tgz \
+ --buildresult $releasedir/$cephver \
+ --use-network yes \
+ --buildplace $pbuild_build_dir \
+ $releasedir/$cephver/ceph_$bpvers.dsc
+
+ if $PBUILDER_IN_TMPFS; then
+ sudo umount $pbuild_build_dir
+ fi
+
+ # print the lintian check commands (lintian is not actually executed here)
+ echo lintian checks for $bpvers
+ echo lintian --allow-root $releasedir/$cephver/*$bpvers*.deb
+
+ [ "$FORCE" = true ] && chacra_flags="--force" || chacra_flags=""
+
+ if [ "$THROWAWAY" = false ] ; then
+ # push binaries to chacra
+ find release/$vers/ | \
+ egrep "(\.changes|\.deb|\.ddeb|\.dsc|ceph[^/]*\.gz)$" | \
+ egrep -v "(Packages|Sources|Contents)" | \
+ $venv/chacractl binary ${chacra_flags} create ${chacra_endpoint}
+
+ # extract cephadm if it exists
+ for file in release/${vers}/cephadm_${vers}*.deb; do
+ dpkg-deb --fsys-tarfile "$file" | tar -x -f - --strip-components=3 ./usr/sbin/cephadm
+ echo cephadm | $venv/chacractl binary ${chacra_flags} create ${chacra_endpoint}
+ break
+ done
+
+ # write json file with build info
+ cat > $WORKSPACE/repo-extra.json << EOF
+{
+ "version":"$vers",
+ "package_manager_version":"$bpvers",
+ "build_url":"$BUILD_URL",
+ "root_build_cause":"$ROOT_BUILD_CAUSE",
+ "node_name":"$NODE_NAME",
+ "job_name":"$JOB_NAME"
+}
+EOF
+ # post the json to repo-extra json to chacra
+ curl -X POST \
+ -H "Content-Type:application/json" \
+ --data "@$WORKSPACE/repo-extra.json" \
+ -u $CHACRACTL_USER:$CHACRACTL_KEY \
+ ${chacra_url}repos/${chacra_repo_endpoint}/extra/
+ # start repo creation
+ $venv/chacractl repo update ${chacra_repo_endpoint}
+
+ echo Check the status of the repo at: https://shaman.ceph.com/api/repos/${chacra_repo_endpoint}/
+ fi
+}
+
+extra_cmake_args() {
+ # statically link against libstdc++ for building new releases on old distros
+ if [ "${FLAVOR}" = "crimson-debug" ] || [ "${FLAVOR}" = "crimson-release" ] ; then
+ # seastar's exception hack assumes dynamic linkage against libgcc, as
+ # otherwise _Unwind_RaiseException will conflict with its counterpart
+ # defined in libgcc_eh.a when the linker comes into play. More
+ # importantly, _Unwind_RaiseException() in seastar would not be able
+ # to call the one implemented by GCC.
+ echo "-DWITH_STATIC_LIBSTDCXX=OFF"
+ elif use_ppa; then
+ echo "-DWITH_STATIC_LIBSTDCXX=ON"
+ fi
+}
+
+prune_stale_vagrant_vms() {
+ # Vagrant VMs might be stale from a previous run. Seen only with libvirt as
+ # a provider, the VMs appear in "prepare" state as reported by ``vagrant
+ # global-status``. The fix is to ensure the ``.vagrant/machines`` dir is
+ # removed, and then call the ``vagrant global-status --prune`` to clean up
+ # reporting.
+ # See: https://github.com/SUSE/pennyworth/wiki/Troubleshooting#missing-domain
+
+ # Usage examples:
+
+ # Global workspace search with extended globbing (will only look into tests directories):
+ #
+ # prune_stale_vagrant_vms $WORKSPACE/../**/tests
+ #
+ # Current workspace only:
+ #
+ # prune_stale_vagrant_vms
+
+ # Allow an optional search path, for faster searching on the global
+ # workspace
+ case "$1" in
+ *\*\**)
+ SEARCH_PATH=$1
+ ;;
+ *)
+ SEARCH_PATH="$WORKSPACE"
+ ;;
+ esac
+
+ # set extended pattern globbing
+ shopt -s globstar
+
+ # From the global workspace path, find any machine stale from other jobs
+ # A loop is required here to avoid having problems when matching too many
+ # paths in $SEARCH_PATH
+ for path in $SEARCH_PATH; do
+ sudo find $path -path "*/.vagrant/machines/*" -delete
+ done
+
+ # unset extended pattern globbing, to prevent messing up other functions
+ shopt -u globstar
+
+ # Make sure anything stale has been removed from reporting, without halting
+ # everything if it fails
+ vagrant global-status --prune || true
+}
+
+prune_stale_vagrant_running_vms() {
+ # The method of cleaning up VMs in the function above isn't aggressive enough.
+ cd $HOME
+ running_vagrant_vms=$(vagrant global-status | grep "running" | awk '{ print $1 }')
+ for uuid in $running_vagrant_vms; do
+ if ! vagrant destroy -f $uuid; then
+ echo "Destroying $uuid failed. Deleting its directory."
+ failed_path=$(vagrant global-status | grep $uuid | awk '{ print $5 }')
+ if [ -z "$failed_path" ]; then
+ echo "Didn't get a path for $uuid. Skipping."
+ else
+ if [[ $failed_path =~ $WORKSPACE ]]; then
+ echo "Skipping $failed_path. That's the current job."
+ else
+ rm -rf $failed_path
+ fi
+ fi
+ vagrant global-status --prune
+ fi
+ done
+ cd $WORKSPACE
+}
+
+delete_libvirt_vms() {
+ # Delete any VMs leftover from previous builds.
+ # Primarily used for Vagrant VMs leftover from docker builds.
+ libvirt_vms=`sudo virsh list --all --name`
+ for vm in $libvirt_vms; do
+ # Destroy returns a non-zero rc if the VM's not running
+ sudo virsh destroy $vm || true
+ sudo virsh managedsave-remove $vm || true
+ sudo virsh undefine $vm || true
+ done
+ # Clean up any leftover disk images
+ sudo find /var/lib/libvirt/images/ -type f -delete
+ sudo virsh pool-refresh default || true
+}
+
+clear_libvirt_networks() {
+ # Sometimes, networks may linger around, so we must ensure they are killed:
+ networks=`sudo virsh net-list --all --name`
+ for network in $networks; do
+ sudo virsh net-destroy $network || true
+ sudo virsh net-undefine $network || true
+ done
+}
+
+restart_libvirt_services() {
+ # restart libvirt services
+ if test -f /etc/redhat-release; then
+ sudo service libvirtd restart
+ else
+ sudo service libvirt-bin restart
+ fi
+ sudo service libvirt-guests restart
+}
+
+# Function to update vagrant boxes on static libvirt Jenkins builders used for ceph-ansible and ceph-docker testing
+update_vagrant_boxes() {
+ outdated_boxes=`vagrant box outdated --global | grep 'is outdated' | awk '{ print $2 }' | tr -d "'"`
+ if [ -n "$outdated_boxes" ]; then
+ for box in $outdated_boxes; do
+ vagrant box update --box $box
+ done
+ # Clean up old images
+ vagrant box prune
+ fi
+}
+
+start_tox() {
+ local venv_dir=$1
+ shift
+ local venv="$venv_dir/bin"
+
+ # the $SCENARIO var is injected by the job template. It maps
+ # to an actual, defined, tox environment
+ while true; do
+ case $1 in
+ CEPH_DOCKER_IMAGE_TAG=?*)
+ local ceph_docker_image_tag=${1#*=}
+ shift
+ ;;
+ RELEASE=?*)
+ local release=${1#*=}
+ shift
+ ;;
+ *)
+ break
+ ;;
+ esac
+ done
+ if [ "$release" = "dev" ]; then
+ # dev runs will need to be set to the release
+ # that matches what the current ceph main
+ # branch is at
+ local release="tentacle"
+ fi
+ TOX_RUN_ENV=("timeout 3h")
+ if [ -n "$ceph_docker_image_tag" ]; then
+ TOX_RUN_ENV=("CEPH_DOCKER_IMAGE_TAG=$ceph_docker_image_tag" "${TOX_RUN_ENV[@]}")
+ fi
+ if [ -n "$release" ]; then
+ TOX_RUN_ENV=("CEPH_STABLE_RELEASE=$release" "${TOX_RUN_ENV[@]}")
+ else
+ TOX_RUN_ENV=("CEPH_STABLE_RELEASE=$RELEASE" "${TOX_RUN_ENV[@]}")
+ fi
+
+ function build_job_name() {
+ local job_name=$1
+ shift
+ for item in "$@"; do
+ job_name="${job_name}-${item}"
+ done
+ echo "${job_name}"
+ }
+
+ # shellcheck disable=SC2153
+ ENV_NAME="$(build_job_name "$DISTRIBUTION" "$DEPLOYMENT" "$SCENARIO")"
+
+ case $SCENARIO in
+ rbdmirror)
+ TOX_INI_FILE=tox-rbdmirror.ini
+ ;;
+ update)
+ TOX_INI_FILE=tox-update.ini
+ ;;
+ podman)
+ TOX_INI_FILE=tox-podman.ini
+ ;;
+ filestore_to_bluestore)
+ TOX_INI_FILE=tox-filestore_to_bluestore.ini
+ ;;
+ docker_to_podman)
+ TOX_INI_FILE=tox-docker2podman.ini
+ ;;
+ external_clients)
+ TOX_INI_FILE=tox-external_clients.ini
+ ;;
+ cephadm)
+ TOX_INI_FILE=tox-cephadm.ini
+ ;;
+ shrink_osd_single)
+ TOX_INI_FILE=tox-shrink_osd.ini
+ ;;
+ shrink_osd_multiple)
+ TOX_INI_FILE=tox-shrink_osd.ini
+ ;;
+ subset_update)
+ TOX_INI_FILE=tox-subset_update.ini
+ ;;
+ *)
+ TOX_INI_FILE=tox.ini
+ ;;
+ esac
+
+ for tox_env in $("$venv"/tox -c "$TOX_INI_FILE" -l)
+ do
+ if [[ "$ENV_NAME" == "$tox_env" ]]; then
+ # shellcheck disable=SC2116
+ if ! eval "$(echo "${TOX_RUN_ENV[@]}")" "$venv"/tox -c "$TOX_INI_FILE" --workdir="$venv_dir" -v -e="$ENV_NAME" -- --provider=libvirt; then
+ echo "ERROR: Job didn't complete successfully or got stuck for more than 3h."
+ exit 1
+ fi
+ return 0
+ fi
+ done
+ echo "ERROR: Environment $ENV_NAME is not defined in $TOX_INI_FILE !"
+ exit 1
+}
+
+github_status_setup() {
+
+ # This job is meant to be triggered from a Github Pull Request; only when the
+ # job is executed that way do a few "special" variables become available. This
+ # build script tries to use those first, and falls back to figuring things out
+ # with Git directly so that it can still work when triggered manually.
+ SHA=$ghprbActualCommit
+ BRANCH=$ghprbSourceBranch
+
+
+ # Find out the name of the remote branch from the Pull Request. This is otherwise not
+ # made available by the plugin. Without grepping for `heads`, the output will look like:
+ #
+ # 855ce630695ed9ca53c314b7e261ec3cc499787d refs/heads/wip-volume-tests
+ if [ -z "$ghprbSourceBranch" ]; then
+ BRANCH=`git ls-remote origin | grep $GIT_PREVIOUS_COMMIT | grep heads | cut -d '/' -f 3`
+ SHA=$GIT_PREVIOUS_COMMIT
+ fi
+
+ # sometimes, $GIT_PREVIOUS_COMMIT will not help grep from ls-remote, so we fall back
+ # to looking for GIT_COMMIT (e.g. if the branch has not been rebased to be the tip)
+ if [ -z "$BRANCH" ]; then
+ BRANCH=`git ls-remote origin | grep $GIT_COMMIT | grep heads | cut -d '/' -f 3`
+ SHA=$GIT_COMMIT
+ fi
+
+ # Finally, we verify one last time to bail if nothing really worked here to determine
+ # this
+ if [ -z "$BRANCH" ]; then
+ echo "Could not determine \$BRANCH var from \$ghprbSourceBranch"
+ echo "or by using \$GIT_PREVIOUS_COMMIT and \$GIT_COMMIT"
+ exit 1
+ fi
+
+ # if ghprbActualCommit is not available, and the previous checks were not able to determine
+ # the SHA1, then the last attempt should be to try and set it to the env passed in as a parameter (GITHUB_SHA)
+ SHA="${SHA:-$GITHUB_SHA}"
+ if [ -z "$SHA" ]; then
+ echo "Could not determine \$SHA var from \$ghprbActualCommit"
+ echo "or by using \$GIT_PREVIOUS_COMMIT or \$GIT_COMMIT"
+ echo "or even looking at the \$GITHUB_SHA parameter for this job"
+ exit 1
+ fi
+
+}
+
+write_collect_logs_playbook() {
+ cat > $WORKSPACE/collect-logs.yml << EOF
+- hosts: all
+ become: yes
+ tasks:
+ - name: import_role ceph-defaults
+ import_role:
+ name: ceph-defaults
+
+ - name: import_role ceph-facts
+ import_role:
+ name: ceph-facts
+ tasks_from: container_binary.yml
+
+ - name: set_fact ceph_cmd
+ set_fact:
+ ceph_cmd: "{{ container_binary + ' run --rm --net=host -v /etc/ceph:/etc/ceph:z -v /var/lib/ceph:/var/lib/ceph:z -v /var/run/ceph:/var/run/ceph:z --entrypoint=ceph ' + ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else 'ceph' }}"
+
+ - name: get some ceph status outputs
+ command: "{{ ceph_cmd }} --connect-timeout 10 --cluster {{ cluster }} {{ item }}"
+ register: ceph_status
+ run_once: True
+ delegate_to: mon0
+ failed_when: false
+ changed_when: false
+ with_items:
+ - "-s -f json"
+ - "osd tree"
+ - "osd dump"
+ - "pg dump"
+ - "versions"
+
+ - name: save ceph status to file
+ copy:
+ content: "{{ item.stdout }}"
+ dest: "{{ archive_path }}/{{ item.item | regex_replace(' ', '_') }}.log"
+ delegate_to: localhost
+ run_once: True
+ with_items: "{{ ceph_status.results }}"
+
+ - name: find ceph config file and logs
+ find:
+ paths:
+ - /etc/ceph
+ - /var/log/ceph
+ patterns:
+ - "*.conf"
+ - "*.log"
+ register: results
+
+ - name: collect ceph config file and logs
+ fetch:
+ src: "{{ item.path }}"
+ dest: "{{ archive_path }}/{{ inventory_hostname }}/"
+ flat: yes
+ with_items: "{{ results.files }}"
+EOF
+}
+
+collect_ceph_logs() {
+ local venv=$1
+ local limit=$2
+ local collect_logs_playbook_path=$3
+ shift
+ # this is meant to be run in a testing scenario directory
+ # with running vagrant vms. the ansible playbook will connect
+ # to your test nodes and fetch any ceph logs that are present
+ # in /var/log/ceph and store them on the jenkins builder.
+ # these logs can then be archived using the JJB archive publisher
+
+ if [ -f "./vagrant_ssh_config" ]; then
+ mkdir -p $WORKSPACE/logs
+
+ if [ -z "$collect_logs_playbook_path" ]; then
+ write_collect_logs_playbook
+ else
+ # the playbook needs to be in the root directory so it can discover the roles
+ cp $collect_logs_playbook_path $WORKSPACE/collect-logs.yml
+ fi
+
+ pkgs=( "ansible" )
+ install_python_packages $TEMPVENV "pkgs[@]"
+
+ export ANSIBLE_SSH_ARGS='-F ./vagrant_ssh_config'
+ export ANSIBLE_STDOUT_CALLBACK='debug'
+ $venv/ansible-playbook -vv -i hosts --limit $limit --extra-vars "archive_path=$WORKSPACE/logs" $WORKSPACE/collect-logs.yml || true
+ fi
+}

+
+teardown_vagrant_tests() {
+ local venv=$1
+ local collect_logs_playbook_path=$2
+
+ # collect ceph logs and tear down any running vagrant vms;
+ # this also cleans up any lingering libvirt networks
+ scenarios=$(find . -name vagrant_ssh_config | xargs -r dirname)
+
+ for scenario in $scenarios; do
+ cd $scenario
+ # collect all ceph logs from all test nodes
+ collect_ceph_logs $venv all "$collect_logs_playbook_path"
+ vagrant destroy -f
+ rm -rf ./fetch
+ cd -
+ done
+
+ # Sometimes, networks may linger around, so we must ensure they are killed:
+ networks=$(sudo virsh net-list --all | grep active | grep -Ev "(default|libvirt)" | cut -d ' ' -f 2)
+ for network in $networks; do
+ sudo virsh net-destroy $network || true
+ sudo virsh net-undefine $network || true
+ done
+
+ # For when machines get stuck in state: preparing
+ # https://github.com/SUSE/pennyworth/wiki/Troubleshooting#missing-domain
+ for dir in $(sudo find $WORKSPACE | grep '.vagrant/machines'); do
+ rm -rf "$dir"/*
+ done
+
+ vagrant global-status --prune
+}
+
+get_nr_build_jobs() {
+ # assume each compiling job takes 3000 MiB memory on average when nproc <= 50
+ # otherwise, assume 4000 MiB when nproc > 50
+ # See https://tracker.ceph.com/issues/57296
+ local nproc=$(nproc)
+ if [[ $nproc -gt 50 ]]; then
+ local mib_per_job=4000
+ else
+ local mib_per_job=3000
+ fi
+ local max_build_jobs=$(vmstat --stats --unit m | \
+ grep 'total memory' | \
+ awk -v mib=$mib_per_job '{print int($1/mib)}')
+ if [[ $max_build_jobs -eq 0 ]]; then
+ # probably the system is under high load, use a safe number
+ max_build_jobs=16
+ fi
+ if [[ $nproc -ge $max_build_jobs ]]; then
+ n_build_jobs=$max_build_jobs
+ else
+ n_build_jobs=$nproc
+ fi
+ echo $n_build_jobs
+}
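The heuristic above can be checked without probing the host. A hypothetical, parameterized version (`nr_build_jobs` below is an illustrative name, not part of ceph-build) that takes the core count and total memory as arguments:

```shell
# Jobs are capped by total_mib / mib_per_job (4000 MiB per job when
# nproc > 50, else 3000), then by nproc itself, with a fallback of 16
# when the memory-derived cap comes out as zero.
nr_build_jobs() {
    local nproc=$1 total_mib=$2
    local mib_per_job=3000
    [ "$nproc" -gt 50 ] && mib_per_job=4000
    local max_build_jobs=$(( total_mib / mib_per_job ))
    [ "$max_build_jobs" -eq 0 ] && max_build_jobs=16
    if [ "$nproc" -ge "$max_build_jobs" ]; then
        echo "$max_build_jobs"
    else
        echo "$nproc"
    fi
}
```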
+
+setup_rpm_build_deps() {
+ $SUDO yum install -y yum-utils
+ if [ "$RELEASE" = 7 ]; then
+ if [ "$ARCH" = x86_64 ]; then
+ $SUDO yum install -y centos-release-scl
+ elif [ "$ARCH" = arm64 ]; then
+ $SUDO yum install -y centos-release-scl-rh
+ $SUDO yum-config-manager --disable centos-sclo-rh
+ $SUDO yum-config-manager --enable centos-sclo-rh-testing
+ fi
+ elif [ "$RELEASE" = 8 ]; then
+ # for grpc-devel
+ # See https://copr.fedorainfracloud.org/coprs/ceph/grpc/
+ $SUDO yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
+ $SUDO dnf copr enable -y ceph/grpc
+ # CentOS 8.3 renamed the PowerTools repo to lowercase, so try both spellings
+ $SUDO dnf config-manager --set-enabled PowerTools || \
+ $SUDO dnf config-manager --set-enabled powertools
+ $SUDO dnf -y module enable javapackages-tools
+ # until EPEL8 and PowerTools provide all the dependencies, pull the rest from the Sepia lab mirror
+ $SUDO dnf config-manager --add-repo http://apt-mirror.front.sepia.ceph.com/lab-extras/8/
+ $SUDO dnf config-manager --setopt=apt-mirror.front.sepia.ceph.com_lab-extras_8_.gpgcheck=0 --save
+
+ elif [ "$RELEASE" = 9 ]; then
+ $SUDO dnf -y copr enable ceph/el9
+ $SUDO dnf config-manager --set-enabled crb
+
+ $SUDO dnf -y install epel-next-release
+ $SUDO dnf -y install javapackages-tools
+ fi
+
+ DIR=/tmp/install-deps.$$
+ trap "rm -fr $DIR" EXIT
+ mkdir -p $DIR
+
+ sed -e 's/@//g' < ceph.spec.in > $DIR/ceph.spec
+
+ # enable the additional build dependencies required by the crimson flavors
+ case "${FLAVOR}" in
+ crimson-debug|crimson-release)
+ sed -i -e 's/%bcond_with crimson/%bcond_without crimson/g' $DIR/ceph.spec
+ ;;
+ esac
+
+ # Make sure the rpm macros are installed and up to date before resolving
+ # the build dependencies: python3-devel pulls in the python-rpm-macros
+ # used to identify the python-related dependencies
+ $SUDO dnf install -y python3-devel
+
+ $SUDO dnf clean all
+ $SUDO dnf builddep -y --setopt=*.skip_if_unavailable=true $DIR/ceph.spec
+}
+
+setup_rpm_build_area() {
+ local build_area=$1
+ shift
+
+ # Set up build area
+ mkdir -p ${build_area}/{SOURCES,SRPMS,SPECS,RPMS,BUILD}
+ cp -a ceph-*.tar.bz2 ${build_area}/SOURCES/.
+ cp -a ceph.spec ${build_area}/SPECS/.
+ for f in $(find rpm -maxdepth 1 -name '*.patch'); do
+ cp -a $f ${build_area}/SOURCES/.
+ done
+ ### rpm wants absolute path
+ readlink -f "$build_area"
+}
+
+build_rpms() {
+ local build_area=$1
+ shift
+ local extra_rpm_build_args=$1
+ shift
+
+ # Build RPMs
+ cd ${build_area}/SPECS
+ rpmbuild -ba --define "_topdir ${build_area}" ${extra_rpm_build_args} ceph.spec
+ echo done
+}
+
+build_ceph_release_rpm() {
+ local build_area=$1
+ shift
+ local is_dev_release=$1
+ shift
+
+ # The following was copied from autobuild-ceph/build-ceph-rpm.sh
+ # It builds the ceph-release rpm, which installs the repository file
+ # pointing at the repo that will be built and served later.
+
+ if $is_dev_release; then
+ summary="Ceph Development repository configuration"
+ project_url=${chacra_url}r/ceph/${chacra_ref}/${SHA1}/${DISTRO}/${RELEASE}/flavors/$FLAVOR/
+ epoch=0
+ repo_base_url="${chacra_url}r/ceph/${chacra_ref}/${SHA1}/${DISTRO}/${RELEASE}/flavors/${FLAVOR}"
+ gpgcheck=0
+ gpgkey=autobuild.asc
+ else
+ summary="Ceph repository configuration"
+ project_url=http://download.ceph.com/
+ epoch=1
+ repo_base_url="http://download.ceph.com/rpm-${ceph_release}/${target}"
+ gpgcheck=1
+ gpgkey=release.asc
+ fi
+ cat <<EOF > ${build_area}/SPECS/ceph-release.spec
+Name: ceph-release
+Version: 1
+Release: ${epoch}%{?dist}
+Summary: ${summary}
+Group: System Environment/Base
+License: GPLv2
+URL: ${project_url}
+Source0: ceph.repo
+#Source0: RPM-GPG-KEY-CEPH
+#Source1: ceph.repo
+BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-root-%(%{__id_u} -n)
+BuildArch: noarch
+
+%description
+This package contains the Ceph repository GPG key as well as configuration
+for yum and up2date.
+
+%prep
+
+%setup -q -c -T
+install -pm 644 %{SOURCE0} .
+#install -pm 644 %{SOURCE1} .
+
+%build
+
+%install
+rm -rf %{buildroot}
+#install -Dpm 644 %{SOURCE0} \
+# %{buildroot}/%{_sysconfdir}/pki/rpm-gpg/RPM-GPG-KEY-CEPH
+%if 0%{defined suse_version}
+install -dm 755 %{buildroot}/%{_sysconfdir}/zypp
+install -dm 755 %{buildroot}/%{_sysconfdir}/zypp/repos.d
+install -pm 644 %{SOURCE0} \
+ %{buildroot}/%{_sysconfdir}/zypp/repos.d
+%else
+install -dm 755 %{buildroot}/%{_sysconfdir}/yum.repos.d
+install -pm 644 %{SOURCE0} \
+ %{buildroot}/%{_sysconfdir}/yum.repos.d
+%endif
+
+%clean
+#rm -rf %{buildroot}
+
+%post
+
+%postun
+
+%files
+%defattr(-,root,root,-)
+#%doc GPL
+%if 0%{defined suse_version}
+/etc/zypp/repos.d/*
+%else
+/etc/yum.repos.d/*
+%endif
+#/etc/pki/rpm-gpg/*
+
+%changelog
+* Fri Aug 12 2016 Alfredo Deza <adeza@redhat.com> 1-1
+* Mon Jan 12 2015 Travis Rhoden <trhoden@redhat.com> 1-1
+- Make .repo files be %config(noreplace)
+* Sun Mar 10 2013 Gary Lowell <glowell@inktank.com> - 1-0
+- Handle both yum and zypper
+- Use URL to ceph git repo for key
+- remove config attribute from repo file
+* Mon Aug 27 2012 Gary Lowell <glowell@inktank.com> - 1-0
+- Initial Package
+EOF
+ # End of ceph-release.spec file.
+
+ # GPG Key
+ #gpg --export --armor $keyid > ${build_area}/SOURCES/RPM-GPG-KEY-CEPH
+ #chmod 644 ${build_area}/SOURCES/RPM-GPG-KEY-CEPH
+
+ # Install ceph.repo file
+ cat <<EOF > $build_area/SOURCES/ceph.repo
+[Ceph]
+name=Ceph packages for \$basearch
+baseurl=${repo_base_url}/\$basearch
+enabled=1
+gpgcheck=${gpgcheck}
+type=rpm-md
+gpgkey=https://download.ceph.com/keys/${gpgkey}
+
+[Ceph-noarch]
+name=Ceph noarch packages
+baseurl=${repo_base_url}/noarch
+enabled=1
+gpgcheck=${gpgcheck}
+type=rpm-md
+gpgkey=https://download.ceph.com/keys/${gpgkey}
+
+[ceph-source]
+name=Ceph source packages
+baseurl=${repo_base_url}/SRPMS
+enabled=1
+gpgcheck=${gpgcheck}
+type=rpm-md
+gpgkey=https://download.ceph.com/keys/${gpgkey}
+EOF
+# End of ceph.repo file
+
+ if $is_dev_release; then
+ rpmbuild -bb \
+ --define "_topdir ${build_area}" \
+ ${build_area}/SPECS/ceph-release.spec
+ else
+ # build source packages for official releases
+ rpmbuild -ba \
+ --define "_topdir ${build_area}" \
+ --define "_unpackaged_files_terminate_build 0" \
+ ${build_area}/SPECS/ceph-release.spec
+ fi
+}
+
+maybe_reset_ci_container() {
+ if ! "$CI_CONTAINER"; then
+ return
+ fi
+ if [[ "$BRANCH" =~ nautilus|octopus|pacific && "$DIST" == el7 ]]; then
+ echo "disabling CI container build for $BRANCH"
+ CI_CONTAINER=false
+ fi
+}
+
+# NOTE: These functions will only work on a Pull Request job!
+pr_only_for() {
+ # $1 is passed by reference to avoid having to call with ${array[@]} and
+ # receive by creating another local array ("$@")
+ local -n local_patterns=$1
+ local files
+ pushd .
+ # cd to ceph repo if we need to.
+ # The ceph-pr-commits job checks out ceph.git and ceph-build.git but most
+ # other jobs do not.
+ if ! [[ "$(git config --get remote.origin.url)" =~ "ceph/ceph.git" ]]; then
+ cd "$WORKSPACE/ceph"
+ fi
+ if [ -f $(git rev-parse --git-dir)/shallow ]; then
+ # We can't do a regular `git diff` in a shallow clone, so query the GitHub API for the list of changed files instead.
+ files="$(curl -s -u ${GITHUB_USER}:${GITHUB_PASS} https://api.github.com/repos/${ghprbGhRepository}/pulls/${ghprbPullId}/files | jq '.[].filename' | tr -d '"')"
+ else
+ files="$(git diff --name-only origin/${ghprbTargetBranch}...origin/pr/${ghprbPullId}/head)"
+ fi
+ popd
+ echo -e "changed files:\n$files"
+ # 0 is true, 1 is false
+ local all_match=0
+ for f in $files; do
+ local match=1
+ for p in "${local_patterns[@]}"; do
+ # pattern loop: if one pattern matches, skip the others
+ if [[ $f == $p ]]; then match=0; break; fi
+ done
+ # file loop: if this file matched no patterns, the group fails
+ # (one mismatch spoils the whole bushel)
+ if [[ $match -eq 1 ]] ; then all_match=1; break; fi
+ done
+ return $all_match
+}
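The double loop above relies on bash's `[[ $f == $p ]]` with an unquoted right-hand side, which glob-matches the pattern against the file name. A self-contained sketch of the same rule (`only_for` is an illustrative name, not a ceph-build helper): a group of patterns "wins" only if every changed file matches at least one of them.

```shell
# Standalone sketch of the pr_only_for matching rule.
only_for() {
    local -n _patterns=$1   # nameref: caller passes the array name
    shift
    local f p
    for f in "$@"; do
        local matched=1
        for p in "${_patterns[@]}"; do
            # unquoted right-hand side makes [[ == ]] treat $p as a glob
            [[ $f == $p ]] && { matched=0; break; }
        done
        # one file that matches no pattern fails the whole group
        [ "$matched" -eq 1 ] && return 1
    done
    return 0
}
```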
+
+docs_pr_only() {
+ DOCS_ONLY=false
+ local patterns=(
+ 'doc/*'
+ 'admin/*'
+ 'src/sample.ceph.conf'
+ 'CodingStyle'
+ '*.rst'
+ '*.md'
+ 'COPYING*'
+ 'README.*'
+ 'SubmittingPatches'
+ '.readthedocs.yml'
+ 'PendingReleaseNotes'
+ )
+ if pr_only_for patterns; then DOCS_ONLY=true; fi
+}
+
+container_pr_only() {
+ CONTAINER_ONLY=false
+ local patterns=(
+ 'container/*'
+ 'Dockerfile.build'
+ 'src/script/buildcontainer-setup.sh'
+ 'src/script/build-with-container.py'
+ )
+ if pr_only_for patterns; then CONTAINER_ONLY=true; fi
+}
+
+gha_pr_only () {
+ GHA_ONLY=false
+ local patterns=(
+ '.github/*'
+ )
+ if pr_only_for patterns; then GHA_ONLY=true; fi
+}
+
+qa_pr_only () {
+ QA_ONLY=false
+ local patterns=(
+ 'qa/*'
+ )
+ if pr_only_for patterns; then QA_ONLY=true; fi
+}
+
+function ssh_exec() {
+ if [[ -z $SSH_ADDRESS ]]; then
+ echo "ERROR: Env variable SSH_ADDRESS is not set"
+ exit 1
+ fi
+ if [[ -z $SSH_USER ]]; then
+ echo "ERROR: Env variable SSH_USER is not set"
+ exit 1
+ fi
+ local SSH_OPTS=""
+ if [[ ! -z $SSH_KNOWN_HOSTS_FILE ]]; then
+ SSH_OPTS="$SSH_OPTS -o UserKnownHostsFile=$SSH_KNOWN_HOSTS_FILE"
+ fi
+ timeout ${SSH_TIMEOUT:-"30s"} ssh -i ${SSH_KEY:-"$HOME/.ssh/id_rsa"} $SSH_OPTS ${SSH_USER}@${SSH_ADDRESS} "${@}" || {
+ EXIT_CODE=$?
+ # By default, the "timeout" CLI tool always exits with 124 when the
+ # timeout is exceeded. Unless "--preserve-status" argument is used, the
+ # exit code is never set to the exit code of the command that timed out.
+ if [[ $EXIT_CODE -eq 124 ]]; then
+ echo "ERROR: ssh command timed out"
+ fi
+ return $EXIT_CODE
+ }
+}
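The exit-code check above can be seen in isolation with coreutils `timeout`, no ssh involved:

```shell
# timeout exits with 124 when the time limit is hit (unless
# --preserve-status is given); the wrapped command's own exit
# code is lost in that case.
status=0
timeout 0.2 sleep 2 || status=$?
echo "exit status: $status"
```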
+
+function scp_upload() {
+ local LOCAL_FILE="$1"
+ local REMOTE_FILE="$2"
+ if [[ -z $SSH_ADDRESS ]]; then
+ echo "ERROR: Env variable SSH_ADDRESS is not set"
+ exit 1
+ fi
+ if [[ -z $SSH_USER ]]; then
+ echo "ERROR: Env variable SSH_USER is not set"
+ exit 1
+ fi
+ local SSH_OPTS=""
+ if [[ ! -z $SSH_KNOWN_HOSTS_FILE ]]; then
+ SSH_OPTS="$SSH_OPTS -o UserKnownHostsFile=$SSH_KNOWN_HOSTS_FILE"
+ fi
+ timeout ${SSH_TIMEOUT:-"10m"} scp -i ${SSH_KEY:-"$HOME/.ssh/id_rsa"} $SSH_OPTS -r $LOCAL_FILE ${SSH_USER}@${SSH_ADDRESS}:${REMOTE_FILE} || {
+ EXIT_CODE=$?
+ # By default, the "timeout" CLI tool always exits with 124 when the
+ # timeout is exceeded. Unless "--preserve-status" argument is used, the
+ # exit code is never set to the exit code of the command that timed out.
+ if [[ $EXIT_CODE -eq 124 ]]; then
+ echo "ERROR: scp upload timed out"
+ fi
+ return $EXIT_CODE
+ }
+}
+
+function scp_download() {
+ local REMOTE_FILE="$1"
+ local LOCAL_FILE="$2"
+ if [[ -z $SSH_ADDRESS ]]; then
+ echo "ERROR: Env variable SSH_ADDRESS is not set"
+ exit 1
+ fi
+ if [[ -z $SSH_USER ]]; then
+ echo "ERROR: Env variable SSH_USER is not set"
+ exit 1
+ fi
+ local SSH_OPTS=""
+ if [[ ! -z $SSH_KNOWN_HOSTS_FILE ]]; then
+ SSH_OPTS="$SSH_OPTS -o UserKnownHostsFile=$SSH_KNOWN_HOSTS_FILE"
+ fi
+ timeout ${SSH_TIMEOUT:-"10m"} scp -i ${SSH_KEY:-"$HOME/.ssh/id_rsa"} $SSH_OPTS -r ${SSH_USER}@${SSH_ADDRESS}:${REMOTE_FILE} $LOCAL_FILE || {
+ EXIT_CODE=$?
+ # By default, the "timeout" CLI tool always exits with 124 when the
+ # timeout is exceeded. Unless "--preserve-status" argument is used, the
+ # exit code is never set to the exit code of the command that timed out.
+ if [[ $EXIT_CODE -eq 124 ]]; then
+ echo "ERROR: scp download timed out"
+ fi
+ return $EXIT_CODE
+ }
+}
+
+function retrycmd_if_failure() {
+ set +o errexit
+ retries=$1
+ wait_sleep=$2
+ timeout=$3
+ shift && shift && shift
+ for i in $(seq 1 "$retries"); do
+ if timeout "$timeout" "${@}"; then
+ break
+ fi
+ if [[ $i -eq $retries ]]; then
+ echo "Error: Failed to execute '$*' after $i attempts"
+ set -o errexit
+ return 1
+ fi
+ sleep "$wait_sleep"
+ done
+ echo "Executed '$*' $i times"
+ set -o errexit
+}
+
+function set_centos_python3_version() {
+ # This function expects $1 to be a string like "python3.9"
+ local EXPECTED_PYTHON3_VERSION=$1
+ sudo dnf reinstall -y $EXPECTED_PYTHON3_VERSION || sudo dnf install -y $EXPECTED_PYTHON3_VERSION
+ sudo ln -fs /usr/bin/$EXPECTED_PYTHON3_VERSION /usr/bin/python3
+}
--- /dev/null
+#!/bin/bash
+set -x
+
+# install nvm
+if [[ ! $(command -v nvm) ]]; then
+ LATEST_NVM_VERSION=$(curl -s https://api.github.com/repos/nvm-sh/nvm/releases/latest | jq -r '.tag_name')
+ echo "Installing nvm version ${LATEST_NVM_VERSION}"
+
+ curl -fsSL https://raw.githubusercontent.com/nvm-sh/nvm/${LATEST_NVM_VERSION}/install.sh | bash
+
+ export NVM_DIR="$HOME/.nvm"
+ [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
+ [ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion"
+fi
+
+echo "Installing nodejs from nvm with version $(cat .nvmrc)"
+nvm install
+nvm use
--- /dev/null
+#!/usr/bin/env bash
+set -o errexit
+set -o pipefail
+
+
+# Cleanup libvirt VMs / networks
+delete_libvirt_vms
+clear_libvirt_networks
+
+# Cleanup remaining files / directories
+sudo rm -rf \
+ $WORKSPACE/ceph $WORKSPACE/ceph_vstart $WORKSPACE/ceph.zip \
+ $WORKSPACE/libvirt
--- /dev/null
+#!/usr/bin/env bash
+set -o errexit
+set -o pipefail
+
+if [[ -z $UBUNTU_VM_NAME ]]; then echo "ERROR: The env variable UBUNTU_VM_NAME is not set"; exit 1; fi
+
+# Destroy and undefine the VM
+sudo virsh destroy $UBUNTU_VM_NAME
+sudo virsh undefine $UBUNTU_VM_NAME --remove-all-storage
--- /dev/null
+param (
+ [Parameter(Mandatory)]
+ [string]$LogDirectory,
+ [switch]$IncludeEvtxFiles = $false,
+ [switch]$CleanupEventLog = $false
+)
+
+function SanitizeName {
+ Param(
+ [Parameter(Mandatory=$true)]
+ [string]$Name
+ )
+ return $Name.replace(" ","-").replace("/", "-").replace("\", "-")
+}
+
+function DumpEventLogEvtx {
+ Param(
+ [Parameter(Mandatory=$true)]
+ [string]$Path
+ )
+ $winEvents = Get-WinEvent -ListLog * | Where-Object { $_.RecordCount -gt 0 }
+ foreach ($i in $winEvents) {
+ $logFile = Join-Path $Path "eventlog_$(SanitizeName $i.LogName).evtx"
+
+ Write-Output "exporting '$($i.LogName)' to $logFile"
+ & $Env:WinDir\System32\wevtutil.exe epl "$($i.LogName)" $logFile
+ if ($LASTEXITCODE) {
+ Write-Output "Failed to export $($i.LogName) to $logFile"
+ }
+ }
+}
+
+function DumpEventLogTxt {
+ Param(
+ [Parameter(Mandatory=$true)]
+ [string]$Path
+ )
+ $winEvents = Get-WinEvent -ListLog * | Where-Object { $_.RecordCount -gt 0 }
+ foreach ($i in $winEvents) {
+ $logFile = Join-Path $Path "eventlog_$(SanitizeName $i.LogName).txt"
+
+ Write-Output "exporting '$($i.LogName)' to $logFile"
+ Get-WinEvent `
+ -ErrorAction "SilentlyContinue" `
+ -FilterHashtable @{
+ LogName=$i.LogName;
+ StartTime=$(Get-Date).AddHours(-6)
+ } | `
+ Format-Table -AutoSize -Wrap | Out-File -Encoding ascii -FilePath $logFile
+ }
+}
+
+function ClearEventLog {
+ $winEvents = Get-WinEvent -ListLog * | Where-Object { $_.RecordCount -gt 0 }
+ foreach ($i in $winEvents) {
+ & $Env:WinDir\System32\wevtutil.exe cl $i.LogName
+ if ($LASTEXITCODE) {
+ Write-Output "Failed to clear '$($i.LogName)' from the event log"
+ }
+ }
+}
+
+mkdir -force $LogDirectory
+
+DumpEventLogTxt $LogDirectory
+
+if ($IncludeEvtxFiles) {
+ DumpEventLogEvtx $LogDirectory
+}
+
+if ($CleanupEventLog) {
+ ClearEventLog
+}
+
+Write-Output "Finished collecting Windows event logs."
--- /dev/null
+#!/usr/bin/env bash
+set -o errexit
+set -o pipefail
+
+if [[ ! -f $WORKSPACE/ceph.zip ]]; then echo "ERROR: The Ceph Windows build zip file doesn't exist at '$WORKSPACE/ceph.zip'"; exit 1; fi
+if [[ ! -f $CEPH_WINDOWS_CONF ]]; then echo "ERROR: The Ceph Windows config file doesn't exist at '$CEPH_WINDOWS_CONF'"; exit 1; fi
+if [[ ! -f $CEPH_KEYRING ]]; then echo "ERROR: The Ceph keyring file doesn't exist at '$CEPH_KEYRING'"; exit 1; fi
+
+if [[ -z $WINDOWS_SSH_USER ]]; then echo "ERROR: The WINDOWS_SSH_USER env variable is not set"; exit 1; fi
+if [[ -z $WINDOWS_VM_IP ]]; then echo "ERROR: The WINDOWS_VM_IP env variable is not set"; exit 1; fi
+if [[ -z $UBUNTU_SSH_USER ]]; then echo "ERROR: The UBUNTU_SSH_USER env variable is not set"; exit 1; fi
+if [[ -z $UBUNTU_VM_IP ]]; then echo "ERROR: The UBUNTU_VM_IP env variable is not set"; exit 1; fi
+
+export SSH_USER=$WINDOWS_SSH_USER
+export SSH_ADDRESS=$WINDOWS_VM_IP
+
+WIN_USERSPACE_CRASH_DUMPS=${WIN_USERSPACE_CRASH_DUMPS:-"C:\\userspace_crash_dumps"}
+COLLECT_EVENT_LOGS_SCRIPT_URL="https://raw.githubusercontent.com/ceph/ceph-build/main/scripts/ceph-windows/collect-event-logs.ps1"
+
+#
+# Clone ceph-win32-tests repo
+#
+SSH_TIMEOUT=5m ssh_exec git.exe clone https://github.com/ceph/ceph-win32-tests.git /workspace/repos/ceph-win32-tests
+
+#
+# Set Windows user-space crash dumps directory
+#
+ssh_exec powershell.exe /workspace/repos/ceph-win32-tests/test_host/set_userspace_crashdump_location.ps1 -dumpDir $WIN_USERSPACE_CRASH_DUMPS
+
+#
+# Copy the ceph.conf and keyring to the Windows VM
+#
+ssh_exec powershell.exe mkdir -force /ProgramData/ceph/out
+ssh_exec powershell.exe mkdir -force /ProgramData/ceph/logs
+scp_upload $CEPH_WINDOWS_CONF /ProgramData/ceph/ceph.conf
+scp_upload $CEPH_KEYRING /ProgramData/ceph/keyring
+
+#
+# Setup the Ceph Windows build in the Windows VM
+#
+scp_upload $WORKSPACE/ceph.zip /ceph.zip
+SSH_TIMEOUT=10m ssh_exec powershell.exe "\$ProgressPreference='SilentlyContinue'; Expand-Archive -Path /ceph.zip -DestinationPath / -Force"
+ssh_exec powershell.exe "Add-MpPreference -ExclusionPath 'C:\ceph'"
+ssh_exec powershell.exe "New-Service -Name ceph-rbd -BinaryPathName 'c:\ceph\rbd-wnbd.exe service'"
+ssh_exec powershell.exe net.exe start ceph-rbd
+
+#
+# Collect artifacts on script exit
+#
+function collect_artifacts() {
+ rm -rf $WORKSPACE/artifacts
+ mkdir -p $WORKSPACE/artifacts
+ mkdir -p $WORKSPACE/artifacts/cluster
+ mkdir -p $WORKSPACE/artifacts/cluster/ceph_logs
+
+ SSH_USER=$UBUNTU_SSH_USER SSH_ADDRESS=$UBUNTU_VM_IP ssh_exec ./ceph/build/bin/ceph status
+ SSH_USER=$UBUNTU_SSH_USER SSH_ADDRESS=$UBUNTU_VM_IP ssh_exec free -h
+ SSH_USER=$UBUNTU_SSH_USER SSH_ADDRESS=$UBUNTU_VM_IP ssh_exec df -h
+ SSH_USER=$UBUNTU_SSH_USER SSH_ADDRESS=$UBUNTU_VM_IP ssh_exec \
+ "du -sh ./ceph-vstart/*"
+
+ SSH_USER=$UBUNTU_SSH_USER SSH_ADDRESS=$UBUNTU_VM_IP ssh_exec \
+ "journalctl -b > /tmp/journal"
+ SSH_USER=$UBUNTU_SSH_USER SSH_ADDRESS=$UBUNTU_VM_IP scp_download \
+ /tmp/journal $WORKSPACE/artifacts/cluster/journal
+
+ SSH_USER=$UBUNTU_SSH_USER SSH_ADDRESS=$UBUNTU_VM_IP scp_download \
+ './ceph-vstart/out/*.log' $WORKSPACE/artifacts/cluster/ceph_logs/
+
+ scp_download /workspace/test_results $WORKSPACE/artifacts/test_results
+ if [[ "$INCLUDE_USERSPACE_CRASH_DUMPS" = true ]]; then
+ scp_download /userspace_crash_dumps $WORKSPACE/artifacts/userspace_crash_dumps
+ fi
+ if [[ "$INCLUDE_CEPH_ZIP" = true ]]; then
+ cp $WORKSPACE/ceph.zip $WORKSPACE/artifacts/ceph.zip
+ fi
+
+ mkdir -p $WORKSPACE/artifacts/client
+
+ scp_download /ProgramData/ceph/logs $WORKSPACE/artifacts/client/logs
+ cp $CEPH_WINDOWS_CONF $WORKSPACE/artifacts/client
+ ssh_exec /wnbd/wnbd-client.exe version
+ ssh_exec curl.exe --retry-max-time 30 --retry 10 -L -o /workspace/collect-event-logs.ps1 $COLLECT_EVENT_LOGS_SCRIPT_URL
+ SSH_TIMEOUT=30m ssh_exec powershell.exe /workspace/collect-event-logs.ps1 -LogDirectory /workspace/eventlogs
+ scp_download /workspace/eventlogs $WORKSPACE/artifacts/client/eventlogs
+ echo "Successfully retrieved artifacts."
+}
+trap collect_artifacts EXIT
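`trap collect_artifacts EXIT` runs the collector on every exit path, whether the tests pass or a `set -o errexit` failure aborts the script. The mechanics in miniature, using a subshell so the trap fires immediately:

```shell
# The EXIT trap fires on any exit path of the (sub)shell, so "cleanup"
# is printed after "work" even though it is never called explicitly.
out=$( (trap 'echo cleanup' EXIT; echo work) )
echo "$out"
```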
+
+# View cluster status before test run
+SSH_USER=$UBUNTU_SSH_USER SSH_ADDRESS=$UBUNTU_VM_IP ssh_exec ./ceph/build/bin/ceph status
+#
+# Run the Windows tests
+#
+SSH_TIMEOUT=2h ssh_exec powershell.exe /workspace/repos/ceph-win32-tests/test_host/run_tests.ps1 -workerCount 4
+
+WIN_WORKUNITS_DIR="$WORKSPACE/ceph/qa/workunits/windows"
+LOCAL_SCRIPT_PATH="$WIN_WORKUNITS_DIR/run-tests.ps1"
+if [[ -f $LOCAL_SCRIPT_PATH ]]; then
+ echo "Using locally cloned test script: $LOCAL_SCRIPT_PATH"
+ scp_upload $WIN_WORKUNITS_DIR /workspace/workunits
+ SSH_TIMEOUT=1h ssh_exec powershell.exe -File /workspace/workunits/run-tests.ps1
+else
+ # The following is only used on Quincy, make sure to leave this in place while Quincy
+ # is still being tested.
+ REMOTE_SCRIPT_URL="https://raw.githubusercontent.com/ceph/ceph/1db85786588ad974621a6b669a3aae4e8799b1e6/qa/workunits/windows/test_rbd_wnbd.py"
+ echo "Using remote test script from: $REMOTE_SCRIPT_URL"
+ ssh_exec curl.exe -s -L -o /workspace/test_rbd_wnbd.py $REMOTE_SCRIPT_URL
+
+ SSH_TIMEOUT=30m ssh_exec python.exe /workspace/test_rbd_wnbd.py --test-name RbdTest --iterations 100
+ SSH_TIMEOUT=60m ssh_exec python.exe /workspace/test_rbd_wnbd.py --test-name RbdFioTest --iterations 100
+ SSH_TIMEOUT=30m ssh_exec python.exe /workspace/test_rbd_wnbd.py --test-name RbdStampTest --iterations 100
+
+ # Setting up the partition can take a while (~10s), so we use fewer iterations.
+ SSH_TIMEOUT=30m ssh_exec python.exe /workspace/test_rbd_wnbd.py --test-name RbdFsTest --iterations 4
+ SSH_TIMEOUT=30m ssh_exec python.exe /workspace/test_rbd_wnbd.py --test-name RbdFsFioTest --iterations 4
+ SSH_TIMEOUT=30m ssh_exec python.exe /workspace/test_rbd_wnbd.py --test-name RbdFsStampTest --iterations 4
+fi
+
+echo "Windows tests succeeded."
--- /dev/null
+#!/usr/bin/env bash
+set -o errexit
+set -o pipefail
+
+if [[ -z $UBUNTU_SSH_USER ]]; then echo "ERROR: The UBUNTU_SSH_USER env variable is not set"; exit 1; fi
+if [[ -z $UBUNTU_VM_IP ]]; then echo "ERROR: The UBUNTU_VM_IP env variable is not set"; exit 1; fi
+
+export VSTART_DIR="$WORKSPACE/ceph_vstart"
+
+USE_MEMSTORE="${USE_MEMSTORE:-no}"
+VSTART_MEMSTORE_BYTES="5368709120" # 5GB
+VSTART_BLUESTORE_BYTES="5368709120" # 5GB
+
+export SSH_USER=$UBUNTU_SSH_USER
+export SSH_ADDRESS=$UBUNTU_VM_IP
+
+mkdir -p $VSTART_DIR
+
+function rsync_cmd() {
+ rsync -a --delete -e "ssh -i $CEPH_WIN_CI_KEY -o UserKnownHostsFile=$SSH_KNOWN_HOSTS_FILE" "${@}"
+}
+
+#
+# Build Ceph vstart
+#
+cat > ${VSTART_DIR}/build-ceph-vstart.sh << EOF
+#!/usr/bin/env bash
+set -o errexit
+set -o pipefail
+
+cd ~/ceph
+./install-deps.sh
+./do_cmake.sh \
+ -DCMAKE_BUILD_TYPE=Release \
+ -DWITH_RADOSGW=OFF \
+ -DWITH_MGR_DASHBOARD_FRONTEND=OFF \
+ -DWITH_MGR=OFF \
+ -DWITH_LTTNG=OFF \
+ -DWITH_TESTS=OFF
+cd ./build
+ninja vstart
+EOF
+chmod +x ${VSTART_DIR}/build-ceph-vstart.sh
+time rsync_cmd $WORKSPACE/ceph ${VSTART_DIR}/build-ceph-vstart.sh ${UBUNTU_SSH_USER}@${UBUNTU_VM_IP}:
+
+time SSH_TIMEOUT=4h ssh_exec ./build-ceph-vstart.sh
+time SSH_TIMEOUT=10m ssh_exec sudo apt-get install -y python3-prettytable
+
+if [[ "$USE_MEMSTORE" == "yes" ]]; then
+ OBJECTSTORE_ARGS="--memstore -o memstore_device_bytes=$VSTART_MEMSTORE_BYTES"
+else
+ OBJECTSTORE_ARGS="--bluestore -o bluestore_block_size=$VSTART_BLUESTORE_BYTES"
+fi
+
+#
+# Run Ceph vstart
+#
+cat > ${VSTART_DIR}/ceph-vstart.sh << EOF
+mkdir -p \$HOME/ceph-vstart/out
+
+cd ~/ceph/build
+VSTART_DEST=\$HOME/ceph-vstart ../src/vstart.sh \
+ -n $OBJECTSTORE_ARGS \
+ --without-dashboard -i "$UBUNTU_VM_IP" \
+ 2>&1 | tee \$HOME/ceph-vstart/vstart.log
+
+export CEPH_CONF=\$HOME/ceph-vstart/ceph.conf
+export CEPH_KEYRING=\$HOME/ceph-vstart/keyring
+
+./bin/ceph osd pool create rbd
+
+./bin/ceph osd pool set cephfs.a.data size 1 --yes-i-really-mean-it
+./bin/ceph osd pool set cephfs.a.meta size 1 --yes-i-really-mean-it
+./bin/ceph osd pool set rbd size 1 --yes-i-really-mean-it
+
+./bin/ceph tell mon.\* config set debug_mon 0
+./bin/ceph tell mon.\* config set debug_ms 0
+EOF
+chmod +x ${VSTART_DIR}/ceph-vstart.sh
+
+rsync_cmd ${VSTART_DIR}/ceph-vstart.sh ${UBUNTU_SSH_USER}@${UBUNTU_VM_IP}:
+time SSH_TIMEOUT=10m ssh_exec ./ceph-vstart.sh
+
+ssh_exec sudo mkdir -p /etc/ceph
+ssh_exec sudo cp ./ceph-vstart/ceph.conf ./ceph-vstart/keyring /etc/ceph
+
+rsync_cmd ${UBUNTU_SSH_USER}@${UBUNTU_VM_IP}:./ceph-vstart/ceph.conf ${VSTART_DIR}/ceph.conf
+rsync_cmd ${UBUNTU_SSH_USER}@${UBUNTU_VM_IP}:./ceph-vstart/keyring ${VSTART_DIR}/keyring
+
+export CEPH_CONF="$VSTART_DIR/ceph.conf"
+export CEPH_KEYRING="$VSTART_DIR/keyring"
+export CEPH_WINDOWS_CONF="$VSTART_DIR/ceph-windows.conf"
+
+MON_HOST=$(grep -o "mon host =.*" $CEPH_CONF)
+
+cat > $CEPH_WINDOWS_CONF << EOF
+[client]
+ keyring = C:/ProgramData/ceph/keyring
+ log file = C:/ProgramData/ceph/logs/\$name.\$pid.log
+ admin socket = C:/ProgramData/ceph/out/\$name.\$pid.asok
+ client_mount_uid = 1000
+ client_mount_gid = 1000
+ client_permissions = true
+[global]
+ log to stderr = true
+ run dir = C:/ProgramData/ceph/out
+ crash dir = C:/ProgramData/ceph/out
+ $MON_HOST
+EOF
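The heredocs above mix both quoting behaviors: `$MON_HOST` is expanded while the config file is generated, whereas `\$name` and `\$pid` are written out literally so Ceph can expand them at runtime. A minimal demonstration (the scratch path below is illustrative only):

```shell
# With an unquoted heredoc delimiter, plain $vars expand now and
# backslash-escaped \$vars survive literally for the consumer.
MON_HOST="mon host = 192.168.122.10"
cat > /tmp/heredoc-demo.conf << EOF
log file = C:/ProgramData/ceph/logs/\$name.\$pid.log
$MON_HOST
EOF
cat /tmp/heredoc-demo.conf
```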
--- /dev/null
+#!/usr/bin/env bash
+set -o errexit
+set -o pipefail
+
+if [[ -z $CEPH_WIN_CI_KEY ]]; then echo "ERROR: The CI SSH private key secret (CEPH_WIN_CI_KEY) is not set"; exit 1; fi
+
+export LIBVIRT_DIR="$WORKSPACE/libvirt"
+
+export SSH_KEY="$CEPH_WIN_CI_KEY"
+export SSH_KNOWN_HOSTS_FILE="$LIBVIRT_DIR/known_hosts"
+
+mkdir -p $LIBVIRT_DIR
+
+function get_libvirt_vm_ssh_address() {
+ if [[ -z $VM_NAME ]]; then
+ echo "ERROR: Env variable VM_NAME is not set"
+ exit 1
+ fi
+ if [[ -z $SSH_USER ]]; then
+ echo "ERROR: Env variable SSH_USER is not set"
+ exit 1
+ fi
+
+ if ! which xmllint >/dev/null; then
+ sudo apt-get update -o Acquire::Languages=none -o Acquire::Translation=none || true
+ sudo apt-get install -y libxml2-utils
+ fi
+ if ! which jq >/dev/null; then
+ sudo apt-get update -o Acquire::Languages=none -o Acquire::Translation=none || true
+ sudo apt-get install -y jq
+ fi
+
+ sudo virsh dumpxml $VM_NAME > $LIBVIRT_DIR/$VM_NAME.xml
+ local VM_NIC_MAC_ADDRESS=`xmllint --xpath 'string(/domain/devices/interface/mac/@address)' $LIBVIRT_DIR/$VM_NAME.xml`
+ rm $LIBVIRT_DIR/$VM_NAME.xml
+
+ local TIMEOUT=${TIMEOUT:-600}
+ local SLEEP_SECS=${SLEEP_SECS:-10}
+
+ SECONDS=0
+ while true; do
+ if [[ $SECONDS -gt $TIMEOUT ]]; then
+ >&2 echo "Timeout waiting for the VM to start"
+ return 1
+ fi
+ # Get the VM NIC IP address from the "default" virsh network
+ VM_IP=$(sudo virsh qemu-agent-command $VM_NAME '{"execute":"guest-network-get-interfaces"}' | jq -r ".return[] | select(.\"hardware-address\"==\"${VM_NIC_MAC_ADDRESS}\") | .\"ip-addresses\"[] | select(.\"ip-address\" | startswith(\"192.168.122.\")) | .\"ip-address\"") || {
+ >&2 echo "Retrying in $SLEEP_SECS seconds"
+ sleep $SLEEP_SECS
+ continue
+ }
+ if [[ -z $VM_IP ]]; then
+ >&2 echo "Cannot find the VM IP address. Retrying in $SLEEP_SECS seconds"
+ sleep $SLEEP_SECS
+ continue
+ fi
+ ssh-keyscan -H $VM_IP &> ${LIBVIRT_DIR}/${VM_NAME}_known_hosts || {
+ >&2 echo "SSH is not reachable yet"
+ sleep $SLEEP_SECS
+ continue
+ }
+ SSH_ADDRESS=$VM_IP SSH_KNOWN_HOSTS_FILE=${LIBVIRT_DIR}/${VM_NAME}_known_hosts ssh_exec hostname 1>&2 || {
+ >&2 echo "Cannot execute SSH commands yet"
+ sleep $SLEEP_SECS
+ continue
+ }
+ break
+ done
+ cat ${LIBVIRT_DIR}/${VM_NAME}_known_hosts >> $SSH_KNOWN_HOSTS_FILE
+ rm ${LIBVIRT_DIR}/${VM_NAME}_known_hosts
+ echo $VM_IP
+}
+
+#
+# Setup requirements (if needed)
+#
+sudo apt-get update -o Acquire::Languages=none -o Acquire::Translation=none || true
+sudo apt-get install -y libvirt-daemon-system virtinst cloud-image-utils qemu-kvm
+
+# Ensure that the libvirt socket is available, otherwise virsh commands will fail.
+# If missing, we'll restart the appropriate services.
+if [[ ! -S /var/run/libvirt/libvirt-sock ]]; then
+ sudo systemctl stop libvirtd.socket
+ sudo systemctl restart libvirtd
+
+ sudo systemctl status libvirtd.socket
+ sudo systemctl status libvirtd
+fi
+
+if ! sudo virsh net-info default &>/dev/null; then
+ cat << EOF > $LIBVIRT_DIR/default-net.xml
+<network>
+ <name>default</name>
+ <bridge name="virbr0"/>
+ <forward mode="nat"/>
+ <ip address="192.168.122.1" netmask="255.255.255.0">
+ <dhcp>
+ <range start="192.168.122.2" end="192.168.122.254"/>
+ </dhcp>
+ </ip>
+</network>
+EOF
+ sudo virsh net-define $LIBVIRT_DIR/default-net.xml
+ sudo virsh net-start default
+ sudo virsh net-autostart default
+ rm $LIBVIRT_DIR/default-net.xml
+fi
--- /dev/null
+#!/usr/bin/env bash
+set -o errexit
+set -o pipefail
+
+if [[ -z $LIBVIRT_DIR ]]; then echo "ERROR: The env variable LIBVIRT_DIR is not set"; exit 1; fi
+
+nproc=$(nproc)
+# Cap the VM vCPUs (and therefore the number of parallel build jobs) to
+# avoid using excessive amounts of memory.
+DEFAULT_UBUNTU_VM_VCPUS=$((nproc > 16 ? 16 : nproc))
+
+export UBUNTU_VM_IMAGE_URL=${UBUNTU_VM_IMAGE_URL:-"https://cloud-images.ubuntu.com/minimal/releases/jammy/release/ubuntu-22.04-minimal-cloudimg-amd64.img"}
+export UBUNTU_VM_NAME=${UBUNTU_VM_NAME:-"ceph-ubuntu-vstart-${JOB_NAME}-${BUILD_ID}"}
+export UBUNTU_VM_VCPUS=${UBUNTU_VM_VCPUS:-$DEFAULT_UBUNTU_VM_VCPUS}
+export UBUNTU_VM_MEMORY=${UBUNTU_VM_MEMORY:-"16384"} # 16 GB
+export UBUNTU_SSH_USER="ubuntu"
+
+#
+# Setup the Ubuntu VM to run Ceph vstart
+#
+mkdir -p $LIBVIRT_DIR
+echo "Downloading VM image from $UBUNTU_VM_IMAGE_URL"
+curl -s -L $UBUNTU_VM_IMAGE_URL -o ${LIBVIRT_DIR}/ceph-ubuntu-vstart.qcow2
+qemu-img resize ${LIBVIRT_DIR}/ceph-ubuntu-vstart.qcow2 128G
+
+cat > ${LIBVIRT_DIR}/metadata.yaml << EOF
+instance-id: ceph-ubuntu-vstart
+local-hostname: ceph-ubuntu-vstart.local
+locale: en_US
+EOF
+
+cat > ${LIBVIRT_DIR}/user-data.yaml << EOF
+#cloud-config
+
+ssh_authorized_keys:
+ - $(ssh-keygen -y -f $SSH_KEY)
+
+packages_update: true
+packages:
+ - qemu-guest-agent
+ - locales
+ - rsync
+ - jq
+ - python3-bcrypt
+
+runcmd:
+ - [localedef, -i, en_US, -c, -f, UTF-8, -A, /usr/share/locale/locale.alias, en_US.UTF-8]
+ - [systemctl, start, qemu-guest-agent]
+EOF
+
+cloud-localds ${LIBVIRT_DIR}/config-drive.img ${LIBVIRT_DIR}/user-data.yaml ${LIBVIRT_DIR}/metadata.yaml
+
+sudo virt-install \
+ --name $UBUNTU_VM_NAME \
+ --os-variant ubuntu22.04 \
+ --boot hd \
+ --virt-type kvm \
+ --graphics spice \
+ --cpu host \
+ --vcpus $UBUNTU_VM_VCPUS \
+ --memory $UBUNTU_VM_MEMORY \
+ --disk ${LIBVIRT_DIR}/ceph-ubuntu-vstart.qcow2,bus=virtio \
+ --disk ${LIBVIRT_DIR}/config-drive.img,bus=virtio \
+ --network network=default,model=virtio \
+ --controller type=virtio-serial \
+ --channel unix,target_type=virtio,name=org.qemu.guest_agent.0 \
+ --noautoconsole
+
+#
+# Get the VM SSH address
+#
+export UBUNTU_VM_IP=$(VM_NAME=$UBUNTU_VM_NAME SSH_USER=$UBUNTU_SSH_USER get_libvirt_vm_ssh_address)
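The `get_libvirt_vm_ssh_address` helper used above is defined elsewhere in this repo. A minimal sketch of just the address-extraction step, assuming `virsh domifaddr`-style output (the function name and parsing here are illustrative, not this repo's exact implementation):

```shell
# Hypothetical sketch: pull the first IPv4 address out of
# `virsh domifaddr <vm>`-style output. The real helper in this repo
# also waits for the VM to boot and verifies SSH connectivity.
parse_domifaddr_ipv4() {
    # Expects domifaddr output on stdin; prints the bare address.
    awk '/ipv4/ { split($4, a, "/"); print a[1]; exit }'
}
```

For example: `virsh domifaddr $UBUNTU_VM_NAME | parse_domifaddr_ipv4`.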
--- /dev/null
+#!/usr/bin/env bash
+set -o errexit
+set -o pipefail
+
+if [[ -z $LIBVIRT_DIR ]]; then echo "ERROR: The env variable LIBVIRT_DIR is not set"; exit 1; fi
+
+export WINDOWS_VM_IMAGE_URL=${WINDOWS_VM_IMAGE_URL:-"https://filedump.ceph.com/windows/ceph-win-ltsc2019-ci-image.qcow2"}
+export WINDOWS_VM_NAME=${WINDOWS_VM_NAME:-"ceph-windows-client-${JOB_NAME}-${BUILD_ID}"}
+export WINDOWS_VM_VCPUS="8"
+export WINDOWS_VM_MEMORY="8192" # 8GB
+export WINDOWS_SSH_USER="administrator"
+
+#
+# Setup the Windows VM to run Ceph client
+#
+mkdir -p $LIBVIRT_DIR
+echo "Downloading VM image from $WINDOWS_VM_IMAGE_URL"
+curl -s -L $WINDOWS_VM_IMAGE_URL -o ${LIBVIRT_DIR}/ceph-windows-client.qcow2
+
+sudo virt-install \
+ --name $WINDOWS_VM_NAME \
+ --os-variant win2k19 \
+ --boot hd \
+ --virt-type kvm \
+ --graphics spice \
+ --cpu host \
+ --vcpus $WINDOWS_VM_VCPUS \
+ --memory $WINDOWS_VM_MEMORY \
+ --disk ${LIBVIRT_DIR}/ceph-windows-client.qcow2,bus=virtio \
+ --network network=default,model=virtio \
+ --controller type=virtio-serial \
+ --channel unix,target_type=virtio,name=org.qemu.guest_agent.0 \
+ --noautoconsole
+
+#
+# Get the VM SSH address
+#
+export WINDOWS_VM_IP=$(VM_NAME=$WINDOWS_VM_NAME SSH_USER=$WINDOWS_SSH_USER get_libvirt_vm_ssh_address)
--- /dev/null
+#!/usr/bin/env bash
+set -o errexit
+set -o pipefail
+
+if [[ -z $SSH_KEY ]]; then echo "ERROR: The SSH_KEY env variable is not set"; exit 1; fi
+if [[ -z $SSH_KNOWN_HOSTS_FILE ]]; then echo "ERROR: The SSH_KNOWN_HOSTS_FILE env variable is not set"; exit 1; fi
+
+if [[ -z $UBUNTU_SSH_USER ]]; then echo "ERROR: The UBUNTU_SSH_USER env variable is not set"; exit 1; fi
+if [[ -z $UBUNTU_VM_IP ]]; then echo "ERROR: The UBUNTU_VM_IP env variable is not set"; exit 1; fi
+
+export SSH_USER=$UBUNTU_SSH_USER
+export SSH_ADDRESS=$UBUNTU_VM_IP
+
+function rsync_cmd() {
+ rsync -a --delete -e "ssh -i $SSH_KEY -o UserKnownHostsFile=$SSH_KNOWN_HOSTS_FILE" "${@}"
+}
+
+#
+# Build Ceph Windows
+#
+cat > ${WORKSPACE}/build-ceph-windows.sh << EOF
+#!/usr/bin/env bash
+set -o errexit
+set -o pipefail
+
+cd ~/ceph
+
+sudo apt-get update -o Acquire::Languages=none -o Acquire::Translation=none || true
+sudo apt-get install -y git
+git submodule update --init --recursive
+
+ZIP_DEST=~/ceph.zip $CEPH_WIN32_BUILD_FLAGS timeout 3h ./win32_build.sh
+EOF
+chmod +x ${WORKSPACE}/build-ceph-windows.sh
+time rsync_cmd $WORKSPACE/ceph ${WORKSPACE}/build-ceph-windows.sh ${UBUNTU_SSH_USER}@${UBUNTU_VM_IP}:
+
+time SSH_TIMEOUT=3h ssh_exec ./build-ceph-windows.sh
+time rsync_cmd ${UBUNTU_SSH_USER}@${UBUNTU_VM_IP}:~/ceph.zip $WORKSPACE/ceph.zip
--- /dev/null
+#!/bin/bash
+# vim: ts=4 sw=4 expandtab
+set -ex
+
+cd "$WORKSPACE"
+VENV="${WORKSPACE}/.venv"
+PATH=$PATH:$HOME/.local/bin
+chacra_endpoint="ceph/${BRANCH}/${SHA1}/${OS_NAME}/${OS_VERSION_NAME}"
+[ "$FORCE" = true ] && chacra_flags="--force" || chacra_flags=""
+if [ "$OS_PKG_TYPE" = "rpm" ]; then
+ RPM_RELEASE=`grep Release dist/ceph/ceph.spec | sed 's/Release:[ \t]*//g' | cut -d '%' -f 1`
+ RPM_VERSION=`grep Version dist/ceph/ceph.spec | sed 's/Version:[ \t]*//g'`
+ PACKAGE_MANAGER_VERSION="$RPM_VERSION-$RPM_RELEASE"
+ BUILDAREA="${WORKSPACE}/dist/ceph/rpmbuild"
+ find dist/ceph/rpmbuild/SRPMS | grep rpm | chacractl binary ${chacra_flags} create ${chacra_endpoint}/source/flavors/${FLAVOR}
+ find dist/ceph/rpmbuild/RPMS/* | grep rpm | chacractl binary ${chacra_flags} create ${chacra_endpoint}/${ARCH}/flavors/${FLAVOR}
+ if [ -f ./cephadm ] ; then
+ echo cephadm | chacractl binary ${chacra_flags} create ${chacra_endpoint}/${ARCH}/flavors/${FLAVOR}
+ fi
+elif [ "$OS_PKG_TYPE" = "deb" ]; then
+ PACKAGE_MANAGER_VERSION="${VERSION}-1${OS_VERSION_NAME}"
+ find ${WORKSPACE}/dist/ceph/ | \
+ egrep "(\.changes|\.deb|\.ddeb|\.dsc|ceph[^/]*\.gz)$" | \
+ egrep -v "(Packages|Sources|Contents)" | \
+ chacractl binary ${chacra_flags} create ${chacra_endpoint}/${ARCH}/flavors/${FLAVOR}
+ BUILDAREA="${WORKSPACE}/dist/ceph/debs"
+ if [ -f ./cephadm ] ; then
+ echo cephadm | chacractl binary ${chacra_flags} create ${chacra_endpoint}/${ARCH}/flavors/${FLAVOR}
+ fi
+fi
+# write json file with build info
+ cat > $WORKSPACE/repo-extra.json << EOF
+{
+ "version":"$VERSION",
+ "package_manager_version":"$PACKAGE_MANAGER_VERSION",
+ "build_url":"$BUILD_URL",
+ "root_build_cause":"$ROOT_BUILD_CAUSE",
+ "node_name":"$NODE_NAME",
+ "job_name":"$JOB_NAME"
+}
+EOF
+chacra_repo_endpoint="${chacra_endpoint}/flavors/${FLAVOR}"
+# post the json to repo-extra json to chacra
+curl -X POST -H "Content-Type:application/json" --data "@$WORKSPACE/repo-extra.json" -u $CHACRACTL_USER:$CHACRACTL_KEY ${CHACRA_URL}repos/${chacra_repo_endpoint}/extra/
+# start repo creation
+chacractl repo update ${chacra_repo_endpoint}
+
+echo Check the status of the repo at: https://shaman.ceph.com/api/repos/${chacra_endpoint}/flavors/${FLAVOR}/
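The Version/Release parsing at the top of the RPM branch can be exercised against a small spec fragment; a sketch with an illustrative version number instead of `dist/ceph/ceph.spec`:

```shell
# Same grep/sed/cut pipeline as above, fed a sample spec fragment
# (the version numbers are illustrative).
spec='Name:    ceph
Version: 18.2.4
Release: 0%{?dist}'
RPM_VERSION=$(grep Version <<< "$spec" | sed 's/Version:[ \t]*//g')
RPM_RELEASE=$(grep Release <<< "$spec" | sed 's/Release:[ \t]*//g' | cut -d '%' -f 1)
PACKAGE_MANAGER_VERSION="$RPM_VERSION-$RPM_RELEASE"
echo "$PACKAGE_MANAGER_VERSION"   # 18.2.4-0
```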
--- /dev/null
+#!/bin/bash
+set -ex
+
+if grep -q debian /etc/*-release; then
+ sudo apt-get install -y python3-scipy python3-routes
+elif grep -q rhel /etc/*-release; then
+ sudo yum install -y scipy python-routes python3-routes
+fi
--- /dev/null
+#!/usr/bin/env bash
+
+set -ex
+
+on_error() {
+ if [ "$1" != "0" ]; then
+ printf "\n\nERROR $1 thrown on line $2\n\n"
+ printf "\n\nCollecting info...\n\n"
+ sudo journalctl --since "10 min ago" --no-tail --no-pager -x
+ printf "\n\nERROR: displaying containers' logs:\n\n"
+ docker ps -aq | xargs -r docker logs
+ printf "\n\nTEST FAILED.\n\n"
+ fi
+}
+
+trap 'on_error $? $LINENO' ERR
+
+# Install required deps.
+sudo apt update -y
+sudo apt install -y apt-transport-https ca-certificates curl gnupg lsb-release \
+ openssh-server software-properties-common
+
+# install nvm
+if [[ ! $(command -v nvm) ]]; then
+ LATEST_NVM_VERSION=$(curl -s https://api.github.com/repos/nvm-sh/nvm/releases/latest | jq -r '.tag_name')
+ echo "Installing nvm version ${LATEST_NVM_VERSION}"
+
+ curl -fsSL https://raw.githubusercontent.com/nvm-sh/nvm/${LATEST_NVM_VERSION}/install.sh | bash
+
+ export NVM_DIR="$HOME/.nvm"
+ [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
+ [ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion"
+fi
+
+pushd src/pybind/mgr/dashboard/frontend
+
+echo "Installing nodejs from nvm with version $(cat .nvmrc)"
+nvm install
+nvm use
+popd
+
+sudo apt install -y libvirt-daemon-system libvirt-daemon-driver-qemu qemu-kvm libvirt-clients runc
+
+sudo usermod -aG libvirt $(id -un)
+newgrp libvirt # Avoid having to log out and log in for group addition to take effect.
+sudo systemctl enable --now libvirtd
+
+DISTRO="$(lsb_release -cs)"
+
+if [[ $(command -v docker) == '' ]]; then
+ # Set up docker official repo and install docker.
+ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
+ echo \
+ "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
+ ${DISTRO} stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
+ sudo apt update -y
+ sudo apt install -y docker-ce docker-ce-cli containerd.io
+fi
+sudo groupadd docker || true
+sudo usermod -aG docker $(id -un)
+sudo systemctl start docker
+sudo chgrp "$(id -un)" /var/run/docker.sock
+
+docker info
+docker container prune -f
+
+KCLI_CONFIG_DIR="${HOME}/.kcli"
+mkdir -p ${KCLI_CONFIG_DIR}
+if [[ ! -f "${KCLI_CONFIG_DIR}/id_rsa" ]]; then
+ sudo ssh-keygen -t rsa -q -f "${KCLI_CONFIG_DIR}/id_rsa" -N "" <<< y
+fi
+
+: ${KCLI_CONTAINER_IMAGE:='quay.io/karmab/kcli:2543a61'}
+
+docker pull ${KCLI_CONTAINER_IMAGE}
+
+echo "#!/usr/bin/env bash
+
+docker run --net host --security-opt label=disable \
+ -v ${KCLI_CONFIG_DIR}:/root/.kcli \
+ -v ${PWD}:/workdir \
+ -v /var/lib/libvirt/images:/var/lib/libvirt/images \
+ -v /var/run/libvirt:/var/run/libvirt \
+ -v /var/tmp:/ignitiondir \
+ ${KCLI_CONTAINER_IMAGE} \""'${@}'"\"
+" | sudo tee /usr/local/bin/kcli
+sudo chmod +x /usr/local/bin/kcli
+
+# KCLI cleanup function can be found here: https://github.com/ceph/ceph/blob/main/src/pybind/mgr/dashboard/ci/cephadm/start-cluster.sh
+sudo mkdir -p /var/lib/libvirt/images/ceph-dashboard
+kcli delete plan ceph -y || true
+kcli delete network ceph-dashboard -y || true
+kcli create pool -p /var/lib/libvirt/images/ceph-dashboard ceph-dashboard
+kcli create network -c 192.168.100.0/24 ceph-dashboard
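The `echo` that generates `/usr/local/bin/kcli` above has to juggle three quoting levels to emit a literal `"${@}"` into the wrapper. A heredoc with an escaped expansion is an alternative sketch of the same idea (the image name here is illustrative):

```shell
# Sketch: in an unquoted heredoc, $image expands now while \${@} survives
# verbatim, so the generated wrapper forwards its own arguments at runtime.
gen_wrapper() {
    local image=$1
    cat << EOF
#!/usr/bin/env bash
docker run --net host $image "\${@}"
EOF
}
gen_wrapper quay.io/example/kcli:latest
```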
--- /dev/null
+#!/bin/bash
+set -ex
+
+if [[ ! $(arch) =~ (i386|x86_64|amd64) ]]; then
+ # google chrome is only available on amd64
+ exit
+fi
+
+if grep -q debian /etc/*-release; then
+ sudo bash -c 'echo "deb [arch=amd64] https://dl.google.com/linux/chrome/deb/ stable main" > /etc/apt/sources.list.d/google-chrome.list'
+ curl -fsSL https://dl.google.com/linux/linux_signing_key.pub | sudo apt-key add -
+ sudo apt-get update
+ sudo apt-get install -y google-chrome-stable
+ sudo apt-get install -y python3-requests python3-openssl python3-jinja2 \
+ python3-jwt python3-scipy python3-routes
+ sudo apt-get install -y xvfb libxss1
+ sudo rm /etc/apt/sources.list.d/google-chrome.list
+elif grep -q rhel /etc/*-release; then
+ sudo dd of=/etc/yum.repos.d/google-chrome.repo status=none <<EOF
+[google-chrome]
+name=google-chrome
+baseurl=https://dl.google.com/linux/chrome/rpm/stable/\$basearch
+enabled=1
+gpgcheck=1
+gpgkey=https://dl-ssl.google.com/linux/linux_signing_key.pub
+EOF
+ sudo yum install -y google-chrome-stable
+ sudo rm /etc/yum.repos.d/google-chrome.repo
+ sudo yum install -y python-requests pyOpenSSL python-jinja2 python-jwt scipy python-routes python3-routes
+ sudo yum install -y xorg-x11-server-Xvfb.x86_64
+fi
+
+# kill any existing Xvfb process to avoid port conflict
+sudo killall Xvfb || true
--- /dev/null
+#!/bin/bash
+set -x
+# Helper to get tarball for releases
+
+: ${3?"Usage: $0 \$release \$sha1 \$version"}
+
+release=$1
+sha1=$2
+version=$3
+
+pushd /data/download.ceph.com/www/prerelease/ceph/tarballs
+
+if [[ ! -f ceph_${version}.tar.gz ]]; then
+ wget -q https://chacra.ceph.com/binaries/ceph/${release}/${sha1}/ubuntu/noble/x86_64/flavors/default/ceph_${version}-1noble.tar.gz \
+ || wget -q https://chacra.ceph.com/binaries/ceph/${release}/${sha1}/ubuntu/jammy/x86_64/flavors/default/ceph_${version}-1jammy.tar.gz
+
+ mv ceph_${version}*.tar.gz ceph-${version}.tar.gz
+fi
+
+popd
--- /dev/null
+#!/bin/bash
+# This script runs on the signer box and pulls nfs-ganesha packages that were created in the last 24 hours on chacra.ceph.com to /opt/nfs-ganesha/new-repos.
+# After that, the sign-rpms-auto script runs and signs the nfs-ganesha packages.
+# Finally, the sync-push-auto script pushes the signed packages to download.ceph.com.
+
+today_items=$(ssh ubuntu@chacra.ceph.com 'find /opt/repos/nfs-ganesha-stable -newermt "-24 hours" -ls' | awk '{ print $11 }' )
+if [ -n "$today_items" ]; then
+echo "pulling nfs-ganesha packages from chacra"
+echo "********************************************"
+[[ -d /opt/nfs-ganesha/new-repos/ ]] || mkdir -p /opt/nfs-ganesha/new-repos/
+ for item in $today_items; do
+ sync_cmd="ubuntu@chacra.ceph.com:$item /opt/nfs-ganesha/new-repos/"
+ rsync -Lavh --progress --relative $sync_cmd
+ done
+
+ # sign the rpms that were pulled today
+
+echo "signing rpms"
+bash /home/ubuntu/ceph-build/scripts/nfs-ganesha/sign-rpms-auto
+
+ # sync the signed rpms to download.ceph.com
+
+echo "pushing rpms to download.ceph.com"
+bash /home/ubuntu/ceph-build/scripts/nfs-ganesha/sync-push-auto
+
+fi
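The `-newermt "-24 hours"` filter used above selects only recently built packages; a self-contained sketch against a throwaway directory (GNU `find`/`touch` assumed):

```shell
# Sketch of find(1)'s -newermt filter: only files modified within the
# last 24 hours are selected.
tmp=$(mktemp -d)
touch -d '2 days ago' "$tmp/old.rpm"
touch "$tmp/new.rpm"
recent=$(find "$tmp" -type f -newermt '-24 hours')
echo "$recent"   # only .../new.rpm
rm -rf "$tmp"
```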
--- /dev/null
+#!/bin/bash
+# This script signs the rpm files pulled from the chacra machines.
+
+
+keyid=460F3994
+GPG_PASSPHRASE=''
+
+path="/opt/nfs-ganesha/new-repos/"
+echo $path
+update_repo=0
+cd $path
+
+for rpm in `find -name "*.rpm"`
+do
+ signature=$(rpm -qi -p $rpm 2>/dev/null | grep ^Signature)
+ if ! grep -iq $keyid <<< "$signature" ; then
+ rpm_path=`readlink -f $rpm`
+ echo "signing: $rpm_path"
+ update_repo=1
+
+ echo "yes" | setsid rpm \
+ --define "_gpg_name '$keyid'" \
+ --define '_signature gpg' \
+ --define '__gpg_check_password_cmd /bin/true' \
+ --define "__gpg_sign_cmd %{__gpg} gpg --no-tty --yes --batch --no-armor --passphrase '$GPG_PASSPHRASE' --no-secmem-warning -u "%{_gpg_name}" --sign --detach-sign --output %{__signature_filename} %{__plaintext_filename}" \
+ --resign "$rpm_path"
+
+ fi
+done
+
+# now sign the repomd.xml files
+if [[ $update_repo -eq 1 ]]; then
+ for repomd in `find -name repomd.xml`
+ do
+ echo "signing repomd: $repomd"
+ gpg --batch --yes --passphrase "$GPG_PASSPHRASE" --detach-sign --armor -u $keyid $repomd
+ done
+fi
+
+# finally, update the repo metadata
+repodirs=$( find /opt/nfs-ganesha/new-repos/ -type d -name x86_64 | cut -d/ -f 13 --complement )
+if [ -n "$repodirs" ]; then
+ for directory in $repodirs
+ do
+ cd $directory
+ createrepo .
+ cd -
+ done
+fi
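The `cut -d/ -f 13 --complement` above strips a single path component from each `find` result; what it does is easier to see on a short illustrative path (the field index that matters in production depends on the rsync `--relative` layout):

```shell
# cut --complement drops the named field and keeps the rest; with -d/,
# field 1 is the empty string before the leading slash, so -f13 removes
# the 12th path component ("l" here). The path is purely illustrative.
p='/a/b/c/d/e/f/g/h/i/j/k/l/m/n'
echo "$p" | cut -d/ -f13 --complement   # /a/b/c/d/e/f/g/h/i/j/k/m/n
```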
--- /dev/null
+#!/bin/bash
+# This script will push repository files from the signer box to the upstream repositories.
+# By default it will push all releases and ceph versions defined in the releases and ceph_version variables to download.ceph.com.
+
+releases=( V3.5 V2.7 )
+ceph_version=( octopus ceph_pacific )
+
+repodirs=$( find /opt/nfs-ganesha/new-repos/ -type d -name x86_64 | cut -d/ -f 13 --complement )
+for dir in $repodirs; do
+ for i in "${releases[@]}"; do
+ for v in "${ceph_version[@]}"; do
+ find_release=$( ls -ld "$dir" | grep "$i" | wc -l )
+ find_version=$( ls -ld "$dir" | grep "$v" | wc -l )
+ if [ $find_release == '1' ] && [ $find_version == '1' ]; then
+ release=$i
+ version=$v
+ dest="/data/download.ceph.com/www/nfs-ganesha/rpm-$release-stable/$version/el8"
+ ssh signer@download.ceph.com "mkdir -p $dest" \
+ && rsync --progress -avr $dir/* signer@download.ceph.com:$dest
+ rm -rf /opt/nfs-ganesha/new-repos/*
+ fi
+ done
+ done
+done
--- /dev/null
+#!/usr/bin/env bash
+
+set -ex
+
+install_docker(){
+ DISTRO="$(lsb_release -cs)"
+ if [[ $(command -v docker) == '' ]]; then
+ # Set up docker official repo and install docker.
+ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
+ echo \
+ "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
+ ${DISTRO} stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
+ sudo apt update -y
+ sudo apt install -y docker-ce docker-ce-cli containerd.io
+ fi
+ sudo groupadd docker || true
+ sudo usermod -aG docker $(id -un)
+ sudo systemctl unmask docker
+ sudo systemctl restart docker
+ sudo chgrp "$(id -un)" /var/run/docker.sock
+
+ # wait for docker
+ sleep 10
+
+ docker info
+ docker container prune -f
+}
+
+# install dependencies
+sudo apt update -y
+sudo apt install --reinstall -y qemu-kvm libvirt-daemon-driver-qemu libvirt-clients libvirt-daemon-system libvirt-daemon runc python3
+sudo apt install --reinstall -y python3-pip
+install_docker
+
+# install minikube
+curl -LO https://storage.googleapis.com/minikube/releases/v1.31.2/minikube-linux-amd64
+sudo install minikube-linux-amd64 /usr/local/bin/minikube
+
+# delete any existing minikube setup
+minikube delete
--- /dev/null
+#!/bin/bash -ex
+# vim: ts=4 sw=4 expandtab
+command -v pipx || (
+ command -v apt && sudo apt install -y pipx
+ command -v dnf && sudo dnf install -y pipx
+)
+pipx ensurepath
+pipx install uv
+~/.local/bin/uv tool install chacractl
+
+if [ -z "$chacra_url" ]; then
+ chacra_url=$(curl -u "$SHAMAN_API_USER:$SHAMAN_API_KEY" https://shaman.ceph.com/api/nodes/next/)
+fi
+cat > $HOME/.chacractl << EOF
+url = "$chacra_url"
+user = "$CHACRACTL_USER"
+key = "$CHACRACTL_KEY"
+ssl_verify = True
+EOF
+echo $chacra_url
--- /dev/null
+#!/bin/bash -ex
+# vim: ts=4 sw=4 expandtab
+function setup_container_runtime () {
+ loginctl enable-linger "$(id -nu)"
+ if command -v podman; then
+ podman system info > /dev/null || podman system reset --force
+ if [ "$(podman version -f "{{ lt .Client.Version \"4\" }}")" = "true" ]; then
+ echo "Found a very old podman; removing"
+ command -v dnf && sudo dnf remove -y podman
+ command -v apt && sudo apt remove -y podman
+ fi
+ fi
+
+ if ! command -v podman; then
+ if command -v dnf; then
+ sudo dnf install -y podman
+ elif command -v apt-cache; then
+ sudo apt-get update -q
+ VERSION=$(apt-cache show podman | grep Version: | sort -r | awk '/^Version:/{print $2; exit}')
+ if [[ "${VERSION:0:1}" -ge 4 ]]; then
+ DEBIAN_FRONTEND=noninteractive sudo apt-get install -y podman
+ elif ! command -v docker; then
+ DEBIAN_FRONTEND=noninteractive sudo apt-get install -y docker.io
+ fi
+ fi
+ fi
+
+ if command -v podman; then
+
+ # remove any leftover containers that might be present because of
+ # bad exits from podman (like an oom kill or something).
+ # We've observed new jobs failing to run because they can't create
+ # a container named ceph_build
+ podman rm -f ceph_build
+
+ if [ "$(podman version -f "{{ lt .Client.Version \"5.6.1\" }}")" = "true" ] && \
+ ! echo "928238bfcdc79a26ceb51d7d9759f99144846c0a /etc/tmpfiles.d/podman.conf" | sha1sum --status --check -; then
+ # Pull in this fix: https://github.com/containers/podman/pull/26986
+ curl -sS -L -O https://github.com/containers/podman/raw/refs/tags/v5.6.1/contrib/tmpfile/podman.conf
+ sudo mv podman.conf /etc/tmpfiles.d/
+ sudo systemd-tmpfiles --remove
+ fi
+ if [ "$(podman version -f "{{ ge .Client.Version \"4\" }}")" = "true" ]; then
+ PODMAN_DIR="$HOME/.local/share/containers"
+ test -d "$PODMAN_DIR" && command -v restorecon && sudo restorecon -R -T0 -x "$PODMAN_DIR"
+ PODMAN_STORAGE_DIR="$PODMAN_DIR/storage"
+ if [ -d "$PODMAN_STORAGE_DIR" ]; then
+ sudo chgrp -R "$(groups | cut -d' ' -f1)" "$PODMAN_STORAGE_DIR"
+ if [ "$(podman unshare du -s --block-size=1G "$PODMAN_STORAGE_DIR" | awk '{print $1}')" -ge 50 ]; then
+ time podman image prune --filter=until="$((24*7))h" --all --force
+ time podman system prune --force
+ if [ "$(podman version -f "{{ ge .Client.Version \"5\" }}")" = "true" ]; then
+ time podman system check --repair --quick
+ fi
+ fi
+ fi
+ fi
+ fi
+}
+
+# If the script is executed (as opposed to sourced), run the function now
+if [ "$(basename -- "${0#-}")" = "$(basename -- "${BASH_SOURCE[0]}")" ]; then
+ setup_container_runtime
+fi
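The Go-template comparisons above (`podman version -f "{{ lt .Client.Version ... }}"`) lean on podman itself for version ordering. The same check can be sketched in plain bash with `sort -V` (a hypothetical helper, not used by this script):

```shell
# True when $1 sorts strictly before $2 in version order.
version_lt() {
    [ "$1" != "$2" ] && \
        [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}
```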
--- /dev/null
+#!/bin/bash
+# vim: ts=4 sw=4 expandtab
+
+set -ex
+
+SCCACHE_URL="https://github.com/mozilla/sccache/releases/download/v0.8.2/sccache-v0.8.2-$(uname -m)-unknown-linux-musl.tar.gz"
+
+function write_sccache_conf() {
+ export SCCACHE_CONF=${SCCACHE_CONF:-$WORKSPACE/sccache.conf}
+ cat << EOF > $SCCACHE_CONF
+[cache.s3]
+bucket = "ceph-sccache"
+endpoint = "s3.us-south.cloud-object-storage.appdomain.cloud"
+use_ssl = true
+key_prefix = ""
+server_side_encryption = false
+no_credentials = false
+region = "auto"
+EOF
+}
+
+function write_aws_credentials() {
+ export AWS_PROFILE=default
+ mkdir -p $HOME/.aws
+ cat << EOF > $HOME/.aws/credentials
+[default]
+aws_access_key_id = ${AWS_ACCESS_KEY_ID}
+aws_secret_access_key = ${AWS_SECRET_ACCESS_KEY}
+EOF
+}
+
+function install_sccache () {
+ local sudo
+ if [ "$(id -u)" != "0" ]; then
+ sudo="sudo"
+ fi
+ curl -L $SCCACHE_URL | $sudo tar --no-anchored --strip-components=1 -C /usr/local/bin/ -xzf - sccache
+}
+
+function setup_pbuilderrc () {
+ cat >> ~/.pbuilderrc << EOF
+export SCCACHE="${SCCACHE}"
+export SCCACHE_CONF=/etc/sccache.conf
+export DWZ="${DWZ}"
+export AWS_ACCESS_KEY_ID="${AWS_ACCESS_KEY_ID}"
+export AWS_SECRET_ACCESS_KEY="${AWS_SECRET_ACCESS_KEY}"
+EOF
+ sudo cp ~/.pbuilderrc /root/.pbuilderrc
+}
+
+function setup_sccache_pbuilder_hook () {
+ for hook_dir in $(ls -d ~/.pbuilder/hook*.d); do
+ hook=$hook_dir/D09-setup-sccache
+ cp $BASH_SOURCE $hook
+ cat >> $hook << EOF
+if [ "$SCCACHE" = true ] ; then
+ write_sccache_conf
+ write_aws_credentials
+ install_sccache
+fi
+EOF
+ chmod +x $hook
+ done
+}
+
+function reset_sccache () {
+ sccache --zero-stats
+ sccache --stop-server
+}
--- /dev/null
+#!/bin/bash
+# vim: ts=4 sw=4 expandtab
+
+function setup_pipx () {
+ command -v pipx || (
+ command -v apt && sudo apt install -y pipx
+ command -v dnf && sudo dnf install -y pipx
+ )
+ pipx ensurepath
+}
+
+function setup_uv () {
+ setup_pipx
+ pipx install uv
+}
+
+# If the script is executed (as opposed to sourced), run the function now
+if [ "$(basename -- "${0#-}")" = "$(basename -- "${BASH_SOURCE[0]}")" ]; then
+ setup_uv
+fi
--- /dev/null
+#!/usr/bin/env python3
+import argparse
+import pathlib
+import subprocess
+import sys
+import xmltodict
+
+
+def parse_args(args: list[str]) -> argparse.Namespace:
+ parser = argparse.ArgumentParser(
+ formatter_class=argparse.ArgumentDefaultsHelpFormatter
+ )
+ parser.add_argument(
+ "job_xml",
+ nargs="*",
+ help="The job XML file(s) to process",
+ )
+ parser.add_argument(
+ "-v",
+ "--verbose",
+ action="store_true",
+ default=False,
+ help="Be more verbose",
+ )
+ return parser.parse_args(args)
+
+
+def main():
+ args = parse_args(sys.argv[1:])
+ success = True
+ for job_xml in args.job_xml:
+ path = pathlib.Path(job_xml)
+ assert path.exists()
+ if args.verbose:
+ print(f"JOB: {job_xml}")
+ job_obj = xmltodict.parse(path.read_text())
+ for item in find(job_obj, "command"):
+ try:
+ if args.verbose:
+ print(f" shellcheck {item[0]}")
+ process_item(item)
+ except Exception:
+ print(f"Failed to verify job {job_xml} item {item[0]}")
+ success = False
+ if args.verbose:
+ print()
+ return 0 if success else 1
+
+def process_item(item):
+ script = item[1]
+ if not script.startswith("#!"):
+ script = "#!/bin/bash\n" + script
+ proc = subprocess.Popen(
+ ["shellcheck", "--severity", "error", "-"],
+ stdin=subprocess.PIPE,
+ encoding="utf-8",
+ )
+ proc.communicate(input=script)
+ assert proc.returncode == 0
+
+def find(obj: dict, key: str, result=None, path="") -> list[tuple]:
+ if result is None:
+ result = []
+ if key in obj:
+ result.append((path, obj[key]))
+ return result
+ for k, v in obj.items():
+ if isinstance(v, dict):
+ if "." in k:
+ subpath = f'{path}."{k}"'
+ else:
+ subpath = f"{path}.{k}"
+ maybe_result = find(v, key, result, subpath)
+ if maybe_result is not result:
+ result.append((subpath, maybe_result[-1]))
+ return result
+
+
+if __name__ == "__main__":
+ sys.exit(main())
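The shebang handling in `process_item` (prepend `#!/bin/bash` when the embedded script lacks one, then pipe to `shellcheck --severity error -`) looks like this as a standalone shell sketch:

```shell
# Mirror of process_item()'s shebang handling: ensure the script starts
# with a shebang before it is piped to shellcheck on stdin.
ensure_shebang() {
    case $1 in
        '#!'*) printf '%s\n' "$1" ;;
        *)     printf '#!/bin/bash\n%s\n' "$1" ;;
    esac
}
# e.g.: ensure_shebang "$cmd" | shellcheck --severity error -
```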
--- /dev/null
+#!/bin/bash -ex
+# vim: ts=2:sw=2:expandtab
+# This script is meant to be used when signing RPMs on a "signer" box. Such
+# a box needs to have the actual signing keys and follow the structure for
+# a repository layout. The layout follows this convention:
+#
+# /opt/repos/$project/$release/$distro/$distro_version
+# OR (for octopus and later)
+# /opt/repos/$project/$release-X.X.X/$distro/$distro_version
+#
+# If no arguments are passed in, all defined releases are used. It can
+# optionally be just one or any combination of them, like:
+#
+# sign-rpms giant hammer
+#
+# Would sign both Giant and Hammer releases. But the tool can consume a single
+# release as well (which will probably be the most used case):
+#
+# sign-rpms infernalis
+
+keyid=460F3994
+
+function usage() {
+ echo "sign-rpms <project> [ release [ release ..]]"
+}
+
+if [[ $# -lt 1 ]] ; then usage ; exit 1 ; fi
+
+project=$1; shift
+
+if [ $# -eq 0 ]; then
+ # Default releases if no arguments passed
+ releases=( reef squid tentacle )
+else
+ releases=( "$@" )
+fi
+
+# distros are not configurable. "rhel" might not exist in every release (for
+# example it doesn't exist for infernalis releases).
+distros=( centos rhel rocky )
+
+# Although upstream these might be "el9" or "el10", we just use these since they
+# are the same values used by the build system.
+distro_versions=( 9 10 )
+
+# To unlock the gpg keys for the current run, it is requested over STDIN as
+# a password and later passed into GPG directly as a variable.
+read -s -p "Key Passphrase: " GPG_PASSPHRASE
+echo
+
+for release in "${releases[@]}"; do
+ for distro in "${distros[@]}"; do
+ for distro_version in "${distro_versions[@]}"; do
+ for path in /opt/repos/$project/$release*; do
+ if [ -d "$path/$distro/$distro_version" ]; then
+ echo "Checking packages in: $path/$distro/$distro_version"
+ update_repo=0
+ cd $path/$distro/$distro_version
+
+ for rpm in `find -name "*.rpm"`; do
+ # this call to `rpm -qi -p` will spit out metatada information
+ # from an rpm file which will tell us about the signature. This
+ # is significantly faster than letting gpg see if this needs to
+ # be signed or not.
+ signature=$(rpm -qi -p $rpm 2>/dev/null | grep ^Signature)
+ if ! grep -iq $keyid <<< "$signature" ; then
+ rpm_path=`readlink -f $rpm`
+ echo "signing: $rpm_path"
+ update_repo=1
+
+ echo "yes" | setsid rpm \
+ --define "_gpg_name '$keyid'" \
+ --define '_signature gpg' \
+ --define '__gpg_check_password_cmd /bin/true' \
+ --define "__gpg_sign_cmd %{__gpg} gpg --no-tty --yes --batch --no-armor --passphrase '$GPG_PASSPHRASE' --no-secmem-warning -u "%{_gpg_name}" --sign --detach-sign --output %{__signature_filename} %{__plaintext_filename}" \
+ --resign "$rpm_path"
+
+ fi
+ done
+
+ # now, update the repo metadata
+ if [[ $update_repo -eq 1 ]]; then
+ for directory in $(ls $path/$distro/$distro_version); do
+ cd $directory
+ # use the --no-database to workaround the large dbg packages issues
+ # https://tracker.ceph.com/issues/39387
+ # later debian no longer has the (Python) createrepo; it's been replaced
+ # by a mostly-compatible C version called createrepo_c. Use it if we can't
+ # find createrepo.
+ if command -v createrepo >/dev/null 2>&1 ; then
+ createrepo --no-database .
+ else
+ createrepo_c --compatibility --no-database .
+ fi
+ cd -
+ done
+ fi
+
+ # finally, sign the repomd.xml files
+ if [[ $update_repo -eq 1 ]]; then
+ for repomd in `find -name repomd.xml`; do
+ echo "signing repomd: $repomd"
+ gpg --batch --yes --passphrase "$GPG_PASSPHRASE" --detach-sign --armor -u $keyid $repomd
+ done
+ fi
+
+ fi
+ done
+ done
+ done
+done
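The `/opt/repos/$project/$release*` glob in the loop above matches both the plain and the `-X.X.X`-suffixed layouts described in the header comment; a throwaway-directory sketch (version numbers illustrative):

```shell
# The release* glob picks up both "reef" and "reef-18.2.4" style
# directories, but not other releases (uses a temp dir, not /opt/repos).
tmp=$(mktemp -d)
mkdir -p "$tmp/ceph/reef" "$tmp/ceph/reef-18.2.4" "$tmp/ceph/squid-19.2.0"
matches=$(cd "$tmp/ceph" && printf '%s\n' reef*)
echo "$matches"   # reef and reef-18.2.4, not squid-19.2.0
rm -rf "$tmp"
```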
--- /dev/null
+# update shaman with the completed build status
+update_build_status "completed" "ceph" $NORMAL_DISTRO $NORMAL_DISTRO_VERSION $NORMAL_ARCH
--- /dev/null
+#!/bin/bash -ex
+# vim: ts=2:sw=2:expandtab
+
+: ${3?"Usage: $0 \$project \$release \$sha1"}
+ # The script exits here if fewer than three command-line parameters
+ # are given, printing a message like:
+ #   Usage: sync-pull $project $release $sha1
+
+project=${1}
+release=${2}
+sha1=${3}
+
+echo "sync for: $project $release"
+echo "********************************************"
+
+if [[ "$project" == "ceph" ]] ; then
+ # This ugly loop checks all possible DEB combinations to see which repo has the most packages since that's likely the repo you want to sync.
+ current_highest_count=0
+ for combo in debian/bookworm debian/bullseye ubuntu/bionic ubuntu/focal ubuntu/jammy ubuntu/noble; do
+ combo_count=$(curl -fs https://chacra.ceph.com/r/$project/$release/$sha1/${combo}/flavors/default/pool/main/c/ceph/ | wc -l || /bin/true)
+ if [[ ${PIPESTATUS[0]} -eq 22 ]] ; then
+ echo "$combo packages not found, skipping"
+ continue
+ fi
+ if [ $combo_count -gt $current_highest_count ]; then
+ current_highest_count=$combo_count
+ highest_combo=$combo
+ fi
+ done
+
+ echo "Found the most packages ($current_highest_count) in $highest_combo."
+fi
+
+# Check the DEB and RPM chacra endpoints to see if the repos are updating or need updating.
+# This helps prevent packages from getting missed when signing and pushing.
+need_rerun=false
+for endpoint in https://chacra.ceph.com/repos/$project/$release/$sha1/{centos/9,rocky/10} \
+ https://chacra.ceph.com/repos/$project/$release/$sha1/$highest_combo
+do
+ chacra_repo_status=$(curl -s -L $endpoint)
+ if echo "$chacra_repo_status" | jq empty 2>/dev/null; then
+ chacra_needs_update=$(echo $chacra_repo_status | jq .needs_update)
+ chacra_is_updating=$(echo $chacra_repo_status | jq .is_updating)
+ else
+ echo "Non-JSON response from $endpoint, skipping"
+ continue
+ fi
+
+ if [ "$chacra_needs_update" == "true" ] || [ "$chacra_is_updating" == "true" ]; then
+ need_rerun=true
+ fi
+done
+
+
+# Get numerical version
+version=$(echo $chacra_repo_status | jq -r .extra.version)
+relver=$release-$version
+mkdir -p /opt/repos/$project/$relver/{debian/jessie,centos/9,rocky/10}
+
+if [[ "$project" == "ceph" ]] ; then
+ # Replace $highest_combo with your own DISTRO/VERSION if you don't want to sync from the repo with the most packages.
+ if [[ -n "$highest_combo" ]] ; then
+ deb_cmd="ubuntu@chacra.ceph.com:/opt/repos/$project/$release/$sha1/$highest_combo/flavors/default/* /opt/repos/$project/$relver/debian/jessie/"
+ echo $deb_cmd
+ echo "--------------------------------------------"
+ rsync -Lavh --progress --exclude '*lockfile*' $deb_cmd
+ fi
+fi
+
+for el_version in centos/9 rocky/10; do
+ echo "--------------------------------------------"
+ if curl -fs --head --connect-timeout 5 --max-time 10 https://shaman.ceph.com/api/repos/$project/$release/$sha1/$el_version/flavors/default/ >/dev/null 2>&1; then
+ el_cmd="ubuntu@chacra.ceph.com:/opt/repos/$project/$release/$sha1/$el_version/flavors/default/* /opt/repos/$project/$relver/$el_version/"
+ echo $el_cmd
+ rsync -Lavh --progress $el_cmd
+ else
+ echo "Shaman thinks a repo for $el_version doesn't exist. Skipping."
+ fi
+done
+
+if [[ "$project" == "ceph" ]]; then
+ ssh signer@download.ceph.com "/home/signer/bin/get-tarballs.sh $release $sha1 $version"
+fi
+
+if $need_rerun; then
+ echo
+ echo "********************************************"
+ echo
+ echo "At least one of the Chacra repos synced was "
+ echo " still updating before the rsync started."
+ echo " You should re-run this script!"
+ echo
+ echo "********************************************"
+fi
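The "most packages wins" selection in the combo loop above can be isolated and fed canned counts instead of live chacra listings:

```shell
# Same highest-count selection as the combo loop above, with canned
# counts standing in for `curl ... | wc -l` against chacra.
current_highest_count=0
highest_combo=''
while read -r combo combo_count; do
    if [ "$combo_count" -gt "$current_highest_count" ]; then
        current_highest_count=$combo_count
        highest_combo=$combo
    fi
done << 'EOF'
ubuntu/jammy 42
debian/bookworm 57
ubuntu/noble 31
EOF
echo "$highest_combo"   # debian/bookworm
```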
--- /dev/null
+#!/bin/bash -ex
+# vim:ts=2 sw=2 expandtab
+# This script will push repository files from the signer box to the upstream repositories.
+# By default it will sync all releases defined, but can optionally take one or more
+# releases to sync:
+#
+# sync-push hammer infernalis
+#
+# Since the binaries are created with a different repository layout, this
+# script maps directories like "centos/6" to "rpm-$release/el6"
+
+# this directory is auth-protected so anxious users don't try to
+# pull an in-progress release
+
+function usage() {
+ echo "sync-push <project> [ release [ release ..]]"
+}
+
+if [[ $# -lt 1 ]] ; then usage ; exit 1 ; fi
+
+project=$1; shift
+prerelease_dir=/data/download.ceph.com/www/prerelease/${project}
+
+if [[ $# -eq 0 && "$project" == "ceph" ]] ; then
+  # Default releases if no arguments passed
+  releases=( reef squid tentacle )
+else
+  releases=( "$@" )
+fi
+
+make_repofile() {
+ project=$1
+ release=$2
+ el_version=$3
+ echo "[${project}]
+name=ceph-iscsi noarch packages
+baseurl=http://download.ceph.com/prerelease/${project}/${release}/rpm/el${el_version}/noarch
+enabled=1
+gpgcheck=0
+gpgkey=https://download.ceph.com/keys/release.asc
+type=rpm-md
+
+[${project}-source]
+name=ceph-iscsi source packages
+baseurl=http://download.ceph.com/prerelease/ceph-iscsi/${release}/rpm/el${el_version}/SRPMS
+enabled=0
+gpgcheck=1
+gpgkey=https://download.ceph.com/keys/release.asc
+type=rpm-md
+ "
+}
+
+project_sync() {
+ project=$1
+ release=$2
+ newgen=false
+ for path in $(ls -d /opt/repos/$project/* | grep $release | sort -V); do
+ if [[ "$project" == "ceph" ]] ; then
+ version=$(echo $path | cut -d '-' -f2)
+ release=$(echo $release | cut -d '-' -f1)
+ debian_path=debian-$version
+ rpm_path=rpm-$version
+ dcc_deb_path=${prerelease_dir}/${debian_path}
+ dcc_rpm_path=${prerelease_dir}/${rpm_path}
+  else
+    # non-ceph projects: the version is the repo directory name
+    version=$(basename "$path")
+    dcc_deb_path=""
+    dcc_rpm_path=${prerelease_dir}/${version}/rpm
+ fi
+
+ if [[ -n "${dcc_deb_path}" ]]; then
+ ssh signer@download.ceph.com "mkdir -p ${dcc_deb_path}"
+
+ deb_cmd="$path/debian/jessie/* signer@download.ceph.com:${prerelease_dir}/${debian_path}"
+ rsync --progress --exclude '*lockfile*' -avr $deb_cmd
+ fi
+
+ if [[ -n "${dcc_rpm_path}" ]] ; then
+ for el_version in 9 10; do
+ case "$el_version" in
+ 9)
+ distro="centos"
+ ;;
+ 10)
+ distro="rocky"
+ ;;
+ esac
+
+ ssh signer@download.ceph.com "mkdir -p ${dcc_rpm_path}/el${el_version}"
+ destpath="signer@download.ceph.com:${dcc_rpm_path}/el${el_version}"
+ el_cmd="$path/${distro}/${el_version}/* ${destpath}"
+
+ if [ -d "$path/${distro}/${el_version}" ]; then
+ rsync --progress -avr $el_cmd || true
+ fi
+
+ if [[ "$project" == "ceph-iscsi" ]]; then
+ echo "$(make_repofile ${project} ${release} ${el_version})" > ceph-iscsi.repo
+ rsync --progress -avr ceph-iscsi.repo ${destpath}
+ fi
+ done
+ fi
+ done
+
+  # Paths are listed in version order by the first `for` loop above, so the
+  # last $version seen is what the new symlinks below point at.
+ ssh signer@download.ceph.com "cd ${prerelease_dir}/; \
+ ln -sfn debian-$version debian-$release; \
+ ln -sfn rpm-$version rpm-$release"
+}
+
+for i in "${releases[@]}"
+do
+ project_sync $project $i
+done
+
+echo "Once you've tested the repos at ${prerelease_dir}, don't forget to mv them
+up to the parent directory!"
+if [[ "$project" == "ceph-iscsi" ]]; then
+ echo "And for ceph-iscsi, modify the .repo files in */rpm/*/* to remove the prerelease/ part of the path in baseurl!"
+fi
+
+
--- /dev/null
+#!/bin/bash
+# vim: ts=4 sw=4 expandtab
+
+submit_build_status() {
+
+ # A helper script to post (create) the status of a build in shaman
+ # 'state' can be either 'failed' or 'started'
+ # 'project' is used to post to the right url in shaman
+ http_method=$1
+ state=$2
+ project=$3
+ distro=$4
+ distro_version=$5
+ distro_arch=$6
+ cat > $WORKSPACE/build_status.json << EOF
+{
+ "extra":{
+ "version":"$vers",
+ "root_build_cause":"$ROOT_BUILD_CAUSE",
+ "node_name":"$NODE_NAME",
+ "job_name":"$JOB_NAME",
+ "build_user":"$BUILD_USER"
+ },
+ "url":"$BUILD_URL",
+ "log_url":"$BUILD_URL/consoleFull",
+ "status":"$state",
+ "distro":"$distro",
+ "distro_version":"$distro_version",
+ "distro_arch":"$distro_arch",
+ "ref":"$BRANCH",
+ "sha1":"$SHA1",
+ "flavor":"$FLAVOR"
+}
+EOF
+
+  # these variables are saved in this jenkins
+  # properties file so that other scripts in the
+  # same job can read them (e.g. via the EnvInject plugin)
+ cat > $WORKSPACE/build_info << EOF
+NORMAL_DISTRO=$distro
+NORMAL_DISTRO_VERSION=$distro_version
+NORMAL_ARCH=$distro_arch
+SHA1=$SHA1
+EOF
+
+ SHAMAN_URL="https://shaman.ceph.com/api/builds/$project/"
+ # post the build information as JSON to shaman
+ curl -X $http_method -H "Content-Type:application/json" --data "@$WORKSPACE/build_status.json" -u $SHAMAN_API_USER:$SHAMAN_API_KEY ${SHAMAN_URL}
+}
+
+# If the script is executed (as opposed to sourced), run the function now
+if [ "$(basename -- "${0#-}")" = "$(basename -- "${BASH_SOURCE}")" ]; then
+ submit_build_status "POST" "$@"
+fi
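+
+# Example (illustrative; the file path and argument values below are
+# hypothetical): another job script can either source this file to get the
+# function, or execute it directly, in which case the guard above runs
+# submit_build_status with "POST" plus the remaining arguments:
+#
+#   source "$WORKSPACE/ceph-build/scripts/build_status"
+#   submit_build_status POST started ceph ubuntu 22.04 x86_64
+#
+#   ./build_status started ceph ubuntu 22.04 x86_64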
--- /dev/null
+sepia-fog-images
+================
+
+This job automates the creation/capturing of FOG_ images.
+
+Prerequisites
+-------------
+
+These steps should only need to be performed when a new teuthology host is set up, but they're worth documenting.
+
+#. Run the ``ansible/examples/slave_teuthology.yml`` playbook against the teuthology host.
+
+#. Copy ``/etc/teuthology.yaml`` to ``/home/jenkins-build/.teuthology.yaml`` and remove the ``fog:`` yaml block. This is so the job doesn't attempt to provision testnodes using FOG when locking machines.
+
+#. As the ``jenkins-build`` user on the teuthology host, generate a new RSA SSH key (``ssh-keygen -t rsa``).
+
+#. Copy the public key to jenkins-build.pub_ in the keys repo. (This is so the jenkins-build user can ssh to testnodes and VPSHOSTs)
+
+#. Run the ceph-cm-ansible_ ``users`` playbook against the Cobbler host and the DHCP server. (This lets the jenkins-build user set Cobbler settings and update DHCP entries)
+
+#. Define ``FOG_API_TOKEN`` and ``FOG_USER_TOKEN`` as **Global name/password pairs** in Jenkins.
+
+**NOTE:** This job also relies on:
+
+- ceph-sepia-secrets_ -- If the job is being run on a teuthology host, ``/etc/ansible`` should already be symlinked to a ceph-sepia-secrets checkout.
+- ceph-cm-ansible/tools_ -- There's a playbook that preps a host for capturing after Cobbler reimage along with a script to update DHCP entries.
+
+How it works
+------------
+
+This job:
+
+#. Locks a number of testnodes via ``teuthology-lock`` depending on the number of machine types and distros you specify (unless you specify your own using the ``DEFINEDHOSTS`` job parameter).
+
+#. SSHes to the DHCP server and configures it so the testnodes PXE boot from the Cobbler server (instead of the default FOG server).
+
+#. SSHes to the Cobbler host and sets the appropriate profile for each machine.
+
+#. Reboots the testnodes so they get reimaged via Cobbler. The ceph-cm-ansible_ testnodes role gets run as a post-install task_.
+
+#. Runs the ``prep-fog-capture.yml`` playbook against the testnodes to wipe out network settings and mounts. (This is because biosdevname/systemd/udev rules need to be overridden/rewritten by rc.local)
+
+#. Configures the DHCP server so the testnodes PXE boot back to the FOG server.
+
+#. Pauses the teuthology queue (if needed) so active FOG deployments aren't interrupted.
+
+#. Reboots all the testnodes so FOG captures the assigned images.
+
+#. Updates the teuthology lock DB with the new host keys and OS info.
+
+#. Unlocks/releases the testnodes.
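+
+Several of these steps (locking, waiting for ``/ceph-qa-ready``, waiting for
+FOG tasks) share one retry pattern: probe, sleep, bump a counter, and fail
+the job once the counter passes a cap. A minimal sketch of that pattern
+(the names, the probe, and the limits are illustrative, not the job's exact
+code)::
+
+  retry_or_fail() {
+      # $1 = current attempt, $2 = maximum attempts
+      if [ "$1" -gt "$2" ]; then
+          echo "Maximum retries exceeded. Failing job."
+          return 1
+      fi
+      return 0
+  }
+
+  READY_FLAG=$(mktemp -u)   # stand-in readiness marker; path does not exist
+  retries=0
+  while ! test -e "$READY_FLAG" && retry_or_fail "$((++retries))" 3; do
+      sleep 1               # the real job sleeps for minutes between probes
+  done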
+
+Usage
+-----
+
+See https://wiki.sepia.ceph.com/doku.php?id=services:fog
+
+.. _FOG: https://fogproject.org/
+.. _jenkins-build.pub: https://github.com/ceph/keys/blob/main/ssh/jenkins-build.pub
+.. _teuthology.yaml: http://docs.ceph.com/teuthology/docs/siteconfig.html
+.. _ceph-sepia-secrets: https://github.com/ceph/ceph-sepia-secrets/
+.. _tools: https://github.com/ceph/ceph-cm-ansible/tree/main/tools
+.. _Jenkins: https://jenkins.ceph.com/job/sepia-fog-images
+.. _task: https://github.com/ceph/ceph-cm-ansible/blob/main/roles/cobbler/templates/snippets/cephlab_rc_local
+.. _ceph-cm-ansible: https://github.com/ceph/ceph-cm-ansible
--- /dev/null
+#!/bin/bash
+# This job:
+# - Reimages testnodes using Cobbler (which runs ceph-cm-ansible)
+# - Preps the testnodes to have a FOG image captured (ceph-cm-ansible/tools/prep-fog-capture.yml)
+# - Captures FOG images
+#
+# CAPITAL vars are provided by Jenkins. lowercase are just in this script
+
+set -ex
+
+if ! grep -s 'User.*ubuntu' ~/.ssh/config >/dev/null 2>&1 ; then
+  cat << EOF
+ERROR: The jenkins-build user on host teuthology does not have "User
+ubuntu" in .ssh/config. This will make teuthology connections,
+and thus this job, fail. Please add that configuration to
+/home/jenkins-build/.ssh/config on teuthology.
+EOF
+ exit 1
+fi
+
+# Converts distro friendly names into Cobbler/FOG image names
+funSetProfiles () {
+ splitdistro=$(echo $1 | cut -d '_' -f1)
+ distroversion=$(echo $1 | cut -d '_' -f2)
+ if [ "$splitdistro" == "ubuntu" ]; then
+ cobblerprofile="Ubuntu-$distroversion-server-x86_64"
+ fogprofile="ubuntu_$distroversion"
+ elif [ "$splitdistro" == "rhel" ]; then
+ cobblerprofile="RHEL-$distroversion-Server-x86_64"
+ fogprofile="rhel_$distroversion"
+ elif [ "$splitdistro" == "centos" ]; then
+ cobblerprofile="CentOS-$distroversion-x86_64"
+ fogprofile="centos_$distroversion"
+ elif [ "$splitdistro" == "opensuse" ]; then
+ cobblerprofile="openSUSE-$distroversion-x86_64"
+ fogprofile="opensuse_$distroversion"
+ else
+ echo "Unknown profile $1"
+ exit 1
+ fi
+}
+
+funPowerCycle () {
+ host=$(echo ${1} | cut -d '.' -f1)
+ powerstatus=$(ipmitool -I lanplus -U inktank -P $SEPIA_IPMI_PASS -H ${host}.ipmi.sepia.ceph.com chassis power status | cut -d ' ' -f4-)
+ if [ "$powerstatus" == "off" ]; then
+ ipmitool -I lanplus -U inktank -P $SEPIA_IPMI_PASS -H ${host}.ipmi.sepia.ceph.com chassis power on
+ else
+ ipmitool -I lanplus -U inktank -P $SEPIA_IPMI_PASS -H ${host}.ipmi.sepia.ceph.com chassis power cycle
+ fi
+}
+
+# There are a few loops that could hang indefinitely if a curl command fails.
+# This function takes two arguments: Current and Max number of retries.
+# It will fail the job if Current > Max retries.
+funRetry () {
+ if [ $1 -gt $2 ]; then
+ echo "Maximum retries exceeded. Failing job."
+ exit 1
+ fi
+}
+
+# Clone or update teuthology
+if [ ! -d teuthology ]; then
+ git clone https://github.com/ceph/teuthology
+ cd teuthology
+ git checkout $TEUTHOLOGYBRANCH
+else
+ cd teuthology
+ git fetch
+ git checkout main
+ git pull
+ git checkout $TEUTHOLOGYBRANCH
+fi
+
+# Should we use teuthology-lock to lock systems?
+if [ "$DEFINEDHOSTS" == "" ]; then
+ use_teuthologylock=true
+else
+ use_teuthologylock=false
+fi
+
+# This bootstrap used to be conditional on use_teuthologylock, but we also
+# need teuthology-queue even when we're not using teuthology-lock.
+# Bootstrap teuthology
+./bootstrap
+cd $WORKSPACE
+source $WORKSPACE/teuthology/virtualenv/bin/activate
+
+# Clone or update ceph-cm-ansible
+if [ ! -d ceph-cm-ansible ]; then
+ git clone https://github.com/ceph/ceph-cm-ansible
+ cd ceph-cm-ansible
+ git checkout $CMANSIBLEBRANCH
+else
+ cd ceph-cm-ansible
+ git fetch
+ git checkout main
+ git pull
+ git checkout $CMANSIBLEBRANCH
+fi
+
+cd $WORKSPACE
+
+if [ "$use_teuthologylock" = true ]; then
+ # Don't bail if we fail to lock machines
+ set +e
+
+ numdistros=$(echo $DISTROS | wc -w)
+ # Keep trying to lock machines
+ for type in $MACHINETYPES; do
+ numlocked=$(teuthology-lock --brief -a --machine-type $type --status down | grep "Locked to capture FOG image for Jenkins build $BUILD_NUMBER" | wc -l)
+ currentretries=0
+ while [ $numlocked -lt $numdistros ]; do
+ # We have to mark the system down and set its desc instead of locking because locking attempts to reimage using FOG.
+ # This could be worked around by copying /etc/teuthology.yaml to /home/jenkins-build/.teuthology.yaml and removing `machine_types:`
+ teuthology-lock --update --status down --desc "Locked to capture FOG image for Jenkins build $BUILD_NUMBER" $(teuthology-lock --brief -a --machine-type $type --status up --locked false | head -n 1 | awk '{ print $1 }')
+ # Sleep for a bit so we don't hammer the lock server
+ if [ $? -ne 0 ]; then
+ sleep 5
+ fi
+ numlocked=$(teuthology-lock --brief -a --machine-type $type --status down | grep "Locked to capture FOG image for Jenkins build $BUILD_NUMBER" | wc -l)
+ ((++currentretries))
+ # Retry for 1hr
+ funRetry $currentretries 720
+ done
+ done
+
+ set -e
+
+ allhosts=$(teuthology-lock --brief -a --status down | grep "Locked to capture FOG image for Jenkins build $BUILD_NUMBER" | cut -d '.' -f1 | tr "\n" " ")
+else
+ allhosts="$DEFINEDHOSTS"
+ set -e
+fi
+
+# Configure DHCP to use cobbler as the PXE server for each machine to reimage and ansiblize
+for machine in $allhosts; do
+ ssh ubuntu@store01.front.sepia.ceph.com "sudo /usr/local/sbin/set-next-server.sh $machine cobbler"
+done
+
+# Restart dhcpd once here; restarting it after each set-next-server call in the loop above caused dhcpd to fail to start
+ssh ubuntu@store01.front.sepia.ceph.com "sudo service dhcpd restart"
+
+# Get FOG 'Capture' TaskID
+fogcaptureid=$(curl -f -s -k -H "fog-api-token: ${FOG_API_TOKEN}" -H "fog-user-token: ${FOG_USER_TOKEN}" http://fog.front.sepia.ceph.com/fog/tasktype -d '{"name": "Capture"}' -X GET | jq -r '.tasktypes[0].id')
+
+# Set cobbler profile and FOG image ID for each locked machine
+# (numdistros is needed in the loop below even when DEFINEDHOSTS skips locking)
+numdistros=$(echo $DISTROS | wc -w)
+for type in $MACHINETYPES; do
+ if [ "$use_teuthologylock" = true ]; then
+ lockedhosts=$(teuthology-lock --brief -a --machine-type $type --status down | grep "Locked to capture FOG image for Jenkins build $BUILD_NUMBER" | cut -d '.' -f1 | sort)
+ else
+ lockedhosts=$(echo $DEFINEDHOSTS | grep -o "\w*${type}\w*")
+ fi
+ # Create arrays using our lists so we can iterate through them
+ array1=($lockedhosts)
+ array2=($DISTROS)
+ for i in $(seq 1 $numdistros); do
+ funSetProfiles ${array2[$i-1]}
+ ssh ubuntu@cobbler.front.sepia.ceph.com "sudo cobbler system edit --name ${array1[$i-1]} --profile $cobblerprofile --netboot-enabled=1"
+ funPowerCycle ${array1[$i-1]}
+ # Get FOG host ID
+ foghostid=$(curl -f -s -k -H "fog-api-token: ${FOG_API_TOKEN}" -H "fog-user-token: ${FOG_USER_TOKEN}" http://fog.front.sepia.ceph.com/fog/host -d '{"name": "'${array1[$i-1]}'"}' -X GET | jq -r '.hosts[0].id')
+ # Get FOG image ID
+ fogimageid=$(curl -f -s -k -H "fog-api-token: ${FOG_API_TOKEN}" -H "fog-user-token: ${FOG_USER_TOKEN}" http://fog.front.sepia.ceph.com/fog/image -d '{"name": "'${type}_${fogprofile}'"}' -X GET | jq -r '.images[0].id')
+ # Check if FOG image ID got set and create the image template if it's not set
+ if [ "$fogimageid" == "null" ]; then
+ curl -s -k -H "fog-api-token: ${FOG_API_TOKEN}" -H "fog-user-token: ${FOG_USER_TOKEN}" http://fog.front.sepia.ceph.com/fog/image/ -d '{ "imageTypeID": "1", "imagePartitionTypeID": "1", "name": "'${type}_${fogprofile}'", "path": "'${type}_${fogprofile}'", "osID": "50", "format": "0", "magnet": "", "protected": "0", "compress": "6", "isEnabled": "1", "toReplicate": "1", "os": {"id": "50", "name": "Linux", "description": ""}, "imagepartitiontype": {"id": "1", "name": "Everything", "type": "all"}, "imagetype": {"id": "1", "name": "Single Disk - Resizable", "type": "n"}, "imagetypename": "Single Disk - Resizable", "imageparttypename": "Everything", "osname": "Linux", "storagegroupname": "default"}' -X POST
+ fogimageid=$(curl -s -k -H "fog-api-token: ${FOG_API_TOKEN}" -H "fog-user-token: ${FOG_USER_TOKEN}" http://fog.front.sepia.ceph.com/fog/image -d '{"name": "'${type}_${fogprofile}'"}' -X GET | jq -r '.images[0].id')
+ fi
+ # Set foghostid (target host) to capture fogimageid
+ curl -f -s -k -H "fog-api-token: ${FOG_API_TOKEN}" -H "fog-user-token: ${FOG_USER_TOKEN}" http://fog.front.sepia.ceph.com/fog/host/$foghostid -d '{"imageID": "'${fogimageid}'"}' -X PUT
+ # Create 'Capture' task for each machine
+ curl -f -s -k -H "fog-api-token: ${FOG_API_TOKEN}" -H "fog-user-token: ${FOG_USER_TOKEN}" http://fog.front.sepia.ceph.com/fog/host/$foghostid/task -d '{"taskTypeID": "'${fogcaptureid}'"}' -X POST
+ done
+done
+
+# Sleep for 10sec to allow the hosts to reboot (makes sure we don't `stat` an existing/old /ceph-qa-ready)
+sleep 10
+
+# Don't bail if machines aren't ready yet
+set +e
+
+# Set DHCP next-server back to FOG and prep each machine for FOG capturing
+remaininghosts=$allhosts
+# Once all the hostnames are removed from $remaininghosts, trailing spaces are all that's left.
+# I'm sure there's a cleaner way to compile the list of hostnames above. PRs welcome.
+currentretries=0
+while [[ $(echo $remaininghosts | wc -w) != 0 ]]; do
+ for host in $remaininghosts; do
+ if ssh -q ubuntu@${host}.front.sepia.ceph.com stat /ceph-qa-ready \> /dev/null 2\>\&1; then
+ # Bail if anything fails
+ set -ex
+ # Set DHCP back
+ ssh ubuntu@store01.front.sepia.ceph.com "sudo /usr/local/sbin/set-next-server.sh $host fog"
+ # Prep the host for FOG image capture
+ # set ANSIBLE_CONFIG to allow teuthology to specify collections dir
+ ANSIBLE_CONFIG=$WORKSPACE/teuthology/ansible.cfg ansible-playbook $WORKSPACE/ceph-cm-ansible/tools/prep-fog-capture.yml -e ansible_ssh_user=ubuntu --limit="$host*"
+ remaininghosts=${remaininghosts//$host/}
+ else
+ # This gets noisy
+ set +ex
+ echo "$(date) -- $host is not ready. Sleeping for 2min"
+ sleep 120
+ ((++currentretries))
+ # Retry for 2h
+ funRetry $currentretries 60
+ fi
+ done
+done
+
+set -ex
+
+# Restart dhcpd so servers PXE boot to FOG server
+ssh ubuntu@store01.front.sepia.ceph.com "sudo service dhcpd restart"
+
+# Only pause the queue if needed
+if [ "$PAUSEQUEUE" == "true" ]; then
+ # Get FOG 'Deploy' TaskID
+ fogdeployid=$(curl -f -s -k -H "fog-api-token: ${FOG_API_TOKEN}" -H "fog-user-token: ${FOG_USER_TOKEN}" http://fog.front.sepia.ceph.com/fog/tasktype -d '{"name": "Deploy"}' -X GET | jq -r '.tasktypes[0].id')
+
+ # Check for scheduled deploy tasks
+ deploytasks=$(curl -f -s -k -H "fog-api-token: ${FOG_API_TOKEN}" -H "fog-user-token: ${FOG_USER_TOKEN}" http://fog.front.sepia.ceph.com/fog/task/active -d '{"typeID": "'${fogdeployid}'", "imageID": "'${fogimageid}'"}' -X GET | jq -r '.count')
+
+ # If there are scheduled or active deploy tasks, pause the queue and let them finish.
+ # Capturing a new OS image can interrupt active OS deployments.
+ if [ $deploytasks -gt 0 ]; then
+ for type in $MACHINETYPES; do
+ # Only pause the queue for 1hr just in case anything goes wrong with the Jenkins job.
+ teuthology-queue --pause 3600 --machine_type $type
+ done
+ pausedqueue=true
+ currentretries=0
+ while [ $deploytasks -gt 0 ]; do
+ echo "$(date) -- $deploytasks FOG deploy tasks still queued. Sleeping 10sec"
+ sleep 10
+ deploytasks=$(curl -f -s -k -H "fog-api-token: ${FOG_API_TOKEN}" -H "fog-user-token: ${FOG_USER_TOKEN}" http://fog.front.sepia.ceph.com/fog/task/active -d '{"typeID": "'${fogdeployid}'", "imageID": "'${fogimageid}'"}' -X GET | jq -r '.count')
+ ((++currentretries))
+ # Retry for 1hr
+ funRetry $currentretries 360
+ done
+ fi
+else
+ pausedqueue=false
+fi
+
+# Reboot all hosts so FOG can capture their OSes
+for host in $allhosts; do
+ funPowerCycle $host
+done
+
+# Wait for Capture tasks to finish
+capturetasks=$(curl -f -s -k -H "fog-api-token: ${FOG_API_TOKEN}" -H "fog-user-token: ${FOG_USER_TOKEN}" http://fog.front.sepia.ceph.com/fog/task/active -d '{"typeID": "'${fogcaptureid}'"}' -X GET | jq -r '.count')
+currentretries=0
+while [ $capturetasks -gt 0 ]; do
+ echo "$(date) -- $capturetasks FOG capture tasks still queued. Sleeping 10sec"
+ sleep 10
+ capturetasks=$(curl -f -s -k -H "fog-api-token: ${FOG_API_TOKEN}" -H "fog-user-token: ${FOG_USER_TOKEN}" http://fog.front.sepia.ceph.com/fog/task/active -d '{"typeID": "'${fogcaptureid}'"}' -X GET | jq -r '.count')
+ ((++currentretries))
+ # Retry for 30min
+ funRetry $currentretries 180
+done
+
+# Unpause the queue if we paused it earlier
+if [ "$pausedqueue" = true ]; then
+ for type in $MACHINETYPES; do
+ teuthology-queue --pause 0 --machine_type $type
+ done
+fi
+
+if [ "$use_teuthologylock" = true ]; then
+ # Unlock all machines after all capture images are finished
+ for host in $allhosts; do
+ teuthology-lock --update --status up $host
+ done
+else
+ deactivate
+ rm -rf $WORKSPACE/venv
+fi
--- /dev/null
+#!/bin/bash
+
+set -ex
+
+funPowerCycle () {
+ host=$(echo ${1} | cut -d '.' -f1)
+ powerstatus=$(ipmitool -I lanplus -U inktank -P $SEPIA_IPMI_PASS -H ${host}.ipmi.sepia.ceph.com chassis power status | cut -d ' ' -f4-)
+ if [ "$powerstatus" == "off" ]; then
+ ipmitool -I lanplus -U inktank -P $SEPIA_IPMI_PASS -H ${host}.ipmi.sepia.ceph.com chassis power on
+ else
+ ipmitool -I lanplus -U inktank -P $SEPIA_IPMI_PASS -H ${host}.ipmi.sepia.ceph.com chassis power cycle
+ fi
+}
+
+# Should we use teuthology-lock to lock systems?
+if [ "$DEFINEDHOSTS" == "" ]; then
+ use_teuthologylock=true
+else
+ use_teuthologylock=false
+fi
+
+# Clone or update teuthology
+if [ ! -d teuthology ]; then
+ git clone https://github.com/ceph/teuthology
+ cd teuthology
+else
+ cd teuthology
+ git pull
+fi
+
+# Bootstrap teuthology
+./bootstrap
+
+cd $WORKSPACE
+
+source $WORKSPACE/teuthology/virtualenv/bin/activate
+
+allhosts=$(teuthology-lock --brief -a --status down | grep "Locked to capture FOG image for Jenkins build $BUILD_NUMBER" | cut -d '.' -f1 | tr "\n" " ")
+# Set DHCP server back to FOG
+for machine in $allhosts; do
+ ssh ubuntu@store01.front.sepia.ceph.com "sudo /usr/local/sbin/set-next-server.sh $machine fog"
+done
+
+# Restart dhcpd (for some reason doing this every time we set the next-server in the for loop above, dhcpd would fail to start)
+ssh ubuntu@store01.front.sepia.ceph.com "sudo service dhcpd restart"
+
+# Get FOG 'Capture' TaskID
+fogcaptureid=$(curl -f -s -k -H "fog-api-token: ${FOG_API_TOKEN}" -H "fog-user-token: ${FOG_USER_TOKEN}" http://fog.front.sepia.ceph.com/fog/tasktype -d '{"name": "Capture"}' -X GET | jq -r '.tasktypes[0].id')
+
+# Delete all active Capture tasks
+for task in $(curl -f -s -k -H "fog-api-token: ${FOG_API_TOKEN}" -H "fog-user-token: ${FOG_USER_TOKEN}" http://fog.front.sepia.ceph.com/fog/task/active -d '{"typeID": "'${fogcaptureid}'"}' -X GET | jq -r '.tasks[].id'); do
+ curl -f -s -k -H "fog-api-token: ${FOG_API_TOKEN}" -H "fog-user-token: ${FOG_USER_TOKEN}" http://fog.front.sepia.ceph.com/fog/task/${task} -X DELETE
+done
+
+set +e
+
+# Unpause the queue if we paused it earlier
+# (note: pausedqueue is set by the build script, not this failure script, so
+# this only fires if Jenkins injects it into this step's environment)
+if [ "$pausedqueue" = true ]; then
+ for type in $MACHINETYPES; do
+ teuthology-queue --pause 0 --machine_type $type
+ done
+fi
+
+if [ "$use_teuthologylock" = true ]; then
+ # Unlock all machines after all capture images are finished
+ for host in $allhosts; do
+ teuthology-lock --update --status up $host
+ done
+else
+ deactivate
+ rm -rf $WORKSPACE/venv
+fi
--- /dev/null
+- job:
+ name: sepia-fog-images
+ project-type: freestyle
+ defaults: global
+ concurrent: false
+ display-name: 'Sepia FOG Image Creator'
+ node: teuthology
+ quiet-period: 0
+ block-downstream: false
+ block-upstream: false
+ properties:
+ - build-discarder:
+ days-to-keep: 15
+ num-to-keep: 30
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+
+ # Run every Sunday at noon
+ triggers:
+ - timed: "0 12 * * 0"
+
+ parameters:
+ - string:
+ name: DISTROS
+ default: "ubuntu_20.04 centos_9.stream rhel_9.3"
+ description: "Distro to capture images for: (e.g., 'ubuntu_16.04', 'centos_7.5' or 'ubuntu_16.04 rhel_7.5' for multiple distros)"
+ - string:
+ name: MACHINETYPES
+ default: "smithi"
+ description: "Machine types to capture images for. (e.g., 'smithi' or 'smithi mira' for multiple machine types)"
+ - string:
+ name: TEUTHOLOGYBRANCH
+ default: main
+ description: "Optionally define a different teuthology branch (useful for testing)"
+ - string:
+ name: CMANSIBLEBRANCH
+ default: main
+ description: "Optionally define a different ceph-cm-ansible branch (useful for testing)"
+ - string:
+ name: PAUSEQUEUE
+ default: "true"
+ description: "Should the teuthology queue be paused? Recapturing an existing OS image will cause running reimages to fail without pausing the queue. The queue can remain unpaused when a new distro/version is being captured. Queue is paused by default."
+ - string:
+ name: DEFINEDHOSTS
+ default: ""
+ description: "Define a list of systems to use instead of using teuthology-lock to lock unused systems."
+
+ builders:
+ - shell:
+ !include-raw-verbatim:
+ - ../../build/build
+
+ publishers:
+ - postbuildscript:
+ builders:
+ - role: SLAVE
+ build-on:
+ - FAILURE
+ - ABORTED
+ build-steps:
+ - shell:
+ !include-raw-verbatim:
+ - ../../build/failure
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - username-password-separated:
+ credential-id: sepia-ipmi
+ username: SEPIA_IPMI_USER
+ password: SEPIA_IPMI_PASS
+ - username-password-separated:
+ credential-id: fog
+ username: FOG_USER_TOKEN
+ password: FOG_API_TOKEN
--- /dev/null
+#!/bin/bash
+
+# the following two methods exist in scripts/build_utils.sh
+pkgs=( "ansible" "tox" )
+install_python_packages $TEMPVENV "pkgs[@]"
+
+# run ansible to get this current host to meet our requirements, specifying
+# a local connection and 'localhost' as the host where to execute
+cd "$WORKSPACE/ceph-build/shaman-pull-requests/setup/playbooks"
+$VENV/ansible-playbook -i "localhost," -c local setup.yml
+
+cd "$WORKSPACE/shaman"
+$VENV/tox -rv
--- /dev/null
+- scm:
+ name: shaman
+ scm:
+ - git:
+ url: https://github.com/ceph/shaman
+ branches:
+ - ${{sha1}}
+ refspec: +refs/pull/*:refs/remotes/origin/pr/*
+ browser: auto
+ timeout: 20
+ basedir: "shaman"
+ skip-tag: true
+ wipe-workspace: true
+
+- scm:
+ name: ceph-build
+ scm:
+ - git:
+ url: https://github.com/ceph/ceph-build.git
+ browser-url: https://github.com/ceph/ceph-build
+ timeout: 20
+ skip-tag: true
+ wipe-workspace: false
+ basedir: "ceph-build"
+ branches:
+ - origin/main
+
+
+- job:
+ name: shaman-pull-requests
+ description: Runs tox tests for shaman on each GitHub PR
+ project-type: freestyle
+ node: trusty && small
+ block-downstream: false
+ block-upstream: false
+ defaults: global
+ display-name: 'shaman: Pull Requests'
+ quiet-period: 5
+ retry-count: 3
+
+
+ properties:
+ - build-discarder:
+ days-to-keep: 15
+ num-to-keep: 30
+ artifact-days-to-keep: 15
+ artifact-num-to-keep: 15
+ - github:
+ url: https://github.com/ceph/shaman/
+
+ parameters:
+ - string:
+ name: sha1
+ description: "A pull request ID, like 'origin/pr/72/head'"
+
+ triggers:
+ - github-pull-request:
+ org-list:
+ - ceph
+ only-trigger-phrase: false
+ github-hooks: true
+ permit-all: false
+ auto-close-on-fail: false
+
+ scm:
+ - shaman
+ - ceph-build
+
+ builders:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/build
--- /dev/null
+[ssh_connection]
+pipelining=True
--- /dev/null
+---
+
+- hosts: localhost
+ user: jenkins-build
+  become: true
+
+ tasks:
+ - import_tasks: tasks/postgresql.yml
--- /dev/null
+---
+- name: update apt cache
+ apt:
+ update_cache: yes
+ become: yes
+
+- name: install postgresql requirements
+  become: yes
+  apt:
+    name:
+      - postgresql
+      - postgresql-common
+      - postgresql-contrib
+      - postgresql-server-dev-9.5
+      - python-psycopg2
+    state: present
+  tags:
+    - packages
+
+- name: ensure database service is up
+ service:
+ name: postgresql
+ state: started
+ enabled: yes
+ become: yes
+
+- name: "Build pg_hba.conf file"
+ become: true
+ template:
+ src: pg_hba.conf.j2
+ dest: "/etc/postgresql/9.5/main/pg_hba.conf"
+
+- name: make jenkins-build user
+ postgresql_user:
+ name: "jenkins-build"
+ password: "secret"
+ role_attr_flags: SUPERUSER
+ login_user: postgres
+ become_user: postgres
+ become: yes
+
+- name: restart postgresql
+  service:
+    name: postgresql
+    state: restarted
+  become: yes
--- /dev/null
+# {{ ansible_managed }}
+# Database administrative login by Unix domain socket
+local all postgres peer
+
+# TYPE DATABASE USER ADDRESS METHOD
+
+# "local" is for Unix domain socket connections only
+local all all peer
+# IPv4 local connections:
+host all all 127.0.0.1/32 trust
+# IPv6 local connections:
+host all all ::1/128 trust
--- /dev/null
+#!/bin/bash
+set -e
+
+# shellcheck disable=SC2034
+WORKDIR=$(mktemp -td tox.XXXXXXXXXX)
+
+cat << EOF > ./sync.yml
+docker.io:
+ images-by-semver:
+ nginx: ">= 1.26.0"
+ grafana/grafana: ">= 9.0.0"
+ grafana/loki: "= 3.0.0"
+ grafana/promtail: "= 3.0.0"
+ maxwo/snmp-notifier: "= v1.2.1"
+EOF
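+
+# Note: "images-by-semver" tells skopeo sync to copy every tag whose
+# semantic version matches the constraint, so nginx ">= 1.26.0" picks up
+# 1.26.0, 1.27.2, and so on, while "= 3.0.0" pins a single version.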
+# make sure we pull the last stable image
+podman pull quay.io/skopeo/stable
+podman run --rm --security-opt label=disable -v ./sync.yml:/sync.yml:ro quay.io/skopeo/stable sync --all --src yaml --dest docker /sync.yml "${DEST_REGISTRY}" --dest-username "${DEST_USERNAME}" --dest-password "${DEST_PASSWORD}"
--- /dev/null
+jenkins.ceph.com
--- /dev/null
+- job:
+ name: sync-images
+ id: sync-images
+ node: small && centos9
+ defaults: global
+ display-name: sync-images
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ retry-count: 3
+ properties:
+ - build-discarder:
+ days-to-keep: -1
+ num-to-keep: -1
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+
+ triggers:
+ - timed: '@daily'
+
+ parameters:
+ - string:
+ name: DEST_REGISTRY
+ description: "The destination registry hostname. Eg: quay.io"
+ default: "quay.io/ceph"
+
+ builders:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/build
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - username-password-separated:
+ credential-id: sync-images-from-docker-to-quay
+ username: DEST_USERNAME
+ password: DEST_PASSWORD
\ No newline at end of file
--- /dev/null
+- job:
+ name: tcmu-runner-trigger
+ node: built-in
+ project-type: freestyle
+ defaults: global
+ quiet-period: 5
+ block-downstream: false
+ block-upstream: false
+ properties:
+ - build-discarder:
+ days-to-keep: 1
+ num-to-keep: 10
+ artifact-days-to-keep: -1
+ artifact-num-to-keep: -1
+ - github:
+ url: https://github.com/ceph/tcmu-runner
+ discard-old-builds: true
+
+ triggers:
+ - github
+
+ scm:
+ - git:
+ url: https://github.com/ceph/tcmu-runner.git
+ branches:
+ - 'origin/main*'
+ - 'origin/testing*'
+ - 'origin/wip*'
+ skip-tag: true
+ timeout: 20
+ wipe-workspace: true
+
+ builders:
+ - trigger-builds:
+ - project: 'tcmu-runner'
+ predefined-parameters: |
+ BRANCH=${{GIT_BRANCH}}
+ FORCE=True
--- /dev/null
+#! /usr/bin/bash
+set -ex
+
+PROJECT=tcmu-runner
+BRANCH=$(branch_slash_filter $BRANCH)
+CEPH_BRANCH=$(branch_slash_filter $CEPH_BRANCH)
+
+# Only do actual work when we are an RPM distro
+if test "$DISTRO" != "fedora" -a "$DISTRO" != "centos" -a "$DISTRO" != "rhel"; then
+ exit 0
+fi
+
+# This will set the RELEASE variable
+get_rpm_dist
+
+## Get the desired CEPH_BRANCH/CEPH_SHA1 ceph repo
+# Get .repo file from appropriate shaman build
+REPO_URL="https://shaman.ceph.com/api/repos/ceph/$CEPH_BRANCH/$CEPH_SHA1/$DISTRO/$RELEASE/flavors/default/repo"
+TIME_LIMIT=1200
+INTERVAL=30
+REPO_FOUND=0
+
+# poll shaman for up to 20 minutes
+SECONDS=0
+while [ "$SECONDS" -le "$TIME_LIMIT" ]
+do
+  if curl --fail -L $REPO_URL > $WORKSPACE/shaman.repo; then
+ echo "Ceph repo file has been added from shaman"
+ REPO_FOUND=1
+ break
+ else
+ sleep $INTERVAL
+ fi
+done
+
+if [[ "$REPO_FOUND" -eq 0 ]]; then
+ echo "Ceph lib repo does NOT exist in shaman"
+ exit 1
+fi
+
+# Install the dependencies
+sudo yum install -y mock
+
+## Get some basic information about the system and the repository
+# e.g. `git describe` output "v1.5.4-3-gabcdef" yields VERSION=1.5.4 and REVISION=3.gabcdef
+VERSION="$(git describe --abbrev=0 --tags HEAD | sed -e 's/v//1;' | cut -d - -f 1)"
+REVISION="$(git describe --tags HEAD | sed -e 's/v//1;' | cut -d - -f 2- | sed 's/-/./g' | sed 's/^rc/0./')"
+if [ "$VERSION" = "$REVISION" ]; then
+ REVISION="1"
+fi
+
+# Create dummy dist tarball (gzip-compressed, matching the .tar.gz name)
+tar czf dist/${PROJECT}-${VERSION}.tar.gz \
+ --exclude .git --exclude dist \
+ --transform "s,^,${PROJECT}-${VERSION}/," *
+tar tfv dist/${PROJECT}-${VERSION}.tar.gz
+
+# Update spec version
+sed -i "s/^Version:.*$/Version:\t${VERSION}/g" $WORKSPACE/${PROJECT}.spec
+sed -i "s/^Release:.*$/Release:\t${REVISION}%{?dist}/g" $WORKSPACE/${PROJECT}.spec
+sed -i 's/^[# ]*%define _RC.*$//g' $WORKSPACE/${PROJECT}.spec
+# for debugging
+cat $WORKSPACE/${PROJECT}.spec
+
+## Create the source rpm
+echo "Building SRPM"
+rpmbuild \
+ --define "_sourcedir $WORKSPACE/dist" \
+ --define "_specdir $WORKSPACE/dist" \
+ --define "_builddir $WORKSPACE/dist" \
+ --define "_srcrpmdir $WORKSPACE/dist/SRPMS" \
+ --define "_rpmdir $WORKSPACE/dist/RPMS" \
+ --nodeps -bs $WORKSPACE/${PROJECT}.spec
+SRPM=$(readlink -f $WORKSPACE/dist/SRPMS/*.src.rpm)
+
+DISTRO_ARCH=${ARCH}
+if [[ "${ARCH}" = "arm64" ]] ; then
+ DISTRO_ARCH="aarch64"
+fi
+
+# add shaman repo file to mock config
+cat /etc/mock/${MOCK_TARGET}-${RELEASE}-${DISTRO_ARCH}.cfg > tcmu-runner.cfg
+echo "" >> tcmu-runner.cfg
+echo "config_opts['yum.conf'] += \"\"\"" >> tcmu-runner.cfg
+cat $WORKSPACE/shaman.repo >> tcmu-runner.cfg
+echo "\"\"\"" >> tcmu-runner.cfg
+# for debugging
+cat tcmu-runner.cfg
+
+## Build the binaries with mock
+# disable 'glfs' because centos9 packages aren't available
+echo "Building RPMs"
+sudo mock --verbose --without glfs -r tcmu-runner.cfg --scrub=all
+sudo mock --verbose --without glfs -r tcmu-runner.cfg --resultdir=$WORKSPACE/dist/RPMS/ ${SRPM} || ( tail -n +1 $WORKSPACE/dist/RPMS/{root,build}.log && exit 1 )
+
+## Upload the created RPMs to chacra
+chacra_endpoint="tcmu-runner/${BRANCH}/${GIT_COMMIT}/${DISTRO}/${RELEASE}"
+
+[ "$FORCE" = true ] && chacra_flags="--force" || chacra_flags=""
+
+# push binaries to chacra
+find $WORKSPACE/dist/RPMS/ | egrep "\.$DISTRO_ARCH\.rpm" | $VENV/chacractl binary ${chacra_flags} create ${chacra_endpoint}/$DISTRO_ARCH/
+PACKAGE_MANAGER_VERSION=$(rpm --queryformat '%{VERSION}-%{RELEASE}\n' -qp $(find $WORKSPACE/dist/RPMS/ | egrep "\.$DISTRO_ARCH\.rpm" | head -1))
+
+# write json file with build info
+cat > $WORKSPACE/repo-extra.json << EOF
+{
+ "version":"$VERSION",
+ "package_manager_version":"$PACKAGE_MANAGER_VERSION",
+ "build_url":"$BUILD_URL",
+ "root_build_cause":"$ROOT_BUILD_CAUSE",
+ "node_name":"$NODE_NAME",
+ "job_name":"$JOB_NAME"
+}
+EOF
+# post the build info json to chacra's repo-extra endpoint
+curl --fail -X POST -H "Content-Type:application/json" --data "@$WORKSPACE/repo-extra.json" -u $CHACRACTL_USER:$CHACRACTL_KEY ${chacra_url}repos/${chacra_endpoint}/extra/
+
+# start repo creation
+$VENV/chacractl repo update ${chacra_endpoint}
+
+echo "Check the status of the repo at: https://shaman.ceph.com/api/repos/${chacra_endpoint}"
--- /dev/null
+#!/usr/bin/env bash
+#
+# Ceph distributed storage system
+#
+# Copyright (C) 2016 Red Hat <contact@redhat.com>
+#
+# Author: Boris Ranto <branto@redhat.com>
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+set -ex
+
+# Make sure we execute at the top level directory before we do anything
+cd $WORKSPACE
+
+# This will set the DISTRO and MOCK_TARGET variables
+get_distro_and_target
+
+# Perform a clean-up
+git clean -fxd
+
+# Make sure the dist directory is clean
+rm -rf dist
+mkdir -p dist
+
+# Print some basic system info
+HOST=$(hostname --short)
+echo "Building on ${HOST} with the following env"
+echo "*****"
+env
+echo "*****"
+
+export LC_ALL=C # the following is vulnerable to i18n
+
+pkgs=( "chacractl>=0.0.21" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+# ask shaman which chacra instance to use
+chacra_url=`curl -f -u $SHAMAN_API_USER:$SHAMAN_API_KEY https://shaman.ceph.com/api/nodes/next/`
+# create the .chacractl config file using global variables
+make_chacractl_config $chacra_url
--- /dev/null
+- job:
+ name: tcmu-runner
+ project-type: matrix
+ defaults: global
+ display-name: 'tcmu-runner'
+ block-downstream: false
+ block-upstream: false
+ concurrent: true
+ parameters:
+ - string:
+ name: BRANCH
+ description: "The git branch (or tag) to build"
+ default: main
+
+ - string:
+ name: CEPH_SHA1
+ description: "The SHA1 of the ceph branch"
+ default: "latest"
+
+ - string:
+ name: CEPH_BRANCH
+ description: "The Ceph branch whose repo file to use for librbd"
+ default: main
+
+ - string:
+ name: DISTROS
+ description: "A list of distros to build for. Available options are: centos7, centos8, centos9"
+ default: "centos7 centos8 centos9"
+
+ - string:
+ name: ARCHS
+ description: "A list of architectures to build for. Available options are: x86_64 arm64"
+ default: "x86_64 arm64"
+
+ - bool:
+ name: FORCE
+ description: "
+If this is unchecked, nothing is built or pushed when the binaries already exist in chacra. This is the default.
+
+If this is checked, the binaries will be built and pushed to chacra even if they already exist there."
+
+ - string:
+ name: BUILD_VIRTUALENV
+ description: "Base parent path for virtualenv locations, set to avoid issues with extremely long paths that are incompatible with tools like pip. Defaults to '/tmp/' (note the trailing slash, which is required)."
+ default: "/tmp/"
+
+ execution-strategy:
+ combination-filter: |
+ DIST == AVAILABLE_DIST && ARCH == AVAILABLE_ARCH &&
+ (ARCH == "x86_64" || (ARCH == "arm64" && ["centos8", "centos9"].contains(DIST)))
+ axes:
+ - axis:
+ type: label-expression
+ name: MACHINE_SIZE
+ values:
+ - huge
+ - axis:
+ type: label-expression
+ name: AVAILABLE_ARCH
+ values:
+ - x86_64
+ - arm64
+ - axis:
+ type: label-expression
+ name: AVAILABLE_DIST
+ values:
+ - centos7
+ - centos8
+ - centos9
+ - axis:
+ type: dynamic
+ name: DIST
+ values:
+ - DISTROS
+ - axis:
+ type: dynamic
+ name: ARCH
+ values:
+ - ARCHS
+
+ scm:
+ - git:
+ url: git@github.com:ceph/tcmu-runner.git
+ # Use the SSH key attached to the ceph-jenkins GitHub account.
+ credentials-id: 'jenkins-build'
+ branches:
+ - $BRANCH
+ skip-tag: true
+ wipe-workspace: true
+
+ builders:
+ - shell: |
+ echo "Cleaning up top-level workarea (shared among workspaces)"
+ rm -rf dist
+ rm -rf venv
+ rm -rf release
+ # rpm build scripts
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/setup
+ - ../../build/build_rpm
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - text:
+ credential-id: chacractl-key
+ variable: CHACRACTL_KEY
+ - credentials-binding:
+ - text:
+ credential-id: shaman-api-key
+ variable: SHAMAN_API_KEY
+
+ publishers:
+ - postbuildscript:
+ builders:
+ - role: SLAVE
+ build-on:
+ - SUCCESS
+ - UNSTABLE
+ - FAILURE
+ - ABORTED
+ - NOT_BUILT
+ build-steps:
+ - shell: "sudo rm -f /etc/apt/sources.list.d/shaman.list /etc/yum.repos.d/shaman.repo"
--- /dev/null
+#!/usr/bin/env bash
+set -o errexit
+set -o pipefail
+
+if [[ -z $WINDOWS_SSH_USER ]]; then echo "ERROR: The WINDOWS_SSH_USER env variable is not set"; exit 1; fi
+if [[ -z $WINDOWS_VM_IP ]]; then echo "ERROR: The WINDOWS_VM_IP env variable is not set"; exit 1; fi
+
+export SSH_USER=$WINDOWS_SSH_USER
+export SSH_ADDRESS=$WINDOWS_VM_IP
+
+BUILD_CONFIGURATION=${BUILD_CONFIGURATION:-"Release"}
+
+
+#
+# Install requirements (if needed)
+#
+if ! which zip >/dev/null; then
+ sudo apt-get update -o Acquire::Languages=none -o Acquire::Translation=none || true
+ sudo apt-get install -y zip
+fi
+
+#
+# Upload wnbd repo to the Windows VM
+#
+scp_upload $WORKSPACE/wnbd /workspace/wnbd
+
+#
+# Build the Visual Studio project
+#
+BUILD_CMD="MSBuild.exe %SystemDrive%\\workspace\\wnbd\\vstudio\\wnbd.sln /p:Configuration=${BUILD_CONFIGURATION}"
+SSH_TIMEOUT=30m ssh_exec "\"%ProgramFiles(x86)%\\Microsoft Visual Studio\\2019\\BuildTools\\VC\\Auxiliary\\Build\\vcvarsall.bat\" x86_amd64 & ${BUILD_CMD}"
+
+#
+# Install the driver in the testing Windows VM
+#
+ssh_exec powershell.exe "Import-Certificate -FilePath /workspace/wnbd/vstudio/x64/${BUILD_CONFIGURATION}/wnbd.cer -Cert Cert:\LocalMachine\Root"
+ssh_exec powershell.exe "Import-Certificate -FilePath /workspace/wnbd/vstudio/x64/${BUILD_CONFIGURATION}/wnbd.cer -Cert Cert:\LocalMachine\TrustedPublisher"
+ssh_exec /workspace/wnbd/vstudio/x64/${BUILD_CONFIGURATION}/wnbd-client.exe uninstall-driver
+ssh_exec /workspace/wnbd/vstudio/x64/${BUILD_CONFIGURATION}/wnbd-client.exe install-driver /workspace/wnbd/vstudio/x64/${BUILD_CONFIGURATION}/driver/wnbd.inf
+
+#
+# Download the build artifacts
+#
+mkdir -p $WORKSPACE/build/wnbd/driver
+scp_download /workspace/wnbd/vstudio/x64/${BUILD_CONFIGURATION}/driver/* $WORKSPACE/build/wnbd/driver/
+scp_download /workspace/wnbd/vstudio/x64/${BUILD_CONFIGURATION}/wnbd.cer $WORKSPACE/build/wnbd/driver/
+
+mkdir -p $WORKSPACE/build/wnbd/binaries
+scp_download /workspace/wnbd/vstudio/x64/${BUILD_CONFIGURATION}/libwnbd.dll $WORKSPACE/build/wnbd/binaries/
+scp_download /workspace/wnbd/vstudio/x64/${BUILD_CONFIGURATION}/wnbd-client.exe $WORKSPACE/build/wnbd/binaries/
+scp_download /workspace/wnbd/vstudio/wnbdevents.xml $WORKSPACE/build/wnbd/binaries/
+
+mkdir -p $WORKSPACE/build/wnbd/symbols
+scp_download /workspace/wnbd/vstudio/x64/${BUILD_CONFIGURATION}/pdb/driver/* $WORKSPACE/build/wnbd/symbols/
+scp_download /workspace/wnbd/vstudio/x64/${BUILD_CONFIGURATION}/pdb/libwnbd/* $WORKSPACE/build/wnbd/symbols/
+scp_download /workspace/wnbd/vstudio/x64/${BUILD_CONFIGURATION}/pdb/wnbd-client/* $WORKSPACE/build/wnbd/symbols/
+
+#
+# Package the build artifacts into a zip file
+#
+cd $WORKSPACE/build
+zip -r $WORKSPACE/wnbd.zip wnbd
+
+#
+# Upload the zip file to Chacra
+#
+if [ "$THROWAWAY" = false ]; then
+ # push binaries to chacra
+ chacra_binary="$VENV/chacractl binary --force"
+
+ ls $WORKSPACE/wnbd.zip | $chacra_binary create ${chacra_binary_endpoint}
+
+ vers=$(ssh_exec /workspace/wnbd/vstudio/x64/${BUILD_CONFIGURATION}/wnbd-client.exe -v | grep wnbd-client.exe | cut -d ':' -f2 | tr -d '[:space:]')
+
+ # write json file with build info
+ cat > $WORKSPACE/repo-extra.json << EOF
+{
+ "version":"$vers",
+ "package_manager_version":"",
+ "build_url":"$BUILD_URL",
+ "root_build_cause":"$ROOT_BUILD_CAUSE",
+ "node_name":"$NODE_NAME",
+ "job_name":"$JOB_NAME"
+}
+EOF
+ # post the build info json to chacra's repo-extra endpoint
+ curl --fail -X POST -H "Content-Type:application/json" --data "@$WORKSPACE/repo-extra.json" -u $CHACRACTL_USER:$CHACRACTL_KEY ${chacra_url}repos/${chacra_repo_endpoint}/extra/
+ # start repo creation
+ $VENV/chacractl repo update ${chacra_repo_endpoint}
+
+ echo "Check the status of the repo at: https://shaman.ceph.com/api/repos/${chacra_repo_endpoint}/"
+fi
+
+# update shaman with the completed build status
+update_build_status "completed" "wnbd" $DISTRO $DISTRO_VERSION $ARCH
--- /dev/null
+#!/usr/bin/env bash
+set -o errexit
+set -o pipefail
+
+FLAVOR="default"
+
+BRANCH=`branch_slash_filter $BRANCH`
+
+# update shaman with the failed build status
+failed_build_status "wnbd" $NORMAL_DISTRO $NORMAL_DISTRO_VERSION $NORMAL_ARCH
--- /dev/null
+#!/usr/bin/env bash
+set -o errexit
+set -o pipefail
+
+DISTRO="windows"
+DISTRO_VERSION="1809"
+ARCH="x86_64"
+FLAVOR="default"
+
+BRANCH=`branch_slash_filter $BRANCH`
+SHA1="$GIT_COMMIT"
+
+pkgs=( "chacractl>=0.0.21" )
+TEMPVENV=$(create_venv_dir)
+VENV=${TEMPVENV}/bin
+install_python_packages $TEMPVENV "pkgs[@]"
+
+# ask shaman which chacra instance to use
+chacra_url=`curl -f -u $SHAMAN_API_USER:$SHAMAN_API_KEY https://shaman.ceph.com/api/nodes/next/`
+# create the .chacractl config file using global variables
+make_chacractl_config $chacra_url
+
+chacra_endpoint="wnbd/${BRANCH}/${SHA1}/${DISTRO}/${DISTRO_VERSION}"
+chacra_binary_endpoint="${chacra_endpoint}/${ARCH}/flavors/${FLAVOR}"
+chacra_repo_endpoint="${chacra_endpoint}/flavors/${FLAVOR}"
+chacra_check_url="${chacra_binary_endpoint}/wnbd.zip"
+
+# create build status in shaman
+update_build_status "started" "wnbd" $DISTRO $DISTRO_VERSION $ARCH
--- /dev/null
+- job:
+ name: wnbd-build
+ description: 'Builds the Windows Network Block Device (WNBD) project.'
+ node: amd64 && focal && libvirt
+ project-type: freestyle
+ defaults: global
+ concurrent: true
+ display-name: 'wnbd-build'
+ properties:
+ - build-discarder:
+ days-to-keep: 30
+ num-to-keep: 30
+ artifact-days-to-keep: 30
+ artifact-num-to-keep: 30
+
+ parameters:
+ - string:
+ name: BRANCH
+ description: "The git branch (or tag) to build"
+ default: main
+
+ - bool:
+ name: THROWAWAY
+ description: |
+ Default: False. When True it will not POST binaries to chacra. Artifacts will not be around for long. Useful to test builds.
+ default: false
+
+ triggers:
+ - timed: "H 0 * * *"
+
+ scm:
+ - git:
+ url: https://github.com/ceph/wnbd.git
+ branches:
+ - $BRANCH
+ timeout: 20
+ wipe-workspace: true
+ basedir: wnbd
+
+ builders:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/setup
+ - ../../../scripts/ceph-windows/setup_libvirt
+ - ../../../scripts/ceph-windows/setup_libvirt_windows_vm
+ - ../../build/build
+
+ wrappers:
+ - inject-passwords:
+ global: true
+ mask-password-params: true
+ - credentials-binding:
+ - file:
+ credential-id: ceph_win_ci_private_key
+ variable: CEPH_WIN_CI_KEY
+ - text:
+ credential-id: chacractl-key
+ variable: CHACRACTL_KEY
+ - text:
+ credential-id: shaman-api-key
+ variable: SHAMAN_API_KEY
+
+ publishers:
+ - postbuildscript:
+ builders:
+ - role: SLAVE
+ build-on:
+ - SUCCESS
+ - UNSTABLE
+ - FAILURE
+ - ABORTED
+ build-steps:
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../../scripts/ceph-windows/cleanup
+
+ - postbuildscript:
+ builders:
+ - role: SLAVE
+ build-on:
+ - FAILURE
+ - ABORTED
+ build-steps:
+ - inject:
+ properties-file: ${{WORKSPACE}}/build_info
+
+ - shell:
+ !include-raw-verbatim:
+ - ../../../scripts/build_utils.sh
+ - ../../build/failure