Uploads dom0 RPM package for securedrop-workstation template #251
Conversation
Adds gitignore directives to avoid committing RPM packages. Builds a local container for running `createrepo_c`, which generates the RPM repository structure. Also runs `rpm -Kv <rpm>` to validate signatures on the RPM packages. Excludes RPM packages when running `make clone` in dom0, because we don't want to wait for the tar action on a large (~700MB) file every time. Pulls in external dependencies via pipenv, specifically the `awscli` lib, which provides the `aws s3 sync` command we need to upload the repo contents.
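A condensed sketch of that publish pipeline (the `container_run` helper is from this PR; the bucket name is a placeholder):

```bash
# Validate signatures on each RPM before publishing.
for rpm in *.rpm; do
    rpm -Kv "$rpm"
done

# Generate the RPM repository metadata (repodata/) via the local container.
container_run createrepo_c .

# Push the repo dirtree to S3, using the pipenv-installed awscli.
pipenv run aws --profile sdpackager s3 sync . "s3://<bucket>/dom0-rpm-repo/"
```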
Configures the RPM repository in dom0, which means also configuring it in sys-firewall (the default UpdateVM for dom0). Both the pubkey and the repo config itself are populated dynamically in sys-firewall. These changes do not persist over reboots, which is a problem.
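What "populated dynamically" amounts to, roughly, is pushing both files from dom0 into the running sys-firewall. A sketch, not the literal provisioning code (the .repo filename here is a guess):

```bash
# Run from dom0. NOTE: sys-firewall's root filesystem comes from its
# template, so these writes are exactly what fails to persist over reboots.
qvm-run --pass-io sys-firewall \
    'sudo tee /etc/pki/rpm-gpg/RPM-GPG-KEY-securedrop-workstation-test' \
    < RPM-GPG-KEY-securedrop-workstation-test

qvm-run --pass-io sys-firewall \
    'sudo tee /etc/yum.repos.d/securedrop-workstation-dom0.repo' \
    < securedrop-workstation-dom0.repo
```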
Provides developer-oriented documentation for uploading the signed RPM packages. The nitty-gritty details of obtaining valid AWS credentials are glossed over, since the workflow is "talk to ops team," but otherwise, the process is rather on-the-rails (condensed in the sketch below):

1. Build the RPM (already documented)
2. Sign the RPM (already documented)
3. Upload the signed RPM (these docs are new)

We'll likely revise these workflows in the near future, but for now, we'll continue to work from the docs to build familiarity with the fundamental actions.
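The three steps, condensed (RPM paths are illustrative, and `rpmsign` assumes a `%_gpg_name` configured in `~/.rpmmacros`, per the signing docs):

```bash
make template                                                   # 1. build the RPM
rpmsign --addsign qubes-template-securedrop-workstation-*.rpm   # 2. sign it
make publish-rpm                                                # 3. upload signed RPM + repo metadata
```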
  5/tfsDr4DGHSz7ws+5M6Zbk6oNJEwQZ4cR+81qCfXE5X5LW1KlAL8wDl7dfS
  =fYUi
- -----END PGP PUBLIC KEY BLOCK-----
\ No newline at end of file
+ -----END PGP PUBLIC KEY BLOCK-----
Adding the trailing newline was required; otherwise dnf balked, saying the pubkey was invalid.
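If you hit the same error, a quick way to check whether the key file ends with a newline:

```bash
# Prints the final byte of the key file; dnf wants it to be \n.
tail -c1 RPM-GPG-KEY-securedrop-workstation-test | od -c
```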
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-securedrop-workstation-test
enabled=1
baseurl=https://dev-bin.ops.securedrop.org/dom0-rpm-repo/
Hey @conorsch - you probably want an override variable here; we don't want to target dev-bin.ops.securedrop.org in all scenarios, do we?
Right now, we only need dev support, but you're right that we'll eventually want to flip this to prod, and preserve the ability for developers to override. Any thoughts on how to do that cleanly in Salt? We're not currently using any vars-based configuration, as we do heavily over in SecureDrop core (https://github.com/freedomofpress/securedrop/).
Yea @conorsch - look into pillars here. Variable interpolation uses the Jinja syntax, and you can inject complex Jinja logic wherever you like (one of the pros/cons of using Salt vs. Ansible).
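For the record, a minimal pillar sketch (file and key names hypothetical):

```yaml
# pillar, e.g. sd-workstation-config.sls (hypothetical):
dom0_rpm_repo_baseurl: https://dev-bin.ops.securedrop.org/dom0-rpm-repo/
```

The Jinja-templated .repo file could then fall back to the dev URL when no pillar override is set:

```
baseurl={{ salt['pillar.get']('dom0_rpm_repo_baseurl', 'https://dev-bin.ops.securedrop.org/dom0-rpm-repo/') }}
```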
container_run createrepo_c .

# Push created repo dirtree to S3
aws --profile sdpackager s3 sync \
How do you feel about making explicit STS calls (AssumeRole) here?
The hardcoded sdpackager profile meant I had to figure out how to construct my config/credentials files to suit the script, which did mean some more manual fumbling.
Great feedback, let's aim to clean this up next time another team member runs through the process. Not opposed to adding a CLI flag to the existing script, but we may want to migrate to a more robust Python script in the near future.
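For posterity, an explicit AssumeRole variant could look roughly like this (role ARN, session name, and bucket are all placeholders):

```bash
# Exchange long-lived credentials for a short-lived STS session, then sync.
creds=$(aws sts assume-role \
    --role-arn arn:aws:iam::111111111111:role/sd-packager \
    --role-session-name rpm-repo-upload \
    --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
    --output text)
read -r AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN <<< "$creds"
export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
aws s3 sync . "s3://<bucket>/dom0-rpm-repo/"
```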
I've made it through the test plan, and was able to upload the RPM.
rmol left a comment
Worked for me. I've tried to clarify a few instructions in the README that I tripped over. Please review to make sure that they actually clarify. 😉
it might be worth creating another template derived from
`fedora-29`, into which you can install those extras, and basing
the builder VM on that, or just using a StandaloneVM to save time
and repetition.
Personally I don't bother to install docker inside the `sd-template-builder` VM; I simply `qvm-move <rpm>` the artifact back to my `sd-dev` environment for upload. Either way, your docs are certainly clearer than what we've had. Let's continue to discuss the optimal workflows here, and improve the docs as we go.
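For anyone following along, that alternative flow is roughly (RPM path illustrative):

```bash
# Inside sd-template-builder, after building:
make template
qvm-move qubes-template-securedrop-workstation-*.rpm
# dom0 prompts for a destination qube; pick sd-dev and publish from there.
```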
Thanks for the detailed review, @rmol! Great docs improvements. We'll likely be iterating on both the docs and the upload functionality in the coming weeks, as we use the workflow more.
Provides scripts for uploading the RPM generated by `make template` to an S3-backed RPM repo. Uses a local container to generate the repo metadata locally, then uploads to S3 (assumes valid AWS credentials).

Note that this implementation clobbers: whatever local repo is generated, that'll be pushed to the remote, with no regard for state maintenance, meaning prior versions of packages will no longer be available. That's fine for our near-term needs with testing; just be aware that changes you push to the remote are not currently version controlled, and are destructive. Stateful handling can be added as part of #157, once we resolve #250.
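Concretely, the clobbering happens because the metadata only reflects local state (bucket is a placeholder):

```bash
# repodata/ is rebuilt from whatever RPMs exist locally...
container_run createrepo_c .
# ...and the sync then overwrites the remote metadata. Older RPMs may
# linger in the bucket, but dnf only sees what the new repodata lists.
aws --profile sdpackager s3 sync . "s3://<bucket>/dom0-rpm-repo/"
```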
Testing
Does the RPM repo work?

1. Run `make all`.
2. In dom0, run `sudo qubes-dom0-update --action=search qubes-template-securedrop-workstation` and confirm the package is found.
3. Run `sudo qubes-dom0-update qubes-template-securedrop-workstation` to install the `securedrop-workstation` [sic; the Salt-provisioned template is still called `sd-workstation`] template. Confirm you can log into the VM and update it.
4. Clean up with `qvm-remove <vm_name>` and `sudo dnf remove qubes-template-securedrop-workstation`.

Can you upload packages to the repo?

1. Create an `sd-template-builder` AppVM, based on fedora-29, and run `make template` inside that VM to generate an RPM.
2. Place the signed RPM in `rpm-repo/` in your dev machine (e.g. `sd-dev`).
3. Run `make publish-rpm`. The script will build a local container, verify the signatures on the RPM, generate the RPM repo metadata, then upload both the signed RPM and the repo metadata to a remote S3 bucket.