Could you please update flatpak-indexer to the newest code? That is, set `flatpak_indexer_git_ref` in roles/openshift-apps/flatpak-indexer/vars/{staging,production}.yml to:

4b0e63e509fc216c95550f67bcda26591f18e10c
The indexer is currently not indexing F36 Flatpaks, which prevents people from testing them; once they hit production, it will also prevent people from updating to them. So it would be good to get this done over the next few days - by this Friday, 2022-05-27, at the latest.
cc: @tpopela
Metadata Update from @zlopez:
- Issue priority set to: Waiting on Assignee (was: Needs Review)
- Issue tagged with: high-gain, medium-trouble, ops
Update:
I've now pushed a new version: 4a85ffd4c5da445de02189b72f4d5af67539bf8a that downloads release information dynamically from Bodhi rather than requiring updates with the Fedora release cycle. Please use that commit ID rather than the one listed above.
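For context, the dynamic lookup amounts to something like the following - a minimal sketch against Bodhi's public REST API, not the actual flatpak-indexer code (the exact fields consumed and any filtering the indexer does are assumptions here):

```python
# Sketch only: fetch Fedora release info from Bodhi at runtime instead of
# hardcoding it per release cycle. Assumes Bodhi's paginated /releases/
# endpoint, whose list responses include "releases" and "pages" fields.
import requests


def fetch_bodhi_releases():
    releases = []
    page = 1
    while True:
        resp = requests.get(
            "https://bodhi.fedoraproject.org/releases/",
            params={"page": page},
            headers={"Accept": "application/json"},
            timeout=30,
        )
        resp.raise_for_status()
        data = resp.json()
        releases.extend(data["releases"])
        if page >= data["pages"]:
            break
        page += 1
    return releases


# e.g. keep only releases still getting updates (field names per Bodhi's API):
# [r["name"] for r in fetch_bodhi_releases() if r["state"] in ("pending", "current")]
```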
Can we at this same time move this app over to our openshift 4 cluster?
It should be pretty transparent... I can do the proxy changes and such.
I think it could just regenerate the data if we move it? Or do we need to come up with some way to copy the storage?
I see no problem with moving it over to the openshift 4 cluster, if that doesn't push things back too much.
Since few new Flatpak versions have been built recently, regenerating the tardiffs won't take too long - maybe an hour or so. So just letting the new instance regenerate data on empty volumes, and then cutting the proxy over when it's done, should be fine.
ok. I redeployed that hash over in our ocp4 staging cluster. It seems to have hit some test issues.
https://console-openshift-console.apps.ocp.stg.fedoraproject.org is the console. You should be able to login with a valid staging kerberos ticket.
... FAIL Required test coverage of 100% not reached. Total coverage: 13.98%
=========================== short test summary info ============================
ERROR tests/test_bodhi_query.py - TypeError: 'type' object is not subscriptable
ERROR tests/test_cleaner.py - TypeError: 'type' object is not subscriptable
ERROR tests/test_cli.py - TypeError: 'type' object is not subscriptable
ERROR tests/test_delta_generator.py - TypeError: 'type' object is not subscri...
ERROR tests/test_differ.py - TypeError: 'type' object is not subscriptable
ERROR tests/test_fedora_updater.py - TypeError: 'type' object is not subscrip...
ERROR tests/test_indexer.py - TypeError: 'type' object is not subscriptable
ERROR tests/test_json_model.py - TypeError: 'type' object is not subscriptable
ERROR tests/test_koji_query.py - TypeError: 'type' object is not subscriptable
ERROR tests/test_models.py - TypeError: 'type' object is not subscriptable
ERROR tests/test_pyxis_updater.py - TypeError: 'type' object is not subscript...
!!!!!!!!!!!!!!!!!!! Interrupted: 11 errors during collection !!!!!!!!!!!!!!!!!!!
============================== 11 errors in 1.50s ==============================
+ '[' 2 == 0 ']'
+ failed=' pytest'
+ flake8 flatpak_indexer setup.py tests tools
+ '[' 0 == 0 ']'
+ set -e +x
FAILED: pytest
error: build error: error building at STEP "RUN tools/test.sh": error while running runtime: exit status 1
Please let me know if you have any access issues or need me to update anything for you.
Oops, didn't notice that a Python 3.9 dependency had snuck in. https://pagure.io/fedora-infra/ansible/pull-request/1080
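For anyone hitting this later: that TypeError is the signature of PEP 585 built-in generics (e.g. `dict[str, int]`), which need Python 3.9 or later; on an older interpreter, every module using them fails at import time, which is why all the test files errored during collection. An illustration, not code from flatpak-indexer:

```python
# On Python 3.8 and earlier, PEP 585 generics fail as soon as the module
# is imported:
#
#   >>> dict[str, int]
#   TypeError: 'type' object is not subscriptable
#
# The pre-3.9-compatible spelling goes through the typing module instead:
from typing import Dict, List


def index_by_name(items: List[dict]) -> Dict[str, dict]:
    # Hypothetical helper, just to show the compatible annotation style.
    return {item["name"]: item for item in items}
```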
ok. I pushed that and moved the NFS storage volumes over to the other cluster. Now I am getting:
---> Running application from script (app.sh) ...
Traceback (most recent call last):
  File "/opt/app-root/lib64/python3.9/site-packages/flatpak_indexer/fedora_monitor.py", line 332, in _run
    self._wait_for_messages()
  File "/opt/app-root/lib64/python3.9/site-packages/flatpak_indexer/fedora_monitor.py", line 237, in _wait_for_messages
    ssl_context = ssl.create_default_context(cafile=os.path.join(cert_dir, "cacert.pem"))
  File "/usr/lib64/python3.9/ssl.py", line 745, in create_default_context
    context.load_verify_locations(cafile, capath, cadata)
FileNotFoundError: [Errno 2] No such file or directory

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/app-root/bin/flatpak-indexer", line 33, in <module>
    sys.exit(load_entry_point('flatpak-indexer==0.1', 'console_scripts', 'flatpak-indexer')())
  File "/opt/app-root/lib64/python3.9/site-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "/opt/app-root/lib64/python3.9/site-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "/opt/app-root/lib64/python3.9/site-packages/click/core.py", line 1657, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/opt/app-root/lib64/python3.9/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/opt/app-root/lib64/python3.9/site-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "/opt/app-root/lib64/python3.9/site-packages/click/decorators.py", line 26, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "/opt/app-root/lib64/python3.9/site-packages/flatpak_indexer/cli.py", line 47, in daemon
    updater.start()
  File "/opt/app-root/lib64/python3.9/site-packages/flatpak_indexer/datasource/fedora/updater.py", line 51, in start
    self.change_monitor.start()
  File "/opt/app-root/lib64/python3.9/site-packages/flatpak_indexer/fedora_monitor.py", line 78, in start
    self._maybe_reraise_failure("Failed to start connection to fedora-messaging")
  File "/opt/app-root/lib64/python3.9/site-packages/flatpak_indexer/fedora_monitor.py", line 180, in _maybe_reraise_failure
    raise RuntimeError(msg) from self.failure
RuntimeError: Failed to start connection to fedora-messaging
Is that a missing cacert, or something on our end?
Problem with setup.py not being updated for moving things around - it didn't get caught by the tests, since they don't really connect to fedora-messaging. I'll push a fix soon and give you a new commit hash.
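For the record, the general failure mode looks like this - a hedged sketch, not flatpak-indexer's actual setup.py (the package layout and file path below are made up): non-Python data files only get installed if the package metadata still points at their location after a module move.

```python
# Sketch of the general pitfall, not flatpak-indexer's real setup.py:
# data files like cacert.pem are only installed if package_data matches
# where the code now expects them, so moving a module can silently drop
# them from the built package.
from setuptools import setup, find_packages

setup(
    name="flatpak-indexer",
    version="0.1",
    packages=find_packages(),
    package_data={
        # Hypothetical path; it must match the directory the code joins
        # with "cacert.pem" at runtime.
        "flatpak_indexer": ["certs/cacert.pem"],
    },
)
```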
Commit with fix for cacert.pem failure: bc155e0198fc9241b4c5b8dd418c3ea687ea986c
Cool. It's indexing away now.
Let me know if it looks ok later and we can do prod whenever you like.
I took a look, and it seems OK to me. I think we can proceed to do prod at your convenience, and once that's confirmed to work from clients, clean up the old openshift-3 deployments.
(I inspected the logs and generated output, but didn't actually try to set up a test client installation; I don't know if we're exporting the results from the OpenShift 4 staging cluster appropriately for that.)
ok. I was going to go to prod today.
But is bc155e0198fc9241b4c5b8dd418c3ea687ea986c the right commit for prod?
I don't think that we will hear back from Owen on this for some time, but according to his last comment it's indeed the right commit.
Thanks. I'll see about rolling it out to prod later today.
This is now done and from all I can tell working fine.
Please do test and let me know if you see any issues...
Metadata Update from @kevin:
- Issue close_status updated to: Fixed
- Issue status updated to: Closed (was: Open)
Yes, it's working as expected (users are getting the Fedora 36 Flatpak updates). Thank you Owen and Kevin!