After the switch to the new indexer last week, I've built two batches of flatpaks and they've both been pushed to testing by bodhi. flatpak cli however doesn't see them and when I look at the newly downloaded /var/lib/flatpak/oci/fedora-testing.index.gz, the new builds don't appear there at all.
I first built https://bodhi.fedoraproject.org/updates/FEDORA-FLATPAK-2021-ca0a978b9f (18 builds, it's the subset that's pre-installed on Silverblue) and it got pushed to testing on Saturday. I'll note that there was a push failure and mboddu resumed the push, which then succeeded.
After that, I did another bodhi update on Sunday, this time with just one package to see if that would make the first update appear: https://bodhi.fedoraproject.org/updates/FEDORA-FLATPAK-2021-3f353c1b90, but that didn't seem to change anything. This bodhi push seems to have succeeded without needing releng to resume it.
If you search for e.g. org.gnome.Calendar (first batch) or org.gnome.Extensions (second batch) in fedora-testing.index.gz, neither of them is there. The build dates for both are listed as being in 2020, so only the old builds are indexed.
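For reference, checking whether an app ID appears in the downloaded index just means decompressing it and searching the JSON text. A minimal sketch (the path and IDs are from the report above; the helper name is my own):

```python
import gzip

def index_contains(path, app_id):
    """Return True if app_id appears anywhere in the gzipped JSON index."""
    with gzip.open(path, "rt", encoding="utf-8") as f:
        return app_id in f.read()

# Example usage, with the path from the report:
# index_contains("/var/lib/flatpak/oci/fedora-testing.index.gz",
#                "org.gnome.Calendar")
```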
@otaylor @kevin Help? :)
Hi Kalev -
Problem is that flatpak-indexer doesn't know about/query the F34 Flatpaks Bodhi release yet. Let me push a fix for that, and a PR to update what is running on Fedora infrastructure.
https://github.com/owtaylor/flatpak-indexer/blob/main/flatpak_indexer/datasource/fedora/release_info.py

is a liability for the continued operation of flatpak-indexer - I wrote that around the time that PDC was going away and there wasn't a clear replacement. I'm not sure if there's something we could query now.
Owen
https://pagure.io/fedora-infra/ansible/pull-request/513
Ahh, I see. Thanks, Owen!
I can try to remember to update it next time when starting new flatpak platform builds (or should it even be in the releng branching SOP to update it at the same time as adding a new release to bodhi?)
Or thinking about it some more, maybe you could query Bodhi here, because it's really the authoritative source for what flatpak platforms are active? So the workflow is that releng updates bodhi config to add new flatpak platform, and then the indexer would pick it up automatically from bodhi once that's done?
Yeah, just querying Bodhi:
https://bodhi.fedoraproject.org/releases/?exclude_archived=1&name=%25F
would work fine. release_info.py is used for more in flatpak-status, where the code comes from, but only vestigially in flatpak-indexer.
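For what it's worth, that Bodhi query returns JSON with a `releases` list, so picking out the active Flatpak releases could look roughly like this. This is a sketch, not the flatpak-indexer code: `pick_flatpak_releases` is a made-up helper, and I'm assuming the Bodhi response shape with `name` and `state` fields on each release:

```python
import json
from urllib.request import urlopen

BODHI_URL = ("https://bodhi.fedoraproject.org/releases/"
             "?exclude_archived=1&name=%25F")

def pick_flatpak_releases(payload):
    """From a Bodhi /releases response, keep the non-archived Flatpak
    releases (their names end in 'F', e.g. 'F34F')."""
    return sorted(
        r["name"]
        for r in payload["releases"]
        if r["name"].endswith("F") and r.get("state") != "archived"
    )

# Live usage (hits the network) would be roughly:
# payload = json.load(urlopen(BODHI_URL))
# print(pick_flatpak_releases(payload))
```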
Started implementing query-Bodhi; it turned into a bigger job than I expected, so filed:
https://github.com/owtaylor/flatpak-indexer/issues/3
Pushed that first PR to staging... can push to prod when that looks ok.
I don't see any difference when looking at the staging index for org.gnome.Calendar and org.gnome.Extensions. I don't know if the staging registry has the latest builds though so maybe it's expected?
Metadata Update from @smooge: - Issue priority set to: Waiting on Assignee (was: Needs Review) - Issue tagged with: medium-gain, medium-trouble, ops
Or maybe there isn't anything that's triggering an index rebuild?
Sorry for not updating here - when the new commit was pushed to staging everything went boom over a series of cascading problems:

* redis database wasn't actually persisted to the persistent volume
* Reconnection to redis not handled well
* Indexer thought it had succeeded when it hadn't
* In that situation, files cleaned up when they shouldn't have been
Not directly related to this change, but trying to fix up as many of those as possible before we repush to stable.
Ahh, sorry, I didn't realize you were discussing this somewhere else.
With:
https://pagure.io/fedora-infra/ansible/pull-request/515
https://pagure.io/fedora-infra/ansible/pull-request/517
hopefully pushing new versions should be a lot more robust.
Pull requests merged and pushed.
flatpak remote-add --user fedora-testing oci+https://registry.fedoraproject.org#testing
flatpak remote-ls fedora-testing --columns=ref,runtime
Now includes the f34 builds. This can be closed now.
Awesome. Thanks!
Metadata Update from @kevin: - Issue close_status updated to: Fixed - Issue status updated to: Closed (was: Open)
Thanks!
@otaylor This seems to have broken again. Nothing pushed to testing over the last few days is showing up in the index.
These all should show up, but don't:

https://bodhi.fedoraproject.org/updates/FEDORA-FLATPAK-2021-43e82fdab9
https://bodhi.fedoraproject.org/updates/FEDORA-FLATPAK-2021-1333560501
https://bodhi.fedoraproject.org/updates/FEDORA-FLATPAK-2021-da5bfd6e39
https://bodhi.fedoraproject.org/updates/FEDORA-FLATPAK-2021-7fb4c95f9a
https://bodhi.fedoraproject.org/updates/FEDORA-FLATPAK-2021-d647d6f468
Metadata Update from @kalev: - Issue status updated to: Open (was: Closed)
A bit puzzling - the indexer did pick up the new updates:
2021-04-02 13:53:03,007:INFO:flatpak_indexer.datasource.fedora.bodhi_query:Querying Bodhi with params: {'rows_per_page': 10, 'releases': ['F32F', 'F33F', 'F34F'], 'submitted_since': '2021-04-02T13:21:02.929921+00:00', 'page': 1}
2021-04-02 13:53:03,166:INFO:flatpak_indexer.datasource.fedora.bodhi_query:Querying Bodhi with params: {'rows_per_page': 10, 'releases': ['F32F', 'F33F', 'F34F'], 'modified_since': '2021-04-02T13:21:02.929921+00:00', 'page': 1}
2021-04-02 13:53:04,071:INFO:flatpak_indexer.koji_query:Calling koji.getBuild(gnome-calendar-stable-3420210402121048.1)
2021-04-02 13:53:04,097:INFO:flatpak_indexer.koji_query:Calling koji.listArchives(1731518); nvr=gnome-calendar-stable-3420210402121048.1
2021-04-02 13:53:04,112:INFO:flatpak_indexer.koji_query:Calling koji.listRPMs(437736)
2021-04-02 13:53:04,127:INFO:flatpak_indexer.koji_query:Calling koji.getBuild(1731500)
2021-04-02 13:53:04,136:INFO:flatpak_indexer.koji_query:Calling koji.listBuilds(19821, type='module')
2021-04-02 13:53:04,222:INFO:flatpak_indexer.koji_query:Calling koji.listArchives(1731508); nvr=gnome-calendar-stable-3420210402121048.dab6ca4c
2021-04-02 13:53:04,238:INFO:flatpak_indexer.koji_query:Calling koji.listRPMs(437714)
2021-04-02 13:53:04,390:INFO:flatpak_indexer.koji_query:Calling koji.getBuild(gedit-stable-3420210402121019.1)
2021-04-02 13:53:04,398:INFO:flatpak_indexer.koji_query:Calling koji.listArchives(1731517); nvr=gedit-stable-3420210402121019.1
2021-04-02 13:53:04,423:INFO:flatpak_indexer.koji_query:Calling koji.listRPMs(437734)
2021-04-02 13:53:04,433:INFO:flatpak_indexer.koji_query:Calling koji.getBuild(1731503)
2021-04-02 13:53:04,468:INFO:flatpak_indexer.koji_query:Calling koji.listBuilds(404, type='module')
2021-04-02 13:53:04,505:INFO:flatpak_indexer.koji_query:Calling koji.listArchives(1731509); nvr=gedit-stable-3420210402121019.dab6ca4c
2021-04-02 13:53:04,532:INFO:flatpak_indexer.koji_query:Calling koji.listRPMs(437722)
But when it created the new index after that, it didn't include them:
2021-04-02 13:53:06,324:INFO:flatpak_indexer.utils:/var/www/flatpaks/fedora/flatpak-testing-amd64.json is unchanged
I guess I'll try setting an index run going locally to see if it picks them up.
Bah, a local run picked up the new images, not reproducing the problem.
Can someone try scaling the flatpak-indexer deploymentconfig to 0 and back to 1 - to see if a fresh process picks up the changes?
Done
OK, so with the redeploy, it wrote out the correct index.
I think what happened is that it lost the connection to the fedora-messaging queue that it uses to track changes to updates and avoid extensive requerying of Bodhi. So, while it saw the new updates, it never saw them change their status to testing and thus never added them to the index.
The fix is some combination of:

* Make the code catch being disconnected from RabbitMQ and re-establish the connection
* Use a permanent, authorized queue, rather than rabbitmq.fedoraproject.org/public_pubsub
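The reconnect half of that could follow the usual retry-with-backoff pattern. A rough sketch, not the actual flatpak-indexer code - the `connect` callable stands in for setting up and running the pika/fedora-messaging consumer, and the injectable `sleep` is only there to make the helper easy to exercise:

```python
import time

def consume_with_reconnect(connect, sleep=time.sleep, max_backoff=60):
    """Call `connect` (which blocks while consuming and raises on
    disconnect) until it returns normally; on failure, wait and retry
    with exponential backoff capped at max_backoff seconds."""
    backoff = 1
    while True:
        try:
            return connect()
        except ConnectionError:
            sleep(backoff)
            backoff = min(backoff * 2, max_backoff)
```

In the real service the consumer would normally block forever, so `connect` returning at all could itself be treated as a reason to reconnect; the sketch just shows the backoff loop.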
I'm not sure why I haven't seen this for flatpak-status, however. Will look into this more on Monday.
I guess this is done? If not, and there's more to do, please re-open...
Reopening as the issue seems to be back: latest org.libreoffice.LibreOffice and org.fedoraproject.Platform//f32 and org.fedoraproject.Sdk//f32 updates don't appear indexed for the testing remote.
https://bodhi.fedoraproject.org/updates/FEDORA-FLATPAK-2021-182465c5e8
https://bodhi.fedoraproject.org/updates/FEDORA-FLATPAK-2021-0fc4e556c4
Yes, this was the same problem - flatpak-indexer lost the connection to rabbitmq, and there isn't even any code to reconnect (so switching to a permanent queue wouldn't help)
2021-04-25 06:51:49,185:ERROR:pika.adapters.utils.io_services_utils:_AsyncBaseTransport._consume() failed, aborting connection: error=SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:2607)'); sock=<ssl.SSLSocket fd=8, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('10.128.3.86', 46334), raddr=('10.3.163.75', 5671)>; Caller's stack:
Traceback (most recent call last):
  File "/opt/app-root/lib64/python3.8/site-packages/pika/adapters/utils/io_services_utils.py", line 1235, in _consume
    super(_AsyncSSLTransport, self)._consume()
  File "/opt/app-root/lib64/python3.8/site-packages/pika/adapters/utils/io_services_utils.py", line 791, in _consume
    data = self._sigint_safe_recv(self._sock, self._MAX_RECV_BYTES)
  File "/opt/app-root/lib64/python3.8/site-packages/pika/adapters/utils/io_services_utils.py", line 79, in retry_sigint_wrap
    return func(*args, **kwargs)
  File "/opt/app-root/lib64/python3.8/site-packages/pika/adapters/utils/io_services_utils.py", line 846, in _sigint_safe_recv
    return sock.recv(max_bytes)
  File "/usr/lib64/python3.8/ssl.py", line 1226, in recv
    return self.read(buflen)
  File "/usr/lib64/python3.8/ssl.py", line 1101, in read
    return self._sslobj.read(len)
ssl.SSLEOFError: EOF occurred in violation of protocol (_ssl.c:2607)
I forced the container to restart (kill -INT -- -1 from the terminal) and it's working now to catch up.
Please leave this open until I land a fix for flatpak-indexer to try to reconnect.
Thanks, Owen!
When deployed https://pagure.io/fedora-infra/ansible/pull-request/562 will hopefully prevent this happening in the future.
Issue status updated to: Closed (was: Open) Issue close_status updated to: Fixed
This is happening again, https://bodhi.fedoraproject.org/updates/FEDORA-FLATPAK-2021-aadb1c310d and https://bodhi.fedoraproject.org/updates/FEDORA-FLATPAK-2021-0b27823caa and https://bodhi.fedoraproject.org/updates/FEDORA-FLATPAK-2021-d7af823469 don't show up in the testing remote.
We never managed to get the fixes to prod - running the playbook to get https://pagure.io/fedora-infra/ansible/pull-request/562 to prod should both get things updating again with a restart and (fingers crossed) fix this permanently.
playbook run.
Looks like it is up and running well, hopefully this can be closed and not reopened now :-)
Great. Thanks!