Over in #304, one of the suggestions was to update the test cases for Workstation's preinstalled apps, in order to ensure that testing activities are as effective as possible.
We could also potentially tie the test cases to the final release criteria, perhaps by detailing which tests are required to pass, and which are nice to haves.
Searching around the QA pages, the workstation apps turn up in a couple of different places:
Workstation test cases
The workstation test cases page links to a workstation core applications test page, which in turn links to the old workstation technical specification, for its list of core applications.
GNOME 3 test cases
The workstation app test cases are a bit buried (they're listed as GNOME 3 acceptance test cases in the big test case index). They include tests for:
Obviously some of these are obsolete, and a bunch of newer apps are missing. Some of the test cases are very basic, but others have a bit more content.
It's a bit unclear to me how people are expected to find the test cases, or which ones they're expected to run. The GNOME test day pages mainly link to the GNOME shell test pages, though the last GNOME test day to list app tests did list tests for Maps, Music, Web and Terminal. I don't know how this shortlist of app tests was arrived at.
Anything else? What do you think, @adamwill and @bcotton ?
[1] Missing preinstalled apps:
Metadata Update from @aday: - Issue tagged with: qa
Thanks, Allan! The first four steps of your plan look good, although @adamwill and @sumantrom might have tweaks on the implementation.
As for:

> We could aim to add test pages for missing preinstalled apps [1]. However, this is a lot of apps to document. Alternatively, we could pick a shortlist of the most important apps, and ensure that those are documented.
I don't think we need to write test pages for all of them all at once, particularly if we have a clear agreement on the bounds of blocking behavior (basically a shared definition of "basic functionality" for each, or an agreement that some of the apps aren't blocking). If we start with a few of the most important ones, then we can fill in a few more each release cycle.
A related-but-separate question is "how do we get more community engagement on Test Days?" I don't know how many participated this time, but it clearly wasn't enough. :-) That might be a place where upstream can help. If we can get people who use other distros to help with testing GNOME apps, that's a win for everyone.
The obvious first step would be to publicise the test day more widely (Discourse, Twitter, etc). If we can improve our planning around GNOME testing, then that would help with this.
> It's a bit unclear to me how people are expected to find the test cases, or which ones they're expected to run.
The wiki has a neat trick for this: the "what links here" page. At some point our wiki UI stopped showing it (or else I'm missing it), but it's easy to find when you know the trick: add Special:Whatlinkshere/ between wiki/ and the page name in the URL. For example, we can see what links to QA:Testcase clocks here. For a lot of the pages, it turns out that test day pages from 2013-2014, around GNOME 3.6 to 3.14, linked to them. So that's the vintage for a lot of them.
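The URL trick above can be sketched as a tiny helper, assuming only that the page URL contains a single wiki/ segment:

```python
def whatlinkshere_url(page_url: str) -> str:
    """Turn a wiki page URL into its 'what links here' URL by
    inserting Special:Whatlinkshere/ between wiki/ and the page name."""
    marker = "/wiki/"
    prefix, found, page = page_url.partition(marker)
    if not found or not page:
        raise ValueError("not a wiki page URL: " + page_url)
    return prefix + marker + "Special:Whatlinkshere/" + page

print(whatlinkshere_url("https://fedoraproject.org/wiki/QA:Testcase_clocks"))
# → https://fedoraproject.org/wiki/Special:Whatlinkshere/QA:Testcase_clocks
```

(MediaWiki normalizes the special-page name, so the capitalization of Whatlinkshere doesn't matter.)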
Looking at the test day history, it looks like we had GNOME test days up until Fedora 21 (GNOME 3.14), then abandoned them until Fedora 26. From memory, and the page histories, the story there is that I was still sort of in charge of test days - with help from @roshi for a time - during that time, but I had increasingly little time to spend on them and it wasn't a key part of my job any more. We had a lot fewer test days overall for each release around that time. Fedora 26 is where @sumantrom joined and took over responsibility for test days from me, and it looks like when he picked up doing GNOME test days again, he chose not to include all the app test cases, and instead just used a selection of the release validation test cases; compare https://fedoraproject.org/wiki/Test_Day:2014-08-28_Gnome_3.14 (the last one I did) with https://fedoraproject.org/wiki/Test_Day:2017-07-10_Gnome_3.24 (the first one sumantro did).
In general, test case pages only get used if there is an event that links to them - either a test day, or they're part of the release validation test case set. Otherwise they just sit there gently bitrotting until a situation like this happens. So those pages have likely not been touched since 2014 and will probably need revising for the apps that are still relevant.
We can certainly ask @sumantrom to help with updating the old app test cases, creating new ones for newer apps, and including them in future GNOME test days, I think...
On the proposed actions:

- Creating a page listing preinstalled apps seems a bit unnecessary and prone to bitrot - someone would have to remember to update it every time the set changed, and I bet they wouldn't. Testers can find out what apps are preinstalled by booting an image and looking, or checking comps.
- Marking obsolete parts of the tech spec as obsolete is good if it's not going to be updated.
- Rather than create a new test case category, we can just rename the existing one and update the contents. Note, test cases for obsolete apps can be marked with the 'obsolete test case' header and moved to the category for obsolete test cases; I'll do that after writing this comment.
- Writing new test cases for the missing preinstalled apps would be great but yeah, a chunk of work; we should prioritize the most important and least-likely-to-be-replaced ones.
So I'm working on this now. I've created https://fedoraproject.org/wiki/Category:GNOME_default_application_test_cases and am moving appropriate test cases to it, and doing appropriate things with other test cases in the "GNOME 3 acceptance" category (marking them obsolete, or just making them members of the relevant package's category).
There is also https://fedoraproject.org/wiki/Category:Desktop_Acceptance_Test_Cases , which should only have test cases that enforce the criteria. So for instance if we have three test cases for a default app, one of which covers what we'd consider 'basic functionality' but the other two of which covered more advanced functionality, I guess all three should be in GNOME_default_application_test_cases but only the 'basic' one should be in Desktop_Acceptance_Test_Cases . I think? Still mulling this over.
OK, so I have emptied Category:GNOME3_acceptance_test_cases now. I created two new categories:
They are both members of a higher-level category, https://fedoraproject.org/wiki/Category:GNOME_test_cases , which is a member of the top-level Test Cases category. The first category contains test cases that cover things I categorized as 'GNOME desktop features' rather than parts of a specific app; the second category contains test cases that cover specific apps that are installed by default.
I marked fully obsolete tests as obsolete, updated some tests (but by no means all of them), kicked tests which are still valid but which relate to a package that's no longer default installed into just the package's test case category, and reconciled some duplicates. So things should be rather cleaner now.
Coverage is still nowhere near adequate. Many default apps have no specific test cases at all, and several of the ones that do exist are pretty perfunctory. But at least we have a clearer framework now.
> I'd say creating a page listing preinstalled apps seems a bit unnecessary and prone to bitrot - someone would have to remember to update it every time the set changed, and I bet they wouldn't. Testers can find out what apps are preinstalled by booting an image and looking, or checking comps.
I think the main value of a page is convenience. You can boot an image, but that takes time and effort. I wasn't aware that comps was a good way to check the default app list. That would be this? Is there a part of that file that specifically lists workstation apps?
The bitrot risk is real, though I think I could keep such a page updated, and preinstalled app changes aren't that common.
> I think the main value of a page is convenience.
Just to follow up on this - I think that it would be useful to be able to link to the list of preinstalled apps from a test plan, for example. That way testers have a check list of apps to run through.
Thanks for working on this, @adamwill ! Checking out the new default apps test case category, I see that we have pages for Clocks, Document Viewer, Files, Maps, Photos, Software and Weather.
That leaves the following preinstalled apps without test cases:
- Boxes
- Calculator
- Calendar
- Characters
- Cheese
- Connections
- Contacts
- Disk Usage Analyser
- Disks
- Document Scanner
- Fedora Media Writer
- Firefox
- Fonts
- Help
- Image Viewer
- Logs
- Problem Reporting
- Settings
- System Monitor
- Terminal
- Text Editor
- Videos
Yes, and yes - the gnome-desktop group. That's pretty much The List Of Things That Makes Up A GNOME Desktop Install. Strictly speaking, the key thing is the workstation-product-environment environment group, which includes gnome-desktop and several other groups, and a couple of those pull in apps, like libreoffice and firefox. But mostly, gnome-desktop is it.
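As a sketch of how checking comps could work: a comps file is XML with `<group>` elements whose `<packagereq>` entries carry a type attribute, and the mandatory/default entries are what get pulled in. The fragment and package names below are illustrative stand-ins, not the real comps contents:

```python
import xml.etree.ElementTree as ET

# Illustrative comps-style fragment; the real file lives in the
# fedora-comps repo and is much larger. Package names here are examples.
COMPS = """
<comps>
  <group>
    <id>gnome-desktop</id>
    <packagelist>
      <packagereq type="mandatory">gnome-shell</packagereq>
      <packagereq type="default">gnome-clocks</packagereq>
      <packagereq type="optional">some-optional-app</packagereq>
    </packagelist>
  </group>
</comps>
"""

def group_packages(comps_xml, group_id, types=("mandatory", "default")):
    """List the packages a comps group installs by default
    (i.e. mandatory and default entries, skipping optional ones)."""
    root = ET.fromstring(comps_xml)
    for group in root.iter("group"):
        if group.findtext("id") == group_id:
            return [req.text for req in group.iter("packagereq")
                    if req.get("type") in types]
    return []

print(group_packages(COMPS, "gnome-desktop"))
```

Running the same logic over the real comps file for the gnome-desktop group (plus the other groups in workstation-product-environment) would give testers the preinstalled-app checklist without anyone hand-maintaining a wiki page.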
Everyone starts out thinking that, yeah...:D
Metadata Update from @chrismurphy: - Issue assigned to chrismurphy
I will start writing test cases for all these apps, if that's okay with folks here? As I finish them, I'll post them here for review.
@adamwill Thanks for helping out with the category! I am willing to look deep and update the old test cases.
@aday @chrismurphy is it a possibility that we can have a GNOME Apps test week pre/post beta? I know a lot of apps which are blocking by nature go untested. WDYT?
> I will start writing test cases for all these apps. If that okay with folks here?

+1

> is it a possibility that we can have Gnome Apps test week post/pre beta?
Ideally the test week would coincide with the availability of the GNOME Shell beta in Fedora, which is usually a few days after upstream releases it. We can have two test weeks, one for the beta and one for final, to reduce the chance of last-minute surprises.
Do the test cases, or the pass/fail standard, need modification to account for #269?
We discussed this issue during today's workstation WG call.
We agreed to review the test cases that are included in the upcoming GNOME test day, to ensure that they cover the most important areas.
There was also some interest in expanding the existing test cases, in particular in order to cover the Settings app.
However, there was also some disagreement around which areas we should focus on: on the one hand, you could say that our tests should focus on the most commonly used and critical functionality. On the other, we could assume that the most commonly used features will get tested by people using the pre-release, and that testing should therefore focus on those areas that aren't as commonly used.
Well, the specific case from which all this activity started was one where "people using the pre-release" didn't really test the "most commonly used and critical functionality", because it seems almost nobody really uses the affected apps in production.
In general, for release validation, what the test cases should cover is the stuff we consider release-blocking, or at least, stuff we would very definitely want to fix ahead of release if it was broken, even if we didn't block on it.
There can be more extensive tests that can be run as part of test days or whatever, but for test cases that go on the release validation matrix, we kinda expect them to be ones where someone is going to care a lot if they fail at release time.
What we mean is: entire apps (Photos, Contacts) are not commonly-used, and these are the apps where quality problems were discovered late in the F36 cycle. Apps that Fedora beta testers are actually using probably don't need to be prioritized for more testing, since we assume people will notice when they break.
As part of working on the upcoming GNOME 43 test week, I encountered some issues with the test case pages:
Well...two of the things are not really "missing" then, are they? It's a category for default apps and they're not default apps. I'm not a big fan of "this thing says it's one thing but it's really another thing". Also, people might think it's a bug that one of them doesn't show up by default, since the test case is in a 'default apps' category, I guess.
Yeah, if there's a category with only one test case in it and nobody's planning to write more, getting rid of the category seems advised.
I have written a bunch of test cases and updated a few. I am yet to write 5 more to complete the list. @adamwill 's category will reflect 'em all https://fedoraproject.org/wiki/Category:GNOME_default_application_test_cases
@sumantrom told me on Telegram to post here, so I do.
I found some errors with the GNOME 43 Beta 1/2 test cases:
https://fedoraproject.org/wiki/QA:Testcase_evince_file_display links to a PDF that you need a Red Hat account (and to be logged in) to download.
https://fedoraproject.org/wiki/QA:Testcase_gnome-shell_extensions https://fedoraproject.org/wiki/QA:Testcase_gnome-shell_extensions_gnome_org https://fedoraproject.org/wiki/QA:Testcase_gnome-shell_extensions_install https://fedoraproject.org/wiki/QA:Testcase_gnome-shell_extensions_remove https://fedoraproject.org/wiki/QA:Testcase_gnome-shell_extensions_tweak_tool
say to use gnome-tweak-tool for fiddling with extensions, but the extension support in that has been deprecated for a while, and it recommends using gnome-extensions-app instead, as that's the right application for this now.
@mkasik promised to look at it and update the test case.
Hi, I've replaced the PDF document link with a link to Portable Document Format (PDF) 1.7 specification which is also a good PDF example.
I've spent today updating the test cases listed in the basic and gnome-shell sections of the F37 test day page. There was quite a lot of material there that was out of date - that should hopefully be resolved now.
It would be good if someone from QA could check these pages over.
I encountered the following outstanding issues while doing this work:
There is quite a lot of material in these pages - probably too much to expect volunteers to run through as part of a test week. We might want to reduce the number of tests listed on the test week page, or possibly divide them into more evenly weighted blocks, and ask people to run the tests from blocks which have had the least attention.
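The "more evenly weighted blocks" idea could be sketched with a greedy longest-first split, assuming we can attach a rough time estimate to each test case (the test names and minute figures below are made up for illustration):

```python
from heapq import heappush, heappop

def split_into_blocks(tests, n_blocks):
    """Greedy LPT split: assign each (name, estimated_minutes) test to
    the currently lightest block, heaviest tests first, so blocks end
    up with roughly equal total effort."""
    heap = [(0, i) for i in range(n_blocks)]   # (total minutes, block index)
    blocks = [[] for _ in range(n_blocks)]
    for name, minutes in sorted(tests, key=lambda t: -t[1]):
        total, i = heappop(heap)               # lightest block so far
        blocks[i].append(name)
        heappush(heap, (total + minutes, i))
    return blocks

blocks = split_into_blocks(
    [("Files", 10), ("Settings", 25), ("Clocks", 5),
     ("Software", 20), ("Maps", 10), ("Weather", 5)], 3)
print(blocks)  # three blocks of 25 estimated minutes each
```

Testers could then be pointed at whichever block has had the least attention so far.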
For the initial setup test I would recommend just writing a g-i-s specific test case and using that in the test days. We can't easily rewrite or replace the test for the release validation context, but it's definitely not a great fit for a GNOME test day, and it'd make much more sense for that context to have a test case that's much more specific and detailed about g-i-s in particular.
Thanks a lot for doing this work! I had it on my todo list to try and improve things before the next test event but had not gotten to it yet :| Been working on comps stuff.
On the naming: we cannot easily rename 'desktop_update_notification'; that is another generic test case that is part of the release validation stuff. If you look at https://fedoraproject.org/wiki/Template:Desktop_test_matrix , the name is consistent with most of the others there, and the test case is specifically intended to be applicable to all desktops, not just GNOME.
The other two could probably be renamed safely. When you rename a page mediawiki makes the old name automatically redirect to the new name, so most links will still be OK. Renaming pages that are part of the release validation stuff is hard because it throws off things like the openQA result reporting code and the code that generates testcase_stats, but that's not an issue for test cases that aren't part of that process.
It's nice to find other pages that link to the old name and update them to link to the new name so people don't have to go through the redirect, but not compulsory.
OK, I've done this.
> For the initial setup test I would recommend just writing a g-i-s specific test case and using that in the test days.
That makes sense to me.
Digging into this, I think we probably need a new set of tests to cover the desktop parts of installation and first run. That would cover:
I see @chrismurphy is assigned to this issue. Are you still planning to work on this, or do we need to find a new volunteer or close it?
@catanzaro I was planning to do more work on this issue, specifically to address the items in my previous comment
Metadata Update from @catanzaro: - Issue assigned to aday (was: chrismurphy) - Issue tagged with: pending-action
This issue is still waiting on someone to write the tests listed in the comment above.
If anyone can help, that would be great!
> Fedora Media Writer: can successfully write the development version ISO to a USB stick on Fedora, Ubuntu, Windows
I haven't read the full ticket, but I noticed this and we already have a testcase for FMW as part of our installation matrix, see Testcase_USB_fmw. Does that fit your needs? Fedora, Windows and macOS are (in the ideal case) tested during the release process.
So, I kinda think Workstation WG has provided as much input here as we reasonably can. We don't get to dictate what people work on, and few people are interested in writing test cases.