Discussion:
TEAM: Wily Werewolf Package Testing
elfy
2015-05-14 19:55:52 UTC
This came up during the recent meeting.

Given the work necessary to make sure that testcases are up to date and
available, and the general lack of testing being reported to the tracker
for the last two cycles, I plan not to run package testing this cycle
unless anyone thinks that we should - and just thinking we should isn't
likely to change my mind ;)

Currently I'm planning to mail this list and ping social media for targeted
app testing when it's required and requested by our devs.
--
xubuntu-devel mailing list
xubuntu-***@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/xubuntu-devel
elfy
2015-05-24 10:44:13 UTC
Post by elfy
[snip]
I'm assuming that no one has an issue with this plan, and will move on.

I'll revisit the topic at the beginning of the next cycle.
Istimsak Abdulbasir
2015-05-24 12:12:23 UTC
If I understand this correctly, not enough app testing is being performed.
Either we need more testers or more testing.
elfy
2015-05-24 16:25:35 UTC
Post by Istimsak Abdulbasir
If I understand this correctly, not enough app testing is being
performed. Either we need more testers or more testing.
Well.

More testing is (assuming the same footfall as Vivid) just the same few
people looking at the same thing more.

Not particularly useful - what we need is more people ;)

That said - the package tracker tends towards static testing - when we
did get test results, they showed up after I'd said something like 'go
forth and test this please'.

What is more useful is that you, me and everybody use the new OS when we
can - check things out as you go, then report bugs via Launchpad. If
they get tagged wily, we can then grab them more easily.
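
(As a rough, optional illustration of "grabbing" those bugs: the sketch
below uses launchpadlib to list open Ubuntu bugs tagged 'wily'. It is
only a sketch under assumptions - the consumer name is made up, and
searching the whole 'ubuntu' distribution rather than individual source
packages is just one possible choice of bug target.)

#!/usr/bin/env python3
# Sketch only: list open Ubuntu bugs tagged 'wily' via launchpadlib.
# Assumes python3-launchpadlib is installed; the consumer name below is
# made up, and searching the whole 'ubuntu' distribution is just one
# possible choice of bug target.
from launchpadlib.launchpad import Launchpad

def main():
    lp = Launchpad.login_anonymously("wily-tag-sketch", "production")
    ubuntu = lp.distributions["ubuntu"]
    tasks = ubuntu.searchTasks(
        tags=["wily"],
        status=["New", "Confirmed", "Triaged"],
    )
    # The full collection can be large, so only show the first 20 hits.
    for task in tasks[:20]:
        bug = task.bug
        print("#%d [%s] %s" % (bug.id, task.status, bug.title))

if __name__ == "__main__":
    main()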

However I am willing to be convinced otherwise ...
Istimsak Abdulbasir
2015-05-25 01:33:57 UTC
Post by elfy
[snip]
Using the development image like a production system should be mandatory.
After all, the only way to really test it is to use it.

Also, the current testcases we have for packages and the ISO are the same
as in previous cycles. They need to change or be upgraded with new
procedures. We can test the packages over and over only to confirm what
worked during the last development cycle.

Every cycle should have new, upgraded testcases. Rather than testing basic
functionality, the testcases should really push the system. The objective
should be to try to break the system and then fix it.
elfy
2015-05-25 12:11:14 UTC
[snip]
Using the development image like a production system should be mandatory.
After all, the only way to really test it is to use it.
Some do, and can. Others are not able to run a dev system this early.
Also, the current testcases we have for packages and the ISO are the same
as in previous cycles. They need to change or be upgraded with new
procedures. We can test the packages over and over only to confirm what
worked during the last development cycle.
What's different about ISOs this cycle?
Every cycle should have new, upgraded testcases. Rather than testing basic
functionality, the testcases should really push the system. The objective
should be to try to break the system and then fix it.
Right.

And how do you think we can do that? If people don't run a 5 minute test,
what makes you think they would do more?

The whole point of this thread is that there is next to no package
testing done, and thus the work that 2 or 3 people do every cycle is
wasted.
[snip]
Pasi Lallinaho
2015-05-25 12:52:11 UTC
Post by Istimsak Abdulbasir
Also, the current testcases we have for packages and the ISO are the
same as in previous cycles. They need to change or be upgraded with new
procedures. We can test the packages over and over only to confirm what
worked during the last development cycle.
How do they need to change? What are the new procedures you are
referring to?
Post by Istimsak Abdulbasir
Every cycle should have new, upgraded testcases. Rather than testing basic
functionality, the testcases should really push the system. The
objective should be to try to break the system and then fix it.
What's wrong with making sure that some basic features work with our
applications? Can we honestly make the assumption that the features that
have worked before will work in the future?

These have been broken before, and it's more likely than not that
something will break in the future. If the basic functionality breaks
for a user, what use is the application, even if it can allegedly perform
advanced operations?

Sure, it is great if we can get testers to do testing that actually
"pushes the system", but that can never be achieved with prewritten
testcases. Running prewritten testcases never puts the system to a real
test; they will always just cover the things that people in the team have
thought are "worth testing" and/or basic enough to have a clearly
specified testcase.

That's what we have exploratory testing for. Due to its nature, you
can't really tell people to do it though, unless they have the motivation
to run the development version. If they do, they will most likely file
the bugs they find while doing their usual activities.

Instead of laying out a list of things we need to do, do you have any
ideas HOW we can do/achieve those things - or get started doing them?
Above, I've asked some questions that I think would help us figure these
things out.

Cheers,
Pasi
--
Pasi Lallinaho (knome) » http://open.knome.fi/
Leader of the Shimmer Project » http://shimmerproject.org/
Ubuntu member, Xubuntu Website lead » http://xubuntu.org/
saqman2060
2015-05-25 17:28:34 UTC
Post by elfy
What's different about ISOs this cycle?
Nothing is different in reference to the images. They test for
entire-drive install, alongside installs, manual partitioning and the
live session. I am wondering if the tests need to include something new -
for example, installing packages manually using aptitude, checking
whether the installer prompts an error if it detects a bad setting, or
whether it offers the option to install grub to another hard drive, to
name a few. Would that make the tests more interesting?
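
(Purely as an illustration of what a small extra check like that could
look like, here is a minimal Python sketch - assuming python3-apt is
installed, and using placeholder package names - that a tester could run
after manually installing a couple of packages with aptitude, to confirm
they really ended up installed:)

#!/usr/bin/env python3
# Sketch only: after manually installing packages (e.g. with aptitude),
# confirm they are actually installed and print their versions.
# Assumes python3-apt is available; the package names are placeholders.
import apt

PACKAGES = ["xfce4-terminal", "mousepad"]  # hypothetical examples

def main():
    cache = apt.Cache()
    for name in PACKAGES:
        if name not in cache:
            print("%s: not found in the apt cache" % name)
            continue
        pkg = cache[name]
        if pkg.is_installed:
            print("%s: installed (%s)" % (name, pkg.installed.version))
        else:
            print("%s: NOT installed" % name)

if __name__ == "__main__":
    main()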
Post by elfy
Right.
And how do you think we can do that? If people don't run a 5 minute
test, what makes you think they would do more?
The whole point of this thread is that there is next to no package
testing done, and thus the work that 2 or 3 people do every cycle is
wasted.
[snip]
Well, for every package testcase, add a new test - something different.
It does not have to be overly technical if there is no need for it. This
new test should be coloured or highlighted to indicate it is something
new, similar to a call for testing of a new package.

The basic functions of the package will always be tested; all we are
doing is telling the testers to test other features. Exploratory
testing, I agree, is the best way to test the system for bugs, but can we
assume the testers will test every feature of a package, or just the
ones they use daily? By adding a new testcase each cycle, we can slowly
start covering every feature of a package and at the same time teach
testers to go beyond basic testing and dig deep into the package. If
they don't, we still have testcases for those features for testers who
actually like testing everything.

What I am trying to say is: make the testing a little more interesting -
something new every cycle. We could also remove the test for one feature
and create another test for a feature that was not covered. That way, we
don't overextend the testcase itself.

Istimsak Abdulbasir.
elfy
2015-05-25 17:51:39 UTC
[snip]
What I am trying to say is: make the testing a little more interesting -
something new every cycle. We could also remove the test for one feature
and create another test for a feature that was not covered. That way, we
don't overextend the testcase itself.
Istimsak Abdulbasir.
What I am trying to say is that unless someone has a _real_ reason to
run with package testing, we will not be doing it.

I'm certainly not going to get into creating more tests for people not
to do, in the hope that we magically get more testers - and by more I
really don't mean 4 instead of 2.

Creating these tests, new testsuites and all the other things that need
to be done in order for them to be usable takes time - they don't just
happen.

I'm all for ideas, but this thread is about:

I plan not to run package testing this cycle unless anyone thinks that
we should - and just thinking we should isn't likely to change my mind.

I've seen nothing to change my mind.
elfy
2015-05-25 18:13:05 UTC
Post by elfy
[snip]
That said - with the next cycle being an LTS, it's always possible we
change back for that, which gives us a cycle to think about it and deal with it.
Istimsak Abdulbasir
2015-05-26 04:19:45 UTC
Well, we certainly can't remove the test suites just because no one will
test them. We also need to show growth and change. This is the life of a
tester: we test the system as it is.

Nothing will happen overnight. We are brainstorming ideas. We need not
only more testers, but testers who are committed to the role and life of
a tester.

We do have time until the next LTS to come up with a solution.
David Pearson
2015-05-24 22:04:00 UTC
I'm also for new/extra methods of tracking/advising what to test for.

As you say, a lot of what we test works great 99 times out of 100. Because of this, some of us have become used to it just working, and may not see an error. For example, the bug in 15.04 where it didn't ask you to remove the install media: I myself missed this, as after the install I immediately pressed restart and unplugged the USB, ready for the reboot and to start testing the install.

The 'go forth and test this' email made me redo the test from scratch, and yep, I saw that particular bug mentioned above.

So I would welcome any additional prompts, even if it's just to stop me from assuming that everything that has always worked, even something as simple as a text prompt, still works.


Dave.

PS. I was also a winner of the stickers, and must say I was like a little boy in a sweet shop when my hand-written letter from Elizabeth arrived. It's not often you get a letter from a celebrity. 😁😁😁
Istimsak Abdulbasir
2015-05-24 23:51:27 UTC
I know the feeling. I won some stickers as well. Loved them.