<div dir="ltr"><div class="gmail_quote">On Sat, Dec 10, 2011 at 7:56 PM, Gema Gomez <span dir="ltr"><<a href="mailto:gema.gomez-solano@canonical.com">gema.gomez-solano@canonical.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Hi,<br>
<br>
I agree with Charlie, but I think it is worth discussing this idea and<br>
the reasons why (imo) this way of developing test cases wouldn't work<br>
in our case.<br>
<br>
When I was reading about this method in this thread it altogether<br>
reminded me of the days when we were trying to move the development of<br>
Symbian OS to agile methodology, we were asked to plan in a way that<br>
tasks had this format (also called stories in SCRUM):<br>
<br>
> As a &lt;USER&gt; I want &lt;FUNCTION&gt; so that &lt;BUSINESS VALUE&gt; (see<br>
<a href="http://www.thedailyscrum.co.uk/post/86171940/effective-story-writing-in-scrum" target="_blank">http://www.thedailyscrum.co.uk/post/86171940/effective-story-writing-in-scrum</a>)<br>
<br>
We underwent two years of intensive work trying to figure out how to<br>
do testing in such an environment, and how system testing,<br>
integration testing and unit testing fit into this world. And believe<br>
you me, the Ubuntu dev world is much more complex than Symbian ever<br>
was.<br>
<br>
The first problem was that SCRUM doesn't really care about<br>
integration and system testing; it cares only about unit testing. The<br>
only reason I have been able to come up with for this is that SCRUM<br>
was initially developed for small projects where developers could<br>
talk to each other on a daily basis and the QA role wasn't a separate<br>
one. Only unit testing was required.<br>
<br>
I learned that for a big project this is not enough: you still need<br>
QA people and testers to get the integration and the packaging of the<br>
product right.<br>
<br>
But Agile was cool and we were forced to move to it. We did: we<br>
implemented the lower layers of it, wrote stories, tracked backlogs<br>
and did daily stand-ups like champions. When it came to writing test<br>
cases, the ISTQB format still made sense for the whole company and<br>
was widely used. We used it even for unit testing, whenever tests<br>
were written in something more formal than lines of code.<br>
<br>
The Gherkin method seems to be an evolution of eXtreme Programming or<br>
Agile (<a href="http://en.wikipedia.org/wiki/Behavior_Driven_Development" target="_blank">en.wikipedia.org/wiki/Behavior_Driven_Development</a>). It is used<br>
for Behaviour Driven Development, i.e. the developer writes such a<br>
test case, then implements the test code or generates it with<br>
Cucumber (I haven't quite figured out this bit, but as funny as it<br>
sounds, that seems to be the way: a code generator from stories; call<br>
me old school, but I still prefer to write code with an editor).<br>
Then, when the test case fails, they go to the code and create the<br>
piece of software that will make that test case pass. That is unit<br>
testing, no matter which language you use to write the test case.<br>
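[Editor's note: for readers unfamiliar with the workflow described above, here is a minimal sketch of how a Cucumber/Freshen-style runner maps Gherkin steps onto step functions. The names (<code>step</code>, <code>run_scenario</code>) are illustrative only, not the real API of any of these tools.]<br>

```python
import re

# Minimal sketch of a Gherkin step dispatcher. A registry maps regex
# patterns (from @step decorators) to Python step functions; names here
# are hypothetical, not the real Cucumber/Freshen/Lettuce API.
STEPS = {}

def step(pattern):
    """Register a step definition under a regex pattern."""
    def decorator(fn):
        STEPS[re.compile(pattern)] = fn
        return fn
    return decorator

@step(r"I double click the '(.+)' icon")
def double_click_icon(name):
    # A real suite would drive the desktop here, e.g. via ldtp.
    return "double-clicked " + name

def run_scenario(lines):
    """Strip the Given/When/Then/And keyword, then dispatch each step."""
    results = []
    for line in lines:
        body = re.sub(r"^(Given|When|Then|And)\s+", "", line.strip())
        for pattern, fn in STEPS.items():
            match = pattern.fullmatch(body)
            if match:
                results.append(fn(*match.groups()))
                break
        else:
            results.append("UNDEFINED: " + body)
    return results

print(run_scenario(["When I double click the 'Install Ubuntu' icon",
                    "Then Ubiquity starts"]))
```

An undefined step (here, "Ubiquity starts") is exactly where the developer would go write the code that makes the step pass, which is the BDD loop described above.<br>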
<br>
As we've seen with the example that roignac put together (thanks!),<br>
the language doesn't scale very well in terms of readability. While<br>
it makes the test case logically very precise, it doesn't really make<br>
the manual tester's life easier, and at the end of the day we are<br>
trying to attract people to do more manual testing, irrespective of<br>
whether they hold a degree. It probably works for some programmers,<br>
but it definitely doesn't work for testers (imho).<br>
<br>
In a diverse environment like ours, where developers use different<br>
development life cycles (even though they all try to fit into the<br>
six-month release cadence), I am not sure how we would push this<br>
forward, nor whether we would want to if it is going to be more<br>
complicated than simply getting the wording of the test cases right<br>
and accurate enough that, faced with the same system behaviour, two<br>
different people fail the test in the same way (consistency).<br>
<br>
Those are my thoughts. More ideas?<br>
<br>
Thanks,<br>
Gema<br>
<div class="HOEnZb"><div class="h5"><br>
On 10/12/2011 14:43, Charlie Kravetz wrote:<br>
> On Sat, 10 Dec 2011 12:35:21 +0200 Руаньяк <<a href="mailto:roignac@gmail.com">roignac@gmail.com</a>><br>
> wrote:<br>
><br>
>> Hi,<br>
><br>
>> 2011/12/10 Alex Lourie <<a href="mailto:djay.il@gmail.com">djay.il@gmail.com</a>>:<br>
>>> I have the following questions:<br>
>>><br>
>>> 1. How long does it take to write something like that for<br>
>>> someone who's not a programmer and has no idea about Cucumber<br>
>>> or unit testing in general?<br>
>> I guess, it should not take much time. The only rule to be<br>
>> followed is 'Action after When, Expected result after Then,<br>
>> Concatenation is And'. Maybe, someone inexperienced in Gherkin<br>
>> could measure the time and post results here? For instance, this<br>
>> conversion of DesktopWhole case took about 10 min. for me.<br>
><br>
> As someone completely ignorant of Gherkin, it took me over 10<br>
> minutes to read through this test case. In reading through, I<br>
> attempted to picture most of the steps, and found the flow doesn't<br>
> work right here. I test a lot of images, and I test images daily.<br>
> This test is much too specific. Also, it does not read well to an<br>
> English-language user. It reads much like something written by<br>
> lawyers reads to a lay person.<br>
><br>
> I don't think I could actually write such a test case in less than<br>
> an hour, and perhaps even that is not enough time.<br>
><br>
><br>
>>> 2. How would one execute this in LiveCD environment?<br>
>> Manual testers may use this as a usual test case. Whenever a bug<br>
>> appears, the tester may break the instruction, e.g.:<br>
>> ---<br>
>> When I double click 'Install Ubuntu' icon<br>
>> Then Ubiquity starts # here we can check the main window etc.<br>
>> Result: Ubiquity crashes with error 'ImportError:..."<br>
>> ---<br>
><br>
>>> 3. Is it possible to run something like that in automated VM<br>
>>> environment?<br>
>> Yes, there is a nose framework plugin, Freshen (see<br>
>> <a href="https://github.com/rlisagor/freshen" target="_blank">https://github.com/rlisagor/freshen</a>), which generates xUnit<br>
>> reports from feature files. Another option is Lettuce (see<br>
>> <a href="https://github.com/gabrielfalcao/lettuce" target="_blank">https://github.com/gabrielfalcao/lettuce</a>)<br>
><br>
>> All steps should be defined (using ldtp) in definition file(s),<br>
>> which contain the actual automation code for each step, something<br>
>> like this:<br>
>> -- from steps.py --<br>
>> @step("I double click 'Install Ubuntu' icon")<br>
>> def double_click_on_ubuntu_icon():<br>
>> &nbsp;&nbsp;&nbsp;&nbsp;ldtp.double_click("btnInstallUbuntu")<br>
><br>
>> Then, using step definitions and scenarios, the automation suite<br>
>> will do the following:<br>
>> a) create a virtual machine<br>
>> b) start up from live cd<br>
>> c) fire up LDTP server on virtual machine<br>
>> d) connect to LDTP server and execute steps from scenarios<br>
>> e) collect xUnit results and post them to CI (Jenkins)<br>
><br>
>> I haven't tried this at home, so we should contact the automation<br>
>> guys, as they are using something similar for automating daily<br>
>> precise images<br>
>> --<br>
>> Vadim Rutkovsky<br>
><br>
><br>
> While I realize that many of you are degree holders, community<br>
> members as a rule are users who decided to get involved in helping<br>
> their favorite software. For some, this is a way to give back for<br>
> the ability to use the software.<br>
><br>
> For the common user, these types of tests are not actually<br>
> something they can follow easily. The test cases as written, using<br>
> steps, are usable by all of us.<br>
><br>
><br>
<br>
</div></div><div class="HOEnZb"><div class="h5">--<br>
Ubuntu-qa mailing list<br>
<a href="mailto:Ubuntu-qa@lists.ubuntu.com">Ubuntu-qa@lists.ubuntu.com</a><br>
Modify settings or unsubscribe at: <a href="https://lists.ubuntu.com/mailman/listinfo/ubuntu-qa" target="_blank">https://lists.ubuntu.com/mailman/listinfo/ubuntu-qa</a><br>
</div></div></blockquote></div><br><br>First, let me just say wow. Thank you roignac for the idea; Gema, Brandon and Charlie for comments. This is a great discussion, and I would like to have much more of these in the future.<div>
<br></div><div>To sum up what we have now:</div><div><br></div><div>Roignac - Thanks a lot! I think this is a viable idea on its own, and we may continue discussing it in some form.</div><div>Others (including Gema, Brendan, Charlie and myself) are not very fond of it, not because it lacks merit, but because we feel it will make the work harder for manual testers.</div>
<div><br></div><div>I would currently recommend the following course of action: continue rewriting the test cases in ISTQB format and, perhaps, add future plans for exploring the automation of those test cases, including the Gherkin format. I don't think manual and automated test cases are necessarily mutually exclusive, but I believe we need to concentrate on just one format for now (I prefer the one that is easier for manual testers) and look into possible expansions later.</div>
<div><br></div><div>It is great to have this discussion; it's refreshing after a long period of low traffic in QA.</div><div><br>Thanks all.<br clear="all"><div><br></div>-- <br>Alex Lourie<br>
</div></div>