Using Lucid in Context, Part 2

This post follows on from the first part, where we created a project (called tutorial-web), wrote a test spec, created test matchers and test definitions based on the test steps in that spec, and wrote a page definition.

Let’s add to our specs. In your specs directory, create a file called triangle_responses.spec and start it off like this:

Feature: Parsing Triangle Test Results

  As a candidate
  I need the app to indicate a specific response for data conditions
  so that I can determine if I am entering valid conditions

  Scenario: Invalid Triangle
    Given the triangle test app
    When  the data condition is "2", "2", "4"
    Then  the user receives the following message:
    """
    Good. You are testing for invalid triangles, on the assumption of the theorem,
    which is that the addition of any two sides should always be greater than the
    third. This was a critical data condition.
    """

You could just run the lucid command and test this out but that will execute both this spec and the one you created in the last post. For now, let’s just run the spec we want:

$ lucid specs\triangle_responses.spec

You already have a passing step here and that’s because this is the same step you used in your other test spec. So you can immediately see the benefit of re-using steps. Keep this in mind because later you’ll see an example where we could re-use a step but don’t want to. So now add the following to triangle_steps.rb (in your steps directory):
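Here is a sketch of those matchers, assuming Lucid mirrors Cucumber’s step-definition syntax (the exact regular expressions Lucid generates for you may differ slightly):

```ruby
# In steps/triangle_steps.rb. The bodies are left pending for now.
When(/^the data condition is "([^"]*)", "([^"]*)", "([^"]*)"$/) do |side1, side2, side3|
  pending
end

Then(/^the user receives the following message:$/) do |message|
  pending
end
```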

Note that I’ve changed the arg1, arg2, and arg3 parameters in the When matcher to side1, side2, and side3. If you remember from the last post, Lucid will provide parameters for any aspects of the test step that you capture via regular expressions. So if that’s the case, why does the Then test matcher have an argument since the test step itself is not parameterized? The reason for this is that Lucid recognizes that you provided an implicit parameter with the large text string. We’ll get to the string momentarily.

In the previous post, I talked a lot about setting contexts, providing element definitions, providing action definitions and so forth. You might want to try writing up your own logic in terms of what you think should go in the When test definition.

Consider what we’re doing. With the When step, which is our key test action for this scenario, we know we want to add values to the text fields. That is, in effect, providing test data for the form. We then want to evaluate the test data to see how the application responds to what we did. So first let’s put those captured side values to use.

Did you add some logic to your When test definition? Here’s what I would do:
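As a sketch, assuming the `on` helper from Fluent (used in the previous post) for dispatching actions to a page definition:

```ruby
When(/^the data condition is "([^"]*)", "([^"]*)", "([^"]*)"$/) do |side1, side2, side3|
  # Delegate the actual work to an action on the page definition.
  on(TriangleTest).enter_sides(side1, side2, side3)
end
```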

You might not have called your action “enter_sides”. That’s fine. What you want to call the actions is completely up to you. Based on what you know from the previous post, you probably know where to put the enter_sides action or method, right? Before moving on, see if you can figure out where to put it and how to construct it, at least roughly.

Keep in mind what this action has to do. The enter_sides action will have to reference the text fields on the browser page and enter the values that were provided into those text fields.

Any ideas? Well, here’s what I did. I first added the method to the page definition (TriangleTest) that is stored in the pages/triangle.rb file.
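Something along these lines, as a sketch; the body stays empty for the moment:

```ruby
# In the TriangleTest page definition (pages/triangle.rb).
def enter_sides(value1, value2, value3)
  # Logic to come; an empty action is enough for the step to "pass."
end
```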

If you run the lucid command to execute this spec, you may (or may not) be surprised to see that this now causes your When test step to pass. If you are surprised at this, keep in mind that from Lucid’s perspective you told it to execute an action (enter_sides) on a particular page (TriangleTest) and pass in three provided numbers. Since the test step did in fact have three numbers provided, and since the enter_sides action existed, as far as Lucid is concerned, you must be thrilled. Everything you told it to expect, it found. And thus did it pass.

Lucid has no way to know what failure looks like to you. Specifically, it has no idea how you want the When step to indicate it failed, so the best it can do is indicate that nothing bad happened when it tried to execute the test definition.

Have you decided how you will put logic in that action? What you need to do is similar to what you did in the previous post, although there is one element you really haven’t been introduced to yet.

Here’s what I did:
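A sketch, relying on the setter methods that Fluent generates for the text field element definitions:

```ruby
def enter_sides(value1, value2, value3)
  # Each assignment calls a Fluent-generated setter, which enters the
  # value into the corresponding text field on the page.
  self.side1 = value1
  self.side2 = value2
  self.side3 = value3
end
```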

What I’m doing here is referencing the element definitions that I referred to with friendly names of ‘side1’, ‘side2’, and ‘side3’. These are, remember, text fields. So what I then did was say that each object “equals” the value of a parameter passed in.

Now, here we get back into Fluent a little bit. If you remember from the previous post, I showed you how a button could be clicked even though you did not specify a “click” action anywhere. And this was because Fluent automatically generated methods that were relevant to buttons, one being that the default action of a button is to click it. Therefore just indicating the button automatically clicked it.

With text fields, Fluent has created methods that essentially look like the = sign. When this method is provided on a text field, then what happens is Fluent assumes you want to set, or enter, the text into the text field. Consider that I could have written the action like this:
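As a sketch (since the generated “=” setters are ordinary Ruby methods, they can also be invoked explicitly):

```ruby
def enter_sides(value1, value2, value3)
  # Invoking the same generated setter methods explicitly via send.
  send(:side1=, value1)
  send(:side2=, value2)
  send(:side3=, value3)
end
```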

If you try running the lucid command for this spec again, you will see that you are still passing but now at least the When test definition is doing something; namely, it’s adding a value to each text field.

There’s one more thing to do in this test definition, however. We have to actually evaluate the test data. Can you think of how you would do this, given what you know? It’s very similar to what you did in the last post when you clicked a button during that scenario.

Here’s what I did. First you have to add a call to an action in the test definition:
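A sketch; `evaluate_data` is the name of the action I’m about to add to the page definition:

```ruby
When(/^the data condition is "([^"]*)", "([^"]*)", "([^"]*)"$/) do |side1, side2, side3|
  on(TriangleTest).enter_sides(side1, side2, side3)
  on(TriangleTest).evaluate_data
end
```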

Now in the page definition, you have to add the evaluate_data action that you are now calling.
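A sketch, assuming (as with the button in the previous post) that simply referencing a button element triggers Fluent’s default click action:

```ruby
def evaluate_data
  # Referencing the button's friendly name clicks it, since clicking is
  # the default action Fluent provides for buttons.
  test_data
end
```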

Here you can see that I’m clicking on the button that I’ve given the friendly name of “test_data” and that corresponds to the “Evaluate Test Data” button in the application.

If you run lucid against this spec, you’ll see that we’re looking pretty good here. The test enters values and clicks a button. That button clearly generates a slew of text. So now we just need the observable part of this test coded so that we can determine if that slew of text contained what was expected.

We finally get to the Then test step and that string. Specifically, you have an extended string as part of the test step. Most tools refer to this as a “doc string”, a term borrowed from Python, which uses three quotes to indicate a longer string. In a test spec, however, the triple quotes must appear on lines by themselves.

As I mentioned earlier, the test matcher for this test step already has a capture element for the string. So what we want to do now is create another action definition. Feel free to try your hand at it before moving on.

Here’s what I did:
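A sketch; `check_comment` is the action I’ll add to the page definition next:

```ruby
Then(/^the user receives the following message:$/) do |message|
  on(TriangleTest).check_comment(message)
end
```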

Now you need the check_comment action in the page definition. Here’s what I did for that:
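A sketch, assuming the RSpec-style `should` expectation syntax that the generated project wires in:

```ruby
def check_comment(message)
  # The comment element's text should match the doc string from the spec.
  comment.should == message
end
```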

So here we’re basically saying that the comment element should be equal to the value of the string that was passed in.

Wait, what’s self.comment referring to here? Ah, that’s another div. Remember in the last post that when we wanted to check the score, we ended up finding out that the text containing the score was in a div element that had an id of “score”. Well, if you check the markup for the text we want now you’ll find that it too appears in a div that has an id of “dataComment”. Add that to the page definition.

Here’s what I ended up with:
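A sketch of the element definition, following the pattern from the previous post:

```ruby
# In the TriangleTest page definition.
div :comment, id: 'dataComment'
```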

Run the lucid command against this test spec and you will find that it fails. You will get a nice long error message about what’s happening. And if you look closely you will see that the issue is that your string, in the test spec, has newlines in it. I purposely did this to make the string readable in the spec because otherwise someone would have to scroll to read the entire thing and that’s not very business friendly.

However, clearly my being nice to my readers has created a problem for me in that the string no longer matches what the application displays. All that means is that I have to do a bit of work to parse the string as part of the action. Here’s one example of what you can do:
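A sketch of the revised action; the only change is flattening the newlines before comparing:

```ruby
def check_comment(message)
  # The doc string arrives with embedded newlines; replace each newline
  # with a space so it can be compared against the single-line div text.
  message = message.gsub("\n", " ")
  comment.should == message
end
```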

Basically all I’ve done there is replace any newlines with a space character.

Run lucid again with this test spec and you should see that everything passes. So essentially what we did here is simply expand slightly (or maybe not so slightly) on what you did in the first post. You can basically see that we generated test matchers when necessary, we created test definitions by adding logic to the matchers, and we delegated all work off to the page definitions. That latter part of our approach kept our test definitions looking very clean.

To see that they are clean, consider what they would have looked like if we had not used a page definition to hold the actions:
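For illustration, here is a sketch of what the inline version might look like, written directly against Watir. (The ‘side1’ through ‘side3’ and ‘evaluate’ element ids are my guesses; only ‘dataComment’ is known from earlier.)

```ruby
# Everything inline: no page definition, direct Watir calls.
When(/^the data condition is "([^"]*)", "([^"]*)", "([^"]*)"$/) do |side1, side2, side3|
  @browser.text_field(id: 'side1').set side1
  @browser.text_field(id: 'side2').set side2
  @browser.text_field(id: 'side3').set side3
  @browser.button(id: 'evaluate').click
end

Then(/^the user receives the following message:$/) do |message|
  message = message.gsub("\n", " ")
  Watir::Wait.until { @browser.div(id: 'dataComment').text == message }
end
```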

Pretty messy looking, huh? And note that it really ties my test definitions to the browser library I’m using. For example, in that last Then test definition, I specifically call out to Watir::Wait. The above is pretty much exactly what you don’t want in test definitions. Contrast the above with what you do in fact have and the difference is readily apparent.

Now let’s add another scenario to the test spec:

  Scenario: Scalene Triangle
    Given the triangle test app
    When  the data condition is "3", "4", "5"
    Then  the user receives the following message:
    """
    Good. You have made sure that a scalene triangle will work. This is a critical test.
    """

Running the lucid command against this test spec should show you both scenarios passing and that’s because you are simply reusing test definitions.

When Your Evolving TDL Branches

So note what you’ve done here. You’ve essentially evolved a TDL so that you can reuse steps. To show how your TDL may evolve to allow other phrasings, let’s consider that the triangle program does let you enter no input for each side or for any one side. As an exercise you might want to try writing this scenario yourself after observing what the application does.

Here’s the scenario I ended up with:

  Scenario: No Input
    Given the triangle test app
    When  the data condition is "", "", ""
    Then  the user receives the following message:
    """
    Good. You have a test for no input.
    """

Running lucid against this spec will still show you everything passing and that makes sense — after all, you are still reusing steps. But let’s say someone feels that it looks sloppy to have that When step written as such. For example, someone could assume it was a mistake and values were meant to be entered. Perhaps that’s less likely given the scenario title. But perhaps only one side should have had no input. The point is that the When step could be ambiguous to someone. This is a case where you might want to be more descriptive. What if I said this:

  When no conditions are tried

That sounds like it could work. Let’s change it to that and try it out. And to make things quicker let’s just run that one scenario:

$ lucid specs\triangle_responses.spec --name "No Input"

Hmm. You’re getting an error this time. You might want to spend some time analyzing what’s happening but a big clue is that you were not told that you needed to create a test matcher, which means the phrase you were attempting to use already exists. And if it already exists that means it’s probably doing something. In this case, you may note that the phrase we just used is what we also used in the triangle_fail_test.spec file. That was the step where the candidate doesn’t bother trying anything and just evaluates their test. What this tells us is that we don’t want to re-use that test step here. How about this:

  When no conditions are entered

If you try to run lucid against this particular scenario now you will be told you need a matcher. So put the following in your triangle_steps.rb file:
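A sketch, mirroring the matcher Lucid would tell you to create:

```ruby
When(/^no conditions are entered$/) do
  pending
end
```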

What actions should go in there? Well, this is pretty much just doing what the previous matcher did except it’s not taking values that are passed in. Can you figure out what to put in here for the actions?

Here’s what I did:
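A sketch; the actions are the same as before, just with empty values baked in:

```ruby
When(/^no conditions are entered$/) do
  on(TriangleTest).enter_sides('', '', '')
  on(TriangleTest).evaluate_data
end
```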

Being DRY with Test Definitions

Now you might notice something here. We have two test matchers that are very, very similar in operation. Consider what we have:
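Side by side, the two When test definitions look something like this (a sketch):

```ruby
When(/^the data condition is "([^"]*)", "([^"]*)", "([^"]*)"$/) do |side1, side2, side3|
  on(TriangleTest).enter_sides(side1, side2, side3)
  on(TriangleTest).evaluate_data
end

When(/^no conditions are entered$/) do
  on(TriangleTest).enter_sides('', '', '')
  on(TriangleTest).evaluate_data
end
```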

This can happen when you have decided to use a specific domain phrase (“no conditions are entered”) for what was previously a parameterized domain phrase (“the data condition is "", "", ""”). The question is: does this matter? Or, rather, is this a problem? Some people will look at the above and say it does not adhere to DRY (Don’t Repeat Yourself).

There is no hard and fast answer to this. My thinking is that adhering to DRY works for some areas but a slavish devotion to the principle does not work in all areas. And this may be one of them. Certainly you have options. You could, for example, just keep the original test step as you worded it:

  When the data condition is "", "", ""

You could instead decide to call one step from another step. Let’s consider how that would look. You could change your When test definition for “no conditions are entered” to the following:
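A sketch, assuming Lucid supports Cucumber’s `step` helper for invoking one step from within another:

```ruby
When(/^no conditions are entered$/) do
  step 'the data condition is "", "", ""'
end
```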

Here what you’ve done is simply call a pre-existing test step from this test definition. Some people consider this kind of thing a horrible practice. Is it? Well, it’s certainly true that you’ve now tied the implementation of one test definition to that of another. So if you were to change anything about the test definition that is being called, then the current test definition would fail. Certainly that could introduce some issues, as all such dependencies can. On the other hand, you have kept your test definitions reasonably adhering to the DRY principle.

I don’t pretend to have the answer to this necessarily. My instinct is that this idea of calling test steps from other test steps seems really helpful initially, but it could lead you down a path where you have many dependencies and thus you start to build in a great deal of fragility and brittleness. It’s something to think about as you venture down your own path to evolving a TDL and building up your test definitions.

Being DRY with Test Specs

Since we’re on the topic of DRY, however, let’s also consider the test spec itself. You’ll notice that we have the same Given for each scenario. That certainly screams out for a background. So what you can do here is add the following to the top of your test spec (before any Scenarios):

  Background:
    Given the triangle test app

Then remove the Given test step from each scenario.

If you rerun lucid for this test spec, you should see that all of your scenarios are still called and all of them still pass.

New Browser Considerations

One other thing you might have noticed and wondered about: each of these scenarios restarts the browser. You may or may not want that. But let’s say you don’t. How would you stop that? In this case, the notion of the browser restarting with each scenario was something that the project generator created as part of its logic. If you wanted to change that, you could open the events.rb file (located in the common/support directory).

At the bottom of the After block you will see a statement like this:
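I can’t reproduce the generated file exactly here, but the relevant statement will look something along these lines (`@browser.quit` is an assumption based on a Watir-backed browser object):

```ruby
After do |scenario|
  # ... other generated teardown logic ...
  @browser.quit   # comment this out to reuse the browser across scenarios
end
```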

If you comment out that line, then the browser will not be stopped after each scenario, which means the current browser instance will be reused each time.

Running Multiple Test Specs

You now have two test specs as part of your specs repository. You’ve already seen that you can run them all simply by typing the lucid command:

$ lucid

You could also run them all by running the lucid command and specifying the repository directory:

$ lucid specs

Note that if you had sub-directories within the specs directory that also contained test specs, those too would be executed. You’ve also seen that you can execute just one test spec:

$ lucid specs/triangle_responses.spec

Lucid generally assumes that you want to include as many specs as possible given a particular execution. So, again, if you call out a directory, Lucid assumes you want to run all specs in that directory. But let’s say that you want to exclude a particular test spec from running during a given test run. For that, you could do this:

$ lucid --exclude triangle_responses

Here you’ve told Lucid to exclude any test spec called triangle_responses. So in our case that means it will run only the triangle_fail_test spec.

Getting Results

You’ve seen that Lucid will generate command line output. You can also choose to output results using formatters. Some people like the idea of progress formatters since it is similar to unit testing tools they use. You can use a progress formatter by doing this:

$ lucid --format progress

Here instead of the verbose output at the command line, you’ll simply get a series of dots if a scenario passes. If any test step fails, then you will see an F appear. If you see a U appear in the list, that means the test step was undefined, which essentially means it has no matcher.

Speaking of people being comfortable with unit test formatting, you can also use a junit formatter. You can do this:

$ lucid --format junit --out results

This kind of formatter is helpful if you want to feed the results to something that knows how to interpret JUnit reports. Note here that you must provide an output path for the results to be stored in. Also note that if you do the above, you will not see any console output because you have told Lucid to generate in the JUnit format. You can, however, combine formatters. So if you wanted the standard output and the JUnit formatting, you could do this:

$ lucid -f standard -f junit --out results

Note here that I’m using the short form option of -f to specify the formatter.

Sometimes you might want a list of all the test definitions in your project as well as the test steps that are using a particular definition. The usage formatter generates a report that is designed to provide that. Try this:

$ lucid --format usage

This will execute your scenarios as normal. You will also note that the test definitions will be sorted by their average execution time, which can help you determine if you have a “heavy” test definition that is doing a lot of work. This output format will also show you any unused test definitions. To test that you could just create a test definition in your triangle_steps.rb file like this:
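For example, a throwaway definition with a phrase no spec uses (the phrase here is made up):

```ruby
When(/^this step is never actually used$/) do
  pending
end
```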

The usage formatted report will tell you that this test definition is not matched by any steps that are in a test spec.

You may also want to be able to get the test definition information itself but without all the usage details. For that you can run this:

$ lucid --format testdefs

As with the usage formatter, this will run through all of your test specs. What if you want the usage information but don’t want to run everything? This is where you can do a dry run. For example:

$ lucid --format testdefs --dry-run
$ lucid --format usage --dry-run

NOTE: If your version of Lucid was less than 0.1.1 when you generated the project via lucid-gen, then you will have a problem that was fixed in the 0.1.1 patch release. If you get a stack trace when running the above commands, simply do this: go to the common\support\driver.rb file and cut the contents from that file and paste them at the top of the common\support\browser.rb file. Yes, this will leave driver.rb entirely empty. Future projects that are generated with Lucid will not have this issue. The reason for the problem is that when you indicate to Lucid that it should perform a dry-run, it will not load your driver.rb file. Since the driver.rb file was previously including Fluent, it meant that certain elements of the page definitions could not be recognized.

You can also generate HTML reports, which can be nice if you want your reports distributable to a wider group via a web server.

$ lucid --format html --out results\spec-results.html

As with the JUnit formatter, you do have to provide an output path for the results file that is generated. Note a key difference, though. With the JUnit formatter you simply provide a path. That’s because multiple files will be generated. With the HTML formatter, however, you must specify the path and the results file that you want the results written to.

On this latter idea, note that the lucid.yml file also comes with a report profile, which essentially defaults to running an HTML report in the way I’ve just shown you. You can run it by executing Lucid via:

$ lucid -p report

You could of course define other profiles that provide different reporting options.

Lucid vs. Cucumber?

You have, at this point, a soup to nuts introduction to Lucid. So you still might wonder: why would I pick Lucid over Cucumber? Or over one of the many other solutions out there, many of which have an active development community around them?

Answer: you might not.

Lucid is something I wrote for myself but as something that I could use and trust as part of my work. Having built it for myself, it necessarily has some bias and it has some opinionated elements to it.

I wrote Lucid with the intention of cloning a popular tool but refocusing it for my purposes. That means I wanted to keep the parts of Cucumber that I thought were good — which is most of it, actually — but allow myself the ability to add, extend, or refine where and when necessary without worrying about Cucumber’s future 2.0 path.

Part of that refocusing was simply in nomenclature. I think the name “Cucumber” for a tool is silly. And, of course, it’s spawned like-minded tools such as Turnip, Spinach, CukeSalad, and Lettuce. There have even been a few anti-tools such as Steak. That all sounds cute and fun but I want these tools to actually be used by technical and non-technical stakeholders and be something that even larger corporations can rally around. In some cases, naming can matter. I also like when tools themselves sort of pivot around a concept or approach. So Lucid focuses around the concept of Lucid Testing.

I have a series of plans for Lucid in the future that will be incorporating various elements, like a type of macro support, a query language, a plug-in formatter subsystem, a better project generator, etc. Whether that makes Lucid better or worse than Cucumber, I’m not sure. Then again, I never set out to build the “better” mousetrap so much as I set out to make a different mousetrap.


This article was written by Jeff Nyman
