When Bugs Swarm…

Ben Simo tweeted the following the other day:  “When bugs swarm, writing bug reports for each one becomes difficult. And reporting together looks like big ball of mud.”

First of all, I like the term “swarming” for bugs. I’ve tested several releases where one or more features had bugs around every corner, and it severely slowed down testing. Swarming is a good name for that.

It also reminded me of my frustrations working in a waterfallish environment where every bug must be documented the same way, and people held “best practices” above the heads of others. I responded to Ben’s tweet with my thoughts on what testers should do when faced with a swarm of bugs: don’t document all of those bugs; the feature/product needs more dev time, and rigorous bug documentation may be wasteful. I didn’t show much context-driven thinking by replying in that manner. Michael Bolton, one of my favorite testing thinkers, called me on it:
“Easy: you’re the waiter, not the client. If the client wants four salads, offer different things, but remember who’s who.”
“For an exercise, list at least five reasons to keep going.”

It appears that I’m coming across as trying to control the development process instead of supporting it. To be clear, I agree with Michael. Testers are here to provide quality-related information, not to force a process on the project. I was, however, trying to provide a heuristic to testers and others on software development projects. Sometimes it’s cheaper to do more programming without the aid of testing and bug reports than with them. In other words, once you have the information you need (“this feature needs more programming”), then testing is no longer needed.

I will respond to Michael’s exercise though. After all, he provided more than five reasons to stop testing, so I should be able to think of five reasons to keep testing:

Ultimately we should keep testing a product/feature if we want more information. So, when bugs swarm, here are some reasons that we may still want more information:

  1. We want to find a pattern behind the swarm – We wouldn’t be very good testers if all we did was report to the project that “feature x” is broken. Perhaps we should do additional testing to help them figure out if there is a general pattern. For example, on an insurance application that I was testing, we couldn’t issue any auto policies due to several bugs on various screens. It didn’t seem worth it to even worry about the auto line of business. However, after some additional testing we discovered that the majority of the issues were related to IE6 only. It was beneficial for us to continue testing to discover what that pattern was.
  2. We want to find the worst possible failure – So, there are so many bugs it’s hard to see through the dense cloud. Maybe it’s not too helpful to report all of them right now, but it would be useful to know if there are any particularly nasty bugs.
  3. We really do want to know about all the failures we can find – The programmers may not be able to just redevelop a feature without understanding what all of the bugs are. Perhaps the bugs are business rule problems. In this case it will be necessary to find all of the business rule problems in the application and report them to the programmers. I recommend (note: I don’t enforce) that the testers report this in a lightweight way. A spreadsheet should suffice, instead of documenting every last problem in a bug tracking system.
  4. We want to demonstrate the failures – The programmers or management may want to see these bugs first hand. In that case we will have to work our way through the swarm again.
  5. We are compelled by politics – Let’s face it. We don’t work in perfect environments. Sometimes management will expect the testers to “keep their heads down” and “plow through test cases”. The appearance of hard work is often more important in some environments than actually doing good work.

In summary, we should respond in a context-driven way when bugs swarm, and that means we have a lot of options. We need to keep our eyes and brains open to the possible solutions. So, what do you think? Why might you still test when faced with a swarm of bugs?

The Software Testers of Cincinnati (SToC) had their first meeting last night at MaxTrain in Mason, OH. Using the “If you build it they will come” heuristic, Brett Leonard scheduled the meetup and there was a great turnout! Sixteen software testers from about eight different companies attended the meeting.  Here is a brief summary of events:

  1. Introductions – Brett Leonard introduced himself and then asked all of us to do the same. He asked for us to provide our name, company, the type of environment in which we work (tools used, methodologies, etc.), and what topics we are interested in discussing.  There were a few trends:
    • Automation – Quite a few attendees wanted to talk about ways to use automation and hear about how others have used it.
    • Agile – A surprising number of people work in what they call an agile environment. I’ll be interested in learning in the coming months what they mean by agile.
    • Exploratory Testing – A few of us have been exposed to the ideas of exploratory testing as described by Cem Kaner, James Bach, and Michael Bolton. We want to hear how others are using this approach and discuss when it is most appropriate to use.
    • Load/Performance Testing – The group was mostly involved in functional testing, so items on the para-functional side or non-functional side intrigue us.
    • Organization – How is the development team structured? Where do people sit? How do they communicate? How many testers do you have for each developer? Why? We would like to know!
  2. Next Meeting – We then tried to decide what we would do for the next meeting. Brett has attended the Indianapolis Workshop on Software Testing, and we decided to adopt their method for our second meeting. Four people will present experience reports related to test automation. These presentations should last no longer than 15 minutes, and then the audience gets about 15 minutes to ask questions.
  3. Pizza – I would say the majority of the meeting involved testers conferring while waiting on the pizza. I spent the time talking to testers about what they are testing, what they’ve tested before, and so on. It was a lot of fun.

The next meeting is February 23, 2010 at MaxTrain. If you are anywhere near the Cincinnati area then please join the Software Testers of Cincinnati group on LinkedIn and join us next month!

I appreciate it when other testers share how they test things. It can be tough and tiresome to read a lot of theory without some context and examples. Matt Heusser’s recent testing challenge is an opportunity for testers to show how they think they would test a specific item. Matt is also going to go one step further and share how he tested the same widget at his company. For that, I thank him.

Now, these challenges are tough for me to get excited about sometimes, because they don’t represent a genuine problem. I need to be motivated by an “authentic problem,” as James Bach says. However, this does represent an opportunity to make Matt $50 poorer, plus I might learn something in the end. 🙂


Matt is asking for two deliverables: a strategy and a plan. He defined them in his original post, so I will follow his definitions in my response.

Strategy

My strategy for the testing on this project is heavily influenced by (or completely stolen from) James Bach’s Heuristic Test Strategy Model. It really exemplifies context-driven testing in that the test techniques (detailed in the plan section) are decided by the quality criteria we will consider, the environment of the project, and the scope of the testing effort. In addition, I like to refer to James’ Test Planning Model. In a nutshell, the model shows that choices are determined by the givens and the mission. The rest of this strategy will explain what kinds of heuristics the test team will use on this project, considering the givens and the mission.

Assumptions – In this exercise there are all kinds of details that are not included. This is to be expected, but I wanted to share the assumptions I’m making.

  • The test team is separate from the development team
  • There are no regulations to consider
  • The test team is being used to provide information rather than defend the quality
  • The two testers on the team are skilled testers
  • The testing budget is determined by the project

Mission – The mission (as negotiated) is to find any “showstopper” bugs as fast as possible in order to ship working software every two weeks.  This means that it is important to get an idea of what a “showstopper” is and what “working software” means. I won’t manage this by doing a lot of up front training and embedding the testers with users.  Instead I’ll react to any problems that are identified.

Organization – I would recommend that the testers be as closely embedded with the developers as possible. This means that I would want them active in unit test development, integration testing, code reviews, and spec reviews, and I would want testers and developers to be able to communicate with each other easily. Co-location would be preferred, but if a distributed team is required then we would have to create processes and provide tools to make communication easy.

Time Management – We will use heuristics to determine if it is time to stop testing. This doesn’t require specific meetings or checkpoints. Instead, the entire project should constantly question whether there is additional information needed about the project to ship working software within the two-week timeline. In addition, an exploratory-heavy approach will be used in testing. Scripted and rigorously planned testing will be used when necessary, but should be the exception instead of the rule. This will allow for more training and testing time.

Automation – If there are no automated unit tests on this project, then that is one place where I would recommend an automation effort start. The heuristic for the rest of the testing will be “If it is best for a machine to do it then automate it, but if it is better for a human to do it then don’t.”
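
To make that recommendation a little more concrete, here is a minimal sketch (in Python, written to run under pytest) of the kind of automated unit check I have in mind. The posting module, the validate_post function, and its true/false behavior are placeholders I invented for this example, not anything defined by Matt’s challenge.

    # A sketch of an automated unit check for a 140-character posting rule.
    # "posting" and "validate_post" are hypothetical placeholders for whatever
    # the real project's code actually exposes.
    from posting import validate_post


    def test_accepts_post_at_the_limit():
        assert validate_post("x" * 140) is True


    def test_rejects_post_over_the_limit():
        assert validate_post("x" * 141) is False


    def test_rejects_empty_post():
        # Assumes the application rejects empty posts.
        assert validate_post("") is False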

Documentation – The testers will document items on an as-needed basis.

Tools – Whenever a testing tool is considered, it should serve the mission; if it does not specifically help find showstoppers faster, it definitely should not detract from that mission.

Budget – I would recommend that the project consider the testing budget the same way they would insurance.

Plan

Staying true to the strategy, the plan for this challenge will consist purely of charter titles based on Session-Based Test Management. This isn’t necessarily how I would do it in a professional context, but for this challenge I’d like to show the breadth of testing possibilities. I will also draw on the mnemonic SFDPOT (Structure, Function, Data, Platform, Operations, Time) when thinking about possible charters.

  1. Explore the ‘Post’ feature – Consider domain testing, flow testing and stress testing.  All buttons and options that affect ‘Posting’ should be explored. 90 minutes
  2. Explore the “Showing…from…within…” dropdowns – Consider combinations and stress. 60 minutes
  3. Explore the features of responding to signals.  120 minutes
  4. Explore browser configurations. 90 minutes
  5. Explore all features with different times and dates. 120 minutes
  6. Explore potential security risks. 90 minutes
  7. Explore conformance with requirements or spec document. 90 minutes
  8. Explore the accuracy of any help documentation. 90 minutes
  9. Explore common “real-world” scenarios. 90 minutes
  10. Explore error handling. 120 minutes

These ten charters add up to about 16 hours of session time and would probably take 3 days of testing between 2 testers once debriefs, bug investigation, and reporting are factored in. It is highly likely that additional charters will be needed once testing begins. That is the strength of this approach, though. It is very easy to adapt to new information.

The whole list of quality criteria in James’ Heuristic Test Strategy Model should be considered. The testers would be trained on test techniques and quality criteria, and they would apply that training during charter execution.

Automation Ideas

  1. Code a script that posts random strings to random groups to stress the system and try many more combinations of strings than would be practical with manual testing (a rough sketch follows this list).
  2. Click every link on a page – This seems like an easy thing to script, and is not something that I would want to do as a human.  It could find any bad links if the string manipulation code that generates the URLs has a defect in it.
  3. Keep posting until every Unicode character is used.
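
Here is a rough sketch of what the first idea might look like in Python. The endpoint URL, the form field names, and the group names are all placeholders I made up for this post; a real script would target the application’s actual API or drive its UI instead.

    import random
    import string

    import requests  # third-party HTTP client

    # Hypothetical placeholders; replace with the application's real endpoint,
    # parameters, and group names.
    POST_URL = "http://testserver.example/api/post"
    GROUPS = ["everyone", "testers", "developers"]


    def random_message(max_len=200):
        """Build a random string, sometimes longer than the 140-character limit."""
        alphabet = string.printable + "äöü€漢字"
        length = random.randint(0, max_len)
        return "".join(random.choice(alphabet) for _ in range(length))


    def stress_posts(count=1000):
        """Fire off many random posts and flag any suspicious responses."""
        for _ in range(count):
            message = random_message()
            group = random.choice(GROUPS)
            response = requests.post(POST_URL, data={"group": group, "message": message})
            # A server error, or a success on an over-limit post, deserves a human look.
            if response.status_code >= 500 or (len(message) > 140 and response.ok):
                print(f"Suspicious: status={response.status_code}, "
                      f"len={len(message)}, group={group!r}")


    if __name__ == "__main__":
        stress_posts()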

Application

I’d like to document some of the specific tests I would try if I ran the first charter.

  1. Try to make a post with more than 140 characters (Copy & Paste, URL parameters, hold down a key)
  2. Use PerlClip to generate a string with different characters (see the counterstring sketch after this list).
  3. Try pasting different objects (images, OLE objects, etc.)
  4. Private posts to people with a variety of names and lengths.
  5. Empty posts
  6. Insert different kinds of people (maybe go over 140 char limit that way)
  7. Identify all areas that the posts would appear and ensure the posts display correctly (log files, other people’s feeds, etc.)
  8. Post signals to many different areas
  9. Change profile picture and post a new message
  10. Change server date and time and perform posts
  11. Hold down the “Post” button for a long time
  12. Try to post when service is down
  13. Refer to Automation Ideas section
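
For the PerlClip idea in item 2, the strings I would reach for are counterstrings: strings in which the digits before each marker tell you that marker’s position, so you can see exactly where a field truncated your input. Here is my own small Python sketch of the idea (not PerlClip’s actual code):

    def counterstring(length, marker="*"):
        """Build a counterstring: the digits before each marker give that marker's
        1-based position, e.g. counterstring(10) -> '*3*5*7*10*'. Useful for
        spotting exactly where an input field truncates what you paste in."""
        pieces = []
        pos = length
        while pos > 0:
            piece = f"{pos}{marker}"
            if len(piece) > pos:
                # Not enough room left for the whole piece; keep only its tail.
                piece = piece[-pos:]
            pieces.append(piece)
            pos -= len(piece)
        return "".join(reversed(pieces))


    if __name__ == "__main__":
        # One character past the assumed 140-character limit.
        s = counterstring(141)
        print(len(s), s)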

Conclusion

I learned a lot during this exercise, and I know there is more I can do. However, I think I’ve put together a flexible strategy and a plan that shows I can think very broadly about the kinds of tests that can be carried out. I may revisit this later and comment on other things to consider. In the meantime, I invite you to point out any mistakes I made or any suggestions you have. Thank you for the challenge, Matt!

Going Through the Motions

Michael Bolton posted a list of heuristics to use when deciding if it is time to stop testing.  As usual he provides a pretty comprehensive list, but for a change I feel like I can add something to it.

The seventh heuristic is “The Pause that Refreshes Heuristic,” which means that testing isn’t stopped, but suspended until the testers feel inspired to start testing again. I don’t think this is a heuristic trigger to stop testing, though. I think it is a heuristic action that takes place when a test team applies some other rule of thumb. Michael identified two reasons that the testers might pause, and I would suggest that this one heuristic should be broken out into two separate ones.

Change in Priorities – Michael mentioned that testing may pause if there is a higher priority at the moment.  I think this one is largely self-explanatory, but a heuristic that I often don’t see applied is the next one.

Lights are Off – This heuristic says that if you don’t know why you are testing then pause and refresh.  I’ve seen very large test teams continue to run the same test cases over and over, because they feel like they need to be doing something even if they don’t know why.  Let’s explore some of the reasons that this might occur:

  1. “We’ve always done it this way.” – The testers may simply have faith that if the team has always done it this way, then it must be good.
  2. The testers are rewarded by numbers – If the testers reduce the number of test cases that are “executed”, then it could look like less work is being done.
  3. There is time left – The test team might be given more time than is actually necessary to test a feature, so they fill it up with old test cases that don’t have much meaning.

I would suggest that testing has technically stopped at this point, because the testers are no longer thinking about their actions. That is precisely the reason the guise of testing should stop. It’s time to rethink the mission and analyze the risks in the application; or perhaps the “testing” being performed is actually important, but the testers don’t understand why. In the latter case it would be a better use of time to step back and help the testers understand why specific test cases were chosen for execution, rather than telling them to “just finish your test cases”.

So, Michael’s list of heuristics is helpful, but I’d recommend changes. In the end it doesn’t matter if you use his list, my list, or any list at all. These heuristics are rules of thumb to help thinking testers decide when testing should stop. The most important thing is that you are thinking about it.

Tests and Test Cases

As a test manager, I like to interact with new testers on my team. So, I had a question for our newest tester, Venkat, after he completed his first test case.  The test case instructs the tester to enter an invalid value into a field to ensure that the application rejects it.

“So, how many tests have you run so far, Venkat?”

He looked puzzled.  “Well, I’m still on my first test.”

“Well, maybe we should talk about what I think a test is. I actually think that you have ‘executed’ countless tests at this point.”

I explained that I define a test as an action someone takes to answer a question about the application. These actions are abstract. A test could be a thought, a visual check, or a feeling. Each action tests the application’s ability to meet our expectations, and the actions are sometimes difficult to separate. I then asked Venkat to list all the things that were going on in his head while he executed that one ‘test’. Here is the list he provided:

  • It was difficult to enter the invalid value.
  • The application invalidated his input rather quickly.
  • The other labels and edit boxes on the screen seemed correct.
  • There was a UI bug on a screen that he needed to use prior to the current one.
  • He liked the invalidation message, but he was confused by the location of some of the edit boxes.
  • He learned some business rules and how to navigate the system.

I asked Venkat to stop there. He had run numerous (maybe countless) tests against the application while following the steps in the document. This document is a “test case”.

I define a test case as a documented model of a predicted test. Documented tests are attempts by humans to turn the abstract into the concrete, and I think they can be a valuable tool. Testers should consider documenting tests if the goal is to:

  • remember them for the future (personally or for a test team)
  • submit to others for review (peer review or auditing purposes)
  • assign to others for execution (more than likely these are checks)
  • submit to others for some kind of scripting support

These are rather ambiguous categories, but the method of documentation will differ depending on what the end goal is. It also isn’t very helpful to count the tests or test cases; as I’ve demonstrated, a skilled tester (even a junior one like Venkat) will perform all kinds of tests that in the end are counted as one. The benefit of counting is more in showing progress towards a total.

After Venkat completed 10 of the 20 introductory test cases, I asked him another question. “When will you be done with your training test cases?”

Venkat looked quickly at the clock, and then did some quick mental math. “It took 95 minutes to complete the first 10 tests, so I would expect it to take 95 more minutes.”

“Aw, you let me trick you again!  Tell me about your testing so far.”

Venkat found four bugs that took 40 of the 95 minutes to write up. He also couldn’t understand a step in one of the test cases, so he had to ask his neighbor about it. Consider the possibilities with the remaining test cases:

  • The remaining tests are somewhat redundant, and Venkat has already technically performed the work that the test cases suggest.
  • There are more or fewer bugs to be found with the remaining tests.
  • The remaining test cases could be more or less difficult to understand.
  • The remaining test cases require more or fewer steps to complete.

So, even reporting a percentage of the total test cases can be misleading and unhelpful. I recommend that testers be able to tell a story about the testing they have completed so far and the testing that remains. If numbers are required, then at least a qualitative story is available to go with them.

A week after having this discussion with Venkat I decided to stop by and see if it shaped the way he talked about his testing.

“How many tests have you run today, Venkat?” I asked in a half-joking tone.

He smiled and said, “Can you please help me understand why the answer would be helpful to you? The number of tests isn’t the same as the number of test cases, and I’m not sure why either number would be useful.”

I just gave him a thumbs up and kept walking.
