
When Bugs Swarm…

Ben Simo tweeted the following the other day:  “When bugs swarm, writing bug reports for each one becomes difficult. And reporting together looks like big ball of mud.”

First of all, I like the term “swarming” for bugs. I’ve tested several releases where one or more features had bugs around every corner, and it severely slowed down testing. “Swarming” is a good name for that.

It also reminded me of my frustrations working in a waterfallish environment where every bug must be documented the same way and people held “best practices” over the heads of others. I responded to Ben’s tweet with my thoughts on what testers should do when faced with a swarm of bugs: don’t document all of those bugs; the feature/product needs more dev time, and rigorous bug documentation may be wasteful. I didn’t show much context-driven thinking by replying in that manner. Michael Bolton, one of my favorite testing thinkers, called me on it:
“Easy: you’re the waiter, not the client. If the client wants four salads, offer different things, but remember who’s who.”
“For an exercise, list at least five reasons to keep going.”

It appears that I came across as trying to control the development process instead of supporting it. To be clear, I agree with Michael. Testers are here to provide quality-related information, not to force a process on the project. I was, however, trying to offer a heuristic to testers and others on software development projects: sometimes it’s cheaper to do more programming without the aid of testing and bug reports than it is with them. In other words, once you have the information you need (“this feature needs more programming”), further testing may no longer be necessary.

I will respond to Michael’s exercise though. After all, he provided more than five reasons to stop testing, so I should be able to think of five reasons to keep testing:

Ultimately, we should keep testing a product/feature if we want more information. So, when bugs swarm, here are some reasons we may still want more information:

  1. We want to find a pattern behind the swarm – We wouldn’t be very good testers if all we did was report to the project that “feature x” is broken. Perhaps we should do additional testing to help figure out whether there is a general pattern. For example, on an insurance application I was testing, we couldn’t issue any auto policies due to several bugs on various screens. It didn’t seem worth it to even worry about the auto line of business. However, after some additional testing we discovered that the majority of the issues occurred only in IE6. It was beneficial for us to continue testing to discover that pattern.
  2. We want to find the worst possible failure – There are so many bugs that it’s hard to see through the dense cloud. Maybe it’s not too helpful to report all of them right now, but it would be useful to know if there are any particularly nasty bugs.
  3. We really do want to know about all the failures we can find – The programmers may not be able to simply redevelop a feature without understanding what all of the bugs are. Perhaps the bugs are business rule problems. In that case it will be necessary to find all of the business rule problems in the application and report them to the programmers. I recommend (note: I don’t enforce) that the testers report this in a lightweight way. A spreadsheet should suffice, instead of documenting every last problem in a bug tracking system (one possible lightweight layout is sketched after this list).
  4. We want to demonstrate the failures – The programmers or management may want to see these bugs first hand. In that case we will have to work our way through the swarm again.
  5. We are compelled by politics – Let’s face it. We don’t work in perfect environments. Sometimes management will expect the testers to “keep their heads down” and “plow through test cases”. The appearance of hard work is often more important in some environments than actually doing good work.
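To illustrate what I mean by lightweight, here is a minimal sketch (Python, with made-up column names and example rows; a plain spreadsheet kept by hand would do just as well) of the kind of flat bug list I have in mind:

    import csv

    # One row per observed problem; just enough detail to talk the swarm over
    # with the programmers, far less ceremony than a full bug tracker entry.
    problems = [
        ("Auto - Drivers screen", "Cannot add a second driver", "Save button stays disabled"),
        ("Auto - Rating", "Premium calculates as $0 for every quote", "Blocks issuing any policy"),
    ]

    with open("swarm_notes.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Area", "Problem", "Notes"])
        writer.writerows(problems)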

In summary, we should respond in a context-driven way when bugs swarm, and that means we have a lot of options. We need to keep our eyes and brains open to the possible solutions. So, what do you think? Why might you still test when faced with a swarm of bugs?


The Software Testers of Cincinnati (SToC) had their first meeting last night at MaxTrain in Mason, OH. Using the “If you build it they will come” heuristic, Brett Leonard scheduled the meetup and there was a great turnout! Sixteen software testers from about eight different companies attended the meeting.  Here is a brief summary of events:

  1. Introductions – Brett Leonard introduced himself and then asked each of us to do the same, providing our name, company, the type of environment in which we work (tools used, methodologies, etc.), and the topics we are interested in discussing.  There were a few trends:
    • Automation – Quite a few attendees wanted to talk about ways to use automation and hear about how others have used it.
    • Agile – A surprising number of people work in what they call an agile environment. I’ll be interested in learning in the coming months what they mean by agile.
    • Exploratory Testing – A few of us have been exposed to the ideas of exploratory testing as described by Cem Kaner, James Bach, and Michael Bolton. We want to hear how others are using this approach and discuss when it is most appropriate to use.
    • Load/Performance Testing – The group is mostly involved in functional testing, so topics on the para-functional (non-functional) side intrigue us.
    • Organization – How is the development team structured? Where do people sit? How do they communicate? How many testers do you have for each developer? Why? We would like to know!
  2. Next Meeting – We then tried to decide what we would do for the next meeting. Brett has attended the Indianapolis Workshop on Software Testing, and we decided to adopt their method for our second meeting. Four people will present experience reports related to test automation. Each presentation should last no longer than 15 minutes, after which the audience gets about 15 minutes to ask questions.
  3. Pizza – I would say the majority of the meeting involved testers conferring while waiting on the pizza. I spent the time talking to testers about what they are testing, what they’ve tested before, and so on.  It was a lot of fun.

The next meeting is February 23, 2010 at MaxTrain. If you are anywhere near the Cincinnati area then please join the Software Testers of Cincinnati group on LinkedIn and join us next month!


I appreciate it when other testers share how they test things.  It can be tough and tiresome to read a lot of theory without some context and examples.  Matt Heusser’s recent testing challenge is an opportunity for testers to show how they think they would test a specific item.  Matt is also going to go one step further and share how he tested the same widget at his company.  For that, I thank him.

Now, these challenges are tough for me to get excited about sometimes, because they don’t represent a genuine problem.  I need to be motivated by an “authentic problem” as James Bach says.  However, this does represent an opportunity to make Matt $50 poorer, plus I might learn something in the end. 🙂


Matt is asking for two deliverables, a strategy and a plan.  He defined them in his original post, so I will follow his definitions in my response.

Strategy

My strategy for the testing on this project is heavily influenced by (or completely stolen from) James Bach’s Heuristic Test Strategy Model.  It exemplifies context-driven testing in that the test techniques (detailed in the plan section) are decided by the quality criteria we will consider, the environment of the project, and the scope of the testing effort.  In addition, I like to refer to James’ Test Planning Model.  In a nutshell, the model shows that choices are determined by the givens and the mission.  The rest of this strategy explains which heuristics the test team will use on this project, considering the givens and the mission.

Assumptions – There are all kinds of details that are not included in this exercise.  That is to be expected, but I want to share the assumptions I am working under:

  • The test team is separate from the development team
  • There are no regulations to consider
  • The test team is being used to provide information rather than defend the quality
  • The two testers on the team are skilled testers
  • The testing budget is determined by the project

Mission – The mission (as negotiated) is to find any “showstopper” bugs as fast as possible in order to ship working software every two weeks.  This means it is important to get an idea of what a “showstopper” is and what “working software” means. I won’t manage this by doing a lot of up-front training and embedding the testers with users.  Instead, I’ll react to any problems that are identified.

Organization – I would recommend that the testers be embedded as closely with the developers as possible.  This means I would want them active in unit test development, integration testing, code reviews, and spec reviews, and I would want the testers and developers to be able to communicate with each other easily.  Co-location would be preferred, but if a distributed team is required then we would have to create processes and provide tools that make communication easy.

Time Management – We will use heuristics to determine when it is time to stop testing.  This doesn’t require specific meetings or checkpoints.  Instead, the entire project should constantly question whether additional information about the project is needed to ship working software within the two-week timeline.  In addition, an exploratory-heavy approach will be used in testing.  Scripted and rigorously planned testing will be used when necessary, but should be the exception rather than the rule.  This will allow for more training and testing time.

Automation – If there are no automated unit tests on this project, then that is the first place I would recommend starting an automation effort.  The heuristic for the rest of the testing will be “If it is best for a machine to do it then automate it, but if it is better for a human to do it then don’t.”

Documentation – The testers will document items on an as-needed basis.

Tools – Whenever a testing tool is considered, it should serve the mission; if it does not specifically help find showstoppers faster, it should at least not detract from that mission.

Budget – I would recommend that the project consider the testing budget the same way they would insurance.

Plan

Staying true to the strategy, the plan for this challenge will consist purely of charter titles based on Session-Based Test Management.  This isn’t necessarily how I would do it in a professional context, but for this challenge I’d like to show the breadth of testing possibilities.  I will also draw on the mnemonic SFDPOT (Structure, Function, Data, Platform, Operations, Time) when thinking about possible charters.

  1. Explore the ‘Post’ feature – Consider domain testing, flow testing and stress testing.  All buttons and options that affect ‘Posting’ should be explored. 90 minutes
  2. Explore the “Showing…from…within…” dropdowns – Consider combinations and stress (see the combination sketch after this list). 60 minutes
  3. Explore the features of responding to signals.  120 minutes
  4. Explore browser configurations. 90 minutes
  5. Explore all features with different times and dates. 120 minutes
  6. Explore potential security risks. 90 minutes
  7. Explore conformance with requirements or spec document. 90 minutes
  8. Explore the accuracy of any help documentation. 90 minutes
  9. Explore common “real-world” scenarios. 90 minutes
  10. Explore error handling. 120 minutes
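Since charter 2 hinges on combinations, here is a minimal sketch of how those dropdown combinations could be enumerated as a checklist for the session. The dropdown names and values below are hypothetical; the challenge description doesn’t spell them out.

    from itertools import product

    # Hypothetical values for each dropdown; the real widget defines the actual lists.
    showing = ["everything", "posts", "signals"]
    from_whom = ["everyone", "people I follow", "just me"]
    within = ["all groups", "my department", "a single group"]

    # With small lists like these, every combination is cheap to enumerate
    # exhaustively and can be worked through (or sampled) during the session.
    for combo in product(showing, from_whom, within):
        print("Showing %s from %s within %s" % combo)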

These ten charters would probably take about three days of testing between two testers.  It is highly likely that additional charters will be needed once testing begins.  That is the strength of this approach, though.  It is very easy to adapt to new information.

The whole list of quality criteria in James’ Heuristic Test Strategy Model should be considered; the testers would be trained on test techniques and quality criteria, and they would apply that training during charter execution.

Automation Ideas

  1. Code a script that posts random strings to random groups to stress the system and try many more combinations of strings than would be practical with manual testing (see the sketch after this list).
  2. Click every link on a page – This seems like an easy thing to script, and is not something that I would want to do as a human.  It could find any bad links if the string manipulation code that generates the URLs has a defect in it.
  3. Keep posting until every Unicode character is used.
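Here is a minimal sketch of automation idea 1, assuming the widget exposes an HTTP endpoint that accepts form posts. The URL, field names, and group names are all assumptions for illustration; the real widget would dictate them.

    import random
    import string
    import requests  # third-party HTTP library; any HTTP client would do

    # Hypothetical endpoint, field names, and groups -- placeholders only.
    POST_URL = "http://localhost/widget/post"
    GROUPS = ["general", "sales", "engineering"]

    def random_text(max_len=140):
        # Mix letters, digits, punctuation, and spaces up to the advertised limit.
        alphabet = string.ascii_letters + string.digits + string.punctuation + " "
        return "".join(random.choice(alphabet) for _ in range(random.randint(0, max_len)))

    for _ in range(1000):
        response = requests.post(POST_URL, data={"group": random.choice(GROUPS),
                                                 "message": random_text()})
        # Log anything other than a clean success so a human can investigate later.
        if response.status_code != 200:
            print(response.status_code, response.text[:80])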

Application

I’d like to document some of the specific tests I would try if I ran the first charter.

  1. Try to make a post with more than 140 characters (Copy & Paste, URL parameters, hold down a key)
  2. Use PerlClip to generate strings with different characters and lengths (a rough counterstring sketch also follows this list).
  3. Try pasting different objects (images, OLE objects, etc.)
  4. Private posts to people with a variety of names and lengths.
  5. Empty posts
  6. Insert different kinds of people (maybe go over 140 char limit that way)
  7. Identify all areas that the posts would appear and ensure the posts display correctly (log files, other people’s feeds, etc.)
  8. Post signals to many different areas
  9. Change profile picture and post a new message
  10. Change server date and time and perform posts
  11. Hold down the “Post” button for a long time
  12. Try to post when service is down
  13. Refer to the Automation Ideas section
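For tests 1 and 2, a counterstring is handy: each marker character sits at the position named by the digits just before it, so when a field truncates your input you can read off exactly how many characters survived. This is a rough Python approximation in the spirit of PerlClip, not PerlClip itself:

    def counterstring(length, marker="*"):
        """Build a self-describing string of exactly `length` characters."""
        out = ""
        pos = length
        while pos > 0:
            chunk = str(pos) + marker   # this marker will land at position `pos`
            if len(chunk) > pos:        # near the front there may not be room for the digits
                chunk = chunk[-pos:]
            out = chunk + out           # build from the back toward the front
            pos -= len(chunk)
        return out

    # For the 140-character limit: paste a 141-character counterstring into the
    # post field and check which marker is the last one that makes it through.
    probe = counterstring(141)
    print(len(probe), probe)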

Conclusion

I learned a lot during this exercise, and I know there is more I can do.  However, I think I’ve put together a flexible strategy and a plan that show I can think very broadly about the kinds of tests that could be carried out.  I may revisit this later and comment on other things to consider.  In the meantime, I invite you to point out any mistakes I made or any suggestions you have.  Thank you for the challenge, Matt!
