I appreciate it when other testers share how they test things.  It can be tough and tiresome to read a lot of theory without some context and examples.  Matt Heusser’s recent testing challenge is an opportunity for testers to show how they think they would test a specific item.  Matt is also going to go one step further and share how he tested the same widget at his company.  For that, I thank him.

Now, these challenges are tough for me to get excited about sometimes, because they don’t represent a genuine problem.  I need to be motivated by an “authentic problem” as James Bach says.  However, this does represent an opportunity to make Matt $50 poorer, plus I might learn something in the end. 🙂


Matt is asking for two deliverables: a strategy and a plan.  He defined them in his original post, so I will follow his definitions in my response.

Strategy

My strategy for the testing on this project is heavily influenced by (or completely stolen from) James Bach’s Heuristic Test Strategy Model.  It exemplifies context-driven testing in that the test techniques (detailed in the plan section) are chosen based on the quality criteria we will consider, the environment of the project, and the scope of the testing effort.  In addition, I like to refer to James’ Test Planning Model.  In a nutshell, the model shows that choices are determined by the givens and the mission.  The rest of this strategy explains what kinds of heuristics the test team will use on this project, considering the givens and the mission.

Assumptions – This exercise leaves out all kinds of details.  That is to be expected, but I want to share the assumptions I am working under:

  • The test team is separate from the development team
  • There are no regulations to consider
  • The test team is being used to provide information rather than to defend quality
  • The two testers on the team are skilled testers
  • The testing budget is determined by the project

Mission – The mission (as negotiated) is to find any “showstopper” bugs as fast as possible in order to ship working software every two weeks.  This means it is important to get an idea of what a “showstopper” is and what “working software” means.  I won’t manage this with a lot of up-front training or by embedding the testers with users.  Instead, I’ll react to any problems that are identified.

Organization – I would recommend that the testers be embedded as closely with the developers as possible.  This means I would want them active in unit test development, integration testing, code reviews, and spec reviews, and I would want the testers and developers to be able to communicate easily.  Co-location would be preferred, but if a distributed team is required, then we would have to create processes and provide tools to make communication easy.

Time Management – We will use heuristics to determine when it is time to stop testing.  This doesn’t require specific meetings or checkpoints.  Instead, the entire project should constantly question whether additional information about the project is needed to ship working software within the two-week timeline.  In addition, an exploratory-heavy approach will be used in testing.  Scripted and rigorously planned testing will be used when necessary, but it should be the exception rather than the rule.  This will allow for more training and testing time.

Automation – If there are no automated unit tests on this project, then that is the first place I would recommend the automation effort start.  The heuristic for the rest of the testing will be: “If it is best for a machine to do it, automate it; if it is better for a human to do it, don’t.”

Documentation – The testers will document items on an as-needed basis.

Tools – Whenever a testing tool is considered, it should serve the mission; if it does not specifically help find showstoppers faster, then at the very least it should not detract from that mission.

Budget – I would recommend that the project treat the testing budget the same way it would treat insurance: spend on testing in proportion to the risk it is meant to cover.

Plan

Staying true to the strategy, the plan for this challenge will consist purely of charter titles based on Session-Based Test Management (SBTM).  This isn’t necessarily how I would do it in a professional context, but for this challenge I’d like to show the breadth of testing possibilities.  I will also draw on the SFDPOT mnemonic (Structure, Function, Data, Platform, Operations, Time) when thinking about possible charters.

  1. Explore the ‘Post’ feature – Consider domain testing, flow testing, and stress testing.  All buttons and options that affect ‘Posting’ should be explored. 90 minutes
  2. Explore the “Showing…from…within…” dropdowns – Consider combinations and stress (see the combination sketch after this list). 60 minutes
  3. Explore the features of responding to signals.  120 minutes
  4. Explore browser configurations. 90 minutes
  5. Explore all features with different times and dates. 120 minutes
  6. Explore potential security risks. 90 minutes
  7. Explore conformance with requirements or spec document. 90 minutes
  8. Explore the accuracy of any help documentation. 90 minutes
  9. Explore common “real-world” scenarios. 90 minutes
  10. Explore error handling. 120 minutes
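
Charter 2 calls for combinations.  As a minimal sketch, assuming hypothetical dropdown values (the real options would be read from the “Showing…from…within…” controls themselves), the combinations could be enumerated like this:

```python
# Enumerate every combination of the three dropdowns.
# The option lists below are hypothetical placeholders.
from itertools import product

showing = ["all signals", "my signals", "replies"]
sources = ["everyone", "people I follow"]
within = ["24 hours", "7 days", "30 days", "all time"]

for combo in product(showing, sources, within):
    print("Showing %s from %s within %s" % combo)

# 3 * 2 * 4 = 24 combinations -- few enough to try them all in one
# session; with more options, a pairwise subset would keep it short.
```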

These ten charters add up to 16 hours of session time and would probably take three days of testing between two testers.  It is highly likely that additional charters will be needed once testing begins.  That is the strength of this approach, though: it is very easy to adapt to new information.

The whole list of quality criteria in James’ Heuristic Test Strategy Model should be considered.  The testers would be trained on test techniques and quality criteria, and they would apply that training during charter execution.

Automation Ideas

  1. Code a script that posts random strings to random groups to stress the system and try many more combinations of strings than would be practical with manual testing (see the sketch after this list).
  2. Click every link on a page – This seems like an easy thing to script, and is not something that I would want to do as a human.  It could find any bad links if the string manipulation code that generates the URLs has a defect in it.
  3. Keep posting until every unicode character is used.
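
Here is a minimal sketch of the first idea in Python.  The endpoint URL, form field names, and group names are all hypothetical; the real ones (plus any authentication) would come from the application under test.

```python
# Sketch of automation idea 1: post random strings to random groups.
import random
import string

import requests  # third-party HTTP library, assumed to be available

POST_URL = "http://localhost/signals/post"  # hypothetical endpoint
GROUPS = ["everyone", "dev", "qa"]          # hypothetical group names

def random_message(limit=140):
    # Sometimes exceed the limit on purpose to probe boundary handling.
    length = random.randint(0, limit + 10)
    alphabet = string.printable + "äß漢😀"
    return "".join(random.choice(alphabet) for _ in range(length))

for attempt in range(1000):
    body = {"group": random.choice(GROUPS), "message": random_message()}
    response = requests.post(POST_URL, data=body)
    # Record anything other than success for a human to investigate.
    if response.status_code != 200:
        print(attempt, response.status_code, repr(body["message"][:40]))
```

Widening the alphabet to walk the whole Unicode range would cover the third idea with the same loop.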

Application

I’d like to document some of the specific tests I would try if I ran the first charter.

  1. Try to make a post with more than 140 characters (Copy & Paste, URL parameters, hold down a key)
  2. Use PerlClip to generate strings with different characters, including a counterstring (see the sketch after this list).
  3. Try pasting different objects (images, OLE objects, etc.)
  4. Private posts to people with a variety of names and lengths.
  5. Empty posts
  6. Insert different kinds of people (maybe go over 140 char limit that way)
  7. Identify all areas that the posts would appear and ensure the posts display correctly (log files, other people’s feeds, etc.)
  8. Post signals to many different areas
  9. Change profile picture and post a new message
  10. Change server date and time and perform posts
  11. Hold down the “Post” button for a long time
  12. Try to post when service is down
  13. Refer to Automation Ideas section
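
Test 2 mentions PerlClip; its handiest trick here is the counterstring, a string that reports its own length wherever it gets cut off.  Below is my own minimal Python re-implementation of the idea (not PerlClip itself):

```python
def counterstring(length: int, marker: str = "*") -> str:
    """Build a counterstring: each marker sits at the position spelled
    out by the digits just before it (leading markers are padding), so
    if a field silently truncates a paste, the tail of whatever
    survives tells you exactly how many characters made it in."""
    parts = []
    pos = length
    while pos > 0:
        token = str(pos) + marker
        if len(token) > pos:  # not enough room left; pad with markers
            token = marker * pos
        parts.append(token)
        pos -= len(token)
    return "".join(reversed(parts))

print(counterstring(10))        # *3*5*7*10*
print(len(counterstring(141)))  # 141 -- one past the 140-character limit
```

Pasting counterstring(141) into the post box combines tests 1 and 2: if the post saves, the last visible marker shows whether anything past character 140 survived.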

Conclusion

I learned a lot during this exercise, and I know there is more I can do.  However, I think I’ve put together a flexible strategy and a plan that show I can think very broadly about the kinds of tests that can be carried out.  I may revisit this later and comment on other things to consider.  In the meantime, I invite you to point out any mistakes I made or any suggestions you have.  Thank you for the challenge, Matt!
