Wednesday, June 21, 2017

CDMET: a mnemonic for generating exploratory testing charters

I gave a workshop about exploratory testing a few weeks ago. In addition, some colleagues want to use session-based testing in another project and don’t have much experience with it so far. One topic both groups were eager to know more about is how to generate test charters: how do I find missions I want to explore during a test session?

My short answer to this is “focus on the perceived risks in your software and on unanswered questions”. That statement alone is not very helpful, so I came up with various sources that can point to interesting testing areas and charters. They are also a good starting point for figuring out the perceived risks. For the charters themselves, Elisabeth Hendrickson’s template “Explore <target> with <resources> to discover <information>” works well, for example: “Explore the import feature with malformed files to discover how errors are reported.”

While clustering these sources I found a little mnemonic to remember them: CDMET, for Conversation, Documentation, Monitoring and Earlier Testing. Alongside these four clusters, I listed various ideas that can help you find test charters. I find them especially useful when combined with other oracles, for example Michael Bolton’s FEW HICCUPPS.

My list is by no means exhaustive, but I still think it can help you find new test charters.

Conversation

Conversation means every form of people speaking to each other. This spans from water cooler talk, through the regular meetings you join, up to meetings you specifically set up to talk about testing and risks.
  • talk to various people involved in the product
    • business departments; marketing; product development
    • developers; architects; security people
    • users; customers; managers
  • listen closely and make notes during every meeting you attend
    • daily stand-up; retrospective; backlog grooming; planning; debriefing
    • product demonstrations; training
    • status meetings; jour fixe meetings; risk workshops

Documentation

Documentation is everything that is written down, and yes, this includes source code. There is a variety of official specification documents you can use, but you should not stop there. There are also emails, group chats, user documentation, and more.
  • official specification documents
    • requirements documentation; use cases; business cases
    • user stories; customer journeys; personas
    • UX designs; mock-ups; feature descriptions
  • technical documentation
    • class diagrams; architecture descriptions
    • sequence diagrams; interface descriptions
    • source code (see the sketch at the end of this section)
  • general project documentation
    • wiki or confluence for everything someone deemed worthy of writing down
    • chat logs; emails; project plans; risk lists; meeting minutes
  • test documentation
    • test cases; test notes; test logs; test reports
    • bug descriptions; beta feedback
    • automation scripts, code, and reports
  • user documentation
    • manuals; handbooks; online help; known bugs
    • tutorials; product descriptions; release notes; FAQs
    • flyers; marketing campaigns
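
A quick way to mine the source code itself: comment markers like TODO or FIXME often flag corners the developers themselves consider risky. Here is a minimal sketch in Python; the marker words, file extensions and the “src” folder are my assumptions, so adapt them to your code base.

import os
import re

# Comment markers that often flag risky or unfinished corners of the code.
# Marker words and file extensions are assumptions; adjust to your project.
MARKERS = re.compile(r"\b(TODO|FIXME|HACK|XXX)\b[:\s]*(.*)")
EXTENSIONS = (".py", ".java", ".js")

def find_charter_hints(root):
    """Yield (path, line number, hint) for every marker comment under root."""
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(EXTENSIONS):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="ignore") as handle:
                for lineno, line in enumerate(handle, start=1):
                    match = MARKERS.search(line)
                    if match:
                        yield path, lineno, match.group(0).strip()

if __name__ == "__main__":
    for path, lineno, hint in find_charter_hints("src"):
        print(f"{path}:{lineno}: {hint}")

Every hit is a hint for a charter: explore the area around the marker to discover whether the acknowledged shortcut actually bites.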

Monitoring

Monitoring encompasses everything that I connect with the actual usage of the product, because this is a very powerful input for generating new test charters. I therefore use the term a bit more loosely than people usually do.
  • technical monitoring
    • crash reports; request times; server logs
    • ELK Stack (Elasticsearch, Logstash, Kibana); Grafana
  • user tracking
    • usage statistics for features, time of day, any other contexts
    • interaction rates; ad turnovers
    • top error messages the users face regularly (see the sketch after this list)
  • user feedback
    • App Store or Play Store reviews; reviews in magazines or blogs
    • customer service tickets; users reaching out to the product team via email
    • social media such as Twitter, Facebook, LinkedIn, etc.
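
To make “top error messages” concrete, here is a minimal sketch in Python that tallies error lines in a plain-text server log. The log format (lines containing “ERROR” followed by a message) and the file name are my assumptions; in a real setup you would run an equivalent query in Kibana or Grafana instead.

import re
from collections import Counter

# Assumed log line format: "2017-06-21 12:00:00 ERROR Payment declined ..."
# Adjust the pattern and file name to your actual server log.
ERROR_PATTERN = re.compile(r"ERROR\s+(.*)")

def top_errors(log_path, limit=10):
    """Tally error messages in a log file and return the most frequent ones."""
    counts = Counter()
    with open(log_path, encoding="utf-8", errors="ignore") as handle:
        for line in handle:
            match = ERROR_PATTERN.search(line)
            if match:
                counts[match.group(1).strip()] += 1
    return counts.most_common(limit)

if __name__ == "__main__":
    for message, count in top_errors("server.log"):
        print(f"{count:6d}  {message}")

Each frequent message is a candidate charter: explore the flow that produces it to discover why users hit it so often.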

Earlier Testing

Earlier Testing is very useful to inform your future testing. It basically means: use what you figured out yesterday to decide what to look at today. This feedback loop can be even faster when you come across an interesting part of the software while already performing a test session. Note it down and create new charters afterwards.
If you played your hand right, Earlier Testing blends in with some of the other clusters, because you should document your testing and tell other team members about it.
  • results & artifacts
    • test case execution reports; test notes; status reports
    • bug descriptions; beta feedback; debriefings
  • hunches & leftovers
    • whenever you went “huh?” or “that’s interesting” in the past (see the sketch after this list)
    • known bug lists; “can’t reproduce” issues
    • unfinished business (“there is so much more I could look at if I had the time”)
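
One way to keep those “huh?” moments from getting lost is to tag them directly in your session notes and collect them later. A minimal sketch in Python, assuming the notes are markdown files in a “sessions” folder and charter ideas are flagged with a “#charter” tag; both conventions are my assumptions, pick whatever fits your note-taking.

from pathlib import Path

# Assumed convention: session notes are .md files in a "sessions" folder,
# and every line holding a charter idea carries the tag "#charter".
TAG = "#charter"

def collect_charter_ideas(notes_dir="sessions"):
    """Return (note name, idea) pairs for every tagged line in the notes."""
    ideas = []
    for note in Path(notes_dir).glob("*.md"):
        for line in note.read_text(encoding="utf-8").splitlines():
            if TAG in line:
                ideas.append((note.name, line.replace(TAG, "").strip()))
    return ideas

if __name__ == "__main__":
    for source, idea in collect_charter_ideas():
        print(f"{source}: {idea}")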

Whatever helps

You see that some items in the clusters are not strictly separated; a meeting can have written minutes, for example. It does not really matter whether you remember the meeting because you recall the conversation or because you are flipping through the minutes you wrote down in an email.
The important part is that thinking about the conversations you had, the documentation you can read, the testing you already performed, and the monitoring you can access can help you figure out what to test next. It certainly helps me.