
Monday, November 13, 2017

RiskStorming - Mapping Risks with TestSphere

Why RiskStorming?

Beren van Daele and Andreas Faes had been chosen to give a workshop at the Agile & Automation Days in Kraków. In that workshop, they were going to utilize the famous TestSphere cards as a tool to create a test strategy. In the end, Andreas sadly couldn't make it, and Beren asked me if I wanted to jump in. Feeling a bit guilty and thrilled at the same time, I agreed.
In our first hangout, we agreed that we wanted the resulting test strategy to be risk-based, and I volunteered to look for a nice, visual format for generating risks that would be fun to use during a workshop. Andreas said something along the lines of "this is not easy to find", and he was absolutely right. In the end, I found nothing that really suited our needs, so one weekend I just put my phone on the table and sorted the TestSphere cards around it, only to think to myself that this looked like a circle. Thus "RiskStorming" was born.

The idea behind it is to use the TestSphere cards to steer a discussion about product risks and to help come up with a test strategy that actually tackles the identified risks.


How does RiskStorming work?

RiskStorming can help you guide your thinking towards product risks and how to mitigate them, whether you are on your own or with a group of people. I will therefore first describe how RiskStorming generally works and then address some topics that can come up when you facilitate a group session.

(Re)Inventing your test strategy

RiskStorming itself is structured in circles, with the application under test in the centre. I found it helpful to have some form of representation of that application on the table while you go forward, like a smartphone showing the application or any other token, e.g. the company's mascot. This helps to visualize the rather abstract concept of software. If you are lucky and are actually testing a smartphone app or website, you can even access it during the RiskStorming session.

The first circle surrounding the application focuses on "Quality Aspects", which are covered by the blue TestSphere cards. Think about the most important Quality Aspects for your application and lay those cards around your representation. The hard part is not coming up with potentially important Quality Aspects, but acknowledging that not all of them can or will be equally important. Focus. If you use our board, we foster this by actively limiting the number of Quality Aspects to six.

The second circle focuses on the actual risks. Take a pen and sticky notes and start writing down risks that threaten the application, specifically those connected to the Quality Aspects you chose as the most important in the last phase. For example, if you chose "Security and Permissions", "loss of personal credit card information and social security numbers" might be a valid risk to your application. Put the sticky notes on the Quality Aspects they belong to.

The last circle deals with risk mitigation. Look at the identified risks and use the full TestSphere card deck to find heuristics, patterns and techniques that can help you mitigate them. Since the TestSphere cards are by no means complete, you might come up with more ideas than you find cards. Write down your ideas on additional sticky notes, preferably in a different colour than the ones showing the risks, and add them. The idea behind using the TestSphere cards is to give you new inspiration for how to approach a testing problem; e.g. a lot of testers don't yet think about utilizing "Log-digging" for their testing, yet it is a very powerful tool.
In the end, you will have a mixture of your own sticky notes and TestSphere cards, which is absolutely fine.
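If you like to think in structures, here is a minimal sketch of the three circles as plain data. Everything in it (the class names, the example application, the sample risk) is invented for illustration and is not part of TestSphere itself:

```python
from dataclasses import dataclass, field

MAX_QUALITY_ASPECTS = 6  # the board deliberately forces focus


@dataclass
class Risk:
    description: str                 # second circle: a sticky note
    quality_aspect: str              # first circle: the blue card it threatens
    mitigations: list = field(default_factory=list)  # third circle: cards or own notes


@dataclass
class RiskStormingBoard:
    application: str                 # the token in the centre of the table
    quality_aspects: list = field(default_factory=list)
    risks: list = field(default_factory=list)

    def pick_quality_aspect(self, aspect: str) -> None:
        if len(self.quality_aspects) >= MAX_QUALITY_ASPECTS:
            raise ValueError("Focus: no more than six Quality Aspects")
        self.quality_aspects.append(aspect)


# Walking through the three circles for a hypothetical web shop:
board = RiskStormingBoard(application="web shop")
board.pick_quality_aspect("Security and Permissions")
board.risks.append(Risk(
    description="loss of personal credit card information",
    quality_aspect="Security and Permissions",
    mitigations=["Log-digging", "penetration test (own sticky note)"],
))
```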

Using RiskStorming in a timebox

I believe it is helpful to timebox the RiskStorming phases: it keeps you on track and makes sure you arrive at a result after a given amount of time, because the meeting you scheduled for you and your team will end eventually. You can always double-check the resulting test strategy after a good night's sleep.
In my experience so far, most of the time should be spent on risks and their mitigations, and not too much on the Quality Aspects. Here is a setup that has worked for several rounds now:

  • Phase 1: Discussing Quality Aspects: 10 minutes
  • Phase 2: Finding Risks: 25 minutes
  • Phase 3: Finding Risk Mitigations: 25 minutes

Facilitating a RiskStorming session

In addition to knowing the rules and having an idea about good timeboxes, it is helpful to be aware of the questions and struggles participants might have when you facilitate a session.

Thinking in bugs, not in risks

More than one group noted down bugs instead of risks, e.g. "the order of elements is wrong" instead of "users don't finish the ordering process" for the Quality Aspect of "User Friendliness". This can subsequently lead to a shallow test strategy that only takes care of very specific incidents. Make sure to remind the participants to come up with risks, not bugs.

Making decisions by reducing resources

In the beginning, we did not give participants a limit on the number of Quality Aspects, but we changed this once Beren came up with the board. The point is not that there can never be more than six important Quality Aspects, but to foster discussion and, eventually, a decision by the whole group. These discussions give the participants a better understanding of each other's view on the product and of what is most important about it.

Giving them time to read the cards

There are a lot of TestSphere cards. Your participants should have the time to read them properly when trying to assign certain techniques or heuristics to their test strategy. If possible, make the cards available to them before the session; if not, you might want to prolong the "Finding Risk Mitigations" phase.

Make them read the cards

The participants should not only have the time to read the cards, they should actually do so. Look out for people who only read the headline and then apply the card in a wrong way because they have not grasped its whole meaning, but just assumed they knew what the card is about. Encourage them to take their time and read the cards entirely, especially if a term is new to them.

Don’t restrict them to the cards

It is not possible to cover every technique, pattern, heuristic or oracle that ever came up in the world of testing with a TestSphere card. This means the participants may come up with ideas for how to test for or mitigate an identified risk that are not covered by any card. Encourage them to still write those ideas down and put them on the board. Don't let a finite set of cards get in the way of finding the best possible test strategy.

Make them use the cards

Not restricting the participants to the cards does not mean they should not use them at all. The idea of using the TestSphere cards is to give people new ideas for how to approach a testing strategy. The cards can inspire them to use methods they have never tried before; e.g. a group that previously approached its testing problems strictly from a business point of view is often surprised by how many more options "Log-digging" provides.

Be aware if they lack experience in an area

In some sessions, we encountered participants who did not have much experience with testing in general or with certain aspects of testing, e.g. security testing. Try to place at least one person proficient in the topics that might come up in each group. If that is not possible, be prepared to do more coaching.

Use the board

Beren came up with a board that helps you keep in mind what the different phases are about; it also makes the session more fun, since it amplifies the gamified character of RiskStorming. Thomas Harvey then made it beautiful. If you print it in the right size, the cards fit the slots perfectly.




Here is the board in different sizes for you to download:

TestSphere Riskstorming 4xA3 - A1

TestSphere Riskstorming 8xA4 - A1

TestSphere Riskstorming A1 (can be printed A3 or A4)

TestSphere Riskstorming A3

Impressions

Here are a few pictures of what this actually looks like:



Wednesday, June 21, 2017

CDMET: a mnemonic for generating exploratory testing charters

I gave a workshop about exploratory testing a few weeks ago. In addition, some colleagues want to use session-based testing in another project and don't have much experience with it so far. One topic both groups were eager to know more about is how to generate test charters: how do I find missions I want to explore during a test session?

My short answer to this is "focus on the perceived risks in your software and on unanswered questions". This statement alone is not very helpful, so I came up with various sources that can point to interesting testing areas and charters. They are also a good starting point for figuring out the perceived risks.

While clustering these sources I found a little mnemonic to remember them: CDMET, for Conversation, Documentation, Monitoring and Earlier Testing. Alongside these four clusters I listed various ideas that can help you find test charters. I find them especially useful when combined with other oracles, for example Michael Bolton's FEW HICCUPPS.

My list is by no means exhaustive, but I still think it can help you find new test charters.

Conversation

Conversation means every form of people speaking to each other. This can span from water-cooler talk to regular meetings you join to meetings you specifically create to talk about testing and risks.
  • talk to various people involved in the product
    • business departments; marketing; product development
    • developers; architects; security people
    • users; customers; managers
  • listen closely and make notes during every meeting you attend
    • daily; retrospective; grooming; planning; debriefing
    • product demonstrations; training
    • status meetings; jour fixes; risk workshops

Documentation

Documentation is everything that is written down, and yes, this includes source code. There is a variety of official specification documents you can use, but you should not stop there. There are also emails, group chats, user documentation, etc.
  • official specification documents
    • requirements documentation; use cases; business cases
    • user stories; customer journey; personas 
    • UX designs; mock-ups; feature descriptions
  • technical documentation
    • class diagrams; architecture descriptions
    • sequence diagrams; interface descriptions
    • source code
  • general project documentation
    • wiki or Confluence pages for everything someone deemed worthy of writing down
    • chat logs; emails; project plans; risk lists; meeting protocols 
  • test documentation
    • test cases; test notes; test protocols; test reports
    • bug descriptions; beta feedback
    • automation scripts, code and reports
  • user documentation
    • manuals; handbooks; online help; known bugs
    • tutorials; product descriptions; release notes; FAQs
    • flyers; marketing campaigns

Monitoring

Monitoring encompasses everything that I connect with the actual usage of the product, because this is a very powerful input for generating new test charters. I therefore use the term a bit more loosely than people usually do; a small sketch after the following list shows one way to mine logs for charter ideas.
  • technical monitoring
    • crash reports; request times; server logs
    • ELK Stack (Elasticsearch, Logstash, Kibana); Grafana
  • user tracking
    • usage statistics for features, time of day, any other contexts
    • interaction rates; ad turnovers
    • top error messages the user faces regularly
  • user feedback
    • App or Play Store Reviews; reviews in magazines or blogs
    • customer service tickets; users reaching out to the product team via email
    • social media like Twitter, Facebook, LinkedIn, etc
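To give one concrete example for this cluster, here is a minimal sketch of digging through logs for charter ideas: it counts the most frequent ERROR messages in a server log. The log format, the file name and the threshold are assumptions made up for this example:

```python
import re
from collections import Counter
from pathlib import Path

# Assumed line format: "2017-06-21 12:00:03 ERROR Payment service timed out"
ERROR_LINE = re.compile(r"\bERROR\b\s+(?P<message>.+)$")


def top_error_messages(log_file: str, n: int = 10) -> list:
    """Return the n most frequent ERROR messages; each one is a charter candidate."""
    counts = Counter()
    for line in Path(log_file).read_text(encoding="utf-8").splitlines():
        match = ERROR_LINE.search(line)
        if match:
            counts[match.group("message")] += 1
    return counts.most_common(n)


if __name__ == "__main__":
    for message, count in top_error_messages("server.log"):
        print(f"{count:5d}  {message}")  # a frequent timeout might seed a charter
```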

Earlier Testing

Earlier Testing is very useful to inform your future testing; it basically means: use what you figured out yesterday to decide what to look at today. This feedback loop can be even faster when you come across an interesting part of the software while already performing a test session. Note it down and create new charters afterwards; the sketch after the following list shows one way to harvest such notes.
If you played your hand right, Earlier Testing should blend in with some of the other clusters, because you should document your testing and tell other team members about it.
  • results & artifacts
    • test case execution reports; test notes; status reports
    • bug descriptions; beta feedback; debriefings
  • hunches & leftovers
    • whenever you went “huh?” or “that’s interesting” in the past
    • known bug lists; “can’t reproduce” issues
    • unfinished business (“there is so much more I could look at if I had the time”)
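As a sketch of how Earlier Testing can feed the next session: if you flag noteworthy moments in plain-text session notes with a marker (the "HUH:" convention here is my own invention, not an established practice), a few lines of code can collect them as charter seeds:

```python
from pathlib import Path


def charter_seeds(notes_dir: str) -> list:
    """Collect observations flagged with 'HUH:' from earlier session notes."""
    seeds = []
    for notes in sorted(Path(notes_dir).glob("*.txt")):
        for line in notes.read_text(encoding="utf-8").splitlines():
            if line.startswith("HUH:"):
                seeds.append(f"Explore: {line[len('HUH:'):].strip()} ({notes.name})")
    return seeds


print("\n".join(charter_seeds("session-notes")))
```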

Whatever helps

You can see that some items in the clusters are not strictly separated; a meeting can have a written protocol, for example. It does not really matter whether you remember the meeting because you recall the conversations you had or because you are flipping through the minutes you wrote down afterwards.
The important part is that thinking about the conversations you had, the documentation you can read, the testing you already performed and the monitoring you can access can help you figure out what to test next. It surely helps me.



Tuesday, May 16, 2017

Mingling Conferences

Recently, a thread started on Twitter, originating with this tweet:


What happened then was developers asking testers to join their conferences and vice versa.
We then started gathering developer conferences and testing conferences that actively want both disciplines to mingle. At some of these testing conferences I have personally met developers, some are completely built around people meeting, and some state on their web pages that they want everybody to join them.

As Twitter is a fleeting medium, I want to use this blog post to collect these conferences. I will state which conference it is, where and when it takes place, and whether it is a developer or a tester conference. The latter turns out to be pretty easy to guess, since testing conferences love to put the word "test" in their name.

I want this list to grow, so please DM me on Twitter, write me an email or leave a comment so I can curate and update it. I am eager to get this blog post out before the energy to do so leaves me, so I will start with the conferences mentioned in the Twitter thread and research more over time, e.g. by crawling through the respective web pages for partnering conferences. I will also try my best to keep conference dates and places updated in the future, and the table below will hopefully become less ugly, too.
Wish me luck.

Oh, and if you are a UX designer, requirements engineer, product manager or something completely different and think, "Hey, why don't you want to meet me? I want developers and testers at my conference, too!", then contact me as well. I will add anyone who wants to help all disciplines mingle more.

Here is the list:



Conference | When? | Where?
? | 2017-08-24 to 2017-08-27 | Soltau, Germany
European Testing Conference | 2018-02-08 to 2018-02-10 | Amsterdam, Netherlands
? | 2017-05-11 to 2017-05-12 | Bucharest, Romania
? | 2017-05-10 to 2017-05-12 | Cluj, Romania
? | 2017-10-06 | Munich, Germany
? | 2017-11-13 to 2017-11-17 | Potsdam, Germany
? | 2017-07-21 to 2017-07-22 | Munich, Germany
? | 2017-01-15 to 2017-01-19 | Kiilopää, Finland
? | 2017-04-06 to 2017-04-09 | Gran Canaria, Spain
? | 2017-03-25 to 2017-03-27 | Rimini, Italy
? | 2017-11-09 to 2017-11-12 | La Roche-en-Ardenne, Belgium
? | 2017-10-26 to 2017-10-29 | Rochegude, France
? | 2017-03-09 to 2017-03-12 | Ftan, Switzerland
? | 2017-11-13 to 2017-11-17 | Potsdam, Germany
? | 2017-06-15 to 2017-06-18 | Dorking, England
? | 2017-19-15 | Zürich, Switzerland
? | 2017-10-20 to 2017-10-21 | Linz, Austria
? | 2017-03-23 to 2017-03-24 | Brighton, England
? | 2017-01-26 to 2017-01-27 | Utrecht, Netherlands
TestBash Dublin | May 2018 | Dublin, Ireland
? | 2017-10-26 to 2017-10-27 | Manchester, England
? | 2017-11-09 to 2017-11-10 | Philadelphia, USA
? | 2017-11-06 to 2017-11-10 | Malmö, Sweden
? | 2017-05-19 to 2017-05-20 | Amsterdam, Netherlands
? | 2017-10-12 to 2017-10-13 | Amsterdam, Netherlands
? | 2017-11-02 | Ede, Netherlands
? | 2017-09-25 to 2017-09-26 | Swansea, Wales
? | 2017-11-26 to 2017-11-28 | Zwartkop Mountains, South Africa
OOP | 2018-02-05 to 2018-02-09 | Munich, Germany
? | 2017-10-14 | Stockholm, Sweden
? | 2017-10-24 to 2017-10-26 | Ludwigsburg, Germany
? | 2017-04-20 to 2017-04-21 | Lyon, France


I hope a lot of you start going to the "other" conferences now. If not, we will have to take extreme measures:

Monday, August 22, 2016

A tester’s thoughts on characterization testing

Michael Feathers recently posted something about characterization testing on his blog. The term is not new; in fact, it has been in use since at least 2007. Still, I stumbled over something in this particular blog post. Since I read Katrina Clokie's post about human-centered automation at the same time, the two topics merged a bit in my head and got me thinking.
So what is this blog post going to be? Basically, it is my stream of thought about characterization testing, written to see if I can make sense of my ideas. Hopefully someone else benefits from this, too.

characterization testing is an exploratory testing technique

Let's start with what characterization testing actually is, at least to my understanding. Characterization testing is a technique for writing unit tests that check and document what existing source code actually does; it is therefore especially useful when dealing with legacy code. Note that it is not important whether the checked behaviour is also the wanted behaviour.
The created checks are used to find out what a system does and then to automatically verify that it still works as before while you refactor the code base. If you want to dig deeper into characterization tests and how to create them, I suggest you read Michael's initial blog post or Alberto Savoia's four-part article series, which starts here and ends with a tool that can create characterization tests automatically.
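To make that concrete, here is a minimal sketch of a characterization test; the legacy function and its rounding quirk are invented for illustration. A common way to arrive at the pinned value is to first assert a deliberately wrong value, run the test, and then copy the actual value from the failure message, whether that value is "right" or not:

```python
import unittest


def legacy_discount(price: float, customer_years: int) -> float:
    """Inherited legacy code; nobody remembers why it rounds like this."""
    if customer_years > 3:
        return round(price * 0.9, 1)
    return price


class DiscountCharacterization(unittest.TestCase):
    def test_long_term_customers_get_a_rounded_discount(self):
        # 89.1 was copied from the failure message of a first run asserting 0;
        # the test documents current behaviour, not necessarily wanted behaviour
        self.assertEqual(89.1, legacy_discount(99.0, customer_years=4))

    def test_new_customers_pay_full_price(self):
        self.assertEqual(99.0, legacy_discount(99.0, customer_years=1))


if __name__ == "__main__":
    unittest.main()
```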

Michael starts his blog post with the following statement before he moves on to characterization testing itself: "We use [the word testing] for many things, from exploratory testing and manual testing to unit testing and other forms of automation. The core issue is that we need to know that our code works, and we've lumped together a variety of practices and put them under this banner."

I have a problem with that statement, and it was the kicker that started my thoughts. Namely, I disagree that exploratory testing is there to make sure "that our code works", because this is not how I see exploratory testing. I use exploratory testing to find out how a system works. Whether my findings represent desired behaviour or result in a series of bug reports is often up for debate with my teammates.

The ultimate reference on exploratory testing, to me, is Elisabeth Hendrickson's book Explore It!. I own a German translation, therefore I cannot quote here and will summarise instead. Right at the beginning of the book she writes that a test strategy should answer two questions:
  1. Does the software behave as planned under the specified conditions?
  2. Are there further risks?
The first one deals a lot with knowing "our code works", as Michael puts it. The second one goes further and also explores (sic!) the system in more detail than just checking it against a specification. Risks are found by learning what the system actually does and using this as input for even further exploration.
I think you already know where I am going with this: if exploratory testing is there to learn about risks by learning how the system at hand behaves, doesn't this mean that characterization testing is an exploratory testing technique? Elisabeth's book even has a whole chapter dedicated to exploring existing (aka legacy) systems, which is precisely what Michael uses characterization testing for.

In this case I think the terms black-box testing and white-box testing are helpful: while Elisabeth mainly describes black-box testing techniques in her book, I see characterization testing as a white-box testing technique for exploration at the unit level. Combine Elisabeth's techniques with Michael's characterization testing and you have a very powerful framework for starting work on a legacy system. Still, I see characterization testing as a part of, not an addition to, exploratory testing.

You can read Meike Mertsch's blog post Exploratory testing while writing code to see how a tester with an exploratory mind works with code while testing, although it might not be characterization testing in the strictest sense. Meike also translated Explore It! into German.

If you look at characterization testing as a white-box exploratory testing technique, it has a unique property compared to all the black-box techniques in Elisabeth's book: it creates automated checks, which can be seen as a form of documentation of the current system behaviour.

characterization tests are fascinating for testers

This is the point where I have to say that I am a big fan of characterization testing when dealing with legacy systems. Developers who have to refactor the system benefit from the checks directly, because the checks give them confidence that they did not change the system's behaviour in unexpected ways. Testers can use existing characterization tests as a starting point for finding out more about the system.

I don't know about you, but to me finding or writing characterization checks begs the question of why the system behaves that way. What is this behaviour good for, and what does it lead to if you put it in the bigger picture of the overall system? Characterization checks can be input for exploratory testing sessions or fuel discussions with developers, product managers or users. They are an invitation to explore even when they don't fail, and are therefore a good example of checks that help you learn about the system even if the build is green.

As a tester, there are two fallacies regarding characterization tests that I have encountered in the past. The first one is not fixing bugs because the bugfix breaks a characterization test. Remember that you cannot know whether the checked behaviour is correct or wrong. I saw someone about to commit code revert it because it broke some checks; only later did we find out that the checked behaviour was actually faulty.
The second one is the exact opposite: you know the checks only pin down the current state, and you are very confident your new code works better than the old one, so when the checks break you adjust them to your code and commit everything together. Guess what: the old behaviour was correct, and you just introduced a bug.
Since characterization testing comes with all the pros and cons of unit testing (fast and cheap vs. checking only a small part of the system), the situation can even change over time: the checked behaviour is correct until a new feature is implemented, and then it is wrong. The build, however, stays green.

ageing characterization and regular checks 

Characterization checks do not just come into existence; in fact, Michael and Alberto both wrote down rules for when and how to create them. While developers work on a legacy system, characterization checks are not the only unit checks they create. There are also regular checks for new code, which are created using TDD and check for desired behaviour. Both kinds of checks end up in the code base and in the continuous integration. In time you may no longer know whether a check stems from characterization testing or from TDD. In this sense characterization checks can themselves become legacy code, which is hard to deal with.

Imagine entering a project and finding 1000 automated checks, 250 of which are characterization checks while the rest are regular checks. If one of the characterization checks fails, it is not necessarily a bug; if one of the others fails, it most certainly is. But you cannot see which is which. If the person who wrote a check is not on the project anymore, you have to treat every failing check as a characterization check and always investigate whether you have found a bug or not. A way to mitigate this is to follow Richard Bradshaw's advice to state the intent of a specific check. If you do this, you know whether a check is a characterization check or not, as sketched below.
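One lightweight way to state that intent, sketched with pytest; the marker name "characterization" is my own choice, not an established convention, and legacy_discount is the hypothetical function from the earlier sketch:

```python
import pytest


def legacy_discount(price: float, customer_years: int) -> float:
    # the hypothetical legacy function from the earlier sketch
    return round(price * 0.9, 1) if customer_years > 3 else price


@pytest.mark.characterization
def test_discount_is_rounded_to_one_decimal():
    # pins down current behaviour; a failure means "behaviour changed",
    # which is not necessarily "bug found"
    assert legacy_discount(99.0, customer_years=4) == 89.1


def test_new_customers_pay_full_price():
    # regular check written against wanted behaviour; a failure most
    # certainly means "bug found"
    assert legacy_discount(99.0, customer_years=1) == 99.0
```

Register the marker in pytest.ini so pytest does not warn about it; then `pytest -m characterization` or `pytest -m "not characterization"` lets you treat failures of the two kinds of checks differently.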

Furthermore, I have the feeling that a lot of checks become characterization checks over time. When they were first written, there was a reason for creating them exactly as they are, checking for a specific behaviour. Now, one or two project-member generations later, they are simply there, documenting a specific system behaviour. The people who knew why they were created and why the system behaves like this are gone. The checks have become characterization checks.

This is maybe what Katrina is facing in her project. She writes about a test suite that has been with the project longer than any of the testers; hence they don't know why certain logic is coded into it. Katrina uses this as an example of why they do not automate after mastery. I tend to disagree a little bit: the initial team members might very well have automated after mastery, I cannot know for sure, yet the knowledge of why has been lost over time. Moving away from Katrina's example, this happens quite often: testers inherit checks from previous testers.

I like to think of a project as a body of knowledge, not just the people, but the project itself. There is a lot of knowledge about the system, the users and the workflows in the project's Confluence, in the specific build setup and in the automated checks. From the project's perspective, I see the automated checks as a form of codified prior knowledge.
The current team is left with this prior knowledge and now has the problem of finding out why the system behaves like that. Otherwise they risk running into one of the two problems I mentioned earlier: being reluctant to change behaviour that needs changing, or introducing bugs by ignoring the checks. This is a tough exercise, because finding out why a system does what it does is usually very challenging.

Conclusion

Characterization testing is a white-box exploratory testing technique and a very powerful tool when dealing with legacy systems. As a tester, you should make sure characterization checks are marked as such, and you should try to find out why the system behaves the way a characterization check says it does.
