Software testers don’t sneak into the office at 3 a.m. when everyone’s gone. We don’t earn “poker points” for fixing your build (that’s coding, remember?). Testing doesn’t happen in some secret underwater lab in the Mariana Trench… though honestly, that would explain some of the bugs. 🐠💻
So pretending our work can’t be estimated or calculated? That’s just bananas. 🍌
The Break software test methodology isn't especially fond of metrics or of testing this way, but since people kept asking, here is my best effort at it. What to test? The requirements with the highest risks are a good starting point.
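To make "highest risk" concrete, one common approach is a simple likelihood × impact score. A minimal sketch (the requirement IDs, names, and numbers below are invented for illustration):

```python
# Hypothetical requirements with made-up risk inputs.
requirements = [
    {"id": "REQ-101", "name": "Payment processing", "likelihood": 4, "impact": 5},
    {"id": "REQ-102", "name": "Profile avatar upload", "likelihood": 2, "impact": 1},
    {"id": "REQ-103", "name": "Login/authentication", "likelihood": 3, "impact": 5},
]

def risk_score(req):
    """Simple risk score: how likely it breaks, times how much it hurts."""
    return req["likelihood"] * req["impact"]

# Test the riskiest requirements first.
for req in sorted(requirements, key=risk_score, reverse=True):
    print(req["id"], req["name"], "risk:", risk_score(req))
```

The exact scale (1–5 here) matters less than agreeing on one and ranking consistently.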
Did you thoroughly test the high-risk requirement before scripting the automated checks for both UI and API? That's fantastic—you're doing an amazing job!
Have you included a screenshot in the Jira Story to illustrate how the weight distribution of the test effort (including automation) was determined? Wow, that's impressive!
Did you also add custom fields in Jira to capture this information? You're truly the best!
Have you documented the testing process (test steps) in detail and attached it to the Jira Sprint Story? That's incredible work!
Did you check the test automation for false positives (a check that shows "passed" but, if you look at it with your own eyes, is actually hiding a bug)? Well done!
Did you link the requirements with the automated checks? Amazing.
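One lightweight way to keep that link visible in the code itself is a small traceability map. The decorator and requirement IDs below are hypothetical, not part of any particular tool; a test framework's own markers would work just as well:

```python
# Hypothetical traceability map: requirement ID -> names of automated checks.
TRACE = {}

def covers(req_id):
    """Decorator that registers a check against a requirement ID."""
    def wrap(fn):
        TRACE.setdefault(req_id, []).append(fn.__name__)
        return fn
    return wrap

@covers("REQ-101")
def test_payment_happy_path():
    assert 100 - 10 == 90  # placeholder check body

@covers("REQ-101")
def test_payment_declined_card():
    assert "declined" in "card declined"  # placeholder check body

# Requirements with no registered check stand out immediately.
all_requirements = {"REQ-101", "REQ-102"}
uncovered = all_requirements - TRACE.keys()
print("uncovered:", sorted(uncovered))
```

Printing the uncovered set at the end of a run is a cheap way to spot requirements nobody has automated anything for yet.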
Now that you've tested the requirement, what's next?
Keep up the momentum by tracking changes to the requirements and maintaining a holistic view of both long-term and short-term test planning.
Take a moment to review previous steps in the (Break) process to identify any opportunities for improvement. Keep up the great work!