- The main reason issues end up marked as wontfix (or discarded) is change requests that are already implemented or simply not needed.
- The second reason is bug reports that are not actually bugs.
- Both are easily avoidable:
- using machine learning, or
- asking the issue writer or bug reporter to use a template that:
a) demonstrates the business importance in the case of change / new feature requests.
b) clearly states the steps to reproduce the bug and the expected outcomes.
To justify these claims, we will go through a recently published article and tie it to my own experience.
Did you know that issues finally marked as wontfix (that is, discarded) consume on average 5 months of developers’ time? (Yes, 5 months of YOUR TIME!) When I read that in the article “Won’t We Fix this Issue?” Qualitative Characterization and Automated Identification of Wontfix Issues on GitHub, my internal alarms rang. I remember thinking, “that’s too much time for nothing!”.
That’s why we think this research deserves to be shared with you. I hope that presenting their discoveries and tying them to our own experience will shed some light on this topic. As usual, I strongly recommend reading the original paper to get all the insights, how the research was done, and the detailed discussion of it.
Issues-list driven development
As a development team, your product is your baby. You know it, you grew it, you can justify and explain each feature and relate it to a business value. And now it is mature enough to finally interact with real users, so you release it. This is what happens right after:
- Finally we deployed our baby! 💪😷🍾 🎉
- 😴😴😴NEXT DAY
- Good mornin…, wait! What’s going on here? Why is our issue-tracking software being flooded with change requests and bug reports from people we don’t even know?
- Let’s see …
- WHAAAAT? The user BadKarma42 wants to do what??? OMG, why would she need that feature? 🤔
And user BigFootIsReal666 did what? What the heck? How did he break that? How am I supposed to reproduce that bug if the user just said what broke but didn’t describe how? 😡
You know that software needs to be updated and fixed. The reasons for this deserve their own article and won’t be covered here. I’ll just say that to meet users’ expectations (and market requirements), software developers need to continuously update their source code.
But how do developers know when they need to update or fix things? What is the artifact that drives development and shows both product improvements and the team’s work pace? Yes, you’re right. It’s the list of issues and bugs reported/created by users, customers, Product Owners, or any other project stakeholder.
How to deal with issues?
Let’s face it: the list of features requested by stakeholders and bugs reported can be intimidatingly long. In addition, the quality of what’s inside tends to vary widely, so we, as developers, have to spend a significant amount of time managing those reports. Obviously, there are software engineering approaches to address this, such as backlog refinement meetings, specific roles that review and prioritize tasks (i.e., Product Owners), QA best practices, templates, and so on. However, at some point, your valuable time will be needed to at least give your “expert opinion”.
For these reasons, in the last decade or so, several research works have proposed or developed automated solutions to prioritize requested changes (Lamkanfi et al. 2011; Tian, Lo, and Sun 2013), find out “who should fix bugs” (Anvik, Hiew, and Murphy 2006), figure out whether an issue “is a bug or an enhancement” (Antoniol et al. 2008), and detect issue misclassification or bug duplication. Believe it or not, nobody had investigated why we discard issues through such costly processes until the work “Won’t We Fix this Issue?” Qualitative Characterization and Automated Identification of Wontfix Issues on GitHub (Panichella, Canfora, and Di Sorbo 2021) did.
What if we could automate the process of ruling out issues without losing too much time in the process? Panichella et al. have just published an article in which they use machine learning to train a model that proved to be very accurate (statistically speaking) in identifying potentially wontfix issues. In addition, they produced a lot of related information, such as a list of reasons why stakeholders open issues and a list of reasons why community members of OSS projects discard them. Let’s review the latter:
This list was constructed by analyzing 667 closed (wontfix) issues from 97 different projects on GitHub. As the authors explain, the percentages in the table sum to more than 100% because some issues were assigned more than one reason.
Machine Learning? 🤔
What is the second-best thing we can do if we don’t know how to train a model but also don’t want to wait for a commercial tool to be developed? We can use the information that Panichella et al. make available and extrapolate it to our own projects to make educated guesses about issues that will never be solved and should be quickly discarded.
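As a minimal sketch of that idea, the following pre-screen flags issues whose text contains phrases associated with common rejection reasons. The phrase lists below are our own illustrative guesses, not taken from the paper, and any real deployment would need phrases tuned to your project’s vocabulary:

```python
# Illustrative hint phrases per rejection reason (our own guesses,
# loosely inspired by the reasons discussed in the article).
WONTFIX_HINTS = {
    "already implemented": ["already possible", "already supported", "duplicate of"],
    "not needed": ["out of scope", "not planned", "won't add"],
    "not a bug": ["works as intended", "expected behavior", "cannot reproduce"],
}

def wontfix_reasons(issue_text: str) -> list[str]:
    """Return the candidate rejection reasons whose hint phrases appear in the issue text."""
    text = issue_text.lower()
    return [
        reason
        for reason, phrases in WONTFIX_HINTS.items()
        if any(phrase in text for phrase in phrases)
    ]

# Example: a comment thread hinting the requested feature already exists.
print(wontfix_reasons("Closing: this is already possible via the --verbose flag."))
# → ['already implemented']
```

Issues that match one or more reasons are not auto-closed, of course; they just go to the top of the “quick triage” pile so a human can discard them fast.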
The first reason on the list for rejecting an issue is that the requested new feature/enhancement is already implemented, is not needed at all, or is not a relevant change. Therefore, we can build a template for requesting new features/enhancements that asks questions designed to avoid creating issues that will never be solved. If you think about it for a moment, you will see that change requests are like small business cases, so we can ask that they contain, more or less, the same elements as business cases:
- A one-line summary: Add a tooltip over the call-to-action (CTA) button to increase conversion.
- Problem statement: Users told us that they often identify the CTA, but the goal of the action isn’t clear.
- Options: Besides adding a tooltip, other options are changing the color of each CTA button (CR-323) or rethinking the buttons’ text (CR-324).
- Solution description: On every CTA listed in the attached table, add a tooltip that reinforces the message and stays open for 3–5 seconds (duration also specified in the table).
- Cost-benefit analysis: This task will cost 10–14 development hours, but preliminary tests show a 3–5% increase in conversion with the new tooltips. Extrapolating that percentage linearly to all the CTAs, we should start seeing net benefits one week after deploying to production.
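On GitHub, these elements can be baked directly into an issue template so reporters fill them in by default. A sketch of a hypothetical `.github/ISSUE_TEMPLATE/feature_request.md` (the section names are our own, mirroring the list above):

```markdown
---
name: Feature request
about: Propose a change together with its business case
---

## One-line summary

## Problem statement
<!-- What user or business problem does this solve? -->

## Options considered
<!-- Alternatives you evaluated, with links to related requests -->

## Solution description

## Cost-benefit analysis
<!-- Rough effort estimate vs. the expected business benefit -->
```

A request that cannot fill in the problem statement or the cost-benefit section is a strong candidate for an early wontfix.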
Many change requests only state the summary, some the problem statement, and then quickly jump to the solution. They forget to sell the change to the decision maker by tying it to a business benefit, which makes that person’s job very difficult. Panichella et al. state that developers spend an average of five months deciding whether an issue should be labeled as wontfix.
Panichella et al. did an amazing job researching a novel topic that is a real problem for everyday developers. They discovered a list of reasons that could help flag issues as wontfix faster, and they used that information to train a model that automates the detection of those issues. We analyzed the most frequent reason, but the rest are also important and can be addressed in the same way (e.g., we can build templates for better bug reports by requiring replication steps and the expected and actual outputs). We encourage you to use this information in your current projects, building new tools that will serve your teams and, why not, the rest of the industry.
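For instance, the bug-report counterpart of the earlier template could look like this hypothetical `.github/ISSUE_TEMPLATE/bug_report.md` (again, the section names are our own):

```markdown
---
name: Bug report
about: Report a defect with enough detail to reproduce it
---

## Steps to reproduce
<!-- Numbered, starting from a clean state -->

## Expected output

## Actual output
<!-- Include error messages or screenshots if available -->

## Environment
<!-- OS, browser/runtime, product version -->
```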
- Panichella, S., G. Canfora, and A. Di Sorbo. 2021. “‘Won’t We Fix This Issue?’ Qualitative Characterization and Automated Identification of Wontfix Issues on GitHub.” Information and Software Technology. https://doi.org/10.1016/j.infsof.2021.106665.
- Antoniol, Giuliano, Kamel Ayari, Massimiliano Di Penta, Foutse Khomh, and Yann-Gaël Guéhéneuc. 2008. “Is It a Bug or an Enhancement? A Text-Based Approach to Classify Change Requests.” In Proceedings of the 2008 Conference of the Center for Advanced Studies on Collaborative Research: Meeting of Minds, 304–18. CASCON ’08 23. New York, NY, USA: Association for Computing Machinery.
- Anvik, John, Lyndon Hiew, and Gail C. Murphy. 2006. “Who Should Fix This Bug?” In Proceedings of the 28th International Conference on Software Engineering, 361–70. ICSE ’06. New York, NY, USA: Association for Computing Machinery.
- Lamkanfi, Ahmed, Serge Demeyer, Quinten David Soetens, and Tim Verdonck. 2011. “Comparing Mining Algorithms for Predicting the Severity of a Reported Bug.” 2011 15th European Conference on Software Maintenance and Reengineering. https://doi.org/10.1109/csmr.2011.31.
- Tian, Yuan, David Lo, and Chengnian Sun. 2013. “DRONE: Predicting Priority of Reported Bugs by Multi-Factor Analysis.” In 2013 IEEE International Conference on Software Maintenance, 200–209.