Test-Driven Development
Late last month we held the March session of the Indianapolis Workshop on Software Testing. The attendees were:
- Chris Achard
- Patrick Bailey
- Anthony Bye
- Jason Gladish
- Matthew Gordon
- Tim Harvey
- Frank Jaloma
- Courtney Jones
- Michael Kelly
- Baher Malek
- Joel Meador
- Elijah Miller
- Steve Pollak
- Russell Scheerer
- Elizabeth M. Shaw
- Dustin Sparks
- Miles Z. Sterrett
- Jon Strayer
- Nick Voll
- Jeff White
The topic we focused on for the five-hour workshop was ‘test-driven development.’
The first experience report came from Beth Shaw, who talked about her experiences working as a tester on a lean-agile project where the programming team was doing test-driven development. Because the project was regulated, she shared an interesting idea of tracing the requirements to the unit-test cases to show coverage. It wasn't something I had heard of before, and it's an intriguing idea. Along with her comments on TDD and the overall quality of the code, she also shared some examples of using kanban boards for the first time. Beth's talk also sparked some interesting discussion on the topic of using unit tests as documentation.
We polled the attendees with the question, "Are unit tests sufficient documentation for code?" The responses were as follows:
- Yes: 4
- Possibly: 9
- No: 7
The most common reason cited for why they wouldn't be sufficient was big-picture relationships. Tests might document code at the micro level, but there seemed to be general agreement that they came up short at the macro level (system architecture, etc.).
After Beth, Matt Gordon shared an experience report where he used Ruby on Rails with fixtures and factories to automate his tests. He started the project with TDD (using fixtures), but found that over time his code coverage kept going down. In addition, it would take 30 to 40 minutes to run all the tests. He made the decision to switch from fixtures to factories, and found that it was easier to get coverage up and execution time down. Matt's experience report got us talking about the scope of a "unit." It also got us talking about testing public vs. private methods and the relative risks/rewards involved in that investment. The general feeling of the room was that a unit is a low-level piece of code (not integration testing) and that public is good enough (testing the behavior is what's important).
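The fixture-to-factory shift Matt described can be sketched without Rails at all. Below is a minimal, framework-free illustration of the factory idea (the names and defaults are invented for the example; a real Rails project would typically use a library such as factory_bot): each test builds only the data it needs, rather than loading a large shared fixture set on every run.

```ruby
# A tiny hand-rolled "factory". Instead of a global fixture file loaded
# before every test, each test constructs just the records it cares about,
# overriding only the attributes relevant to that test.
User = Struct.new(:name, :email, :admin, keyword_init: true)

def build_user(overrides = {})
  defaults = { name: "Test User", email: "test@example.com", admin: false }
  User.new(**defaults.merge(overrides))
end

# Each test states its own relevant data inline:
admin = build_user(admin: true)
guest = build_user(name: "Guest")

puts admin.admin  # true
puts guest.name   # Guest
```

Because only the overridden attributes appear in the test, the data a test depends on is visible at the call site, which is part of why factories tend to help both readability and run time.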
After Matt's talk, Patrick Bailey shared his experience report on systemic considerations. Given that Pat's slides are available, I won't go into too much detail. Suffice it to say that one of the coolest things I pulled out of his talk was the idea of fan-in and fan-out metrics. That idea was new to me. During the discussion around Patrick's talk, Matt gave a great quote: one of the things he misses by not having a manager (he now works semi-independently) is that "It's somebody's job to defend my time." As a manager, I often forget that it's my job to do that. It was a good and welcome reminder.
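For readers who, like me, hadn't met the terms before: fan-in is how many modules depend on a given module, and fan-out is how many modules it depends on. A quick sketch, using a made-up call graph (module names are hypothetical; real tools derive the graph from source analysis):

```ruby
# Call graph as caller => list of callees.
calls = {
  "OrderController" => ["OrderService", "Logger"],
  "OrderService"    => ["PaymentGateway", "Logger"],
  "ReportJob"       => ["OrderService", "Logger"]
}

# Fan-out: how many modules each caller depends on.
fan_out = calls.transform_values(&:size)

# Fan-in: how many callers depend on each module.
fan_in = Hash.new(0)
calls.each_value { |callees| callees.each { |callee| fan_in[callee] += 1 } }

puts fan_out["OrderService"]  # 2 (calls PaymentGateway and Logger)
puts fan_in["Logger"]         # 3 (called from all three modules)
```

High fan-in suggests a module many things depend on (change it carefully); high fan-out suggests a module that knows about too much of the system.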
After Patrick, Dustin Sparks shared a problem-solving opportunity. Dustin was wondering what steps he might take to help change the culture in his programming organization to get people to want to start doing test-driven development. We captured a bunch of ideas on the whiteboard.
After that, Matt Gordon shared another story of testing using fixtures, this time illustrating how his fixtures hid enough behavior from him that he was unable to notice failures in the code. A great quote came out of this talk: "Wrong code should look wrong." He also shared his two rules for unit testing:
- A test should test one thing; and it should be obvious about what that one thing is.
- Everything should be in the test.
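The two rules above can be sketched in a few lines of plain Ruby (the method under test and the assertion helper are invented for the example, and real projects would use a test framework such as Minitest or RSpec):

```ruby
# Minimal assertion helper so the example is self-contained.
def assert_equal(expected, actual)
  raise "expected #{expected.inspect}, got #{actual.inspect}" unless expected == actual
end

# Hypothetical method under test.
def discount_price(price, percent)
  price - (price * percent / 100.0)
end

# Rule 1: the test checks one thing, and its name says what that thing is.
# Rule 2: everything the test depends on -- the input values and the expected
# result -- is visible right here in the test, not hidden in a fixture file.
def test_discount_price_takes_ten_percent_off
  assert_equal 90.0, discount_price(100, 10)
end

test_discount_price_takes_ten_percent_off
puts "ok"
```

A test written this way can't silently change meaning when a shared fixture changes, which is exactly the failure mode Matt's fixtures-hiding-behavior story described.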
Finally, Steve Pollak took us home with an experience report around developing unit tests for a real-time distributed system and the complexities of doing so. On the project they had written the code first, then the tests. His question to the group was how they could have done it in more of a test-driven way. That sparked some talk of tools and creating stubs and harnesses, but I think by that point people were "full."
I think it was a good workshop. (I certainly liked it.) It was the best turnout we've had for a workshop to date, and we had great discussion during the talks. It quickly became a safe environment, as several people felt comfortable enough to challenge some of the preconceptions around TDD in general. (Special thanks to Courtney Jones for leading the charge there.)
We're taking this month off for WOPR and next month off for WREST, both of which will be here in Indy. The next workshop is June 27 on the topic of 'Test design and development.' Drop me a line if you're interested in attending.