Interesting experience yesterday: I was introduced to the concept of "Test Driven Development". It's a term I've heard bandied about, but not something I've seen in action before. After an initial reaction revolving around "why?!", I think I've arrived at the conclusion that, like all "new" methodologies, it has its place, but it's not necessarily best suited to every problem.

My feeling is that if the bounds of the problem are well established, traditional forward-looking design may be a better option; however, if you're working with a problem where the boundaries are unknown or continually changing, TDD may have some merit as a more efficient approach.

On revisiting yesterday's problem (calculating bowling scores) and re-implementing from scratch using a more traditional approach, the end result was a completely different animal to the one yielded by the TDD approach.

The main difference for me was confidence in the final result. From the TDD perspective, in the final analysis I was confident that the solution satisfied the terms of the tests that had been created, but not 100% confident that all test cases had been exhausted, and hence not totally confident in the final result.

Working the other way round, fully understanding the problem and then designing a solution to fit it left me very confident in the solution, which was then backed up by test cases agreeing with the implemented solution.

I guess, distilling this down, design + testing gives you two independently verifiable paths to the solution, so belt-and-braces validation. TDD, on the other hand, has only one path, so if you make a mistake, you're missing the second path when it comes to spotting it. Following on, I have to think that if you take the TDD approach, you really need some sort of additional validation for the end result .. maybe I need to do some reading on TDD theory.
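To sketch what that "second path" might look like (this is my own illustration, not part of the solution below): write two scorers independently, in deliberately different styles, and cross-check them against randomly generated games. If they ever disagree, one of them has a bug.

```python
import random

def score_iterative(rolls):
    """First path: walk a flat list of rolls frame by frame."""
    total, i = 0, 0
    for _ in range(10):
        if rolls[i] == 10:                    # strike: 10 plus the next two rolls
            total += 10 + rolls[i + 1] + rolls[i + 2]
            i += 1
        elif rolls[i] + rolls[i + 1] == 10:   # spare: 10 plus the next roll
            total += 10 + rolls[i + 2]
            i += 2
        else:                                 # open frame: just the pins down
            total += rolls[i] + rolls[i + 1]
            i += 2
    return total

def score_recursive(rolls, frame=1):
    """Second path: the same rules, written recursively and independently."""
    if frame > 10:
        return 0
    if rolls[0] == 10:
        return 10 + rolls[1] + rolls[2] + score_recursive(rolls[1:], frame + 1)
    if rolls[0] + rolls[1] == 10:
        return 10 + rolls[2] + score_recursive(rolls[2:], frame + 1)
    return rolls[0] + rolls[1] + score_recursive(rolls[2:], frame + 1)

def random_game():
    """A random but legal sequence of rolls for one complete game."""
    rolls = []
    for _ in range(9):
        first = random.randint(0, 10)
        rolls.append(first)
        if first < 10:
            rolls.append(random.randint(0, 10 - first))
    first = random.randint(0, 10)             # tenth frame, plus bonus rolls
    rolls.append(first)
    if first == 10:
        bonus = random.randint(0, 10)
        rolls.append(bonus)
        rolls.append(random.randint(0, 10) if bonus == 10
                     else random.randint(0, 10 - bonus))
    else:
        second = random.randint(0, 10 - first)
        rolls.append(second)
        if first + second == 10:
            rolls.append(random.randint(0, 10))
    return rolls

for _ in range(1000):
    game = random_game()
    assert score_iterative(game) == score_recursive(game), game
```

Neither scorer is "the" oracle; the value is in the disagreement check, which is roughly what designing first and testing second gave me here.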

In the meantime, this is my non-TDD solution;

```python
#!/usr/bin/python3

import unittest

ROUNDS = 10
PINS = 10

class Frame(object):

    def __init__(self):
        self.scores = [-1, -1]
        self.next = None

    def isSpare(self):
        return sum(self.scores) == PINS and not self.isStrike()

    def isStrike(self):
        return self.scores[0] == PINS

    def next1(self):
        # first roll of the following frame
        return self.next.scores[0]

    def next2(self):
        # second bonus roll: same frame if it wasn't a strike,
        # otherwise the first roll of the frame after that
        if not self.next.isStrike():
            return self.next.scores[1]
        return self.next.next1()

    def score(self):
        if self.isStrike():
            return PINS + self.next1() + self.next2()
        if self.isSpare():
            return PINS + self.next1()
        return sum(self.scores)

    def bowl(self, pins):
        # returns True once the frame is complete
        if self.scores[0] != -1:
            self.scores[1] = pins
            return True

        if pins == PINS:
            self.scores = [PINS, 0]
            return True

        self.scores[0] = pins
        return False

class Game(object):

    def __init__(self):
        self.frames = self.frame = Frame()

    def bowl(self, pins):
        if self.frame.bowl(pins):
            self.frame.next = self.frame = Frame()

    def getScore(self):
        score = 0
        frame = self.frames
        for i in range(ROUNDS):
            score += frame.score()
            frame = frame.next

        return score

class TestGame(unittest.TestCase):

    def setUp(self):
        self.game = Game()

    def test_allStrikes(self):
        [self.game.bowl(10) for x in range(12)]
        self.assertEqual(self.game.getScore(), 300)

    def test_allSpares(self):
        [self.game.bowl(5) for x in range(21)]
        self.assertEqual(self.game.getScore(), 150)

    def test_allOnes(self):
        [self.game.bowl(1) for x in range(20)]
        self.assertEqual(self.game.getScore(), 20)

    def test_allZeros(self):
        [self.game.bowl(0) for x in range(20)]
        self.assertEqual(self.game.getScore(), 0)

    def test_alternateStrikeSpare(self):
        for x in range(11):
            [self.game.bowl(x) for x in [5,5,10]]
        self.assertEqual(self.game.getScore(), 200)

    def test_alternateSpareStrike(self):
        for x in range(11):
            [self.game.bowl(x) for x in [10,5,5]]
        self.assertEqual(self.game.getScore(), 200)

    def test_NonSpareStrike(self):
        for x in range(4):
            [self.game.bowl(x) for x in [1,1,5,5,10]]
        self.assertEqual(self.game.getScore(), 104)

    def test_NonStrikeSpare(self):
        for x in range(4):
            [self.game.bowl(x) for x in [1,1,10,5,5]]
        self.assertEqual(self.game.getScore(), 101)

if __name__ == '__main__':
    unittest.main()
```

To pick one attribute of this design that stands out from yesterday's: Frame.score in isolation succinctly describes the scoring algorithm, which was sadly missing from yesterday's attempt.
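As a quick worked illustration of those three cases (my own example rolls, not from the tests above): given rolls of 10, 7, 3, 4, 2, the strike frame scores ten plus the next two rolls, the spare frame ten plus the next roll, and the open frame just its own pins.

```python
# rolls: 10 (strike), 7+3 (spare), 4+2 (open frame)
frame1 = 10 + 7 + 3       # strike: PINS plus the next two rolls = 20
frame2 = 7 + 3 + 4        # spare: PINS plus the next roll = 14
frame3 = 4 + 2            # open frame: pins knocked down = 6
print(frame1 + frame2 + frame3)   # 40
```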

```
$ ./bowling.py
........
----------------------------------------------------------------------
Ran 8 tests in 0.001s

OK
```
