Are we solving the real problem?

Saturday, November 11, 2006

Acceptance Testing and Continuous Integration

Company X currently runs a very agile development team: a system developed in Java, heavy TDD, continuous integration (CruiseControl)... you get the picture.

They also have a dedicated test team, which is very impressive. One group within that team, called "QA", is essentially responsible for black-box testing; they are very knowledgeable in the business process and very close to the user base. In fact, they report directly to the business, in contrast to the developers, who report to Technology.

Another test group is responsible for white-box testing. These guys are trained developers, but their sole purpose in life (that's going a bit far) is to break the system by getting under the bonnet and tinkering: pouring oil in the radiator, mistakenly jamming a potato in the exhaust pipe. Then they sit back, wait, and see what blows up, before running up to the developers and saying "Come take a look at this!!!".

I hope you appreciate me painting the picture here; it will mean something later.

The main message behind this post is a nagging problem with our continuous integration, running on CruiseControl (I will refer to it as Cruise for simplicity, although this problem can occur on other CI systems). As I arrived on the project when it was already in full swing (I estimate 18 months had already elapsed), much of the architecture, toolset, build scripts and tests were already well established - perhaps looking a bit shabby now, but probably a great idea two years ago. One of the most annoying problems with Cruise is the length of time a build takes - it seems to take ages, and it is nearing unacceptable. The system architecture is split into Common, Management and Capture, and is built as such. In total I estimate 70 minutes to do a full build of all three subsystems.

The question I pose here is: should the black-box and white-box tests be run on every Cruise build cycle? My gut says no. I think it is always a case-by-case decision, depending on running time and urgency. For the site I describe, I think they should run at least at the end of each day.
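If splitting the build is an option, one common compromise with CruiseControl is to run the fast unit-test target on every modification and push the heavyweight black-box/white-box suites into a once-a-day time-based build. A rough config.xml sketch of this idea - the project name, Ant targets and timings below are hypothetical, not from the actual site:

```xml
<cruisecontrol>
  <project name="capture">
    <!-- poll CVS for changes, with a 60-second quiet period -->
    <modificationset quietperiod="60">
      <cvs localworkingcopy="checkout/capture"/>
    </modificationset>
    <schedule interval="300">
      <!-- fast build: compile + unit tests, runs on every change -->
      <ant antfile="build.xml" target="unit-test"/>
      <!-- slow build: black-box/white-box suites, time-based at 23:00 -->
      <ant antfile="build.xml" target="acceptance-test" time="2300"/>
    </schedule>
  </project>
</cruisecontrol>
```

The time attribute on a builder tells Cruise to run it as a scheduled build rather than on every modification, so the developers keep their fast feedback loop while the testers still get a nightly build to attack.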

I would be interested in hearing comments.

Wednesday, October 25, 2006

Agile Retrospective

Today I attended an Agile Retrospective meeting at Company X. These meetings are held at the end of each project; this is the second one I have had the pleasure of attending ;).

There is a structure to the meeting which actually keeps it interesting. First, you rate yourself from 1 to 5 (I'm trying to recall the descriptions - some are slightly made up, but I hope you get the point):

1. Just a smiler
2. Will talk about things if cornered
3. Will talk occasionally about things
4. Will actively discuss most things
5. Will discuss/argue anything (perhaps for the sake of it)

I was a 4; I don't like arguing for the sake of it. Of the 11 people in the room, most declared themselves a 5, although I'd say only three were true ranters.

Next, there were four A0-size charts on the wall, each titled:

  • “What we did well”,
  • “What we did poorly”,
  • “What puzzled you”,
  • “Top Tips”.

We were asked to write three issues for each chart, and these were stuck on the charts. For the Top Tips chart we each wrote one issue; they were all stuck on the chart and then we voted. Duplicate issues were purged.

Some of the Top Tips were:

  • Write a domain/persistence layer using Hibernate, to replace the current DB layer
  • Discuss issues more openly as they arise, involving management, instead of waiting for a meeting
  • Allow more time to define stories from the Product Requirements Document
  • Fix the slow build on continuous integration
  • Replace CVS with Subversion
  • Replace Eclipse with IntelliJ for its better refactoring support
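To sketch the first tip above: in the Hibernate 3 style of the day, replacing a hand-rolled DB layer usually starts with a mapping file per domain class. Everything below (the package, class, table and column names) is hypothetical, just to show the shape:

```xml
<!-- Customer.hbm.xml: hypothetical mapping for an imagined Customer class -->
<hibernate-mapping package="com.companyx.domain">
  <class name="Customer" table="CUSTOMER">
    <id name="id" column="CUSTOMER_ID" type="long">
      <!-- "native" lets the configured dialect pick identity vs sequence -->
      <generator class="native"/>
    </id>
    <property name="name" column="NAME"/>
    <property name="email" column="EMAIL"/>
  </class>
</hibernate-mapping>
```

The win over a hand-rolled layer is that the SQL, dirty-checking and caching come for free, while the domain classes stay plain Java objects.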

Quite an effective meeting, but it certainly required a good chairperson as it did get quite rowdy at times. Plus the flicking of paper across the room got out of hand occasionally, although it was good fun :)

The issues go into a summary document and are fed back to the business and technology areas for improvement.

Saturday, October 14, 2006

Iteration Planning

Another agile experience....


At "Company X" each project has:

- 1 PM (manages resourcing and release plan, but very hands-off, may be managing many projects)
- 1 BA (the customer)
- 1 Tech Analyst (reports to Technology but liaises very closely with the BA, is the conduit to the BA)
- 1 Lead Developer (manages the iteration)
- n Developers (liaise closely with Tech Analyst)
- 4 QA (report to the business)
- n TechOps (for deployment)
- n DevSupport (for fixing bugs after deployment)
- 1 Architect (on an ad hoc basis - most developers are very senior, so there is no strong requirement for a dedicated Architect)

No fancy software to manage the agile process. At most the PM uses Excel to manage resources.

There is very little emphasis on velocity here; the focus is more on the release plan. Most projects have two weeks of bug fixing. A developer must only work on one story at a time.

The amount of emphasis on TDD here is amazing... huge! And it shows. Company X has excellent systems in place to manage and register customers. If the systems were rubbish then Company X would not be third in the market. Actually, one of the main reasons Company Y purchased Company X is to draw on the quality of the systems and processes inside Company X.

Prior to iterations starting, a Planning Game is held where estimates for stories are taken from the group. The stories then feed into the iteration. How regularly this happens is usually at the discretion of the TA. The Release Plan is always kept in mind; any movement outside the Release Plan is treated as a CR. The BA sits in to clarify the stories; the TA leads the Planning Game. The PM is not present. Stories are documented by the TA prior to the Game as a Word document of 1-2 pages. The story card is written up by the lead developer, the estimate is written on the card, and the card is pinned up on the radiator board.

Showcases are performed weekly by the lead developer and one other senior developer. This provides good feedback both ways, and CRs are created by this process.

Pairing is done here. We have six pairing stations and swap about every two days.

Risk cards are also a big thing here and are pinned up on the radiator board.

They also do Retrospectives after the completion of each project: what did we do badly, what did we do well, and how can we improve?



