I was talking with one of my colleagues the other day about strategies for helping customers solve development problems. At one extreme, you can collect logs and traces and try to detect odd behaviour from protocol flows, unexpected error conditions and so on. At the other extreme, you can take the customer app in toto and try to reproduce the problem in a lab setting. For one reason or another, this second strategy is rarely a good one these days: apps can be complex, requiring careful provisioning and often a very specific environment, with SQL servers and other external components to interact with. In the middle of these two is a third way (as Tony Blair might say), which is to take the piece of "suspect code" out of the customer app and insert it into a test framework.

This third way works well for me, and for the Diva Server API I have a framework sample called "multi" that I often use for testing such code fragments. I know many of my colleagues on both sides of the Atlantic do the same thing, each with their own "tame" sample for GlobalCall, .NET, or whatever. Often these samples have evolved from humble beginnings into terrifying pieces of software. Multi started life as a two-thread console app that simply accepted any number of incoming calls and played a message on connection. However, it has gradually had all kinds of functions added to it, so that it can now do call transfer, call progress analysis, human speech detection and so on. It is a horrible piece of software, and I'm the only one who knows their way around it. I will never give the source code away, since it is in no way a "best practices" document for others to follow. But it is a simple and flexible tool, and I know that many of my friends here at Dialogic are using something similar. Probably some of you out there are, too?