Decouple that UI
This suggestion follows on from my "economic testability" requirement for software. The basic idea is to be able to record a series of transactions and replay them, with the expectation that identical results will be obtained and that this identity can be proven automatically and cheaply. The "proof" may be a comparison of the database before and after, or a comparison of logging records, or both. Whatever form it takes, it should be automated and restartable.
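The record-then-verify idea can be made concrete with a minimal sketch. The names here (`TransactionRecorder`, `replay_and_verify`, the shape of a "transaction") are all hypothetical, chosen only to illustrate the automated identity check described above:

```python
import json

class TransactionRecorder:
    """Records (request, result) pairs so a later replay can be verified."""

    def __init__(self):
        self.log = []

    def record(self, request, result):
        self.log.append({"request": request, "result": result})

    def save(self, path):
        # A simple serial file is enough to hold the captured transactions.
        with open(path, "w") as f:
            json.dump(self.log, f)

def replay_and_verify(path, handler):
    """Re-run each recorded request through `handler` and report any
    transaction whose result differs from the one originally recorded."""
    with open(path) as f:
        log = json.load(f)
    failures = []
    for i, entry in enumerate(log):
        result = handler(entry["request"])
        if result != entry["result"]:
            failures.append((i, entry["result"], result))
    return failures  # an empty list is the automated "proof" of identity
```

The comparison here is of recorded results; in practice the same loop could diff database snapshots or log records instead, as the paragraph above suggests.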
In particular, if we build a GUI layer cleanly separated from the rest of our system, then the interface beneath the GUI is an appropriate point at which to inject the record/replay mechanism. If the benefits of having such a mechanism are seen as important, then this need can influence the design of that interface so that implementing and using the mechanism become very economical.

The record/replay can be implemented via a simple serial file. I have seen a relational database used for capture and replay! The reason? Because it comes complete with analysis tools, allowing editing, reporting, analysing and all the other bells and whistles that you find useful once you have them. As we are typically (but not necessarily) simulating user input with the record/replay mechanism, very high performance is not usually needed; but if you want it, you can build it.

This test mechanism allows GUI testing to be separated from functional testing, and is constrained only by the richness of the GUI/System interface. If that interface gets massively reorganized, then so, necessarily, does the record/replay mechanism. From the point of view of tracking changes and their effects, once the system is baselined it is probably a good idea to baseline the record/replay logs too, in case some subsequent change in system behaviour needs to be identified. None of this is particularly difficult to do, provided that it is planned into the project and, eventually, a momentum builds in terms of knowledgeable practitioners in this part of the black art of testing.
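One way to picture the injection point is as a thin wrapper around the interface beneath the GUI. This is a sketch under assumed names (`SystemInterface`, `RealSystem`, `RecordingSystem`, a `command`-plus-arguments call shape), not a prescribed design:

```python
import json

class SystemInterface:
    """The seam beneath the GUI: every GUI action passes through here."""
    def execute(self, command, **args):
        raise NotImplementedError

class RealSystem(SystemInterface):
    def execute(self, command, **args):
        # Placeholder: a real system would run business logic here.
        return {"command": command, "args": args}

class RecordingSystem(SystemInterface):
    """Decorator that logs every call to a serial file while delegating
    to the real system, so the GUI never knows recording is happening."""
    def __init__(self, inner, logfile):
        self.inner, self.logfile = inner, logfile

    def execute(self, command, **args):
        result = self.inner.execute(command, **args)
        self.logfile.write(json.dumps(
            {"command": command, "args": args, "result": result}) + "\n")
        return result

def replay(logfile, system):
    """Feed recorded commands straight back into the system, no GUI
    required; count any transaction whose result has changed."""
    mismatches = 0
    for line in logfile:
        entry = json.loads(line)
        if system.execute(entry["command"], **entry["args"]) != entry["result"]:
            mismatches += 1
    return mismatches
```

Because recording happens at the interface rather than in the GUI, cosmetic GUI changes leave the captured logs valid; only changes to the interface itself force the mechanism to change.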
Downsides: There is always at least one... usually that the investment in recording and replaying what are typically suites of regression tests becomes a millstone around the project's neck. The cost of changing the suite becomes so high that it constrains what can economically be newly implemented. Designing reusable test code requires the same skills as designing reusable production code.
Upsides: Regression testing that is not sensitive to cosmetic changes in the GUI; massive confidence in new releases; and, provided that all error-triggers are retrofitted into the record/replay tests, once a bug is fixed it can never return! Acceptance tests can be captured and replayed as a smoke test, giving a minimum assured level of capability at any time.
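The "retrofit the error-trigger" and smoke-test ideas can be sketched as two small helpers over a serial log. The function names and the line-delimited JSON format are illustrative assumptions, not part of the original text:

```python
import json

def retrofit_bug_trigger(suite_path, request, corrected_result):
    """Append the transaction that exposed a bug, together with its
    now-correct result, to the regression suite's serial log."""
    with open(suite_path, "a") as f:
        f.write(json.dumps({"request": request,
                            "result": corrected_result}) + "\n")

def smoke_test(suite_path, handler):
    """Replay the whole suite; pass only if every recorded transaction
    still yields its recorded result."""
    with open(suite_path) as f:
        for line in f:
            entry = json.loads(line)
            if handler(entry["request"]) != entry["result"]:
                return False
    return True
```

Run after every fix, `smoke_test` is what makes "once a bug is fixed it can never return" an automated guarantee rather than a hope.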