I want to be able to specify how much error is reduced per second (e.g., an error reduction rate of 0.3 means that the error is reduced by 30% per second). Here's how I do it:
error_reduction_rate = 1 - exp(-step_size / error_reduction_tc)
"step_size" is the amount of real elapsed time between weight updates (it's best if this is a fixed constant, both for stable learning and to avoid recomputing the error reduction rate constantly). "error_reduction_tc" is a time constant which determines how fast errors are reduced.
For example, an error reduction time constant of 2.0 means that the error will be reduced to about 37% (1/e) of its original value after 2.0 seconds of updates. If weight updates occur every 0.05 seconds, this yields an error reduction rate of 0.02469 per update. If we shorten the time constant to 0.1, leaving the step size at 0.05, the error reduction rate jumps up to 0.3935, since errors are now being driven down much faster. Note: it's important to keep the step size smaller than the time constant; otherwise, the per-update rate approaches 1 (nearly the full error corrected in a single update) and learning can become unstable.
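To sanity-check those numbers with the sketch above (the 40-update count is just 2.0 seconds divided by the 0.05 second step size):

print(error_reduction_rate(0.05, 2.0))   # ~0.02469 per update
print(error_reduction_rate(0.05, 0.1))   # ~0.3935 per update

# After 2.0 s of updates (40 steps) with the 2.0 s time constant, the
# remaining error is (1 - 0.02469)^40 = exp(-1), i.e. about 37%.
print((1.0 - error_reduction_rate(0.05, 2.0)) ** 40)   # ~0.3679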