The degree of regret minimization behaviour imposed by the
Classical RRM or the µRRM model (the profundity of regret) is not
constant, but depends on the size of the estimated parameter as
well as on the distribution of the attribute levels in the data.
Therefore, the parameter estimates on their own are not very
informative about the behaviour imposed by the Classical RRM or
µRRM model.

To acquire insight into the behaviour imposed by these two RRM models, a measure was recently proposed (Cranenburgh et al. 2015). With this measure it is possible to pinpoint the overall degree of regret minimization behaviour for each attribute. The equation below shows how to compute this measure for attribute m, where |A_m| denotes the cardinality of A_m, β_m denotes the estimated taste parameter associated with attribute m, and μ denotes the scale of the error term ε (which equals one for the Classical RRM model).

$$ \alpha_m = \frac{1}{|A_m|}\sum_{A_m} \left| \frac{e^{\frac{\widehat{\beta}_m}{\mu}\left[ x_{jnm} - x_{inm} \right]}-1}{e^{\frac{\widehat{\beta}_m}{\mu}\left[ x_{jnm} - x_{inm} \right]}+1} \right| $$

$$ A_m = \left\{ x_{jnm} - x_{inm} \;\middle|\; x_{jnm} - x_{inm} \neq 0 \right\} $$

When all attribute differences x_{jnm} − x_{inm} are non-zero, and all
choice sets contain the same number of alternatives, denoted J, then
we can write the equation as follows, where N denotes the total
number of choice observations (given that we have one choice
observation per decision maker).

$$ \alpha_m = \frac{1}{N}\frac{1}{J\cdot (J-1)}\sum_{n} \sum_i
\sum_{j\neq i} \left | \frac{e^{\frac{\widehat{\beta}_m}{\mu}\left
[ x_{jnm} - x_{inm} \right
]}-1}{e^{\frac{\widehat{\beta}_m}{\mu}\left [ x_{jnm} - x_{inm}
\right ]}+1} \right | $$
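
As a quick numerical illustration (not taken from the original text): for a single pair of alternatives with $\widehat{\beta}_m/\mu = 1$ and attribute difference $x_{jnm} - x_{inm} = 1$, the term inside the sum evaluates to

$$ \left| \frac{e^{1} - 1}{e^{1} + 1} \right| \approx \frac{1.718}{3.718} \approx 0.462 $$

Each such term approaches 0 as $\widehat{\beta}_m \to 0$ and approaches 1 as $|\widehat{\beta}_m|$ grows, so $\alpha_m$ always lies between 0 and 1.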

Computing the measure of profundity is easier than it looks at
first sight; you can even compute it in MS Excel. Below you
can find a bit of Matlab code to compute the profundity of regret. For
those not familiar with Matlab I have also included an MS Excel sheet.
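
For readers who work in a general-purpose language, the second equation can also be sketched in Python. This is an illustration rather than the Matlab code referred to above; the function name `profundity_of_regret` and the assumption that attribute levels are stored as an N × J array (one row per choice observation, one column per alternative) are my own.

```python
import numpy as np

def profundity_of_regret(x_m, beta_m, mu=1.0):
    """Profundity of regret (alpha_m) for a single attribute m.

    x_m    : N x J array of attribute-m levels, one row per choice
             observation n, one column per alternative.
    beta_m : estimated taste parameter for attribute m.
    mu     : scale of the error term (equals 1 for Classical RRM).
    """
    x_m = np.asarray(x_m, dtype=float)
    N, J = x_m.shape
    # Pairwise attribute differences: diffs[n, i, j] = x_jnm - x_inm
    diffs = x_m[:, None, :] - x_m[:, :, None]
    # (e^z - 1)/(e^z + 1) equals tanh(z/2); using tanh avoids
    # overflow for large beta_m * (x_jnm - x_inm).
    terms = np.abs(np.tanh((beta_m / mu) * diffs / 2.0))
    # Average over the J*(J-1) ordered pairs with i != j, then over n.
    off_diagonal = ~np.eye(J, dtype=bool)
    return terms[:, off_diagonal].mean()
```

Note that this form assumes every attribute difference is non-zero; zero differences would otherwise need to be excluded from the average, as in the first equation.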