*Bounty: 50*

I’m relatively new to statistical modelling and `R`, so please let me know if I should provide any further information or plots. I originally posted this question here, but unfortunately have not received any responses yet.

I am using the `lme()` function from `nlme` in `R` to test the significance of fixed effects in a repeated measures design. In my experiment, subjects listen to a pair of sounds and adjust the sound level (in decibels) until both are equally loud. This is done for 40 different pairs of stimuli, with both presentation orders tested (A/B and B/A). There are 160 observations per subject in total (this includes one replication of every condition), and 14 participants in all.

The model:

`LevelDifference ~ pairOfSounds * order, random = ~ 1 | Subject/pairOfSounds/order`
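For reference, the full call looks like this. The data frame below is simulated purely so the code runs; the variable names and the 14 × 40 × 2 × 2 design are taken from my description above, but the effect sizes are made up:

```r
library(nlme)

# Simulated stand-in for the real data: 14 subjects x 40 sound pairs
# x 2 presentation orders x 2 replications = 2240 rows
set.seed(1)
loudness <- expand.grid(Subject = factor(1:14),
                        pairOfSounds = factor(1:40),
                        order = factor(c("AB", "BA")),
                        rep = 1:2)

# Random effects at each grouping level plus residual noise
# (arbitrary variance components, for illustration only)
loudness$LevelDifference <- with(loudness,
    rnorm(14, sd = 2)[Subject] +
    rnorm(14 * 40, sd = 1)[interaction(Subject, pairOfSounds)] +
    rnorm(14 * 40 * 2, sd = 0.5)[interaction(Subject, pairOfSounds, order)] +
    rnorm(nrow(loudness), sd = 1))

fit <- lme(LevelDifference ~ pairOfSounds * order,
           random = ~ 1 | Subject/pairOfSounds/order,
           data = loudness,
           method = "ML")  # ML so likelihood ratio tests of fixed effects are valid

anova(fit)  # conditional F-tests for the fixed effects
```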

I built the model up using AIC/BIC and likelihood ratio tests (method = `"ML"`). Residual plots for the within-group errors are shown below:

The top-left plot shows standardised residuals vs fitted values. I don’t see any systematic pattern in the residuals, so I assume the constant-variance assumption is valid, although further inspection of the subject-by-subject residuals does show some unevenness. In conjunction with the top-right plot, I have no reason to suspect non-linearities.
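The residual diagnostics above were produced with `nlme`’s built-in `plot` method. A minimal runnable sketch is below; it uses the package’s `Orthodont` data in place of my data, so `fit` here stands in for the model described above:

```r
library(nlme)

# Small stand-in model so the plotting calls are runnable;
# in practice 'fit' is the three-level model described in the question.
fit <- lme(distance ~ age * Sex, random = ~ 1 | Subject,
           data = Orthodont, method = "ML")

# Standardised residuals vs fitted values (look for systematic patterns)
plot(fit, resid(., type = "pearson") ~ fitted(.), abline = 0)

# The same plot broken out by subject, to check for uneven spread
plot(fit, resid(., type = "pearson") ~ fitted(.) | Subject, abline = 0)

# Observed vs fitted values (a curved trend would suggest non-linearity)
plot(fit, distance ~ fitted(.))
```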

My main concern is the lower-left Q-Q plot, which reveals that the residuals are heavy-tailed. I’m not sure where to go from here. From reading Pinheiro and Bates (2000, p. 180), the fixed-effects tests tend to be conservative when the residual distribution is symmetric but heavy-tailed. So perhaps I’m OK if the p-values are very low?

The level-two and level-three random effects show a similar departure from normality.
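The normality checks were made with `nlme`’s `qqnorm` method for `lme` objects. A runnable sketch (again using `Orthodont` as a stand-in for my data):

```r
library(nlme)

# Small stand-in model so the calls are runnable; 'fit' stands in
# for the three-level model described in the question.
fit <- lme(distance ~ age * Sex, random = ~ 1 | Subject,
           data = Orthodont, method = "ML")

# Q-Q plot of the within-group residuals
# (heavy tails show up as S-shaped curvature away from the line)
qqnorm(fit, ~ resid(., type = "pearson"), abline = c(0, 1))

# Q-Q plots of the estimated random effects, one panel per grouping level
qqnorm(fit, ~ ranef(.))
```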

Basically:

- How do such heavy-tailed residuals affect inference for the fixed effects?
- Would robust regression be an appropriate alternative to check?