To improve the performance of a linear auto-associator (a neural network model), we explore the use of several pre-processing techniques. The gist of our approach is to represent each pattern by one or several pre-processed (i.e., filtered) versions of the original pattern, in addition to the original pattern itself. First, we compare the performance of several pre-processing techniques (a plain version of the auto-associator as a control, a Sobel operator, a Canny-Deriche operator, a multiscale Canny-Deriche operator, and a Wiener filter) on a pattern completion task using noise-degraded versions of stored faces. We find that the multiscale Canny-Deriche operator gives the best performance of all models. Second, we compare the performance of the multiscale Canny-Deriche operator with the control condition on a pattern completion task using noise-degraded versions (at several levels of noise) of learned faces and of new faces of the same race as, or a different race than, the learned faces. In all cases, the multiscale Canny-Deriche operator performs significantly better than the control.
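As a rough illustration of the underlying memory model (a minimal sketch, not the authors' exact implementation), a linear auto-associator can be trained with the Widrow-Hoff (delta) rule so that each stored pattern reproduces itself, after which pattern completion amounts to passing a noise-degraded input through the learned weight matrix. The pattern dimension, learning rate, and noise level below are arbitrary toy values, and random vectors stand in for face images:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "faces": each column of X is a flattened, unit-norm image vector.
n_dim, n_patterns = 64, 5
X = rng.standard_normal((n_dim, n_patterns))
X /= np.linalg.norm(X, axis=0)

# Widrow-Hoff (delta rule) learning of the auto-associative weights W,
# driving W @ x toward x for every stored pattern x.
W = np.zeros((n_dim, n_dim))
eta = 0.1  # arbitrary small learning rate
for _ in range(200):
    for k in range(n_patterns):
        x = X[:, k]
        W += eta * np.outer(x - W @ x, x)

# Pattern completion: recall from a noise-degraded stored pattern.
noisy = X[:, 0] + 0.3 * rng.standard_normal(n_dim)
recalled = W @ noisy

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# The recalled pattern should resemble the stored face more closely
# than the noisy input does (noise outside the stored subspace is removed).
print(cos(noisy, X[:, 0]), cos(recalled, X[:, 0]))
```

At convergence the delta rule makes W act as a projector onto the subspace spanned by the stored patterns, which is why noise components orthogonal to that subspace are filtered out at recall. Pre-processing (e.g., edge filtering) changes what is stored in X, not this recall mechanism.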