This article presents an approximate data encoding scheme called Significant Position Encoding (SPE). The encoding allows an efficient implementation of the recall phase (forward propagation pass) of Convolutional Neural Networks (CNNs), a typical class of feed-forward neural networks. This implementation uses only a 7-bit data representation and achieves nearly the same classification performance as the original network: on the MNIST handwritten digit recognition task, the encoding loses only 0.03% in recognition rate (99.27% vs. 99.30%). In terms of storage, we achieve a 12.5% saving compared with an 8-bit fixed-point implementation of the same CNN. Moreover, this data encoding enables an efficient processing-unit implementation thanks to the simplicity of the scalar product operation, the principal operation in a feed-forward neural network.
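The abstract does not specify the encoding's bit layout, but the core idea of a significant-position style encoding can be sketched: represent each value by its sign and the position of its most significant set bit (a power-of-two approximation), so that every multiplication in the scalar product reduces to a bit shift. The following C sketch is illustrative only; the names `spe_encode`/`spe_dot` and the exact 7-bit field layout (1 sign bit + 6 position bits) are assumptions, not the paper's specification.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical 7-bit SPE word: bit 6 = sign, bits 5..0 = position of the
 * most significant set bit of the magnitude (power-of-two approximation).
 * A real scheme would also reserve a code for zero; omitted for brevity. */
typedef uint8_t spe_t;

static spe_t spe_encode(int32_t x) {
    uint8_t sign = (x < 0) ? 0x40 : 0x00;
    uint32_t mag = (uint32_t)(x < 0 ? -x : x);
    uint8_t pos = 0;
    while (mag >>= 1) pos++;            /* index of the MSB */
    return sign | (pos & 0x3F);         /* 7 bits total */
}

static int32_t spe_decode(spe_t e) {
    int32_t v = 1 << (e & 0x3F);
    return (e & 0x40) ? -v : v;
}

/* Scalar product with SPE-encoded weights: each multiply becomes a
 * shift of the input, plus a conditional sign flip. No multiplier needed. */
static int64_t spe_dot(const spe_t *w, const int32_t *x, size_t n) {
    int64_t acc = 0;
    for (size_t i = 0; i < n; i++) {
        int64_t term = (int64_t)x[i] << (w[i] & 0x3F);
        acc += (w[i] & 0x40) ? -term : term;
    }
    return acc;
}

int main(void) {
    int32_t weights[] = {5, -3, 16, 2};
    int32_t inputs[]  = {7, 4, -1, 9};
    spe_t enc[4];
    for (int i = 0; i < 4; i++) enc[i] = spe_encode(weights[i]);
    /* Weights are rounded down to powers of two: 5->4, -3->-2, 16->16, 2->2.
     * Approximate dot = 22 vs. exact dot = 25. */
    printf("approx dot = %lld\n", (long long)spe_dot(enc, inputs, 4));
    return 0;
}
```

Under this assumed layout, the shift-based `spe_dot` loop is the kind of multiplier-free accumulation that would make a hardware processing unit cheap; the 7-bit word directly yields the 12.5% storage saving over an 8-bit fixed-point weight.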