We introduce a novel method for accelerating image processing operators. Our method trains a fully convolutional neural network (CNN) on input-output pairs that demonstrate the operator's action. After training, the original operator never needs to be run: the trained CNN operates at full resolution and achieves constant runtime.

We studied the impact of network architecture on approximation accuracy, runtime, and memory footprint, and identified an architecture that balances these considerations. We evaluated the method on ten advanced image processing operators, including multiple variational models, multiscale tone and detail manipulation, photographic style transfer, nonlocal dehazing, and nonphotorealistic stylization. All operators were approximated with the same model architecture.

Our experiments demonstrate that our method is more accurate than previous approximation techniques. On the MIT-Adobe dataset, it improves approximation accuracy, as measured by PSNR, by 8.5 dB over existing schemes (from 27.5 to 36 dB), and achieves a threefold reduction in DSSIM relative to the most accurate prior approximation scheme, while also being faster.

We verified that our models generalize well across datasets and image resolutions, and we explored several extensions of the approach that offer further room for improvement.

In summary, we propose a method for accelerating image processing operators by training a CNN on input-output pairs. The approach outperforms previous approximation schemes in accuracy, generalizes effectively, and opens opportunities for further development.
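To make the reported accuracy figures concrete, the sketch below shows how PSNR, the metric behind the 27.5 dB and 36 dB numbers, is computed from mean squared error. This is a generic illustration, not the paper's evaluation code; the function name `psnr` and the sample arrays are illustrative, and images are assumed to be normalized to [0, 1].

```python
import numpy as np

def psnr(reference, approximation, peak=1.0):
    """Peak signal-to-noise ratio in dB between two images with values in [0, peak]."""
    mse = np.mean((reference.astype(np.float64) - approximation.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: no error, unbounded PSNR
    return 10.0 * np.log10(peak ** 2 / mse)

# A uniform error of 0.1 on a [0, 1] image gives MSE = 0.01, i.e. PSNR = 20 dB.
ref = np.zeros((4, 4))
approx = np.full((4, 4), 0.1)
print(round(psnr(ref, approx), 2))  # → 20.0
```

Because PSNR is logarithmic in MSE, the reported gain of 8.5 dB corresponds to roughly a sevenfold reduction in mean squared error (10^(8.5/10) ≈ 7.1).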