In recent years there has been a wealth of activity in solving inverse problems in physics described by partial differential equations (PDEs) using neural networks (NNs). In these efforts, the solutions of the PDEs are expressed as NNs that are trained through the minimization of a loss function involving the PDE. Here we show how to accelerate this approach by up to five orders of magnitude by deploying, instead of NNs, classical PDE approximations facilitated by machine learning tools. The framework of Optimizing a Discrete Loss (ODIL) minimizes a cost function for discrete approximations of the PDEs using gradient-based and Gauss-Newton methods. The framework relies on grid-based discretizations of the PDEs and inherits their accuracy, convergence, and conservation properties. The implementation of the method is facilitated by adopting broadly accessible machine learning tools for automatic differentiation. We present applications to PDE-constrained optimization, optical flow, system identification, and data assimilation. We compare ODIL with the popular method of Physics-Informed Neural Networks (PINNs) and show that ODIL outperforms PINNs by up to five orders of magnitude in computational speed while achieving better accuracy and convergence rates. We evaluate the method on various forward and inverse problems involving linear and nonlinear PDEs, including the Navier-Stokes equations for flow reconstruction. The present framework interfaces numerical methods with machine learning and provides a powerful tool for effectively solving challenging inverse problems across scientific domains.
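To make the core idea concrete, the following is a minimal illustrative sketch, not the paper's implementation: it minimizes the discrete residual of a toy 1D Poisson problem (u'' = f with homogeneous Dirichlet boundaries) over the grid values of the solution, using JAX automatic differentiation to assemble a Gauss-Newton update. The grid size, variable names, and the choice of toy problem are assumptions made purely for illustration.

```python
# Illustrative sketch of minimizing a discrete loss for a 1D Poisson problem
# (an assumed toy example, not the authors' code or test case).
import jax
import jax.numpy as jnp

jax.config.update("jax_enable_x64", True)   # double precision for the normal equations

n = 65                                      # number of grid points (assumed)
x = jnp.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = -jnp.pi**2 * jnp.sin(jnp.pi * x)        # manufactured right-hand side; exact u = sin(pi x)

def residual(u):
    # Second-order finite-difference residual of u'' = f at interior points,
    # stacked with the two Dirichlet boundary conditions u(0) = u(1) = 0.
    interior = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2 - f[1:-1]
    return jnp.concatenate([interior, u[:1], u[-1:]])

u = jnp.zeros(n)                            # initial guess for the discrete solution
for _ in range(5):                          # Gauss-Newton iterations (one suffices for a linear PDE)
    r = residual(u)
    J = jax.jacfwd(residual)(u)             # Jacobian of the residual via automatic differentiation
    du = jnp.linalg.solve(J.T @ J, -J.T @ r)  # normal equations for the Gauss-Newton step
    u = u + du

print("max error vs exact solution:", float(jnp.max(jnp.abs(u - jnp.sin(jnp.pi * x)))))
```

In this sketch the unknowns are the grid values themselves rather than the weights of a neural network, so the minimization inherits the accuracy of the underlying finite-difference discretization; for a nonlinear PDE the same loop would simply require several Gauss-Newton iterations.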