To approximate the least squares estimator (LSE) efficiently in big data linear regression by a subsampling LSE, a family of optimal sampling distributions is derived under the criterion of minimizing the sum of the component variances of the subsampling LSE. We discuss truncation of these distributions and construct a Scoring Algorithm whose running time for computing the subsampling LSE is far shorter than that of the full-sample LSE. The subsampling LSE is proved to be almost surely asymptotically normal for an arbitrary sampling distribution under suitable conditions. Motivated by subsampling and data splitting in machine learning, sample size determination for multidimensional parameters is also investigated. We conduct a comprehensive evaluation of the proposed approach through various numerical studies and compare it with uniform sampling. Our results on both simulated and real data indicate that our approach substantially outperforms uniform sampling, and that the Scoring Algorithm significantly reduces the computational time relative to the full-sample LSE.
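As a rough illustration of the general idea, a subsampling LSE draws a small subsample according to some sampling distribution and solves an inverse-probability-weighted least squares problem on it. The sketch below is a minimal, hypothetical example: the row-norm-based probabilities are only a stand-in non-uniform choice and are not the variance-minimizing distributions or the Scoring Algorithm derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated full-sample data (illustrative sizes, not from the paper).
n, d, r = 100_000, 5, 1_000
X = rng.standard_normal((n, d))
beta = np.arange(1.0, d + 1)
y = X @ beta + rng.standard_normal(n)

def subsample_lse(X, y, probs, r, rng):
    """Draw r rows with replacement according to `probs` and solve the
    inverse-probability-weighted least squares problem on the subsample."""
    idx = rng.choice(len(y), size=r, p=probs)
    w = 1.0 / np.sqrt(r * probs[idx])        # reweighting corrects the sampling bias
    Xw, yw = X[idx] * w[:, None], y[idx] * w
    coef, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
    return coef

# Uniform sampling baseline.
p_unif = np.full(n, 1.0 / n)
beta_unif = subsample_lse(X, y, p_unif, r, rng)

# A simple non-uniform choice: probabilities proportional to row norms
# (a hypothetical stand-in for the paper's optimal distributions).
p_norm = np.linalg.norm(X, axis=1)
p_norm /= p_norm.sum()
beta_norm = subsample_lse(X, y, p_norm, r, rng)
```

Both estimates approximate the full-sample LSE while solving a least squares problem of size r rather than n, which is the source of the computational savings discussed above.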