We present new convergence results for two-timescale gradient descent ascent (GDA) applied to solve nonconvex-concave minimax problems of the form $\min _{\mathbf{x}} \max _{\mathbf{y} \in \mathcal{Y}} f(\mathbf{x}, \mathbf{y})$, where the objective function $f(\mathbf{x}, \mathbf{y})$ is nonconvex in $\mathbf{x}$ but concave in $\mathbf{y}$ and $\mathcal{Y} \subseteq \mathbb{R}^n$ is a convex and bounded set. Historically, the GDA algorithm has been widely used in machine learning, control theory and economics. Despite strong convergence guarantees for GDA in the convex-concave setting, in more general settings GDA with equal learning rates can converge to limit cycles or even diverge. In this paper, we revisit the GDA scheme from the point of view of two-timescale dynamics, showing that, despite the nonconvexity and lack of smoothness, a two-timescale version of GDA can efficiently find a stationary point of the function $\Phi(\cdot):=\max _{\mathbf{y} \in \mathcal{Y}} f(\cdot, \mathbf{y})$. More specifically, we provide theoretical bounds on the complexity of solving both \emph{smooth} and \emph{nonsmooth} nonconvex-concave minimax problems. We further investigate properties of stochastic variants of two-timescale GDA. To the best of our knowledge, this paper provides the first nonasymptotic complexity analysis of two-timescale GDA for nonconvex-concave minimax problems, shedding light on its performance in training generative adversarial networks (GANs) and in other real-world applications.
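As an illustration of the scheme the abstract describes, the following is a minimal sketch of two-timescale GDA: each iteration takes one gradient descent step on $\mathbf{x}$ with a small stepsize and one projected gradient ascent step on $\mathbf{y}$ with a larger stepsize (so $\mathbf{y}$ approximately tracks $\max_{\mathbf{y} \in \mathcal{Y}} f(\mathbf{x}, \mathbf{y})$). The function names, stepsize values, and the quadratic toy instance below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def two_timescale_gda(grad_x, grad_y, x0, y0, eta_x, eta_y, project_y, steps):
    """Two-timescale GDA sketch: slow descent on x, fast projected ascent on y.

    The projection onto the bounded set Y keeps the ascent iterate feasible;
    choosing eta_y substantially larger than eta_x is the 'two-timescale' part.
    """
    x, y = np.asarray(x0, dtype=float), np.asarray(y0, dtype=float)
    for _ in range(steps):
        gx = grad_x(x, y)
        gy = grad_y(x, y)
        x = x - eta_x * gx               # slow timescale: descent on x
        y = project_y(y + eta_y * gy)    # fast timescale: projected ascent on y
    return x, y

# Illustrative toy instance (strongly-convex-strongly-concave, for sanity only):
# f(x, y) = 0.5*x^2 + x*y - 0.5*y^2 with Y = [-1, 1], so Phi(x) = x^2,
# and the iterates should approach the stationary point (0, 0).
x, y = two_timescale_gda(
    grad_x=lambda x, y: x + y,
    grad_y=lambda x, y: x - y,
    x0=1.0, y0=0.5, eta_x=0.01, eta_y=0.2,
    project_y=lambda y: np.clip(y, -1.0, 1.0),
    steps=2000,
)
```

The toy instance is convex in $\mathbf{x}$, so it only demonstrates the mechanics of the update, not the nonconvex-concave guarantees the paper establishes.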