In order to grasp and transport an object, grip and load forces must be scaled according to the object's properties (such as weight). To select appropriate grip and load forces, the object's weight is estimated based on experience or, in the case of robots, usually via image recognition. We propose a new approach that makes a robot's weight estimation less dependent on prior learning and thereby allows it to successfully grasp a wider variety of objects. This study evaluates whether it is feasible to predict an object's weight in a replacement task based on the time series of upper-body angles of the active arm. Furthermore, we investigated how prediction accuracy is affected by (i) the length of the time series and (ii) different cross-validation (CV) procedures. To this end, we recorded and analyzed the movement kinematics of twelve participants during a replacement task. An optical motion-tracking system recorded the participants' kinematics while they transported an object, 80 times in total, from varying starting positions to a predefined end position on a table. The object's weight was modified (made lighter or heavier) without changing its visual appearance, and it was changed randomly throughout the experiment without the participants' knowledge. To predict the object's weight, we used a discrete cosine transform to smooth and compress the upper-body angles and a support vector machine for supervised learning on the resulting discrete cosine transform coefficients. Results showed good prediction accuracy (78.5% to 92.7%, depending on the CV procedure and the length of the time series). Even at the beginning of a movement (after only 300 ms), the object's weight could be predicted reliably, with a classification rate of 79.5% to 88.3%.
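
The following is a minimal sketch of the kind of pipeline described above: compressing each joint-angle time series with a discrete cosine transform and classifying object weight with a support vector machine under k-fold cross-validation. The number of retained coefficients, the number of tracked angles, the SVM kernel, and the synthetic placeholder data are assumptions for illustration, not the authors' implementation.

```python
# Sketch: DCT feature compression of joint-angle time series + SVM classification.
# Data shapes, N_COEFFS, and the RBF kernel are illustrative assumptions.
import numpy as np
from scipy.fft import dct
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

N_TRIALS, N_ANGLES, N_SAMPLES = 80, 5, 150   # trials, upper-body angles, time samples (assumed)
N_COEFFS = 10                                # low-order DCT coefficients kept per angle (assumed)

rng = np.random.default_rng(0)
angles = rng.standard_normal((N_TRIALS, N_ANGLES, N_SAMPLES))  # placeholder kinematic data
weights = rng.integers(0, 2, N_TRIALS)                         # placeholder light/heavy labels

# Compress each angle trajectory: type-II DCT along the time axis, keep low-order coefficients.
coeffs = dct(angles, type=2, norm="ortho", axis=-1)[..., :N_COEFFS]
features = coeffs.reshape(N_TRIALS, -1)      # one feature vector per trial

# Supervised learning on the DCT coefficients with a support vector machine.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, features, weights, cv=5)         # simple k-fold CV
print(f"mean CV accuracy: {scores.mean():.3f}")
```

Truncating the time series before the DCT (e.g., keeping only the first 300 ms of samples) would correspond to the early-prediction analysis mentioned in the abstract; the cross-validation scheme can likewise be swapped (e.g., leave-one-participant-out) to probe how the CV procedure affects accuracy.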