Computational models are conventionally created with input data, script files, programming interfaces, or graphical user interfaces. This paper explores the potential of expanding model generation, with a focus on multibody system dynamics. In particular, we investigate the ability of Large Language Models (LLMs) to generate models from natural language. Our experimental findings indicate that LLMs, some of which have been trained on our multibody code Exudyn, surpass the mere replication of existing code examples. The results demonstrate that LLMs have a basic understanding of kinematics and dynamics, and that they can transfer this knowledge into a programming interface. Although our tests reveal that complex cases regularly result in programming or modeling errors, we found that LLMs can successfully generate correct multibody simulation models from natural language descriptions for simpler cases, often on the first attempt (zero-shot). In addition to a basic introduction to the functionality of LLMs, our Python code, and the test setups, we provide a summarized evaluation for a series of examples of increasing complexity. We start with a single mass oscillator, both in SciPy and in Exudyn, and include varied inputs and statistical analysis to highlight the robustness of our approach. Subsequently, systems with mass points, constraints, and rigid bodies are evaluated. In particular, we show that in-context learning can elevate basic knowledge of a multibody code into a zero-shot correct output.
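
To make the simplest test case concrete, the following sketch shows the kind of SciPy model that serves as the entry point of the evaluation: a single mass oscillator (mass-spring-damper) integrated with `scipy.integrate.solve_ivp`. The parameter values and variable names here are illustrative assumptions, not the exact setup used in the experiments.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not the paper's exact values):
m, k, d = 2.0, 8.0, 0.5   # mass [kg], stiffness [N/m], damping [Ns/m]

def rhs(t, y):
    # State y = [position x, velocity v]; equation of motion:
    #   m*x'' + d*x' + k*x = 0
    x, v = y
    return [v, -(d * v + k * x) / m]

# Initial displacement of 1 m, released from rest, simulated for 10 s
sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], max_step=0.01)
print(sol.y[0, -1])  # damped position at t = 10 s
```

A natural-language prompt describing exactly this system ("a mass of 2 kg on a spring with stiffness 8 N/m and light damping, released from 1 m displacement") is the kind of input from which an LLM would be asked to produce such a script.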