Completing complex tasks in unpredictable settings (e.g., kitchens) currently challenges robotic systems, requiring a step change in machine intelligence. Sensorimotor abilities are widely considered integral to human intelligence. Thus, nature-inspired machine intelligence might usefully combine artificial intelligence with robotic sensorimotor capabilities. On this basis, we created a novel framework that bootstraps Large Language Models with a curated Knowledge Base and Integrated Force and Visual Feedback (code available via an open-source GitHub repository). We hypothesised that this approach would transform the ability of robots to complete complex tasks in unpredictable settings. We tested our approach on coffee-making and plate-decoration tasks. Our methodology enabled a robot to complete these complex tasks, whose components, ranging from drawer opening to pouring, each benefited from distinct feedback types and methods. This novel combination marks significant progress towards scalable, efficient, ‘intelligent robots’ able to complete complex tasks in uncertain environments.
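
To illustrate the kind of architecture the abstract describes, the following is a minimal conceptual sketch, not the authors' implementation: a language-model planner grounds a task in a curated knowledge base and then executes each step under the feedback modality (force or vision) best suited to it. All names, task entries, and controller stubs are hypothetical placeholders introduced for illustration only.

# Conceptual sketch (not the paper's code): an LLM planner consults a curated
# knowledge base to decompose a task, then runs each step with either force or
# visual feedback. All identifiers below are hypothetical placeholders.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Step:
    skill: str      # e.g. "open_drawer", "pour_water"
    feedback: str   # "force" or "vision", chosen per step


# Hypothetical curated knowledge base: maps a task to an ordered list of skill
# steps, each annotated with the feedback type it benefits from most.
KNOWLEDGE_BASE: Dict[str, List[Step]] = {
    "make_coffee": [
        Step("open_drawer", feedback="force"),
        Step("pick_capsule", feedback="vision"),
        Step("pour_water", feedback="force"),
    ],
}


def llm_plan(task: str, kb: Dict[str, List[Step]]) -> List[Step]:
    """Stand-in for an LLM call that grounds its plan in the knowledge base."""
    # A real system would prompt an LLM with the task and relevant KB entries;
    # here we simply look the task up so the sketch stays self-contained.
    return kb.get(task, [])


def execute(step: Step, controllers: Dict[str, Callable[[str], bool]]) -> bool:
    """Run one skill under the feedback loop suited to it (force or vision)."""
    return controllers[step.feedback](step.skill)


if __name__ == "__main__":
    # Dummy controllers standing in for force- and vision-based servo loops.
    controllers = {
        "force": lambda skill: print(f"[force feedback] {skill}") or True,
        "vision": lambda skill: print(f"[visual feedback] {skill}") or True,
    }
    for step in llm_plan("make_coffee", KNOWLEDGE_BASE):
        assert execute(step, controllers), f"step failed: {step.skill}"

The per-step choice of feedback type mirrors the abstract's claim that components such as drawer opening and pouring each benefit from distinct feedback types; the specific skill names and task decomposition above are assumptions, not results from the paper.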