Learning-capable edge artificial intelligence (AI) systems require both inference and learning capabilities combined with energy efficiency. However, no current memory technology offers all of the features these systems need. Memristor arrays are ideal for AI inference but have limited endurance and high programming energy; conversely, ferroelectric capacitors are ideal for learning, but their destructive read process makes them unsuitable for inference. Here we present a unified memory stack that functions as both a memristor and a ferroelectric capacitor. It uses silicon-doped hafnium oxide with a titanium scavenging layer, is integrated into the back end of line of a complementary metal-oxide-semiconductor (CMOS) process, and allows digital-to-analog transfers between the two memory modes without a formal digital-to-analog converter. We fabricated and tested an 18,432-device hybrid array (16,384 ferroelectric capacitors and 2,048 memristors) together with its on-chip CMOS peripheral circuitry. Based on this array, we propose and validate an on-chip learning solution that, without batching, performs competitively with floating-point-precision software models across several benchmarks. This technology offers possibilities for applications that require on-chip adaptive local training, allowing tailored tuning of neural network parameters.
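
The sketch below is an illustrative software analogue only, not the authors' circuit or algorithm: it shows, under assumed names and parameters (transfer_period, n_conductance_levels, quantize), how sample-by-sample (batch-size-1) learning updates could accumulate in high-resolution weights standing in for the ferroelectric capacitors and be periodically transferred to coarse analog conductances standing in for the memristors used at inference.

```python
# Conceptual sketch (hypothetical names and parameters), assuming:
# - learning updates are applied one sample at a time (no batching),
# - learned weights are periodically quantized onto a few conductance
#   levels to emulate the transfer from ferroelectric to memristive states.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 16, 4
learning_w = rng.normal(0.0, 0.1, (n_in, n_out))   # high-resolution learning weights
inference_g = np.zeros_like(learning_w)            # coarse analog conductances

n_conductance_levels = 8     # assumed analog resolution of the inference devices
transfer_period = 32         # assumed number of samples between transfers
lr = 0.05

def quantize(w, levels):
    """Map weights onto a small set of conductance levels (stand-in for the transfer)."""
    w_max = np.max(np.abs(w)) + 1e-12
    return np.round(w / w_max * (levels - 1)) / (levels - 1) * w_max

for step in range(256):
    # One sample at a time: no batching.
    x = rng.normal(size=n_in)
    target = rng.normal(size=n_out)

    # Forward pass and gradient step on the learning weights.
    y = x @ learning_w
    grad = np.outer(x, y - target)
    learning_w -= lr * grad

    # Periodic transfer of learned weights to the inference conductances.
    if (step + 1) % transfer_period == 0:
        inference_g = quantize(learning_w, n_conductance_levels)

# Inference uses only the coarse conductances.
print(rng.normal(size=n_in) @ inference_g)
```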