Artificial neural networks (ANNs) have demonstrated their superiority over traditional computing architectures in tasks such as pattern classification and learning. However, while ANNs achieve high prediction accuracy, they do not quantify the uncertainty of their predictions and can therefore make wrong predictions with high confidence, which can be detrimental for many mission-critical applications. In contrast, Bayesian neural networks (BNNs) naturally incorporate such uncertainty into their model: unlike ANNs, where the synaptic weights are point estimates (single-valued), in BNNs the weights are represented by probability distributions (e.g. Gaussian distributions). This makes the hardware implementation of BNNs challenging, since on-chip Gaussian random number generators (GRNGs) based on silicon complementary metal-oxide-semiconductor (CMOS) technology are area- and energy-inefficient. Stochastic switching in memristors can be used to build probabilistic synapses, but with very limited tunability; in addition, memristor technology relies heavily on CMOS-based peripherals to emulate neurons. Here we introduce three-terminal memtransistors based on two-dimensional (2D) materials, which can emulate both probabilistic synapses and reconfigurable neurons. The cycle-to-cycle variation in the programming of the 2D memtransistors is exploited to realize GRNG-based synapses, while 2D memtransistor-based integrated circuits provide neurons with hyperbolic tangent and sigmoid activation functions. Finally, the memtransistor-based synapses and neurons are combined in a crossbar array architecture to realize a BNN accelerator, whose performance is evaluated on the Iris data classification task.
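To make the BNN inference scheme concrete, the following minimal sketch (in software, assuming nothing about the actual device circuits) contrasts point-estimate weights with Gaussian-distributed ones: each forward pass first samples every weight from its Gaussian, playing the role the GRNG synapses play on-chip, then applies a tanh hidden layer, as the memtransistor neuron circuits do. Averaging many such stochastic passes yields a predictive mean, and their spread is the uncertainty estimate that a conventional ANN lacks. All dimensions and parameter values below are illustrative placeholders, not the paper's trained network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 4 Iris features, 8 hidden units, 3 classes.
D_IN, D_HID, D_OUT, N_SAMPLES = 4, 8, 3, 100

# Each weight is a Gaussian (mu, sigma) rather than a point estimate.
# The values here are random placeholders, not trained parameters.
mu1, sig1 = rng.normal(size=(D_IN, D_HID)), 0.1 * np.ones((D_IN, D_HID))
mu2, sig2 = rng.normal(size=(D_HID, D_OUT)), 0.1 * np.ones((D_HID, D_OUT))

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def sample_forward(x):
    """One stochastic forward pass: draw weights from their
    Gaussians (the GRNG-synapse role), then propagate."""
    w1 = rng.normal(mu1, sig1)
    w2 = rng.normal(mu2, sig2)
    h = np.tanh(x @ w1)  # tanh activation (the neuron-circuit role)
    return softmax(h @ w2)

x = np.array([5.1, 3.5, 1.4, 0.2])  # one Iris-like feature vector
probs = np.stack([sample_forward(x) for _ in range(N_SAMPLES)])
mean_pred = probs.mean(axis=0)      # predictive mean over weight samples
uncertainty = probs.std(axis=0)     # per-class predictive spread
print(mean_pred.argmax(), mean_pred, uncertainty)
```

In a deterministic ANN the loop over `N_SAMPLES` collapses to a single pass and `uncertainty` is identically zero; it is the weight sampling that supplies the confidence information.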