Decision trees (DTs) are widely used in machine learning applications owing to their fast execution and high interpretability; their training, however, is often time-consuming. In this paper, we propose a hardware accelerator that expedites the training process. The accelerator is implemented on a field-programmable gate array (FPGA) with a maximum operating frequency of 62 MHz. The proposed architecture combines parallel execution, which reduces training time, with pipelined execution, which minimizes resource consumption, yielding a significant acceleration of the training process. Compared with a C-based software implementation, the hardware implementation is at least 14 times faster. A notable feature of the architecture is its adaptability: the design can be retrained on a new dataset using a single RESET signal. This on-the-fly training capability enhances the versatility of the hardware and makes it suitable for a wide range of applications.