Thinking about ML

Some notes from reading Deep Learning from Scratch. Let me share a few thoughts and ideas.

  • 1. In a neural network, stacking affine layers with activation functions in between is what enables non-linear transformations: without the activations, the stack collapses back into a single affine map. Inputs get projected into higher-dimensional representations where structure becomes easier to separate (the first sketch after this list illustrates the collapse).
  • 2. Pairing the network with loss minimisation is what makes backward propagation applicable: each little update step moves the weights along the gradient of the loss (see the second sketch below). Every one of those steps rests on previous scholars' research, which lets me feel first-hand how the technology and theory developed, and how well statistics and mathematics are applied.
  • 3. Theories are inherited from earlier research. Every year and every month, researchers from different countries, from different institutions, universities, and high-tech firms publish brilliant papers whose results are striking and push the field further.
  • 4. The current state of CS and ML is really top-notch and inspiring. On the hardware side, CPUs, GPUs, and the cloud provide fast computation, and linear algebra maps efficiently onto these machines (the third sketch below gives a rough feel for the speed-up); on the software side, algorithms are innovative and evolve rapidly. Hardware and software together keep improving the ability to predict.
  • 5. Although good at prediction, deep learning models are still a black box: one cannot easily explain why a network produces a given result. The bridge between input and output is the weights, and the weights are merely fitted by backward propagation. But is the pattern they capture real? Is it correlation or causation?
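
To make point 1 concrete, here is a minimal NumPy sketch (my own illustration, not code from the book; the layer sizes and random data are arbitrary). It shows that two stacked affine layers without an activation collapse into one affine layer, while inserting a ReLU breaks the collapse and makes the mapping non-linear.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))                  # batch of 4 inputs, 3 features each

# Two affine layers with NO activation in between...
W1, b1 = rng.normal(size=(3, 5)), rng.normal(size=5)
W2, b2 = rng.normal(size=(5, 2)), rng.normal(size=2)
two_affine = (x @ W1 + b1) @ W2 + b2

# ...are equivalent to a single affine layer: the stack is still linear,
# since (xW1 + b1)W2 + b2 = x(W1W2) + (b1W2 + b2).
W, b = W1 @ W2, b1 @ W2 + b2
print(np.allclose(two_affine, x @ W + b))    # True

# A ReLU between the layers breaks the collapse: the map is now non-linear.
with_relu = np.maximum(x @ W1 + b1, 0) @ W2 + b2
print(np.allclose(with_relu, x @ W + b))     # False
```

And for point 2, a sketch of the training loop itself: loss minimisation by gradient descent, with the gradients computed by backpropagation through a tiny two-layer network. Again this is a toy example of my own (XOR data, hidden width 8, learning rate 0.1, and step count are all arbitrary choices), not the book's code.

```python
import numpy as np

rng = np.random.default_rng(1)

# XOR: a classic non-linear target that a single affine layer cannot fit.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)

for step in range(3000):
    # forward pass: affine -> ReLU -> affine, scored by mean squared error
    h = np.maximum(X @ W1 + b1, 0.0)
    out = h @ W2 + b2
    # backward pass: the chain rule applied layer by layer (backpropagation)
    g_out = 2 * (out - y) / len(X)            # dLoss/d(out)
    gW2, gb2 = h.T @ g_out, g_out.sum(0)
    g_h = g_out @ W2.T
    g_h[h <= 0] = 0.0                         # gradient gated by the ReLU
    gW1, gb1 = X.T @ g_h, g_h.sum(0)
    # gradient descent: each "little step" moves against the gradient
    for p, g in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
        p -= 0.1 * g

print(out.round(2))   # should end up close to [[0], [1], [1], [0]]
```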
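
Point 4, about linear algebra mapping well onto hardware, is easy to feel directly. Below, the same matrix product is computed with pure-Python loops and with NumPy, which dispatches to optimised BLAS routines; the matrix size of 200 is an arbitrary choice, and exact timings depend on the machine.

```python
import time
import numpy as np

n = 200                                       # big enough to make the gap visible
A = np.random.rand(n, n)
B = np.random.rand(n, n)

t0 = time.perf_counter()
C_loop = [[sum(A[i, k] * B[k, j] for k in range(n)) for j in range(n)]
          for i in range(n)]
t_loop = time.perf_counter() - t0

t0 = time.perf_counter()
C_fast = A @ B                                # vectorised: one optimised BLAS call
t_fast = time.perf_counter() - t0

print(f"python loops: {t_loop:.2f}s   numpy: {t_fast:.5f}s")
print(np.allclose(C_loop, C_fast))            # same result, vastly faster
```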
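
On my reading, the speed gap in the last sketch is the whole story of point 4 in miniature: the algorithm is identical, but expressing it as linear algebra lets the hardware do what it is best at.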