R-113-1-5721004-Principle of Machine Learning
Course Period: Now ~ Any Time

You must log in with an account and password to access the course.

Course Introduction

Course Plan

  • Course description and requirements
  • Introduction to machine learning and related topics
  • Machine learning problem style and Occam’s Razor
  • Framework of machine learning: model, strategy and algorithm
  • Valiant’s probably approximately correct (PAC) learning theory
  • Vapnik’s statistical learning theory
  • Vapnik-Chervonenkis (VC) Dimension and Large Margin Theory
  • Generalization Error = Random Error + Bias + Variance
  • Midterm
  • Gold’s computational (algorithmic) learning theory and weak/strong learning (I)
  • Gold’s computational (algorithmic) learning theory and weak/strong learning (II)
  • Consistency: Uniform Law of Large Numbers (ULLN) and empirical process theory
  • Generalization and cross-validation
  • Ensemble learning (I): Boosting (Kearns & Valiant) and Adaptive Boosting (AdaBoost) (Freund & Schapire)
  • Ensemble learning (II): Bootstrap Aggregation (Bagging) (Breiman) and Random (Decision) Forests (Breiman)
  • Criteria of Evaluation: (1) Approximation (2) Generalization (3) Stability (4) Convergence (Effectiveness vs. Efficiency)
  • Applications of machine learning in medicine and healthcare
  • Final
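The plan's decomposition of generalization error into noise, bias, and variance can be illustrated with a minimal Monte Carlo sketch (not course material; all function names and parameters below are illustrative assumptions). It repeatedly draws noisy training sets from a known target function, fits a polynomial of a given degree, and estimates the squared bias and the variance of the fitted predictions at a single test point:

```python
import numpy as np

rng = np.random.default_rng(0)

def true_f(x):
    """The (normally unknown) target function."""
    return np.sin(x)

def simulate(degree, n_train=30, n_trials=200, x0=1.0, noise_sd=0.3):
    """Monte Carlo estimate of bias^2 and variance of a polynomial
    regression estimator at a single test point x0."""
    preds = np.empty(n_trials)
    for t in range(n_trials):
        # Draw a fresh noisy training set each trial.
        x = rng.uniform(0, 2 * np.pi, n_train)
        y = true_f(x) + rng.normal(0, noise_sd, n_train)
        # Fit a degree-`degree` polynomial and predict at x0.
        coef = np.polyfit(x, y, degree)
        preds[t] = np.polyval(coef, x0)
    bias_sq = (preds.mean() - true_f(x0)) ** 2  # (E[f_hat] - f)^2
    variance = preds.var()                      # Var[f_hat]
    return bias_sq, variance
```

A low-degree fit (e.g. degree 1) underfits the sine curve, giving high squared bias; a high-degree fit (e.g. degree 9) tracks each training sample's noise, giving high variance. The irreducible `noise_sd**2` term is the "random error" in the syllabus line.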
Teacher / 鮑永誠

Related Courses

E-113-2-5721011-Special Topics in Data Science and Statistics
翁世峰
Period: Not set
R-113-1-5722002-Internship and Professional Training (I)
高浩雲
Period: Not set
R-113-1-5721001-Special Topics on Health Care Organization & Management
高浩雲
Period: Not set
E-112-2-5721013-Special Topics in Artificial Intelligence
何文獻
Period: Not set