R-113-1-5721004-Principle of Machine Learning
Course Period: Now ~ Any Time
You must log in with your account and password to access the course.
Course Introduction
Course Plan
- Course description and requirements
- Introduction to machine learning and related topics
- Machine learning problem styles and Occam's Razor
- Framework of machine learning: model, strategy, and algorithm
- Valiant's probably approximately correct (PAC) learning theory
- Vapnik's statistical learning theory
- Vapnik-Chervonenkis (VC) dimension and large-margin theory
- Generalization error = random error + bias + variance (the squared-loss form of this decomposition is sketched after the plan)
- Midterm
- Gold's computational (algorithmic) learning theory and weak/strong learning (I)
- Gold's computational (algorithmic) learning theory and weak/strong learning (II)
- Consistency: uniform law of large numbers (ULLN) and empirical process theory
- Generalization and cross-validation (a minimal k-fold sketch follows the plan)
- Ensemble learning (I): boosting (Kearns & Valiant) and adaptive boosting, AdaBoost (Freund & Schapire); a toy AdaBoost sketch follows the plan
- Ensemble learning (II): bootstrap aggregation, bagging (Breiman), and random (decision) forests (Breiman)
- Criteria of evaluation: (1) approximation, (2) generalization, (3) stability, (4) convergence (effectiveness vs. efficiency)
- Applications of machine learning in medicine and healthcare
- Final
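The error decomposition named in the plan is usually stated for squared loss. As a sketch of the intended identity (notation assumed here, not taken from the course: target y = f(x) + ε with noise variance σ², and f̂_D the model fitted on a random training sample D), the "random error" term is the irreducible noise, and the bias enters squared:

```latex
\mathbb{E}_{D,\varepsilon}\left[\left(y - \hat f_D(x)\right)^2\right]
  = \underbrace{\sigma^2}_{\text{random error}}
  + \underbrace{\left(f(x) - \mathbb{E}_D[\hat f_D(x)]\right)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}_D\left[\left(\hat f_D(x) - \mathbb{E}_D[\hat f_D(x)]\right)^2\right]}_{\text{variance}}
```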
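To make the cross-validation topic concrete, here is a minimal k-fold sketch in Python. The split, train, evaluate, average loop is the standard recipe; the `fit`/`evaluate` callables and the synthetic least-squares example are illustrative assumptions, not course materials.

```python
import numpy as np

def k_fold_cv(X, y, fit, evaluate, k=5, seed=0):
    """Estimate generalization error by k-fold cross-validation.

    fit(X_train, y_train) -> model; evaluate(model, X_val, y_val) -> loss.
    Returns the mean validation loss over the k folds.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))            # shuffle once, then split
    folds = np.array_split(idx, k)
    losses = []
    for i in range(k):
        val = folds[i]                        # fold i held out for validation
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(X[train], y[train])
        losses.append(evaluate(model, X[val], y[val]))
    return float(np.mean(losses))

# Hypothetical usage: ordinary least squares on synthetic data.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)

fit = lambda Xt, yt: np.linalg.lstsq(Xt, yt, rcond=None)[0]
evaluate = lambda w, Xv, yv: float(np.mean((Xv @ w - yv) ** 2))
print(k_fold_cv(X, y, fit, evaluate, k=5))   # mean squared validation error
```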
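The AdaBoost topic can be previewed the same way. This is a toy sketch of the Freund & Schapire reweighting scheme using one-dimensional threshold stumps as the weak learner; the dataset, round count, and stump construction are assumptions made for the demo, not anything the course specifies.

```python
import numpy as np

def train_adaboost(X, y, n_rounds=10):
    """Minimal AdaBoost with one-dimensional threshold stumps.

    X: (n,) array of scalar features; y: (n,) array of labels in {-1, +1}.
    Returns a list of (threshold, polarity, alpha) weak hypotheses.
    """
    n = len(X)
    w = np.full(n, 1.0 / n)                  # initial uniform sample weights
    ensemble = []
    thresholds = np.unique(X)
    for _ in range(n_rounds):
        # Pick the stump (threshold, polarity) with lowest weighted error.
        best = None
        for t in thresholds:
            for polarity in (+1, -1):
                pred = np.where(X < t, polarity, -polarity)
                err = np.sum(w[pred != y])
                if best is None or err < best[0]:
                    best = (err, t, polarity, pred)
        err, t, polarity, pred = best
        err = max(err, 1e-12)                # guard the log against a perfect stump
        alpha = 0.5 * np.log((1 - err) / err)    # Freund-Schapire vote weight
        w *= np.exp(-alpha * y * pred)           # up-weight the mistakes
        w /= w.sum()
        ensemble.append((t, polarity, alpha))
    return ensemble

def predict(ensemble, X):
    """Sign of the alpha-weighted vote of the stumps."""
    score = np.zeros(len(X))
    for t, polarity, alpha in ensemble:
        score += alpha * np.where(X < t, polarity, -polarity)
    return np.sign(score)

# Toy data: no single threshold separates it, but a weighted vote does.
X = np.array([0., 1., 2., 3., 4., 5.])
y = np.array([+1, +1, -1, -1, +1, +1])
model = train_adaboost(X, y, n_rounds=20)
print(predict(model, X))   # matches y once a few rounds of stumps combine
```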
Teacher: 鮑永誠