Barry Van Veen
  • 195
  • 6 562 133
Foundations of Artificial Intelligence and Machine Learning Course Promo Video
I'm very excited about this new short-form course, offered first in April through InterPro at UW-Madison. Over the past few months I have worked to develop new content that condenses the key ideas in artificial intelligence and machine learning into digestible, accessible, and actionable insights. I've always enjoyed teaching learners with diverse backgrounds and interests and am looking forward to working with everyone who enrolls. If you want to learn more about this timely and transformative topic, join us!
Views: 498

Videos

Convergence, Tracking, and the LMS Algorithm Step Size
1.7K views · 1 year ago
The convergence and tracking behavior of the LMS algorithm are dependent on the step size parameter applied to the instantaneous gradient. The various performance tradeoffs involved with selecting a step size parameter are discussed. Small step sizes result in small misadjustment, but can have slow convergence and poor tracking performance. Large step sizes can result in unstable iterations.
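To illustrate this tradeoff numerically, here is a sketch (my own illustration, not code from the video; the 3-tap system and white-noise data are hypothetical): with measurement noise present, the steady-state error grows with the step size (misadjustment), while smaller step sizes converge more slowly.

```python
import numpy as np

def lms_mse(mu, num_samples=20000, seed=0):
    """Run LMS system identification and return an estimate of the steady-state MSE."""
    rng = np.random.default_rng(seed)
    h = np.array([0.5, -0.3, 0.2])            # hypothetical unknown system
    x = rng.standard_normal(num_samples)
    noise = 0.1 * rng.standard_normal(num_samples)
    d = np.convolve(x, h)[:num_samples] + noise
    M = len(h)
    w = np.zeros(M)
    e = np.zeros(num_samples)
    for n in range(M - 1, num_samples):
        u = x[n - M + 1:n + 1][::-1]          # input vector [x[n], x[n-1], x[n-2]]
        e[n] = d[n] - w @ u                   # instantaneous error
        w += mu * e[n] * u                    # LMS update
    return np.mean(e[-2000:] ** 2)            # average error power after convergence

mse_small = lms_mse(mu=0.005)   # slow convergence, small misadjustment
mse_large = lms_mse(mu=0.1)     # fast convergence, larger misadjustment
```

With the noise variance at 0.01, the small-step MSE settles near that floor, while the larger step size pays an excess-error penalty on top of it.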
Solving the Least-Squares Problem with Gradient Descent: the Least-Mean-Square Algorithm.
2.9K views · 1 year ago
The least-mean-square (LMS) algorithm is an iterative approach to finding the minimum mean-squared error filter weights based on taking steps in the direction of the negative gradient of the instantaneous error. The LMS algorithm is very simple and widely used in adaptive filtering.
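To make the iteration concrete, here is a minimal LMS loop in Python (my own sketch, not code from the video; the 3-tap system h and the white-noise input are hypothetical). Each step moves the weights along the negative gradient of the instantaneous squared error, mu * e[n] * u[n]:

```python
import numpy as np

def lms(x, d, num_taps, mu):
    """Least-mean-square adaptive filter; returns final weights and error history."""
    w = np.zeros(num_taps)
    e = np.zeros(len(x))
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]   # current input vector, newest sample first
        y = w @ u                             # filter output
        e[n] = d[n] - y                       # instantaneous error
        w += mu * e[n] * u                    # gradient step on the instantaneous squared error
    return w, e

# Identify a hypothetical 3-tap system from its noiseless input/output.
rng = np.random.default_rng(0)
h = np.array([0.5, -0.3, 0.2])
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)]
w, e = lms(x, d, num_taps=3, mu=0.05)
```

With a noiseless desired signal and a stable step size, the weights converge to the true system coefficients.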
Finding the MMSE Filter Optimum Weights
1.7K views · 1 year ago
The math of solving the MMSE problem to find the optimal weights. A linear algebra formulation rewrites the mean-squared error as a perfect square, which allows the MMSE weights to be identified by inspection without computing gradients. This is the matrix equivalent of the "completing the square" method used to find the minimum of a second-order polynomial.
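The weights identified this way are the Wiener solution w = R^{-1} p, with R the input correlation matrix and p the cross-correlation between the input vector and the desired signal. A numerical sketch (my own illustration, using a hypothetical 3-tap system) estimates R and p from data and solves for w:

```python
import numpy as np

# Hypothetical system to model, driven by white noise.
rng = np.random.default_rng(1)
h = np.array([0.5, -0.3, 0.2])
x = rng.standard_normal(10000)
d = np.convolve(x, h)[:len(x)]

M = 3                                    # filter length
# Data matrix whose rows are input vectors [x[n], x[n-1], x[n-2]].
X = np.column_stack([np.roll(x, k) for k in range(M)])[M - 1:]
dd = d[M - 1:]

R = X.T @ X / len(X)                     # sample correlation matrix
p = X.T @ dd / len(X)                    # sample cross-correlation vector
w = np.linalg.solve(R, p)                # MMSE (Wiener) weights
```

Because the desired signal here is exactly a filtered version of the input, the solved weights recover the system coefficients.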
Introduction to Minimum Mean-Squared-Error Filtering
2.3K views · 1 year ago
Introduces the basic framework for MMSE filtering and applications to system modeling, equalization, and interference suppression.
Signals- The Basics
1.4K views · 2 years ago
Introductory ideas and notation concerning signals.
Matrix Completion
6K views · 3 years ago
Network Graphs and Page Rank Algorithm
12K views · 3 years ago
Eigendecomposition, Singular Value Decomposition, and Power Iterations
4.6K views · 3 years ago
Bias-Variance Tradeoff in Low Rank Approximations
679 views · 3 years ago
Principal Component Analysis
1.5K views · 3 years ago
Singular Value Decomposition and Regularization of Least Squares Problems
3K views · 3 years ago
The Singular Value Decomposition and Least Squares Problems
4.2K views · 3 years ago
Properties of the Singular Value Decomposition
1.8K views · 3 years ago
The Singular Value Decomposition
4.3K views · 3 years ago
Clustering Data with the K Means Algorithm
1.4K views · 3 years ago
Low Rank Decompositions of Matrices
11K views · 3 years ago
Regularization and Ridge Regression for Supervised Learning
792 views · 3 years ago
Complexity, Overfitting, and Cross Validation
903 views · 3 years ago
Geometry of the Squared Error Surface
799 views · 3 years ago
Solving the Least Squares Problem Using Gradients
2.5K views · 3 years ago
Solving the Least-Squares Problem Using Geometry
2K views · 3 years ago
Approximate Solutions, Norms, and the Least-Squares Problem
2.1K views · 3 years ago
Representing Data with Bases
545 views · 3 years ago
Subspaces in Machine Learning
1.2K views · 3 years ago
Uniqueness of Solutions to Learning Problems
525 views · 3 years ago
Linear Independence and Rank in Learning
3.2K views · 3 years ago
Patterns in Data and Outer Products
654 views · 3 years ago
Classifying Data and Matrix Multiplication
529 views · 3 years ago
Fitting Models to Data and Matrix Multiplication
691 views · 3 years ago

COMMENTS

  • @jameshopkins3541
    @jameshopkins3541 2 days ago

    THERE ARE A LOT OF THESE TYPES OF USELESS VIDEOS

  • @andou_ryuu3205
    @andou_ryuu3205 3 days ago

    I wish my professor even explained 10% of this as effectively

  • @jameshopkins3541
    @jameshopkins3541 7 days ago

    If you're not going to explain or demonstrate anything, why do you make videos? To rack up views?????

  • @jameshopkins3541
    @jameshopkins3541 8 days ago

    AT LEAST USE SUBTITLES. DON'T DO IT LIKE A SCHOOLKID

  • @jameshopkins3541
    @jameshopkins3541 8 days ago

    I THINK IS UNUSEFUL TO TRY TO UNDERSTAND THIS EGIP

  • @jameshopkins3541
    @jameshopkins3541 8 days ago

    DO IT IN PDF

  • @jameshopkins3541
    @jameshopkins3541 8 days ago

    NOLIKE FOR UGLY KIDDING GRAPHICS

  • @jameshopkins3541
    @jameshopkins3541 8 days ago

    REDO IT, BUT WELL, USING BIG IMAGES

  • @jameshopkins3541
    @jameshopkins3541 8 days ago

    WHY NO EXAMPLE????????? BECAUSE IT DOESN'T WORK

  • @jameshopkins3541
    @jameshopkins3541 8 days ago

    CAN YOU EXPLAIN SOMETHING ABOUT THE ALGOFFT????????

  • @ibissantananavarro7586
    @ibissantananavarro7586 10 days ago

    A refresher on things I learned long ago. Thank you.

  • @tonyxu4310
    @tonyxu4310 16 days ago

    It's really helpful for me, thanks!

  • @r410a8
    @r410a8 26 days ago

    What is the formula for u[n] at 3:26, such that the value of a determines the value of x[n] in each case? I don't understand. And why at 4:36, when you apply the z-transform formula to x[n], do you write Sum n=-oo to +oo a^n z^-n instead of Sum n=-oo to +oo a^n u[n] z^-n, given that x[n] is a^n u[n], not just a^n? I don't understand this either.

  • @abnereliberganzahernandez6337
    @abnereliberganzahernandez6337 1 month ago

    you sck bro

  • @theoryandapplication7197
    @theoryandapplication7197 1 month ago

    thanks sir

  • @bachkhoa1975
    @bachkhoa1975 1 month ago

    This is a good overview of the FFT. It would be nice to explain how the DFT convolution sum is derived. Also, the de-interleaving of the inputs was glossed over (not explained clearly); only the reversed binary notation was mentioned (this is just an after-the-fact observation of how, not an explanation of why). Readers who dive deeper into the splitting of a larger N-point FFT into two smaller N/2-point FFTs, or who understand the relationships between the twiddle factors (and their periodic nature), would understand and retain the FFT technique better (and be able to conquer any arbitrary size of N-point FFT, N being a power of 2, of course).
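For readers following this comment, a minimal recursive radix-2 decimation-in-time FFT (my own sketch, not material from the video) makes the even/odd split and the twiddle factors explicit:

```python
import numpy as np

def fft_radix2(x):
    """Recursive radix-2 decimation-in-time FFT (len(x) must be a power of 2).

    The length-N DFT is split into the DFT of the even-indexed samples and
    the DFT of the odd-indexed samples; the two length-N/2 results are then
    combined with the twiddle factors W_N^k = exp(-2j*pi*k/N).
    """
    N = len(x)
    if N == 1:
        return np.asarray(x, dtype=complex)
    E = fft_radix2(x[0::2])                   # DFT of even-indexed samples
    O = fft_radix2(x[1::2])                   # DFT of odd-indexed samples
    k = np.arange(N // 2)
    W = np.exp(-2j * np.pi * k / N)           # twiddle factors
    return np.concatenate([E + W * O, E - W * O])   # butterfly combine

x = np.random.default_rng(2).standard_normal(8)
assert np.allclose(fft_radix2(x), np.fft.fft(x))
```

The bit-reversed input ordering of iterative FFTs is exactly what this recursion produces implicitly: repeatedly splitting into even/odd index sets sorts the inputs by the reversed binary representation of their indices.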

  • @jeevanraju8834
    @jeevanraju8834 1 month ago

    you're the best sir, thanks, it really helped for my exam :)

  • @smesui1799
    @smesui1799 1 month ago

    Excellent !

  • @VolumetricTerrain-hz7ci
    @VolumetricTerrain-hz7ci 1 month ago

    There is a little-known way to visualize subspaces, or vector spaces. You can stretch the width of the x axis, for example, in the right line of a 3d stereo image, and also get depth, as shown below. L R |____| |______| TIP: To get the 3d depth, close one eye and focus on either the left or right line, and then open it. This is because the z axis uses x to get depth. Which means that you can get double depth in the image.... 4d depth??? :O p.s. You're a good teacher!

  • @PankajSingh-dc2qp
    @PankajSingh-dc2qp 1 month ago

    PFE @6:50 is wrong. The residues are 6/5 and 4/5.

  • @PankajSingh-dc2qp
    @PankajSingh-dc2qp 1 month ago

    @ 3:50 the direction of the vector is wrong... it should be in the opposite direction

  • @PankajSingh-dc2qp
    @PankajSingh-dc2qp 1 month ago

    @ 1:11 the product in pole-zero form should start from k=1.

  • @adrenochromeaddict4232
    @adrenochromeaddict4232 1 month ago

    great video. short, straight to the point

  • @eng.ameeryaseen1602
    @eng.ameeryaseen1602 1 month ago

    Please, can you help with my question: what are the conditions for linearity of phase in FIR filters? Enhance your answer with formulas.

  • @ONoesBird
    @ONoesBird 2 months ago

    Beautiful explanation! Loved it.

  • @tuongnguyen9391
    @tuongnguyen9391 2 months ago

    Why has the link to all the signal processing videos died?

  • @kottapallisaiswaroop9849
    @kottapallisaiswaroop9849 2 months ago

    Could you please make a video on the Fast Iterative Shrinkage-Thresholding Algorithm and denoise an audio signal using it?

  • @ushamemoriya5391
    @ushamemoriya5391 2 months ago

    Consider an at-rest linear system described by y'' + 25y = 2 sin t + 5 cos 5t. The response of this system will be: decaying oscillations in time; oscillatory in time; growing oscillations in time; none of the above.

  • @user-xk5rx9xe6w
    @user-xk5rx9xe6w 2 months ago

    15:33 ♡♡

  • @user-on4yf8tj1c
    @user-on4yf8tj1c 2 months ago

    Hello, I need help clarifying two concepts. First, 4:31 makes perfect sense to me. We're just using the definition of Euler's Identity; if we wanted to re-expand back to regular Acos(x) + Aisin(x) notation, and our function didn't have an imaginary component, the second term would go to zero. I.e., I can represent any sinusoid with Euler - even if it doesn't have an imaginary component. This is nice because we can break up our sinusoid into time dependent & independent components. This makes perfect sense. Then, at 4:57 we seem to transition into something completely different. I understand the math for both, but I don't understand why I would use 4:57 over 4:31? What was wrong with just using Euler's Identity like 4:31? For example, at 7:59 I could have just as well used Euler's Identity like done at 4:31 instead of using this cos definition. Could you please help me connect these two ideas? Thank you, sir.

  • @komuna5984
    @komuna5984 3 months ago

    Thanks a lot for this awesome content!

  • @mrtoast244
    @mrtoast244 3 months ago

    This video is so underrated; it's literally the most straightforward explanation of this topic I've seen

  • @georgyurumov8095
    @georgyurumov8095 3 months ago

    Brilliant, thank you

  • @AlexAlex-fo9gt
    @AlexAlex-fo9gt 4 months ago

    4:35-5:40 In my calculation of the DFT, X[2] = -1, not 0. Is the picture of the DFT at 7:36 right?

  • @tomoliveri6251
    @tomoliveri6251 4 months ago

    I know some of these words

  • @gynxrm2237
    @gynxrm2237 4 months ago

    the best i have watched so far..

  • @paedrufernando2351
    @paedrufernando2351 4 months ago

    Will it be online? I could attend online, as I am based in India.

  • @PikaGMS
    @PikaGMS 4 months ago

    i just want to say that i love you man

  • @mariacedeno3068
    @mariacedeno3068 4 months ago

    this is great :) Thanks!!

  • @HKHasty
    @HKHasty 4 months ago

    Awesome channel! Really helping me through advanced DSP!

  • @Net_Flux
    @Net_Flux 4 months ago

    Not sure why you left out the recursive relation between the odd and even functions and the DFT. I was so confused about where the speed gain came from.

  • @jimmy21584
    @jimmy21584 5 months ago

    Came here because I’m reading the LoRA LLM paper. Thank you for the clear summary!

  • @user-wg7hu6xe1z
    @user-wg7hu6xe1z 5 months ago

    After we get the optimized w, how do we get the optimized b?

  • @user-hk7nf5gt4b
    @user-hk7nf5gt4b 5 months ago

    One advantage of the DTFT is its ability to provide greater frequency resolution, with a single dominant frequency, than the DFT for a given N. One application could be using a DSP to accurately estimate the frequency of a guitar string for tuning to the proper pitch. We wouldn't want our tuner to be limited to only 1/T. Also, a small battery-powered DSP cannot do a very large FFT for more resolution. Can you derive or simulate the resolution enhancement limits with the W(e^jw) convolution?

  • @user-hk7nf5gt4b
    @user-hk7nf5gt4b 5 months ago

    One advantage of the DTFT is that you get a continuous frequency domain. With a single dominating frequency, the peak frequency can be resolved with higher resolution than with the DFT, whose frequency samples are 1/T apart. Can you calculate or show a simulation of the limiting resolution of the DTFT over the DFT, based on your convolution with the W(e^jw) factor, given the same N?
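As a rough illustration of the commenter's point (my own sketch; the sample rate and frequency are made up): zero-padding the DFT samples the DTFT on a denser grid, so the peak of a single sinusoid can be located much more finely than the raw fs/N bin spacing. Note that zero-padding adds no new information; it only interpolates the same DTFT, so it sharpens peak location but not the ability to separate two closely spaced frequencies.

```python
import numpy as np

N = 64
fs = 1000.0                               # assumed sample rate, Hz
f0 = 123.4                                # true sinusoid frequency, Hz
n = np.arange(N)
x = np.cos(2 * np.pi * f0 * n / fs)

# Raw N-point DFT: frequency bins are fs/N = 15.625 Hz apart.
f_coarse = np.argmax(np.abs(np.fft.rfft(x))) * fs / N

# Zero-padding to 64*N points samples the DTFT densely (~0.24 Hz grid).
Npad = 64 * N
f_fine = np.argmax(np.abs(np.fft.rfft(x, Npad))) * fs / Npad
```

The coarse estimate snaps to the nearest 15.625 Hz bin, while the zero-padded estimate lands within a fraction of a hertz of the true frequency.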

  • @tuongnguyen9391
    @tuongnguyen9391 5 months ago

    What happened to the website?

  • @josecarlosribeiro3628
    @josecarlosribeiro3628 6 months ago

    Congratulations, Professor Van Veen, on your ability and beautiful presentation! Merry Christmas and Happy New Year! God bless you! Jacareí, São Paulo, Brasil

  • @ajingolk7716
    @ajingolk7716 6 months ago

    In the equation of the 3rd-order PLL filter, things called poles and zeros pop up. What are they? Thank you

  • @ZatoichiRCS
    @ZatoichiRCS 6 months ago

    This video had so much potential but glossed over so so much.

  • @RajivSambasivan
    @RajivSambasivan 6 months ago

    Awesome - thanks for putting this up.