Andrew Ng’s Google talk about unsupervised feature learning and deep learning

In the past few weeks, Pardis and I have been looking at deep belief networks (DBNs) and convolutional DBNs, and their application to learning features for audio and music classification.
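As a note to self, here is a minimal numpy sketch of the usual building block of a DBN, a restricted Boltzmann machine updated with one step of contrastive divergence (CD-1). The layer sizes, learning rate, and random “data” are just placeholders of mine, not anything from Ng’s talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes and learning rate -- placeholders, not taken from the talk.
n_visible, n_hidden, lr = 64, 32, 0.1
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b_v = np.zeros(n_visible)   # visible biases
b_h = np.zeros(n_hidden)    # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0):
    """One CD-1 step on a batch of binary visible vectors; returns parameter gradients."""
    # Up: hidden probabilities given the data, then a binary sample
    h0_prob = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)
    # Down: reconstruct the visible units from the sampled hidden units
    v1_prob = sigmoid(h0 @ W.T + b_v)
    # Up again: hidden probabilities given the reconstruction
    h1_prob = sigmoid(v1_prob @ W + b_h)
    # Positive-phase minus negative-phase statistics
    grad_W = (v0.T @ h0_prob - v1_prob.T @ h1_prob) / len(v0)
    return grad_W, (v0 - v1_prob).mean(0), (h0_prob - h1_prob).mean(0)

# Usage: one update on a batch of random binary "data"
v0 = (rng.random((16, n_visible)) < 0.5).astype(float)
dW, db_v, db_h = cd1_update(v0)
W += lr * dW
b_v += lr * db_v
b_h += lr * db_h
```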
I am still completely hazy on the details, but in this talk, Professor Ng provides an excellent overview of the power of such approaches. I think one reviewer’s comment summarizes it nicely in one line:

Andrew Ng got bored of improving one algorithm so he decided to improve all algorithms at once…

On his course website at Stanford, Ng provides some tutorials.

This approach with DBNs, letting the algorithm find the features that make sense with respect to some basic principle of economy (whether it be sparsity or energy), makes me think of the recent opinion article by Malcolm Slaney, Does Content Matter? Of course content matters, since that is how we expert humans “like” something, e.g., giving a thumbs up on YouTube. I “liked” Ng’s talk because of its content and delivery, and not because 52 other people liked it. (Who are the two people who “disliked” Ng’s talk!? How could someone not like him?) We just aren’t using the best features.
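To make the “principle of economy” bit concrete for myself: in a sparse autoencoder, the economy is a penalty that pushes each hidden unit’s average activation toward a small target value. Here is a toy numpy sketch of that penalty; the target rho and weight beta are placeholder values of mine.

```python
import numpy as np

def kl_sparsity_penalty(hidden_acts, rho=0.05, beta=3.0):
    """KL-divergence sparsity penalty: push the average activation of each
    hidden unit toward a small target rho (rho and beta are placeholders)."""
    rho_hat = hidden_acts.mean(axis=0)            # average activation per hidden unit
    rho_hat = np.clip(rho_hat, 1e-8, 1 - 1e-8)    # avoid log(0)
    kl = rho * np.log(rho / rho_hat) + (1 - rho) * np.log((1 - rho) / (1 - rho_hat))
    return beta * kl.sum()

# Usage: activations of 100 examples over 32 hidden units (random stand-in for real features)
acts = np.random.default_rng(0).random((100, 32))
print(kl_sparsity_penalty(acts))
```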

Also note the recent “resurgence” of neural networks!
