Toronto Deep Learning Demos

Yichuan Tang, Tianwei Liu, University of Toronto, Jan 16, 2015
Commentary by Stephen Downes

'Deep Learning' is a form of machine learning in which multi-layer neural networks learn progressively more abstract representations of their input; some variants can form clusters or categorizations without labelled training data - the machine learns to recognize things largely by itself. This set of demonstrations from Toronto applies descriptions and captions to images. Most of the results are quite good, though you can still fool it with specific examples, like the Taj Mahal. Deep learning is important for a couple of reasons: it demonstrates that neural networks can learn abstractions without a priori knowledge, and it yields a set of applications useful for e-learning analytics, such as resource classification for intelligent recommendation systems. The Toronto site has other resources that are equally applicable to e-learning. I've talked about Boltzmann machines in the past; Multimodal Deep Learning With Boltzmann Machines illustrates aspects of this. Also: Quantitative Structure-Activity/Property Relationship (QSAR/QSPR). And Multimodal Neural Language Models.
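To make the Boltzmann machine idea concrete: the restricted Boltzmann machine (RBM), a building block of the deep learning work mentioned above, learns hidden features from data without labels. Below is a minimal sketch of an RBM trained with one-step contrastive divergence (CD-1), using only numpy. The toy data, layer sizes, and learning rate are my own illustrative choices, not anything from the Toronto demos.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Minimal restricted Boltzmann machine trained with CD-1.
    A sketch for illustration, not a production implementation."""

    def __init__(self, n_visible, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.1, size=(n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)  # visible-unit biases
        self.b_h = np.zeros(n_hidden)   # hidden-unit biases
        self.rng = rng

    def hidden_probs(self, v):
        # P(h=1 | v) for each hidden unit
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        # P(v=1 | h) for each visible unit
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0, lr=0.1):
        # positive phase: hidden activations driven by the data
        h0 = self.hidden_probs(v0)
        # negative phase: sample hiddens, reconstruct visibles
        h_sample = (self.rng.random(h0.shape) < h0).astype(float)
        v1 = self.visible_probs(h_sample)
        h1 = self.hidden_probs(v1)
        # approximate gradient of the log-likelihood
        n = len(v0)
        self.W += lr * (v0.T @ h0 - v1.T @ h1) / n
        self.b_v += lr * (v0 - v1).mean(axis=0)
        self.b_h += lr * (h0 - h1).mean(axis=0)
        return np.mean((v0 - v1) ** 2)  # reconstruction error

# toy data: two repeated binary patterns (no labels anywhere)
data = np.array([[1, 1, 0, 0], [0, 0, 1, 1]] * 50, dtype=float)
rbm = RBM(n_visible=4, n_hidden=2)
errors = [rbm.cd1_step(data) for _ in range(500)]
```

After training, reconstruction error falls as the two hidden units come to encode the two underlying patterns, which is the sense in which the machine "learns to recognize things by itself."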

