Reservoir computing has emerged in the last decade as an alternative to gradient
descent methods for training recurrent neural networks. The Echo State Network (ESN) is one of the key reservoir computing "flavors". While practical, conceptually simple, and easy to implement, ESNs require some experience and insight to achieve the good performance they are hailed for in many tasks. Here we present...
Echo state networks (ESN) are a novel approach to recurrent neural network training. An ESN consists of a large, fixed, recurrent "reservoir" network, from which the desired output is obtained by training suitable output connection weights. Determination of optimal output weights becomes a linear, uniquely solvable task of MSE minimization. This article reviews the basic ideas and describes an...
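The linear, uniquely solvable MSE minimization described above can be sketched as ridge regression on collected reservoir states. This is a minimal illustration, not code from the article; the sizes, scaling constants, and the one-step-delay toy task are all assumptions chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, chosen only for illustration.
n_reservoir, n_inputs, n_steps = 100, 1, 500

# Fixed random reservoir, rescaled so its spectral radius is below 1
# (a common heuristic for obtaining the echo state property).
W = rng.normal(size=(n_reservoir, n_reservoir))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, size=(n_reservoir, n_inputs))

# Drive the reservoir with an input sequence and collect its states.
u = rng.uniform(-1, 1, size=(n_steps, n_inputs))
x = np.zeros(n_reservoir)
states = np.empty((n_steps, n_reservoir))
for t in range(n_steps):
    x = np.tanh(W @ x + W_in @ u[t])
    states[t] = x

# Toy target: reproduce the input delayed by one step.
y = np.roll(u[:, 0], 1)

# Only the output weights are trained, and this is plain linear
# ridge regression (MSE minimization with a small regularizer).
reg = 1e-6
W_out = np.linalg.solve(states.T @ states + reg * np.eye(n_reservoir),
                        states.T @ y)
pred = states @ W_out
```

Because the reservoir itself stays fixed, training reduces to solving one linear system, which is what makes ESN training cheap compared with gradient descent through time.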
A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions [3]. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are...
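The averaging step the abstract refers to can be sketched in a few lines, assuming a classification setting where each model outputs a probability vector; the toy fixed-table "models" below are stand-ins for trained networks, not anything from the paper.

```python
import numpy as np

def ensemble_predict(models, x):
    """Average the predicted class-probability vectors of several models."""
    probs = np.stack([m(x) for m in models])
    return probs.mean(axis=0)

# Toy stand-ins: each "model" returns a fixed probability vector.
model_a = lambda x: np.array([0.7, 0.2, 0.1])
model_b = lambda x: np.array([0.5, 0.4, 0.1])
model_c = lambda x: np.array([0.6, 0.1, 0.3])

avg = ensemble_predict([model_a, model_b, model_c], None)
# The averaged output is still a valid distribution over classes.
```

The cost concern raised above is visible even here: every prediction requires a forward pass through each member of the ensemble.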
Deep neural nets with a large number of parameters are very powerful machine learning
systems. However, overfitting is a serious problem in such networks. Large networks are also
slow to use, making it difficult to deal with overfitting by combining the predictions of many
different large neural nets at test time. Dropout is a technique for addressing this problem.
The key idea is to randomly drop...
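The random-dropping idea can be sketched as a masking function applied to a layer's activations. This uses the "inverted dropout" variant common in practice, which rescales the surviving units at training time (the original paper instead scales weights at test time); the sizes and drop rate are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def dropout(activations, p_drop, training=True):
    """Inverted dropout: zero each unit with probability p_drop and
    rescale the survivors so the expected activation is unchanged."""
    if not training or p_drop == 0.0:
        return activations
    mask = rng.random(activations.shape) >= p_drop
    return activations * mask / (1.0 - p_drop)

h = np.ones(10_000)            # a layer of activations, all 1.0
h_drop = dropout(h, p_drop=0.5)
# About half the units are zeroed; the survivors become 2.0,
# so the mean activation stays close to 1.0.
```

Each training step thus samples a different "thinned" network, which is what gives dropout its implicit ensemble-averaging effect.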
This paper proposes a Fast Region-based Convolutional Network method (Fast R-CNN) for object detection. Fast R-CNN builds on previous work to efficiently classify object proposals using deep convolutional networks. Compared to previous work, Fast R-CNN employs several innovations to improve training and testing speed while also increasing detection accuracy. Fast R-CNN trains the very deep...
Many deep neural networks trained on natural images exhibit a curious phenomenon
in common: on the first layer they learn features similar to Gabor filters
and color blobs. Such first-layer features appear not to be specific to a particular dataset or task, but general in that they are applicable to many datasets and tasks. Features must eventually transition from general to specific by the...
The KIBO robotics kit offers a playful and tangible way for young children to learn computational thinking skills by building and programming a robot. KIBO is specifically designed for children ages 4-7 years old and was developed by the DevTech research group at Tufts University through nearly a decade of research funded by the National Science Foundation. KIBO allows young children to become...
Convolutional Neural Networks (CNNs) have been established as a powerful class of models for image recognition problems. Encouraged by these results, we provide an extensive empirical evaluation of CNNs on large-scale video classification using a new dataset of 1 million YouTube videos belonging to 487 classes. We study multiple approaches for extending the connectivity of a CNN in the time domain...
In this paper, we introduce a simple but efficient greedy algorithm,
called SINCO, for the Sparse INverse COvariance selection problem, which is equivalent to learning a sparse Gaussian Markov Network, and empirically investigate the structure-recovery properties of the algorithm. Our approach is based on a coordinate ascent method which naturally preserves the sparsity of the network...
Recurrent Neural Networks are showing much promise in many sub-areas of natural language processing, ranging from document classification to machine translation to automatic question answering. Despite their promise, many recurrent models have to read the whole text word by word, making it slow to handle long documents. For example, it is difficult to use a recurrent network to read a book and...
The automated categorization (or classification) of texts into predefined categories has witnessed a booming interest in the last 10 years, due to the increased availability of documents in digital form and the ensuing need to organize them. In the research community the dominant approach to this problem is based on machine learning techniques: a general inductive process automatically builds...
Reservoir computing provides a promising approach to efficient training of recurrent neural networks, by exploiting the computational properties of the reservoir structure. Various approaches, ranging from suitable initialization to reservoir optimization by training have been proposed. In this paper we take a closer look at short-term memory capacity, introduced by Jaeger in case of echo...
Deep convolutional neural networks (CNN) have shown their promise as a universal representation for recognition. However, global CNN activations lack geometric invariance, which limits their robustness for classification and matching of highly variable scenes. To improve the invariance of CNN activations without degrading their discriminative power, this paper presents a simple but effective...
We show that small and shallow feedforward neural networks can achieve near state-of-the-art results on a range of unstructured and structured language processing tasks while being considerably cheaper in memory and computational requirements than deep recurrent models. Motivated by resource-constrained environments like mobile phones, we showcase simple techniques for obtaining such small...
Gene-expression microarrays, commonly called gene chips, make it possible to
simultaneously measure the rate at which a cell or tissue is expressing
(translating into a protein) each of its thousands of genes. One can use these
comprehensive snapshots of biological activity to infer regulatory pathways in
cells, identify novel targets for drug design, and improve the diagnosis, prognosis,...