Sparse Machine Learning

In machine learning, sparsity is a constraint used to describe the structure of a model, and a probabilistic specification of sparsity gives us a generic way to build sparse models. In the same way as in the brain, sparsity allows us to capture implicit statistical structure in many data sources. The advantage of sparsity is that we can use it to better interpret our data, perform feature selection and regularization, and reduce computational costs.

For a vector, the simplest measure of its sparsity is a count of its non-zero entries. The L0-norm embodies this notion of strong sparsity, but leads to difficult combinatorial problems. Instead, Lp-norms such as the L1-norm are used, which are continuous relaxations that reflect weak-sparse reasoning. When we minimize a loss function L(θ) using these norms as penalties, we are led to solutions that have few non-zero elements, and thus to sparse solutions.
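To make this concrete, here is a minimal sketch (the synthetic data, function names, and parameter values are my own illustrative choices, not from the text) of ISTA, a standard proximal-gradient method for the L1-penalised least-squares loss. The soft-thresholding step is what drives small coefficients exactly to zero, which is why the L1 penalty yields sparse solutions.

```python
# Sketch of ISTA for the objective
#     L(theta) = 0.5 * ||X @ theta - y||^2 + lam * ||theta||_1.
# All data below is synthetic; lam and the iteration count are arbitrary.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1: shrinks entries toward zero."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(X, y, lam, n_iters=500):
    n, d = X.shape
    step = 1.0 / np.linalg.norm(X, ord=2) ** 2  # 1 / Lipschitz constant of the gradient
    theta = np.zeros(d)
    for _ in range(n_iters):
        grad = X.T @ (X @ theta - y)            # gradient of the smooth least-squares term
        theta = soft_threshold(theta - step * grad, step * lam)
    return theta

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
true_theta = np.zeros(20)
true_theta[[2, 7, 15]] = [3.0, -2.0, 1.5]       # only 3 non-zero entries
y = X @ true_theta + 0.1 * rng.normal(size=100)

theta_hat = ista(X, y, lam=5.0)
print("non-zero coefficients:", np.flatnonzero(theta_hat))
```

On this synthetic data the recovered support should match the three planted non-zero entries, illustrating how the L1 penalty performs feature selection.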

To derive the Laplace distribution, an Exponential mixing density is used. This mixture construction gives us a more concrete characterization of sparsity-promoting distributions, and the same idea can be used to form distributions that allow for strong sparsity. A machine learning solution to a problem requires three ingredients: a model, a learning objective and an algorithm. We explored a generic model that introduces sparse reasoning by incorporating sparsity-promoting distributions and Lévy processes. We have not yet explored the algorithms that solve our learning objectives.
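As a sketch of this scale-mixture construction (the scale parameter and sample size below are arbitrary illustrative choices): if V is Exponential with mean 2b² and X | V is Gaussian with variance V, then X is marginally Laplace with scale b, whose sharp peak at zero and heavy tails are what promote sparsity. The check below compares sample moments against the known Laplace moments E|X| = b and Var(X) = 2b².

```python
# Gaussian scale mixture with an Exponential mixing density:
# V ~ Exponential(mean = 2*b**2), X | V ~ N(0, V)  =>  X ~ Laplace(0, b).
# b and n are illustrative choices, not values from the text.
import numpy as np

rng = np.random.default_rng(0)
b = 1.0                                        # Laplace scale parameter
n = 200_000

v = rng.exponential(scale=2 * b**2, size=n)    # Exponential mixing density over variances
x = rng.normal(loc=0.0, scale=np.sqrt(v))      # Gaussian draw given each variance

# For Laplace(0, b): E|X| = b and Var(X) = 2*b**2.
print("E|X|  (expect ~1.0):", np.mean(np.abs(x)))
print("Var(X) (expect ~2.0):", np.var(x))
```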
