Kernel Methods
Traditional machine learning algorithms were typically developed for linear cases, but problems involving real-world data usually require a non-linear approach. Kernel methods utilize the "kernel trick" to transform the data into a high-dimensional feature space where the desired task is simpler to perform, without explicitly computing the high-dimensional feature transform. A range of different machine learning methods are underpinned by a kernel approach, with the most famous example being the Support Vector Machine.
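The kernel trick described above can be illustrated with a small, self-contained sketch. The degree-2 polynomial kernel and its explicit feature map below are standard textbook examples, chosen for illustration only and not tied to any particular method of the group: evaluating the kernel directly in input space gives the same value as an inner product in the (higher-dimensional) explicit feature space.

```python
import math

def poly_kernel(x, y):
    # Degree-2 polynomial kernel: k(x, y) = (x . y + 1)^2,
    # evaluated directly in input space -- the "kernel trick".
    dot = sum(a * b for a, b in zip(x, y))
    return (dot + 1) ** 2

def feature_map(x):
    # Explicit 6-dimensional feature map for 2-D inputs whose
    # inner product reproduces the degree-2 polynomial kernel.
    x1, x2 = x
    r2 = math.sqrt(2)
    return [x1 * x1, x2 * x2, r2 * x1 * x2, r2 * x1, r2 * x2, 1.0]

x, y = (1.0, 2.0), (3.0, 0.5)
k_direct = poly_kernel(x, y)
k_explicit = sum(a * b for a, b in zip(feature_map(x), feature_map(y)))
print(k_direct, k_explicit)  # both equal 25.0
```

For higher-degree kernels, or for the Gaussian (RBF) kernel whose feature space is infinite-dimensional, the explicit map becomes intractable while the direct kernel evaluation stays cheap, which is precisely why algorithms such as the Support Vector Machine work with kernel evaluations instead of explicit feature vectors.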
The popularity of kernel methods stems from several desirable properties. Most algorithms based on kernel methods can be trained using convex optimization, which guarantees that a local minimum is also a global minimum. Furthermore, kernel methods have a solid statistical foundation and produce models that are interpretable for the user. Our group has for several years had a strong focus on innovating kernel machines, often in a synergistic fashion with information theoretic learning. The UiT Machine Learning Group developed methods such as kernel entropy component analysis and the information cut. In recent years, the group has investigated the concept of probabilistic cluster kernels.
Highlighted Publications
- Robert Jenssen
- Jonas N. Myhre, Michael Kampffmeyer and Robert Jenssen