Visualisation of non-linear decision boundaries: why interpretable models?

We developed several extensions to the angular variants of LVQ introduced in our ESANN 2017 contribution. These extensions make it possible to extract knowledge not only about the relevance of individual features but also about the 'inter-relevances' between pairs of features. Another of the introduced variants makes the visualisation of non-linear decision boundaries straightforward. We tested the newly developed classifiers on a synthetic dataset we created using the Chebfun functions from GitHub, and on the publicly available heart disease dataset from the UCI repository.
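
To make the relevance/inter-relevance idea concrete, below is a minimal, hypothetical sketch (not the authors' implementation) of a GMLVQ-style relevance matrix Λ = Ωᵀ Ω used inside an angular, cosine-type dissimilarity: the diagonal of Λ acts as per-feature relevance, while the off-diagonal entries capture pairwise inter-relevances. The feature names are illustrative placeholders from the UCI heart disease data.

```python
import numpy as np

def angular_dissimilarity(x, w, omega):
    """Cosine-type dissimilarity between sample x and prototype w,
    measured in the metric induced by Lambda = omega.T @ omega."""
    lam = omega.T @ omega
    num = x @ lam @ w
    den = np.sqrt((x @ lam @ x) * (w @ lam @ w))
    return 1.0 - num / den  # 0 when x and w are parallel in the learned metric

def relevance_report(omega, feature_names):
    """Read feature relevances (diagonal) and pairwise 'inter-relevances'
    (off-diagonal entries) from the learned relevance matrix."""
    lam = omega.T @ omega
    lam = lam / np.trace(lam)          # common normalisation: trace(Lambda) = 1
    relevances = np.diag(lam)
    inter = lam - np.diag(relevances)  # off-diagonal part
    return dict(zip(feature_names, relevances)), inter

# Toy usage with a hypothetical learned Omega and three example features.
rng = np.random.default_rng(0)
omega = rng.normal(size=(3, 3))
x, w = rng.normal(size=3), rng.normal(size=3)
print(angular_dissimilarity(x, w, omega))
print(relevance_report(omega, ["age", "chol", "thalach"]))
```

This is only one plausible way to parameterise an adaptive angular dissimilarity; the actual variants in the paper may differ in how Ω is constrained and trained.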

[Figure: decision boundaries learned on the UCI heart disease (HD) dataset]

The figure above shows the non-linear decision boundaries obtained when we trained our classifier on the UCI heart disease dataset, treated as a five-class problem (1 healthy and 4 heart disease classes). Further details about it can be found here.
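
Such decision-region plots are commonly produced by evaluating the trained classifier on a dense grid over a 2-D projection of the feature space and colouring each grid cell by the predicted class. The sketch below uses a generic Euclidean nearest-prototype predictor purely as a stand-in (the paper's angular dissimilarity is what yields the curved, non-linear regions); it is not how the figure above was necessarily generated.

```python
import numpy as np
import matplotlib.pyplot as plt

def predict(points, prototypes, labels):
    """Assign each 2-D point the label of its nearest prototype (stand-in
    for the actual angular-LVQ classifier)."""
    d = np.linalg.norm(points[:, None, :] - prototypes[None, :, :], axis=2)
    return labels[np.argmin(d, axis=1)]

# Hypothetical 2-D prototypes for five classes (healthy + 4 disease grades).
rng = np.random.default_rng(1)
prototypes = rng.normal(size=(5, 2))
labels = np.arange(5)

# Dense grid over the projected feature space.
xs, ys = np.meshgrid(np.linspace(-3, 3, 300), np.linspace(-3, 3, 300))
grid = np.column_stack([xs.ravel(), ys.ravel()])
zz = predict(grid, prototypes, labels).reshape(xs.shape)

# Colour each cell by predicted class and overlay the prototypes.
plt.contourf(xs, ys, zz, levels=np.arange(6) - 0.5, cmap="viridis", alpha=0.4)
plt.scatter(prototypes[:, 0], prototypes[:, 1], c=labels, edgecolors="k")
plt.title("Decision regions of a nearest-prototype stand-in classifier")
plt.show()
```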

This contribution has been accepted for presentation at, and publication in the proceedings of, the International Joint Conference on Neural Networks (IJCNN) 2020.