[Figure 2: typical structure of a machine learning application]
Posted: Sun Jan 26, 2025 3:56 am
The second use case relates to credit scoring and, more generally, to liquidity prediction. Here, an AI predicts the risk of whether a person with certain characteristics (such as job, salary, or age) will be able to service a loan of a given amount. Explainability matters here to prevent violations of Sections 1 and 2 of the German General Equal Treatment Act (AGG), i.e. to prevent discrimination: only by understanding how a decision is made can one rule out that people with a certain characteristic, such as old age, are systematically disadvantaged.
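To make the idea of an explainable credit decision concrete, here is a minimal sketch with hypothetical weights (not a real scoring model): a linear risk score whose per-feature contributions can be read off directly. This is exactly the kind of transparency that explainability methods try to recover for more complex models.

```python
# Hypothetical feature weights for illustration only.
WEIGHTS = {"salary_keur": -0.04, "age_years": 0.01, "loan_keur": 0.02}
BIAS = 0.5

def risk_score(applicant):
    """Return (total_score, per-feature contributions)."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return BIAS + sum(contributions.values()), contributions

score, parts = risk_score({"salary_keur": 45, "age_years": 30, "loan_keur": 10})
# Each entry in `parts` shows how much a single feature pushed the score
# up or down, so a disadvantage driven by age alone would be visible at once.
```

In a real system the weights would be learned from data, but the principle is the same: if every feature's contribution to the decision is inspectable, discrimination against a protected characteristic can be detected and prevented.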
The third use case is predictive policing, an AI-driven approach to forecasting crime. Police forces use it to identify crime hotspots and/or to estimate the likelihood that certain individuals will commit a crime. One company specializing in this field is Palantir, which collaborates with the police in North Rhine-Westphalia and Bavaria, among others; software of this kind is therefore already in use in Germany. Since the models behind such predictions often originate in the USA, where racial profiling and discrimination are serious problems, it is particularly important to understand why a model makes a given prediction.
There are many other examples of why Explainable AI is so important for the practical use of machine and deep learning algorithms; a detailed discussion, however, would go beyond the scope of this article.
Most of the machine and deep learning methods in use are so-called black-box models: it is unclear what exactly happens inside them, so their decisions are hard to trace. Examples of black-box models include neural networks (deep learning), support vector machines (SVMs), and ensemble classifiers. Their counterpart are the so-called white-box models, whose decisions are easier to understand because they rely on simpler methods or have far fewer parameters. Examples of white-box models include rule-based classifiers and decision trees.
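The contrast can be illustrated with a minimal white-box, rule-based classifier (hypothetical thresholds chosen for this sketch): every decision can be traced back to the exact rule that fired, unlike a neural network whose learned weights do not map to human-readable reasons.

```python
# A tiny rule-based credit classifier: the decision carries its own explanation.
def classify(applicant):
    """Return (decision, reason) so every outcome is self-explaining."""
    if applicant["salary_keur"] < 20:
        return "reject", "salary below 20k threshold"
    if applicant["loan_keur"] > 5 * applicant["salary_keur"]:
        return "reject", "loan exceeds 5x annual salary"
    return "accept", "all rules passed"

decision, reason = classify({"salary_keur": 40, "loan_keur": 30})
```

A black-box model may well be more accurate than such hand-written rules, which is precisely why explainability research aims to attach comparable, rule-like justifications to the predictions of neural networks and ensembles after the fact.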