
Rapid Artificial Intelligence-Based Detection of Aggressive or Radical Content on the Web

RAIDAR conducts thorough research into methods and approaches for the quantitative survey and evaluation of online content that may endanger democracy, foster hatred, or fuel radicalization. To that end, RAIDAR develops a data science platform for the partially automated analysis of large amounts of data from different sources and researches various approaches for the automated classification of content that, from a criminal-law perspective, can be attributed to hate speech and radicalization. RAIDAR is innovative in that it defines key figures, measurement parameters, and methods for the quantitative and qualitative evaluation of hate speech and radicalization on the web.
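The idea of partially automated flagging can be illustrated with a deliberately simplified sketch. The function, lexicon terms, and threshold below are hypothetical placeholders for illustration only; RAIDAR researches AI-based classification methods, not simple keyword matching:

```python
# Illustrative sketch only: a toy keyword-lexicon flagger that marks
# texts for human review. All names and thresholds are hypothetical;
# RAIDAR's actual research concerns AI-based classifiers.

def flag_for_review(text, lexicon, threshold=2):
    """Count lexicon hits in `text`; flag the text if hits reach `threshold`."""
    tokens = text.lower().split()
    hits = sum(1 for token in tokens if token in lexicon)
    return hits >= threshold

# Placeholder lexicon; a real system would draw on a curated taxonomy.
LEXICON = {"placeholder1", "placeholder2", "placeholder3"}
```

In a partially automated workflow of this kind, flagged items would be ranked and passed to human reviewers rather than acted on automatically.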

Donor
Objectives
  • Create a data science tool/platform for the partially automated analysis of large amounts of data - development of the prototypical data science platform RAIDAR for the quantitative and qualitative analysis of large volumes of digital data, to support the judiciary and to enable large-scale studies.

  • Versatile data and source analysis - enabling the evaluation of large amounts of data from different sources (online, external data carriers, etc.) and thus ensuring a wide range of applications.

  • Develop methods for the automated classification of content with regard to the relevant legal regulations - research and development of artificial intelligence methods in the legal field as well as in the areas of "hate on the web" and "radicalization".

  • Establish a German-language taxonomy on "hate on the web" and "radicalization" - development of a taxonomy in German, on the basis of which the data science tool can record and evaluate hate-speech-related or extremist content, and which can serve as a basis for other studies, research, or projects.

  • Ethical and legal evaluation - theoretical and empirical discussion of ethical and legal questions raised by the technologies used in RAIDAR (e.g. the automated collection of data), reflecting on ethical limits in the context of artificial intelligence and analysing the legal framework.

  • Quantitative study - apply the RAIDAR platform in a quantitative study of temporally and contextually relevant content in the areas of "hate on the web" and "radicalization".
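As a rough illustration of how a hierarchical taxonomy, such as the German-language taxonomy described in the objectives, might be represented in a tool, the following sketch uses nested categories. The category names are hypothetical placeholders, not RAIDAR's actual taxonomy:

```python
# Illustrative sketch: a hierarchical content taxonomy as nested categories.
# The category names below are hypothetical placeholders.

TAXONOMY = {
    "hate_speech": {
        "incitement": {},
        "insult": {},
    },
    "radicalization": {
        "propaganda": {},
        "recruitment": {},
    },
}

def paths(tree, prefix=()):
    """Yield every category path, e.g. ('hate_speech', 'insult')."""
    for name, children in tree.items():
        path = prefix + (name,)
        yield path
        yield from paths(children, path)
```

A classification tool could then tag each piece of content with one or more such category paths, which also makes quantitative evaluation per category straightforward.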

Impact

The project outputs will reduce the workload of the Austrian Federal Ministry of Justice through partially automated, artificial-intelligence-based assistance systems in the legal field. They will provide a concrete technology assessment of ethical limits and legal frameworks for automated data collection in the context of artificial intelligence. Moreover, the RAIDAR platform will be applied in a quantitative study of temporally and contextually relevant content in the areas of "hate speech" and "radicalization".

Lead Partner
Status

Ongoing