Towards a Conceptual Modeling Method for Artificial Neural Networks

Type of Thesis:
  • Master's thesis in Business Informatics (Wirtschaftsinformatik)


Artificial neural networks (ANNs) denote a popular class of models used within machine learning. An ANN typically consists of multiple layers of simple processing units, so-called artificial neurons. Since most current ANNs involve many such layers, the term deep learning is sometimes applied to describe them. Historically, they emerged from a neurophysiological inspiration to express the processing of mammalian neurons in mathematical terms (cf. McCulloch and Pitts 1943). There exists a plethora of different approaches to the design of ANNs; variations include the number of artificial neurons in a layer, the activation function applied, or the connection of artificial neurons between layers. From these variations, several classes of ANN architectures have emerged, such as Multi-Layer Perceptrons (MLPs), Generative Adversarial Networks (GANs), Convolutional Neural Networks (CNNs), or Recurrent Neural Networks (RNNs). It is conspicuous that many papers which discuss a particular ANN architecture represent it in some diagrammatic form. These diagrammatic representations, however, do not follow any unified structure. This results in two challenges: First, ANNs are not visually comparable through an analysis of their diagrammatic representations. Second, the depicted diagrams of ANNs might lack relevant information that was overlooked by the original researchers. In short: the depiction of ANNs appears to lack a conceptual modeling language.
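The layered structure described above, where each artificial neuron computes a weighted sum of its inputs and applies an activation function, can be sketched in a few lines. The following is a minimal, illustrative forward pass only; the 2-3-1 layer widths and the tanh activation are assumptions for the example, not prescribed by the text.

```python
import math
import random

random.seed(0)

def layer(inputs, weights, biases, activation):
    """One fully connected layer: each neuron takes a weighted sum
    of all inputs, adds a bias, and applies the activation function."""
    return [activation(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Hypothetical 2-3-1 MLP: 2 inputs, 3 hidden neurons, 1 output neuron.
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
b1 = [0.0] * 3
w2 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(1)]
b2 = [0.0]

hidden = layer([0.5, -0.2], w1, b1, math.tanh)
output = layer(hidden, w2, b2, math.tanh)
print(output)
```

The architectural variations mentioned above map directly onto the parameters of this sketch: the widths of the weight matrices (neurons per layer), the `activation` argument, and which entries of the weight matrices are non-zero (the connection pattern between layers).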

The present thesis should address this gap. To this end, it is relevant to expound on the foundations and variations of ANNs as well as to explore the fundamentals of conceptual modeling languages. Based on an analysis of the design, evaluation, and application of ANNs, requirements for a corresponding modeling method should be derived. These insights should then be used to specify a conceptual modeling method for ANNs.

Introductory Literature:

  • Aggarwal CC (2018) Neural Networks and Deep Learning: A Textbook. Springer International Publishing: Cham
  • Du K-L, Swamy MNS (2014) Neural Networks and Statistical Learning. Springer-Verlag: London
  • Frank U (2013) Domain-Specific Modeling Languages – Requirements Analysis and Design Guidelines. In: Reinhartz-Berger I, Sturm A, Clark T, Wand Y, Cohen S, Bettin J (eds.) Domain Engineering: Product Lines, Conceptual Models, and Languages. Springer: Cham, pp. 133-157
  • Kelleher JD (2019) Deep Learning. The MIT Press: Cambridge, MA, London
  • McCulloch WS, Pitts W (1943) A Logical Calculus of the Ideas Immanent in Nervous Activity. Bulletin of Mathematical Biophysics 5:115-133