Student Theses, Projects, and Internships

Fulda HPC Research Group

Overview

I supervise Bachelor’s and Master’s theses, M.Sc. research projects, and B.Sc. internships (Praxisprojekt) in the areas of Parallel Programming, High-Performance Computing (HPC), and Supercomputing at Fulda University of Applied Sciences.

My research focuses on making supercomputers more efficient and easier to program. Theses in my group are typically closely connected to active research projects, providing students with the opportunity to work on real-world systems and contribute to scientific publications.

Research areas I supervise in:

  • Parallel Programming Models
  • Asynchronous Many-Task (AMT) Programming
  • Dynamic (Adaptive/Malleable) Resource Management
  • Fault Tolerance (Resiliency)
  • Job Scheduling
  • AI-assisted Vibe Coding
  • Heterogeneous Computing Resources
  • Parallel I/O

If none of the listed topics match your interests, but you have an idea in a related area, feel free to reach out — I am happy to discuss custom proposals. Please get in touch early.


Topics

The following are initial topic proposals intended to define a research direction. These are starting points; the specific scope, research questions, and methodology will be developed collaboratively during our initial meetings. Students are encouraged to contribute their own ideas and refinements.

Parallel Programming Models

MPI vs. AMT: A Comparative Performance Study

Systematically compare MPI-based and AMT-based implementations of representative HPC benchmarks with respect to performance, scalability, and programmability. We are also particularly interested in exploring Julia as an alternative parallel programming system.

Asynchronous Many-Task (AMT) Programming

Load Balancing Heuristics for Irregular Workloads

Investigate and implement task migration strategies for AMT runtimes dealing with dynamic and irregular workloads. The focus is on developing heuristics that reduce overhead while maintaining good load balance.
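
To give a flavor of the kind of heuristic involved, here is a minimal, illustrative sketch (all names are hypothetical, not from any particular AMT runtime) of a threshold-based work-stealing policy: an idle worker steals half of the most loaded worker's queue, which amortizes migration overhead over several tasks.

```python
from collections import deque

class Worker:
    """Toy worker with a local task queue (tasks are just cost units)."""
    def __init__(self, wid, tasks):
        self.wid = wid
        self.queue = deque(tasks)

def steal_half(workers, thief):
    """Steal-half heuristic: the thief takes half of the most loaded
    worker's queue; stealing in bulk amortizes migration overhead."""
    victim = max(workers, key=lambda w: len(w.queue))
    if victim is thief or len(victim.queue) < 2:
        return 0  # nothing worth stealing
    n = len(victim.queue) // 2
    for _ in range(n):
        thief.queue.append(victim.queue.pop())  # take from the tail
    return n

# Irregular initial distribution: one overloaded worker, two idle ones.
workers = [Worker(0, [1] * 8), Worker(1, []), Worker(2, [])]
stolen = steal_half(workers, workers[1])
print(stolen, [len(w.queue) for w in workers])  # -> 4 [4, 4, 0]
```

A thesis in this area would replace the toy cost model with real task profiles and measure the trade-off between balance and migration overhead.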

Profiling and Visualization of AMT Execution

Develop tools to trace and visualize task execution and data movement in AMT runtimes. The goal is to help developers understand runtime behavior and identify performance bottlenecks.

Dynamic (Adaptive/Malleable) Resource Management

Malleable Job Scheduling Strategies for HPC Clusters

Design and evaluate scheduling algorithms that dynamically resize jobs at runtime. The goal is to improve overall system utilization by allowing the resource manager to shrink or expand running jobs based on demand from the job queue.
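
As a minimal sketch of such a policy (purely illustrative; field names and thresholds are assumptions, not a real resource-manager API): shrink the largest malleable job when jobs are waiting and no nodes are free, and expand malleable jobs when nodes sit idle and the queue is empty.

```python
def resize_decisions(free_nodes, queued_jobs, running_jobs):
    """Toy malleability policy:
    - Jobs waiting and no free nodes: shrink the largest malleable
      job down to its minimum to make room.
    - Idle nodes and empty queue: expand malleable jobs toward their
      maximum to raise utilization.
    Jobs are dicts with 'id', 'nodes', 'min', 'max', 'malleable'."""
    decisions = []
    malleable = [j for j in running_jobs if j["malleable"]]
    if queued_jobs and free_nodes == 0 and malleable:
        victim = max(malleable, key=lambda j: j["nodes"])
        release = victim["nodes"] - victim["min"]
        if release > 0:
            decisions.append(("shrink", victim["id"], release))
    elif not queued_jobs and free_nodes > 0:
        for job in malleable:
            grow = min(free_nodes, job["max"] - job["nodes"])
            if grow > 0:
                decisions.append(("expand", job["id"], grow))
                free_nodes -= grow
    return decisions

running = [
    {"id": "A", "nodes": 8, "min": 4, "max": 16, "malleable": True},
    {"id": "B", "nodes": 4, "min": 4, "max": 4, "malleable": False},
]
print(resize_decisions(free_nodes=0, queued_jobs=["C"], running_jobs=running))
# -> [('shrink', 'A', 4)]
```

The interesting research questions start where this sketch stops: when is shrinking worth the reconfiguration cost, and which job should be resized first?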

Performance Modeling for Adaptive Applications

Develop performance models that predict how applications respond to node additions or removals. These models will guide the resource manager in making better decisions about when and how to resize jobs.

Fault Tolerance (Resiliency)

Fault Tolerance in AMT Systems

Explore checkpointing and recovery mechanisms for task-based runtimes on large-scale clusters. The goal is to enable transparent recovery from node failures without losing significant computation progress.

Job Scheduling

Simulation Framework for HPC Job Schedulers

Build a discrete-event simulation environment to evaluate scheduling policies using real-world supercomputer logs. The framework should support both rigid and adaptive job models and will build on ElastiSim, an existing open-source HPC job-scheduler simulator.

Fairness-aware Scheduling for Mixed Rigid/Adaptive Workloads

Design and evaluate scheduling policies that incentivize adaptive jobs while guaranteeing fairness for rigid jobs in a shared HPC cluster.

AI-assisted Vibe Coding

LLM-driven Generation of Parallel Programs

Evaluate the capability of large language models to generate correct and efficient parallel code (MPI, OpenMP, task-based) for HPC problems. Assess correctness, performance, and developer productivity.

Heterogeneous Computing Resources

Task Scheduling Across CPUs and GPUs in AMT Systems

Extend an existing AMT runtime to schedule tasks on heterogeneous hardware (CPU + GPU), implementing placement heuristics that minimize data movement and maximize throughput.

Parallel I/O

I/O-aware Task Scheduling for Data-intensive HPC Applications

Investigate how I/O access patterns of tasks can be used to guide scheduling decisions in AMT systems, reducing I/O bottlenecks on shared storage systems.
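
A minimal sketch of one such locality heuristic (hypothetical names and data structures, not a real runtime API): run each task on the node that already holds the largest share of its input data, so that less data has to cross the shared file system.

```python
def place_tasks(tasks, node_data):
    """Locality-aware placement sketch: pick, for each task, the node
    that already caches the most bytes of that task's input files.
    tasks: {task: {file: bytes_read}}; node_data: {node: set of files}."""
    placement = {}
    for task, reads in tasks.items():
        def local_bytes(node):
            return sum(b for f, b in reads.items() if f in node_data[node])
        placement[task] = max(node_data, key=local_bytes)
    return placement

node_data = {"n1": {"a.h5"}, "n2": {"b.h5", "c.h5"}}
tasks = {
    "t1": {"a.h5": 100, "b.h5": 10},  # mostly reads a.h5 -> n1
    "t2": {"b.h5": 50, "c.h5": 50},   # all data on n2   -> n2
}
print(place_tasks(tasks, node_data))  # -> {'t1': 'n1', 't2': 'n2'}
```

In practice, such a heuristic must be balanced against compute-load considerations; quantifying that trade-off on a real AMT system is the interesting part of the topic.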


Theses (B.Sc. & M.Sc.)

Formal Requirements

Language: English or German
Length: B.Sc.: typically 40–60 pages  ·  M.Sc.: typically 60–100 pages
Format: LaTeX required  ·  Use the official department template (FBAI)
Code: Version-controlled (Git) and submitted alongside the thesis

Expectations

  • Independence: You are expected to work independently and proactively. I provide guidance and feedback, but the thesis is your work.
  • Regular meetings: We meet regularly to discuss progress, problems, and next steps.
  • Scientific writing: You are expected to read and cite relevant related work. Prefer primary sources: conference papers (ACM, IEEE, USENIX), journals, and technical reports.
  • Reproducibility: Experiments must be documented and reproducible. Raw data and scripts must be included in the submission.
  • AI tools: The use of AI writing tools (e.g., ChatGPT) must be transparently declared in accordance with the department’s current guidelines.

Process

1  ·  Application
Get in touch early. Send an email to jonas.posner@cs.hs-fulda.de with:
  • Thesis type (B.Sc./M.Sc.)
  • Preferred start date
  • Your experience in parallel programming
  • Which research area from my list interests you
  • A concrete initial idea for a topic within that area
Before applying, please review the completed theses listed below to get an idea of the scope and quality expectations.
2  ·  Initial Meeting

We meet to discuss your background, the topic, and expectations on both sides. If we agree to proceed, we define a concrete scope and research question.

3  ·  Exposé

You write a short exposé (2 pages) covering: problem statement and motivation, objectives and research questions, planned methodology, and a rough timeline. This serves as the basis for the official registration.

4  ·  Official Registration
You register the thesis with the examination office with confirmation from the first examiner (Prof. Dr. Posner) and a second examiner. The official processing time begins from the registration date: B.Sc.: 3 months  ·  M.Sc.: 6 months.
Registration is typically possible towards the end of the lecture period: late January (winter semester, deadline: March 31) and late June (summer semester, deadline: September 30).
5  ·  Thesis Work

You carry out the research, implementation, and evaluation. We meet regularly to track progress; you are expected to bring the topics you would like to discuss to each meeting.

6  ·  Draft Review

You submit a full draft at least three weeks before the deadline. I provide written feedback on structure, content, and language.

7  ·  Submission

You must submit the final thesis to the SSC by the official deadline. All code and artefacts must be submitted alongside the thesis.

8  ·  Presentation & Defense
You present your work in a colloquium. The date is agreed upon after submission.
B.Sc.: ~20 min presentation + ~15 min Q&A
M.Sc.: ~30 min presentation + ~20 min Q&A
The presentation should cover motivation, approach, results, and critical reflection.

Research Projects

Type: M.Sc. module
Duration: 1 semester

The research project is a supervised research module in the Master's programme. It can be carried out individually or by a group of students.

Topics come from the same research areas as theses. If you are interested, reach out by email with your topic idea, a brief statement of your background, and your preferred semester. The same contact and application process as for theses applies.


Internship Report

Type: B.Sc. module (Praxisprojekt)
Guidelines: Internship guidelines from Prof. Dr. Rieger (PDF)

Key points:

  • The practical phase takes place in industry, at a company or a research institution.
  • The university assigns a supervising professor for the written report.
  • The internship report is submitted after the practical phase ends.
  • Assessment is based on both the report and the employment reference.

Note on supervision: I supervise internship reports if the topic in industry aligns with my research interests (Parallel Programming, HPC, Supercomputing, etc.). Please get in touch before your internship starts to confirm supervision. The application process is the same as for theses.


Completed Theses

The following theses were completed during my time as a researcher at the University of Kassel (2018–2025). I supervised and/or reviewed these works.

2026

  • Simulationsbasierte Analyse des Energieverbrauchs von HPC-Systemen mit elastischen Jobs und Knotenabschaltung — Mike Karabet (B.Sc., 2026) [PDF]

2025

  • Evaluierung LLM-generierten parallelen Codes in der Programmiersprache Chapel — Nils Schintze (B.Sc., 2025) [PDF]
  • Entwicklung und Evaluation eines task-basierten Laufzeitsystems mit dynamischer Ressourcenverwaltung — Tim Ellersiek (M.Sc., 2025) [PDF]
  • Evaluation of the multithreaded runtime system Itoyori using Task Bench — Torben Lahnor (M.Sc., 2025) [PDF]
  • Simulation and Evaluation of Evolving Workloads — Kapil Karki (B.Sc., 2025)
  • Simulating Malleable Job Scheduling Algorithms using Real-World Supercomputer Trace Logs — Patrick Zojer (B.Sc., 2025) [PDF]
  • Elastisches Ressourcenmanagement: Vergleich von Asynchronous Many-Task (AMT) und Dynamic Processes with PSets (DPP) — Nick Bietendorf (B.Sc., 2025) [PDF]
  • Evaluation von Gemini-generierten End-to-End und Unit Tests für Webanwendungen — Marius Tews (B.Sc., 2025)

2024

  • Entwurf und Entwicklung einer Emulation von Materialflüssen auf virtuellen Fertigungsanlagen — Larson Schneider (B.Sc., 2024)
  • Ressourcenelastizität für das task-basierte parallele Programmiersystem APGAS — Raoul W. Goebel (B.Sc., 2024) [PDF]

2023

  • TasGPI: A Global Load Balancing Framework for C++ — Adrian Steinitz (M.Sc., 2023) [PDF]
  • Prototypische Entwicklung eines Schedulers für Malleable-Jobs — Janek Bürger (B.Sc., 2023) [PDF]
  • Weiterentwicklung und Evaluation von Scheduling-Algorithmen für elastische Jobs im High-Performance-Computing — Fabian Hupfeld (B.Sc., 2023) [PDF]
  • Benchmarking von Virtuellen Threads in Java 19 — Marco Spöth (B.Sc., 2023) [PDF]

2022

  • Performanceevaluierung des Java-Parallelisierungs Frameworks APGAS mit dem Benchmark-System Task Bench — Torben R. Lahnor (B.Sc., 2022) [PDF]

2020

  • Entwicklung einer Netzwerkschicht für ein Java-basiertes Programmiersystem aus dem Bereich des Hochleistungsrechnens — Steve Hildebrandt (M.Sc., 2020) [PDF]
  • Dynamisches Hinzufügen und Entfernen von Places innerhalb der Global Load-Balancing Runtime von APGAS — Jonas Scherbaum (M.Sc., 2020) [PDF]
  • Protokollierung und Visualisierung des Laufzeitverhaltens einer Taskpool-Implementierung — Jan Bingemann (B.Sc., 2020) [PDF]

2019

  • Vergleich zwischen APGAS und Akka — Lukas Ried (B.Sc., 2019)
  • Design and Evaluation of a Work Stealing-Based Fault Tolerance Scheme for Task Pools — Mia Reitz (M.Sc., 2019)
  • APGAS und Charm++ im Vergleich — Aron Bollmann (B.Sc., 2019)
  • Isolierung von APGAS Benchmarks unter Anwendung von Containern in einer HPC-Umgebung — Fabian Wurmbach (B.Sc., 2019) [PDF]
  • Nutzung von Fibers als Ersatz für Threads im Laufzeitsystem der APGAS-Bibliothek für Java — Matthias Hartmann (M.Sc., 2019)

2018

  • An Asynchronous Backup Scheme Tracking Work-Stealing for Reduction-Based Task Pools — Mia Reitz (B.Sc., 2018)
  • Benchmarks für das taskbasierte parallele Programmiersystem APGAS — Steve Hildebrandt (B.Sc., 2018)