Fakultät Informatik

Scientific Talks

Emulation of Network Behavior for Scalability Tests of IP-based Audio/Video Communication Systems

Doctoral thesis defense of Dipl.-Inf. Robert Lübke (Institut für Systemarchitektur, Lehrstuhl Rechnernetze)

1 July 2015, 11:00, APB 1004 (Ratssaal)

Testing audio/video communication systems is challenging for several reasons. In practical use, real systems can reach an enormous scale: the example of a video conferencing system for up to 1,000 participants within a single session shows that scalability studies are clearly necessary. Reproducing and testing such large scenarios, however, requires a very high coordination effort. Another problem is the ever-increasing heterogeneity of user endpoints: besides PCs, users join with notebooks, smartphones, and tablets running different operating systems. User mobility must not be neglected during testing either. Furthermore, these endpoints connect to the Internet via different network access technologies such as DSL, fiber-optic networks, and cellular networks. These technologies exhibit very different network characteristics, which ultimately affect the application under test. Reproducing such scenarios in a repeatable way requires a faithful emulation of the behavior of real networks, e.g. packet delay, packet loss, and packet reordering. The solution proposed in this thesis is a test platform for IP-based audio/video communication systems that realistically emulates network behavior. It aims to reduce the coordination effort for the tester, so that even very large scalability tests can be carried out quickly and easily.
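
To make the emulated network characteristics concrete, here is a minimal, self-contained Python sketch of a link that applies delay, jitter-induced reordering, and probabilistic loss to packets. It is an illustration only, not the platform described in the thesis; the class, names, and parameter values (EmulatedLink, delay_ms, loss_rate) are invented:

```python
import heapq
import random

class EmulatedLink:
    """Toy network emulator: applies delay, loss, and reordering to packets.

    All parameters are illustrative defaults, not values from the thesis.
    """

    def __init__(self, delay_ms=100.0, jitter_ms=20.0, loss_rate=0.01):
        self.delay_ms = delay_ms
        self.jitter_ms = jitter_ms
        self.loss_rate = loss_rate
        self._queue = []  # min-heap ordered by scheduled delivery time

    def send(self, now_ms, packet):
        # Drop the packet with probability loss_rate.
        if random.random() < self.loss_rate:
            return
        # Random per-packet jitter causes reordering: a later packet may
        # draw a smaller delay and overtake an earlier one.
        delay = self.delay_ms + random.uniform(-self.jitter_ms, self.jitter_ms)
        heapq.heappush(self._queue, (now_ms + delay, packet))

    def deliver_until(self, now_ms):
        """Return all packets whose scheduled delivery time has passed."""
        out = []
        while self._queue and self._queue[0][0] <= now_ms:
            out.append(heapq.heappop(self._queue)[1])
        return out


if __name__ == "__main__":
    link = EmulatedLink()
    for i in range(10):
        link.send(now_ms=i * 10.0, packet=f"pkt-{i}")
    print(link.deliver_until(now_ms=1000.0))  # possibly reordered, some lost
```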


Routing Taking the Trustworthiness of Paths into Account

Presentation of the Bachelor's thesis by Christoph Hofmann (Institut für Systemarchitektur, Datenschutz und Datensicherheit, and Institut für Technische Informatik, Eingebettete Systeme)

2 July 2015, 15:00, APB 3080

For any data transmission between nodes of a communication network that are not direct neighbors, a path must first be found. Various routing approaches exist for this task, including adaptive approaches that evaluate information about the available paths when selecting one. This thesis investigates finding a path while taking possible attacks in the communication network into account. A two-dimensional torus is assumed as the topology. It is further assumed that the reception of packets is confirmed by acknowledgments and that nodes can detect tampering with incoming packets and acknowledgments, for example through the use of digital signatures.

The thesis discusses how attacks can be detected and how this information can be described. In particular, it examines how nodes can locally assess the trustworthiness of paths and make decisions regarding path selection. The identified approaches are to be implemented in practice and their efficiency evaluated.
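
One conceivable way to combine the assumptions above, not prescribed by the thesis, is to derive a per-link trust value from acknowledged, untampered transmissions and then select the path that maximizes the product of link trusts. The following Python sketch does this with Dijkstra's algorithm on -log(trust) edge weights; the function names and the trust model are hypothetical:

```python
import heapq
import math

def torus_neighbors(node, n):
    """4-neighborhood of (x, y) on an n x n 2D torus (with wrap-around)."""
    x, y = node
    return [((x + 1) % n, y), ((x - 1) % n, y),
            (x, (y + 1) % n), (x, (y - 1) % n)]

def most_trusted_path(src, dst, n, trust):
    """Dijkstra on -log(trust): maximizes the product of per-link trust.

    `trust` maps a directed link (u, v) to a locally estimated value in
    (0, 1], e.g. the fraction of correctly acknowledged packets on that
    link. Unobserved links default to 1.0 here; a real deployment would
    likely choose a more pessimistic default.
    """
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, math.inf):
            continue  # stale heap entry
        for v in torus_neighbors(u, n):
            w = -math.log(trust.get((u, v), 1.0))
            if d + w < dist.get(v, math.inf):
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(heap, (d + w, v))
    # Reconstruct the path from dst back to src.
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))
```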


Concepts for In-memory Event Tracing: Runtime Event Reduction with Hierarchical Memory Buffers

Doctoral thesis defense of Michael Wagner (Institut für Technische Informatik, Professur Rechnerarchitektur)

3 July 2015, 14:15, APB 1004 (Ratssaal)

High Performance Computing (HPC) systems are becoming more and more powerful, but also more and more complex. Supportive environments, such as performance analysis tools, are essential to assist developers in utilizing the computing resources of such complex systems. For long-running and large-scale parallel applications, event-based performance analysis faces three as yet unsolved challenges: the number of resulting trace files limits scalability, the huge amount of collected data overwhelms file system and analysis capabilities, and measurement bias, in particular due to intermediate memory buffer flushes, prevents a correct and meaningful analysis. This thesis proposes concepts for an in-memory event tracing workflow to meet these challenges. These concepts include new enhanced encoding techniques to increase memory efficiency, novel strategies for runtime event reduction to dynamically adapt trace size at runtime, and the Hierarchical Memory Buffer data structure, which incorporates a multi-dimensional, hierarchical ordering of events by common metrics such as time stamp, calling context, event class, and function call duration. These concepts allow a trace size reduction of up to three orders of magnitude and can keep an entire measurement within a single fixed-size memory buffer, while still providing a coarse but meaningful analysis of the application.
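
As a rough illustration of the buffer idea, not of the actual Hierarchical Memory Buffer design, the following Python sketch keeps a measurement inside a fixed-size buffer by discarding whole low-priority event buckets when it runs full. All names and the reduction policy are invented:

```python
from collections import defaultdict

class HierarchicalBuffer:
    """Toy fixed-size trace buffer with runtime event reduction.

    Events are bucketed by a coarse metric (here: event class); when the
    buffer is full, the lowest-priority bucket is discarded as a whole so
    that the measurement always fits in memory. Illustrative only.
    """

    def __init__(self, capacity, priority):
        self.capacity = capacity          # max number of stored events
        self.priority = priority          # event class -> importance (higher = kept longer)
        self.buckets = defaultdict(list)  # event class -> recorded events
        self.size = 0

    def record(self, event_class, event):
        while self.size >= self.capacity and self.buckets:
            self._reduce()
        self.buckets[event_class].append(event)
        self.size += 1

    def _reduce(self):
        # Drop the entire least-important bucket, coarsening the trace
        # instead of flushing to disk (which would bias the measurement).
        victim = min(self.buckets, key=lambda c: self.priority.get(c, 0))
        self.size -= len(self.buckets.pop(victim))
```

In the thesis the hierarchy spans several metrics at once (time stamp, calling context, event class, function call duration), whereas this sketch reduces along a single dimension.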


Generic Quality-Aware Refactoring and Co-Refactoring in Heterogeneous Models

Doctoral thesis defense of Jan Reimann

9 July 2015, 11:00, APB 1004 (Ratssaal)

Software has always been subject to change, for instance to make parts of it more reusable, easier for humans to understand, or more efficient from a certain point of view. Restructuring existing software can be complex. To spare developers from doing this manually, tools exist that apply such restructurings automatically. This automated changing of existing software to improve its quality while preserving its behaviour is called refactoring. Refactoring is well investigated for programming languages, and mature tools exist for executing refactorings in integrated development environments (IDEs).

In recent years, the development paradigm of Model-Driven Software Development (MDSD) has become more and more popular, and we experience a shift in the sense that development artefacts are considered as models which conform to metamodels. This abstraction has led to a plethora of new, so-called model-based Domain-Specific Languages (DSLs). DSLs have become an integral part of MDSD, and models are obviously subject to change as well. Thus, refactoring support is required for DSLs in order to spare users from refactoring manually. The problem is that the number of DSLs is huge, and refactorings should not be implemented anew for each of them, since they are quite similar from an abstract point of view. Existing approaches abstract from the target language, which is not flexible enough: some assumptions about the languages have to be made, and arbitrary DSLs are not supported. Furthermore, the relation between a strategy that finds model deficiencies to be improved, a resolving refactoring, and the improved quality is only implicit. Focussing on a particular quality and detecting only those deficiencies that deteriorate this quality is difficult, and elements of detected deficient structures cannot be referred to in the resolving refactoring. In addition, heterogeneous models in an IDE might be connected physically or logically and are thus dependent. Finding such connections is difficult and can hardly be achieved manually. A restructuring in one model implied by a refactoring in a dependent model must itself be a refactoring in order to preserve the meaning. Such dependent refactorings therefore require an appropriate abstraction mechanism, since they must be specified for dependent models of different DSLs.

The first contribution, Role-Based Generic Model Refactoring, uses role models to abstract from refactorings instead of from the target languages. The structures participating in a refactoring can thus be specified generically by means of role models. As a consequence, arbitrary model-based DSLs are supported, since this approach makes no assumptions regarding the target languages. Our second contribution, Role-Based Quality Smells, is a conceptual framework that correlates deficiencies, the qualities they deteriorate, and resolving refactorings. Roles are used to abstract from the structures causing a deficiency, which are then subject to resolving refactorings. The third contribution, Role-Based Co-Refactoring, employs the graph-logic isomorphism to detect dependencies between models. Dependent refactorings, which we call co-refactorings, are specified on the basis of roles in order to be independent of particular target DSLs.

All introduced concepts are implemented in our tool Refactory. An evaluation in different scenarios complements the thesis. It shows that role models are very powerful with regard to the reuse of generic refactorings in arbitrary languages. Role models are suited as an interface for the structures which are to be refactored, scanned for deficiencies, or co-refactored. All of the presented approaches benefit from this.
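
The central idea of the first contribution, specifying a refactoring against roles instead of a concrete metamodel, can be sketched as follows. The code is a hypothetical illustration and does not reproduce Refactory's API; class and attribute names are invented:

```python
class RoleBinding:
    """Binds the abstract roles of a generic refactoring to a concrete
    metamodel. Purely illustrative; this is not Refactory's API."""

    def __init__(self, container_class, children_attr):
        self.container_class = container_class  # class playing the 'Container' role
        self.children_attr = children_attr      # feature playing the 'Elements' role

def move_elements(binding, source, target, predicate):
    """A generic 'move elements' refactoring written only against roles:
    move every contained element satisfying `predicate` from one
    container to another, regardless of the underlying DSL."""
    assert isinstance(source, binding.container_class)
    assert isinstance(target, binding.container_class)
    children = getattr(source, binding.children_attr)
    moved = [c for c in children if predicate(c)]
    for c in moved:
        children.remove(c)
        getattr(target, binding.children_attr).append(c)
    return moved

# The same generic refactoring is reused for two unrelated DSLs simply by
# providing different role bindings (UmlClass and Form are placeholders):
# uml_binding = RoleBinding(UmlClass, "features")
# forms_binding = RoleBinding(Form, "fields")
```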


