[ Software Research Lunch ]


Program for the winter quarter of 2018.


1/11: Organizational Lunch

Time: Thursday, January 11, 2018, 12 noon - 1pm
Location: Gates 415

Organizational lunch. Come sign up to give a talk during the quarter.

Food: Stefan


1/18: Bonsai: Synthesis-Based Reasoning for Type Systems

Time: Thursday, January 18, 2018, 12 noon - 1pm
Location: Gates 415

Speaker: Kartik Chandra

Abstract: We describe algorithms for symbolic reasoning about executable models of type systems, supporting three queries intended for designers of type systems. First, we check for type soundness bugs and synthesize a counterexample program if such a bug is found. Second, we compare two versions of a type system, synthesizing a program accepted by one but rejected by the other. Third, we minimize the size of synthesized counterexample programs.
These algorithms symbolically evaluate typecheckers and interpreters, producing formulas that characterize the set of programs that fail or succeed in the typechecker and the interpreter. However, symbolically evaluating interpreters poses efficiency challenges, which are caused by having to merge execution paths of the various possible input programs. Our main contribution is the Bonsai tree, a novel symbolic representation of programs and program states which addresses these challenges. Bonsai trees encode complex syntactic information in terms of logical constraints, enabling more efficient merging.
We implement these algorithms in the BONSAI tool, an assistant for type system designers. We perform case studies on how BONSAI helps test and explore a variety of type systems. BONSAI efficiently synthesizes counterexamples for soundness bugs that have been inaccessible to automatic tools, and is the first automated tool to find a counterexample for the recently discovered Scala soundness bug SI-9633.
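As a rough, brute-force analogue of the counterexample search the abstract describes (Bonsai does this symbolically; the toy language and its deliberate if-branch bug below are illustrative, not Bonsai's model): enumerate small programs, keep those the typechecker accepts, and flag any whose runtime value disagrees with the static type.

```python
# Toy soundness search: a program is a counterexample if it typechecks
# but evaluates to a value of a different type.
from itertools import product

def typecheck(e):
    """Return 'int'/'bool', or None if ill-typed. Deliberate bug: only the
    then-branch of an 'if' determines its type, so branches may disagree."""
    tag = e[0]
    if tag == 'lit':
        return 'int' if isinstance(e[1], int) and not isinstance(e[1], bool) else 'bool'
    if tag == 'if':
        _, c, t, f = e
        if typecheck(c) != 'bool':
            return None
        if typecheck(t) is None or typecheck(f) is None:
            return None
        return typecheck(t)          # BUG: ignores the else-branch's type
    return None

def interp(e):
    tag = e[0]
    if tag == 'lit':
        return e[1]
    if tag == 'if':
        _, c, t, f = e
        return interp(t) if interp(c) else interp(f)

def rt_type(v):
    return 'bool' if isinstance(v, bool) else 'int'

# Enumerate programs of depth <= 2 and look for a soundness counterexample.
leaves = [('lit', 0), ('lit', 1), ('lit', True), ('lit', False)]
progs = list(leaves)
progs += [('if', c, t, f) for c, t, f in product(leaves, leaves, leaves)]

counterexamples = [p for p in progs
                   if typecheck(p) is not None and rt_type(interp(p)) != typecheck(p)]
print(counterexamples[0])   # ('if', ('lit', False), ('lit', 0), ('lit', True))
```

The found program typechecks as int (from the then-branch) but evaluates to a bool; Bonsai's contribution is making this kind of search scale via symbolic evaluation and the Bonsai tree representation rather than explicit enumeration.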

Food: Todd


1/25: Improving Stochastic Search for Program Optimization

Time: Thursday, January 25, 2018, 12 noon - 1pm
Location: Gates 415

Speaker: Stefan Heule

Abstract: Writing optimizing compilers remains challenging, as modern CPU architectures are incredibly complex and make it difficult to statically determine the performance of a program. Recently, stochastic superoptimization has been proposed to randomly search the space of programs, guided by a cost function that estimates the performance of a proposed program during the search. Previous work on superoptimization has used an instruction-latency-based cost function, which fails to capture many important performance nuances. Instead, we propose a new cost function that runs the program on a test input several times, measuring its actual execution time. We address several technical challenges in implementing this apparently simple idea. We find that the new cost function outperforms the latency-based estimate on all metrics, sometimes by a wide margin. Perhaps surprisingly, we also show that for some benchmarks, the poorer latency estimate is still able to find programs almost as fast as the ones found by our more sophisticated cost function.
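A minimal sketch of the two kinds of cost functions the abstract contrasts. The toy program representation, latency table, and Metropolis-style acceptance rule are illustrative assumptions, not STOKE's actual implementation; the point is that one cost is a static sum of latency estimates while the other times real executions and keeps the minimum to damp measurement noise.

```python
import math
import random
import time

LATENCY = {'add': 1, 'mul': 3, 'div': 20}     # assumed per-opcode latency estimates

def run(program, x):
    """Interpret a toy straight-line program: a list of (opcode, constant)."""
    for op, k in program:
        x = {'add': x + k, 'mul': x * k, 'div': x // max(k, 1)}[op]
    return x

def latency_cost(program):
    """Baseline estimate: sum of per-instruction latencies."""
    return sum(LATENCY[op] for op, _ in program)

def realtime_cost(program, x, reps=5):
    """Proposed estimate: time the program on a test input several times and
    keep the minimum observation, which suppresses scheduler/cache noise."""
    best = float('inf')
    for _ in range(reps):
        t0 = time.perf_counter()
        run(program, x)
        best = min(best, time.perf_counter() - t0)
    return best

def accept(old_cost, new_cost, beta=10.0):
    """Metropolis-style acceptance: always take improvements, occasionally
    take regressions so the search can escape local minima."""
    return new_cost <= old_cost or random.random() < math.exp(-beta * (new_cost - old_cost))
```

Either cost function can guide the same search loop; the talk's finding is that the timing-based one ranks candidates more faithfully, at the price of the engineering needed to make raw timings stable.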

Food: Ben


2/1: Fine-grain Analysis of STOKE Synthesis

Time: Thursday, February 1, 2018, 12 noon - 1pm
Location: Gates 415

Speaker: Jason Koenig

Abstract: When STOKE is used to synthesize programs from scratch, the search sometimes becomes stuck in places that require two or more simultaneous moves to make progress. To avoid these long pauses, we added a new move type that rewrites a subgraph of the dataflow graph of the current program based on a trigger pattern, producing a large family of context-sensitive moves. We find that some programs synthesize significantly faster, whereas others substantially slow down. To determine which out of thousands of possible individual moves are responsible for these speedups and slowdowns, we introduce the idea of expected time to completion, assigning blame to moves that change this measure. This gives us a fine-grained look at the effect of moves on the search space. With some refinement to reduce the huge computational cost, we find this identifies a small set of moves for each program that are responsible for the differences with the baseline. This information suggests that the balance condition in the theoretical basis for STOKE's search algorithm is violated by certain moves, and that disabling just these moves speeds up some programs while leaving the others unaffected (rather than slowing them down).
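Expected time to completion can be phrased as a hitting-time computation: model the search as a Markov chain over states, with "program synthesized" as an absorbing goal state, and solve E[i] = 1 + sum_j P[i][j] E[j] with E[goal] = 0. The tiny three-state chain below is an invented example, not STOKE's actual state space.

```python
# Hypothetical 3-state search: state 2 is "program synthesized" (absorbing).
# P[i][j] = probability a proposed move takes the search from state i to j.
P = {
    0: {0: 0.5, 1: 0.5},
    1: {0: 0.5, 1: 0.4, 2: 0.1},
}
GOAL = 2

def expected_time_to_completion(P, goal, iters=2000):
    """Fixed-point iteration on E[i] = 1 + sum_j P[i][j] * E[j], E[goal] = 0.
    Converges because the goal state is absorbing and reachable."""
    E = {i: 0.0 for i in P}
    E[goal] = 0.0
    for _ in range(iters):
        E = {i: 1.0 + sum(p * E[j] for j, p in P[i].items()) for i in P} | {goal: 0.0}
    return E

E = expected_time_to_completion(P, GOAL)
print(E[0], E[1])   # expected moves to completion: about 22 and 20
```

Blame assignment then follows the abstract's recipe: disable a candidate move, recompute the transition probabilities and the expected times, and attribute the change in expected time to completion to that move.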

Food: Karthik


2/8: Dynamic Tracing: Just-In-Time Specialization of Task Graphs for Dynamic Task-based Parallel Runtimes

Time: Thursday, February 8, 2018, 12 noon - 1pm
Location: Gates 415

Speaker: Wonchan Lee

Abstract: Many recent programming systems for both high performance computing and data center workloads generate task graphs to express computations that run on parallel and distributed machines. Due to the overhead associated with constructing these graphs, the dependence analysis that generates them is often memoized and the resulting graph replayed. However, the complexity of the hardware and the programming systems can make it difficult to correctly reason about the generation, capture, and re-execution of task graphs. In this work, we introduce dynamic tracing, a formalism for describing how to safely and efficiently capture a trace of a dynamic dependence analysis that generates a task graph, and soundly replay it. We show that an implementation of dynamic tracing embedded in a modern task-based programming system allows already optimized applications to strong scale by an additional 4.9X, and on average 3.8X, at 256 nodes.
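The memoize-and-replay idea in miniature: the dependence analysis walks a task sequence once, records the edges it derives from read/write sets, and on later occurrences of the same sequence replays the recorded graph instead of re-analyzing. The task and trace formats here are invented for illustration; dynamic tracing's contribution is additionally proving that such replay is sound.

```python
_trace_cache = {}

def analyze(tasks):
    """O(n^2) dependence analysis: task j depends on an earlier task i if j
    reads or writes anything i wrote, or writes anything i read
    (true, output, and anti dependences)."""
    edges = []
    for j, (_, rj, wj) in enumerate(tasks):
        for i, (_, ri, wi) in enumerate(tasks[:j]):
            if wi & (rj | wj) or wj & ri:
                edges.append((i, j))
    return edges

def get_task_graph(tasks):
    """Key the cache on the sequence's shape (names + access sets), so a
    repeated launch replays the captured trace instead of re-analyzing."""
    key = tuple((name, frozenset(r), frozenset(w)) for name, r, w in tasks)
    if key not in _trace_cache:
        _trace_cache[key] = analyze(tasks)
    return _trace_cache[key]

tasks = [('init', set(), {'A'}),
         ('stencil', {'A'}, {'B'}),
         ('reduce', {'B'}, {'s'})]
print(get_task_graph(tasks))   # [(0, 1), (1, 2)]
```

In an iterative application the same task sequence is launched every timestep, so after the first capture the quadratic analysis cost disappears from the critical path, which is where the strong-scaling improvement comes from.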

Food: James


2/15: Schlumberger Internship: Parallelizing an ILU0 solver using Legion

Time: Thursday, February 15, 2018, 12 noon - 1pm
Location: Gates 415

Speaker: Todd Warszawski

Abstract: In this talk I will discuss the project I worked on during my summer internship at Schlumberger. The project consisted of two parts: integrating Legion into an existing system and parallelizing the most expensive part of the computation in that system. First, I will talk about integrating Legion into Schlumberger's reservoir simulator and discuss key optimizations that were necessary to achieve good performance in this mixed environment. Next, I will introduce the ILU0 solve, the most expensive part of the computation, and the benefits and drawbacks of the existing parallelization scheme. This solve is run many times on matrices with different values but the same structure, allowing an efficient parallelization strategy to be reused throughout the computation. Finally, I will present my approach to parallelizing the solver: partitioning the work to minimize the critical path through the computation graph while creating tasks with enough work to hide the analysis overhead.
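The structural fact the abstract exploits is that the solve's task graph depends only on the matrix's sparsity pattern, which is fixed across runs, so a schedule computed once can be reused for every set of values. Below is a standard level-scheduling sketch for a sparse lower-triangular solve (an assumed textbook approach for illustration, not Schlumberger's code): row i can run as soon as every earlier row it depends on has finished.

```python
def level_schedule(rows):
    """rows[i] = column indices of off-diagonal nonzeros in row i (all j < i).
    Returns a list of levels; rows within a level can run in parallel, and
    levels run one after another along the critical path."""
    level = {}
    for i, deps in enumerate(rows):
        level[i] = 1 + max((level[j] for j in deps), default=-1)
    out = [[] for _ in range(max(level.values()) + 1)]
    for i, l in level.items():
        out[l].append(i)
    return out

# Sparsity pattern of a small lower-triangular factor (diagonal implicit):
pattern = [[], [0], [], [1, 2], [2]]
print(level_schedule(pattern))   # [[0, 2], [1, 4], [3]]
```

The number of levels is the critical path of the solve; grouping rows within a level into coarser tasks is one way to get tasks with enough work to hide runtime analysis overhead, as the abstract describes.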

Food: Berkeley


2/22: A High Performance Cloud Framework Control Plane

Time: Thursday, February 22, 2018, 12 noon - 1pm
Location: Gates 415

Speaker: Hang Qu

Abstract: Existing cloud computing control planes do not scale to more than a few hundred cores, while frameworks without control planes scale but take seconds to reschedule a job. We propose an asynchronous control plane for cloud computing systems, in which a central controller can dynamically reschedule jobs but worker nodes never block on communication with the controller. By decoupling control plane traffic from program control flow in this way, an asynchronous control plane can scale to run millions of computations per second while being able to reschedule computations within milliseconds.
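A sketch of the decoupling the abstract proposes, with all details assumed for illustration: the controller pushes reschedule messages into a queue, and the worker drains that queue with a non-blocking read on each iteration, so it keeps executing its current plan and never stalls waiting for the controller.

```python
import queue

class Worker:
    def __init__(self, plan):
        self.plan = list(plan)            # tasks currently assigned to this worker
        self.control = queue.Queue()      # asynchronous channel from the controller
        self.done = []

    def step(self):
        # Drain any pending control messages without blocking.
        while True:
            try:
                msg = self.control.get_nowait()
            except queue.Empty:
                break
            if msg[0] == 'migrate':       # controller moved a task elsewhere
                self.plan.remove(msg[1])
        # Keep executing regardless of whether the controller said anything.
        if self.plan:
            self.done.append(self.plan.pop(0))

w = Worker(['t1', 't2', 't3'])
w.step()                                  # runs t1; no control traffic yet
w.control.put(('migrate', 't3'))          # controller reschedules t3 away
w.step()                                  # runs t2; t3 has been removed
print(w.done, w.plan)                     # ['t1', 't2'] []
```

The worker's progress never depends on a round trip to the controller, which is the property that lets control-plane traffic scale independently of program control flow.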

Food: Cristian


3/1: Extending Data-Driven Equivalence Checking

Time: Thursday, March 1, 2018, 12 noon - 1pm
Location: Gates 415

Speaker: Berkeley Churchill

Abstract: Equivalence checking -- the problem of formally verifying that two functions are equivalent -- has applications to compiler verification, regression verification and superoptimization. Data-driven equivalence checkers utilize test cases to learn conjectures about the behavior of two functions and then prove those conjectures with an SMT solver. However, previous data-driven equivalence checking algorithms have been limited to pairs of functions whose loops execute for the same number of iterations; thus, these algorithms have been unable to verify compiler optimizations such as loop unrolling, loop unpeeling and vectorization. This talk is on work-in-progress research to extend data-driven equivalence checking techniques to a broader set of problems.
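The data-driven recipe in miniature, with the final proof step omitted: run both functions on test cases, log the loop-head states, fit a candidate linear relation between the two loop counters from two observations, and validate it against all remaining observations. A real checker would then hand the conjecture to an SMT solver; the functions below (a summation loop and its unrolled-by-2 twin, one of the optimizations the abstract targets) are illustrative.

```python
def trace_sum(n):                       # original: one element per iteration
    s, states = 0, []
    i = 0
    while i < n:
        states.append((i, s))           # loop-head state: (counter, accumulator)
        s += i
        i += 1
    return s, states

def trace_sum_unrolled(n):              # unrolled by 2 (n assumed even)
    s, states = 0, []
    i = 0
    while i < n:
        states.append((i, s))
        s += i + (i + 1)
        i += 2
    return s, states

def conjecture_counter_relation(tests):
    """Fit i_orig = a * i_unrolled + b from two aligned loop-head samples,
    then check the conjecture on every aligned sample from every test."""
    pairs = []
    for n in tests:
        _, st1 = trace_sum(n)
        _, st2 = trace_sum_unrolled(n)
        # Each unrolled iteration corresponds to every second original head.
        pairs += [(u[0], o[0]) for u, o in zip(st2, st1[::2])]
    (x0, y0), (x1, y1) = pairs[0], pairs[1]
    a = (y1 - y0) / (x1 - x0)
    b = y0 - a * x0
    assert all(abs(a * x + b - y) < 1e-9 for x, y in pairs)
    return a, b

print(conjecture_counter_relation([4, 6, 8]))   # (1.0, 0.0): counters align
```

The hard part the talk addresses is exactly what this sketch sidesteps: when the loops run for different numbers of iterations, choosing which states to align (here, every second head) is itself part of what must be inferred and verified.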

Food: Andres


3/8: Leveraging Big Code to Improve Software Reliability

Time: Thursday, March 8, 2018, 12 noon - 1pm
Location: Gates 415

Speaker: Baishakhi Ray

Abstract: Software bugs cost the US economy millions of dollars. Improving software reliability has been one of the primary concerns of Software Engineering (SE) research for decades. Researchers have developed different techniques, e.g., new languages, automatic bug-finding tools, and code review processes, to reduce software defects. However, the adoption of these methods in the real world is still limited, partly because most of them require a significant amount of manual work from developers and have a steep learning curve. To automate the bug-finding process, in this talk I will discuss how we can leverage the large number of open source projects collected in software forges like GitHub. Thanks to this rich source of Software Engineering data now available to researchers, we can learn from common coding mistakes and how to fix them. We can then leverage such data-driven knowledge to build new bug-finding and bug-fixing tools to improve software reliability.

Food: Makai


3/15: p4pktgen: Automated Test Case Generation for P4 Programs

Time: Thursday, March 15, 2018, 12 noon - 1pm
Location: Gates 415

Speaker: Andres Nötzli

Abstract: With the rise of programmable network switches, network infrastructure is becoming more flexible and more capable than ever before. Programming languages such as P4 lower the barrier for changing the inner workings of network switches and offer a uniform experience across different devices. However, this programmability also brings the risk of introducing hard-to-catch bugs at a level that was previously covered by well-tested devices with a fixed set of capabilities. Subtle discrepancies between different implementations pose a risk of introducing bugs at a layer that is opaque to the user. To reap the benefit of programmable hardware and keep---or improve upon---the reliability of traditional approaches, new tools are needed. This talk is about p4pktgen, a tool for automatically generating test cases for P4 programs using symbolic execution. These test cases can be used to validate that P4 programs act as intended on a device.
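A miniature of what p4pktgen automates: walk every path through a branching parser, collect that path's constraints on the header fields, and solve for a concrete packet that drives execution down the path. p4pktgen uses an SMT solver over real P4 programs; this sketch brute-forces a toy two-field header as a stand-in.

```python
from itertools import product

# Toy parser as a decision tree: (field, pivot, low_subtree, high_subtree).
# Leaves are the action the switch would take.
PARSER = ('proto', 6,
          ('drop',),
          ('ttl', 1, ('drop',), ('forward',)))

def paths(node, constraints=()):
    """Yield (leaf_action, path_constraints) for every root-to-leaf path."""
    if len(node) == 1:
        yield node[0], constraints
        return
    field, pivot, lo, hi = node
    yield from paths(lo, constraints + ((field, '<', pivot),))
    yield from paths(hi, constraints + ((field, '>=', pivot),))

def solve(constraints, domain=range(0, 16)):
    """Find field values satisfying all constraints (a brute-force stand-in
    for the SMT solver)."""
    for proto, ttl in product(domain, domain):
        fields = {'proto': proto, 'ttl': ttl}
        if all((fields[f] < p) if op == '<' else (fields[f] >= p)
               for f, op, p in constraints):
            return fields
    return None

for action, cs in paths(PARSER):
    print(action, solve(cs))
# drop    {'proto': 0, 'ttl': 0}
# drop    {'proto': 6, 'ttl': 0}
# forward {'proto': 6, 'ttl': 1}
```

Each solved assignment is a test packet: feeding it to the device and checking that the observed action matches the path's leaf is how the generated cases validate that the P4 program behaves as intended.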

Food: Stefan