[ Software Research Lunch ]


Program for the spring quarter of 2017.


4/7: Organizational Lunch

Time: Friday, April 7, 2017, 12 noon - 1pm
Location: Gates 463a

Organizational lunch. Come sign up to give a talk during the quarter.

Food: Stefan


4/14: Developing Bug-Free Machine Learning Systems With Formal Mathematics

Time: Friday, April 14, 2017, 12 noon - 1pm
Location: Gates 463a

Speaker: Daniel Selsam

Abstract: Noisy data, non-convex objectives, model misspecification, and numerical instability can all cause undesired behaviors in machine learning systems. As a result, detecting actual implementation errors can be extremely difficult. We demonstrate a methodology based on an interactive proof assistant, in which one writes an implementation of a machine learning system along with a formal theorem stating that the implementation is correct. The process of proving this theorem interactively in the proof assistant exposes all implementation errors since any error in the program would cause the proof to fail. As a case study, we implement a new system, Certigrad, for optimizing over stochastic computation graphs and generate a machine-checkable proof that the gradients sampled by the system are unbiased estimates of the true mathematical gradients. We train a variational autoencoder using Certigrad and find that it is nearly as efficient as TensorFlow.
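
The formal claim here is, roughly, that the expected value of the gradient sampled by the system equals the true mathematical gradient. As a purely informal, numerical illustration of that property (not the formal development described in the talk, and with made-up names), the sketch below checks a score-function gradient estimator against a closed-form gradient:

# Informal numerical check of the property Certigrad proves formally: the
# sampled (score-function) gradient is an unbiased estimate of the true
# gradient.  Toy objective: E_{x ~ N(theta, 1)}[x^2], whose exact gradient
# with respect to theta is 2*theta.  All names here are hypothetical.
import random

def score_function_grad(theta, n_samples=200_000):
    """Monte Carlo estimate of d/dtheta E[x^2] via f(x) * d/dtheta log p(x; theta)."""
    total = 0.0
    for _ in range(n_samples):
        x = random.gauss(theta, 1.0)      # sample x ~ N(theta, 1)
        total += (x ** 2) * (x - theta)   # f(x) * score, since dlog p/dtheta = x - theta
    return total / n_samples

theta = 1.5
print("sampled gradient ~", score_function_grad(theta))  # close to 3.0
print("true gradient    =", 2 * theta)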

Food: Wonyeol


4/21: Cracking Multi-Language Transformations

Time: Friday, April 21, 2017, 12 noon - 1pm
Location: Gates 463a

Speaker: James Koppel

Abstract: Programming languages have many similarities, and so, when writing a source-to-source transformation on one language, it would be nice to reuse code from a similar transformation for a different language. This is a fundamentally difficult problem, and previous attempts have either resorted to reimplementing the same transformation for many languages, or at best reduced multiple languages to a common intermediate representation, which necessarily destroys information and produces poor source-to-source results.
We present a new representation for programs called *incremental parametric syntax*, and show how it enables us to construct source-to-source transformations that are implemented once and run on each of C, Java, JavaScript, Lua, and Python. Instead of constructing a common representation for all languages, incremental parametric syntax allows us to construct a family of representations that share common components, each specialized to a single language, and to semi-automatically generate them from existing syntax definitions. Our evaluation shows that (1) once a transformation is written, relatively little work is required to configure it for a new language, (2) transformations built this way output readable code that preserves the structure of the original, according to participants in our human study, and (3) despite dealing with many languages, our transformations can still handle language corner cases, and pass 90% of compiler test suites.
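
As a toy illustration of the underlying idea (one transformation written against shared syntactic components, with language-specific nodes it never needs to inspect), here is a hypothetical Python sketch; the actual system is far more general, and every name below is made up:

# Toy sketch of "shared components, per-language specialization": a single
# pass inspects only the shared Ident component and treats each language's
# own nodes generically.  Greatly simplified and hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    children: List["Node"] = field(default_factory=list)

@dataclass
class Ident(Node):            # shared component: identifiers look the same everywhere
    name: str = ""

@dataclass
class PyLambda(Node):         # Python-specific node, opaque to generic passes
    pass

@dataclass
class JsArrowFunction(Node):  # JavaScript-specific node, also opaque
    pass

def rename(node: Node, old: str, new: str) -> None:
    """One transformation that runs unchanged on every language's AST."""
    if isinstance(node, Ident) and node.name == old:
        node.name = new
    for child in node.children:
        rename(child, old, new)

py_ast = PyLambda(children=[Ident(name="x"), Ident(name="y")])
js_ast = JsArrowFunction(children=[Ident(name="x")])
for ast in (py_ast, js_ast):
    rename(ast, "x", "tmp")   # same pass, two "languages"
print(py_ast)
print(js_ast)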

Food: Andres


4/28: SpaceSearch: A Library for Building and Verifying Solver-Aided Tools

Time: Friday, April 28, 2017, 12 noon - 1pm
Location: Gates 463a

Speaker: Stefan Heule

Abstract: Many verification tools build on automated solvers. These tools reduce problems in a specific application domain (e.g., compiler optimization validation) to queries that can be discharged with a highly optimized solver. But the correctness of the reductions themselves is rarely verified in practice, limiting the confidence that the solver's output establishes the desired domain-level property.
This paper presents SpaceSearch, a new library for developing solver-aided tools within a proof assistant. A user builds their solver-aided tool in Coq against the SpaceSearch interface, and then verifies that the results provided by the interface are sufficient to establish the tool's desired high-level properties. Once verified, the tool can be extracted to an implementation in a solver-aided language (e.g., Rosette), where SpaceSearch provides an efficient instantiation of its interface with calls to an underlying SMT solver. This combines the strong correctness guarantees of developing a tool in a proof assistant with the high performance of modern SMT solvers. The paper also introduces new optimizations for such verified solver-aided tools, including parallelization and incrementalization.
We evaluate SpaceSearch by building and verifying two solver-aided tools. The first, SaltShaker, checks that RockSalt's x86 semantics for a given instruction agrees with STOKE's x86 semantics. SaltShaker identified 7 bugs in RockSalt and 1 bug in STOKE. After these systems were patched by their developers, SaltShaker verified the agreement of the two semantics on 15,255 instruction instantiations in under 2 hours. The second tool, BGProof, is a verified version of an existing Border Gateway Protocol (BGP) router configuration checker. Like the existing checker, BGProof scales to industrial configurations spanning over 240 KLOC, identifying 19 configuration inconsistencies with no false positives. Unlike the existing checker, however, BGProof's correctness has been formally proven, and in the process we found 2 bugs in the unverified implementation. These results demonstrate that SpaceSearch is a practical approach to developing efficient, verified solver-aided tools.
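
As a rough sketch of what a search-space interface of this kind might look like, here is a hypothetical Python analogue instantiated by brute-force enumeration; the real SpaceSearch is a Coq interface whose efficient instantiation comes from extraction to a solver-aided language, and none of the names below are its actual API:

# Hypothetical analogue of a "search space" interface: client code is written
# once against single/union/bind/search, and the interface can be instantiated
# by brute force (below) or, in the real library, by an SMT-backed search.
def single(x):       return [x]                 # the space containing exactly x
def union(a, b):     return list(a) + list(b)   # all elements of either space
def bind(space, f):  return [y for x in space for y in f(x)]  # dependent choice
def empty():         return []

def search(space):
    """Return some element of the space, or None if it is empty."""
    return space[0] if space else None

# Client code, written only against the interface: find a pair of bits whose
# XOR is 1.  A verified client would prove that search() returning None
# implies no such pair exists.
bits = union(single(0), single(1))
pairs = bind(bits, lambda a: bind(bits, lambda b: single((a, b)) if a ^ b == 1 else empty()))
print(search(pairs))  # e.g. (0, 1)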

Food: Todd


5/5: Verifying Accurate Floating-Point Programs

Time: Friday, May 5, 2017, 12 noon - 1pm
Location: Gates 463a

Speaker: Wonyeol Lee

Abstract: I will discuss some recent work in progress on verifying floating-point programs. I will explain several useful properties of floating-point arithmetic, and then talk about how to use them to prove better error bounds for Intel's transcendental functions.
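
As a purely numerical illustration of the kind of claim such a proof establishes (a bound on the relative error of an approximation to a transcendental function), the hypothetical sketch below measures, rather than proves, the error of a truncated Taylor approximation of exp over [-1, 1]:

# Measure the relative error of a stand-in implementation of exp against the
# library exp.  The talk is about *proving* such bounds; this only observes one.
import math

def exp_poly(x, terms=10):
    """Truncated Taylor series for exp(x); a stand-in for a real implementation."""
    result, term = 0.0, 1.0
    for k in range(terms):
        result += term
        term *= x / (k + 1)
    return result

worst = 0.0
for i in range(1001):
    x = -1.0 + 2.0 * i / 1000            # sample points in [-1, 1]
    rel_err = abs(exp_poly(x) - math.exp(x)) / math.exp(x)
    worst = max(worst, rel_err)
print(f"observed worst relative error on [-1, 1]: {worst:.3e}")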

Food: Berkeley


5/12: Seam: Provably Safe Local Edits on Graphs

Time: Friday, May 12, 2017, 12 noon - 1pm
Location: Gates 463a

Speaker: Manolis Papadakis

Abstract: In this talk, I will present Seam, a domain-specific language for describing graph-like data structures and programming local modification operations over them. Based on our preliminary results, programs written in Seam are competitive with hand-written implementations in terms of performance, while being an order of magnitude shorter, easier to maintain and extend, and amenable to static verification. I will focus on the design of our verification method for Seam operations: We leverage an SMT solver to verify that operations are memory-safe and maintain referential integrity and user-specified invariants. Our verification method is sound and precise (complete modulo termination of the SMT solver).
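
As a toy illustration of the kind of local edit and invariant involved (checked here at runtime with made-up names, whereas Seam discharges such obligations statically with an SMT solver):

# A local graph edit and a referential-integrity invariant.  Every name here
# is hypothetical; this only shows the shape of the obligation, not Seam.
class Graph:
    def __init__(self):
        self.nodes = set()
        self.edges = set()            # set of (src, dst) pairs

    def referentially_intact(self):
        """Invariant: every edge endpoint names an existing node."""
        return all(s in self.nodes and d in self.nodes for s, d in self.edges)

    def detach_and_remove(self, n):
        """Local edit: drop all edges touching n, then remove n itself.
        Removing the node *before* its edges would break the invariant."""
        self.edges = {(s, d) for s, d in self.edges if n not in (s, d)}
        self.nodes.discard(n)

g = Graph()
g.nodes = {"a", "b", "c"}
g.edges = {("a", "b"), ("b", "c")}
g.detach_and_remove("b")
assert g.referentially_intact()
print(g.nodes, g.edges)               # {'a', 'c'} and an empty edge set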

Food: Pratiksha


5/19: Moving Fast with High Reliability: Program Analysis at Uber

Time: Friday, May 19, 2017, 12 noon - 1pm
Location: Gates 463a

Speaker: Manu Sridharan

Abstract: At Uber, software reliability is of critical importance: outages can leave riders stranded and drivers without a way to earn a living. At the same time, Uber needs to be able to move fast in developing new features and products. Our belief is that static program analysis can play a key role in reducing the tension between these seemingly conflicting needs. In this talk, I will describe the philosophy of how analysis tools are deployed at Uber and how code is developed to be analyzable. I will present some initial experience reports from deployed analyses, plans for future analyses, and some open problems that may be interesting to the broader research community.

Food: Lázaro


5/26: Synthesizing Program Input Grammars

Time: Friday, May 26, 2017, 12 noon - 1pm
Location: Gates 463a

Speaker: Osbert Bastani

Abstract: Programs with highly structured inputs (e.g., parsers and interpreters) can be challenging for fuzzers since most randomly generated inputs have invalid syntax. One solution is to handwrite a grammar encoding valid program inputs, but this work must be performed for every program that is tested. We describe a new approach where we automatically infer the program input grammar from a small set of user-provided example inputs and black-box access to the program, and then use this inferred grammar to fuzz the program. Our approach works out of the box, without any program-specific tuning or configuration. We show that our fuzzer outperforms two grammar-unaware fuzzers on a number of large programs.
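
The downstream step, fuzzing with a grammar so that every generated input is syntactically valid, can be sketched generically; the toy grammar and names below are made up and do not describe the inference algorithm presented in the talk:

# Minimal grammar-based fuzzer: sample random derivations from a context-free
# grammar so that every generated input parses.  The grammar is a toy example.
import random

GRAMMAR = {
    "<expr>": [["<term>", "+", "<expr>"], ["<term>"]],
    "<term>": [["<num>", "*", "<term>"], ["<num>"], ["(", "<expr>", ")"]],
    "<num>":  [["0"], ["1"], ["2"], ["7"], ["42"]],
}

def generate(symbol="<expr>", depth=0, max_depth=8):
    if symbol not in GRAMMAR:                 # terminal: emit it verbatim
        return symbol
    rules = GRAMMAR[symbol]
    if depth >= max_depth:                    # force the shortest rule so generation terminates
        rules = [min(rules, key=len)]
    rule = random.choice(rules)
    return "".join(generate(s, depth + 1, max_depth) for s in rule)

for _ in range(5):
    print(generate())                          # e.g. "7*(1+42)+2"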

Food: Cristian


6/2: Just-in-time Compilation for Reprogrammable Hardware

Time: Friday, June 2, 2017, 12 noon - 1pm
Location: Gates 463a

Speaker: Eric Schkufza

Abstract: Compilation for reprogrammable hardware is painfully slow, on the order of hours for non-trivial applications. This renders the compile-test-debug cycle untenable for all but a small handful of domain experts. While this situation was never acceptable, it was at least tolerable in a time when reprogrammable hardware was a niche target for application developers. Today reprogrammable hardware is on the verge of ubiquity in both public and private clouds. But unless the programming model for reprogrammable hardware changes, the average programmer will remain incapable of taking advantage of it.
In this talk, I'll describe a new approach to hardware compilation that adopts a known solution from the software domain. Rather than attempt to reduce the latency of the compiler, we instead hide it behind an interpreter in a JIT. Code is executed immediately in a hardware interpreter, while the long-running compilation job is performed in the background. When compilation finishes, the results are patched into hardware, and from the user's perspective, the code simply gets faster. I'll talk through the high-level design of the system, describe some neat applications that fall out of the implementation, and give a quick demo. We'll write some code, we WON'T wait hours, and we'll get some buttons on an FPGA to turn some lights on and off.
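
The pattern described above (answer immediately from an interpreter, compile in the background, and patch in the compiled version once it is ready) can be sketched in ordinary Python, with threads standing in for the hardware interpreter and the long-running FPGA compile; every name below is hypothetical:

# Sketch of "interpret now, swap in compiled code later".  Sleeps stand in
# for slow interpreted execution and for hours of place-and-route.
import threading
import time

class JitFunction:
    def __init__(self, interpret, compile_fn):
        self._impl = interpret                 # start with the slow interpreter
        threading.Thread(target=self._compile_in_background,
                         args=(compile_fn,), daemon=True).start()

    def _compile_in_background(self, compile_fn):
        fast = compile_fn()                    # pretend this takes a long time
        self._impl = fast                      # patch in the fast version when ready

    def __call__(self, *args):
        return self._impl(*args)

def interpret_add(a, b):
    time.sleep(0.01)                           # "slow" interpreted execution
    return a + b

def compile_add():
    time.sleep(0.5)                            # "hours" of compilation, scaled down
    return lambda a, b: a + b                  # the "compiled" implementation

add = JitFunction(interpret_add, compile_add)
print(add(1, 2))                               # answered immediately by the interpreter
time.sleep(1)
print(add(1, 2))                               # now served by the compiled version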

Food: Manolis


6/9: A Distributed Multi-GPU System for Fast Graph Processing

Time: Friday, June 9, 2017, 12 noon - 1pm
Location: Gates 463a

Speaker: Zhihao Jia

Abstract: Because most graph algorithms have low computational intensity, graph processing systems are limited by memory bandwidth. Most existing systems store the entire graph in the shared or distributed main (DRAM) memory of multicore CPU nodes. GPUs have much higher memory bandwidth than today's CPU architectures and thus the potential for better graph processing performance. However, GPU clusters have significantly more complex and deeper memory hierarchies than CPU clusters, which must be accounted for in a graph processing system's design if it is to actually benefit from the greater bandwidth of GPUs.
We present Lux, a distributed multi-GPU system that achieves fast graph processing by exploiting the aggregate memory bandwidth of multiple GPUs and taking advantage of locality in the memory hierarchy of multi-GPU clusters. Lux provides two execution models that optimize algorithmic efficiency and enable important GPU optimizations, respectively. Lux also uses a novel dynamic load balancing strategy that is cheap and achieves very good load balance across multiple GPUs on a node. In addition, we present a performance model that provides insight into improving the performance of Lux applications in different situations. Experiments show that Lux achieves up to a 20× speedup over state-of-the-art shared-memory systems and up to two orders of magnitude speedup over distributed systems.
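
As a generic illustration of the load-balancing concern (not Lux's actual dynamic strategy, which the talk covers), the hypothetical sketch below uses the classic greedy heuristic of assigning each vertex to the least-loaded device, weighting vertices by their edge counts:

# Textbook greedy load balancing across devices, weighting each vertex by its
# edge count.  All names are hypothetical and this is plain CPU code.
import heapq

def assign_vertices(out_degree, num_devices):
    """Give each vertex, heaviest first, to the device with the least work so far."""
    heap = [(0, d) for d in range(num_devices)]      # (edges assigned, device id)
    heapq.heapify(heap)
    owner = [0] * len(out_degree)
    for v in sorted(range(len(out_degree)), key=lambda v: -out_degree[v]):
        load, dev = heapq.heappop(heap)
        owner[v] = dev
        heapq.heappush(heap, (load + out_degree[v], dev))
    return owner

degrees = [1, 50, 2, 3, 40, 1, 1, 60, 2, 10]          # skewed degrees, as in real graphs
owners = assign_vertices(degrees, 3)
for d in range(3):
    print("device", d, "edges:",
          sum(deg for v, deg in enumerate(degrees) if owners[v] == d))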

Food: Guy