[ Software Research Lunch ]


Program for the winter quarter of 2019.


1/10: Organizational Lunch

Time: Thursday, January 10, 2019, 12 noon - 1pm
Location: Gates 415

Organizational lunch. Come sign up to give a talk during the quarter.

Food: Todd


1/17: An introduction to Shaz Qadeer and his attempts to reason about concurrent systems

Time: Thursday, January 17, 2019, 12 noon - 1pm
Location: Gates 415

Speaker: Shaz Qadeer

Abstract: I am a Research Scientist at Facebook. Before joining Facebook, I worked at Microsoft Azure and Microsoft Research. I am interested in building secure distributed systems through appropriate and comprehensive application of advanced programming machinery---languages, runtimes, and (testing and verification) tools. In this brief lunch talk, I will present an overview of two research projects I have focused on for the last decade. The first project is a domain-specific programming framework for developing safe and reliable concurrent asynchronous programs (https://github.com/p-org/P). The second project is a program verifier that aids the construction of a correct concurrent program by attempting to derive it via stepwise refinement from an abstract atomic action that is obviously correct because it does nothing (http://pub.ist.ac.at/~bkragl/papers/cav2018-slides.pdf). The goal of this talk is to introduce myself and my team at Facebook to you in the hope that some of you will become future collaborators and colleagues.

Food: Todd


1/24: Incremental, Intuitive, Formally-Analyzable Software Development with Scenario-Based Programming

Time: Thursday, January 24, 2019, 12 noon - 1pm
Location: Gates 415

Speaker: Assaf Marron

Abstract: Scenario-based programming (SBP), also termed behavioral programming, is an emerging approach for creating executable specifications of reactive systems, in which each artifact is a scenario that specifies a separate aspect of overall system behavior. Each scenario declares desired and undesired reactions to certain events and conditions, or sequences thereof. New and refined requirements are often implemented incrementally using new scenarios, with little or no change to existing ones. The full system behavior emerges from the parallel, coordinated execution of all scenarios. The advantages of SBP include structural alignment with requirements, intuitiveness, incrementality, succinctness of expression, and amenability to formal analysis and verification. First introduced with the visual language of live sequence charts (LSC) by Harel, Damm, and Marelly, SBP principles are language independent and are also available in many other languages and formalisms, including Java, C++, JavaScript, Erlang, certain DSLs, statecharts, and more. SBP executable specifications can serve as models for simulation and analysis from the earliest stages of system development, as well as in the final running code of system components. In this talk I will introduce the principles of scenario-based programming and describe recent research results. As time permits, I will touch upon our current research into the wise computing vision, where we try to automate unique engineering skills used in tasks that are presently carried out only by human experts. This capability can enable earlier discovery of certain flaws in requirements specifications and in system design.
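
To make the execution model concrete, here is a minimal hand-rolled sketch in Python (this is not the API of any particular SBP library, and the water-tank scenarios and event names are invented for illustration): each scenario is a generator that yields the events it requests, waits for, and blocks, and a coordinator repeatedly picks an event that is requested by some scenario and blocked by none, resuming every scenario that requested or waited for it.

    def add_hot():
        for _ in range(3):
            yield {"request": {"HOT"}}              # ask to add hot water three times

    def add_cold():
        for _ in range(3):
            yield {"request": {"COLD"}}             # ask to add cold water three times

    def interleave():
        # A later requirement added as a separate scenario, without touching the
        # scenarios above: never two HOTs (or two COLDs) in a row.
        while True:
            yield {"wait": {"HOT"}, "block": {"COLD"}}
            yield {"wait": {"COLD"}, "block": {"HOT"}}

    def run(scenarios):
        bids = {s: next(s) for s in scenarios}      # current yield point of each scenario
        trace = []
        while True:
            requested = set().union(*(b.get("request", set()) for b in bids.values()))
            blocked = set().union(*(b.get("block", set()) for b in bids.values()))
            enabled = requested - blocked
            if not enabled:
                return trace
            event = sorted(enabled)[0]              # deterministic choice for the sketch
            trace.append(event)
            for s, b in list(bids.items()):
                if event in b.get("request", set()) | b.get("wait", set()):
                    try:
                        bids[s] = next(s)
                    except StopIteration:
                        del bids[s]

    print(run([add_hot(), add_cold(), interleave()]))
    # ['HOT', 'COLD', 'HOT', 'COLD', 'HOT', 'COLD']: the third scenario forces alternation.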

Food: Manolis


1/31: Redundancy-Free Computation Graphs for Graph Neural Networks

Time: Thursday, January 31, 2019, 12 noon - 1pm
Location: Gates 415

Speaker: Zhihao Jia

Abstract: Graph Neural Networks (GNNs) have revolutionized machine learning on graphs. GNNs are based on repeated aggregations of information across a node's neighbors, and because many neighbors are shared between different nodes, this leads to many repeated and inefficient computations. We propose HAG, a new GNN graph representation that explicitly avoids repeated computations by managing intermediate aggregation results hierarchically, which reduces redundant operations and eliminates unnecessary data transfers in GNN training. We introduce a cost model to quantitatively evaluate the runtime performance of different HAGs and use a novel HAG search algorithm, with strong theoretical guarantees, to find optimized HAGs. Experiments show that the HAG representation significantly outperforms the standard GNN graph representation by increasing end-to-end training throughput by up to 2.8× and reducing the aggregations and data transfers in GNN training by up to 6.3× and 5.6×, respectively, while maintaining the original model accuracy.
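
A toy Python illustration of the redundancy being removed (this shows only the idea, not the paper's HAG data structure, search algorithm, or cost model; the graph and features are made up): two nodes that share neighbors repeat the same partial aggregation, and computing the shared intermediate once removes the repeat.

    import numpy as np

    features = {n: np.random.rand(4) for n in "abcdef"}
    neighbors = {
        "e": ["a", "b", "c"],       # e and f share the neighbor pair (a, b)
        "f": ["a", "b", "d"],
    }

    # Naive GNN-style sum aggregation: the partial sum (a + b) is computed twice.
    naive = {n: sum(features[m] for m in nbrs) for n, nbrs in neighbors.items()}

    # HAG-style idea: compute the shared partial aggregate once and reuse it.
    shared_ab = features["a"] + features["b"]
    hag = {"e": shared_ab + features["c"], "f": shared_ab + features["d"]}

    # Same results, fewer aggregation operations.
    assert all(np.allclose(naive[n], hag[n]) for n in neighbors)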

Food: Karthik


2/7: The search for efficient neural networks for the edge

Time: Thursday, February 7, 2019, 12 noon - 1pm
Location: Gates 415

Speaker: Bichen Wu

Abstract: Efficient deep neural networks are playing an increasingly important role in the age of AIoT (AI + IoT), in which people hope to deploy intelligent sensors and systems at scale. Many applications require deploying neural networks on embedded processors or dedicated accelerators with limited computational capacity. However, optimizing neural networks for high accuracy and efficiency on target devices is difficult due to the vast design space and the high computational cost of training neural networks. In this talk, we discuss our recent work addressing this challenge: 1) we introduce DNAS, a differentiable algorithm for hardware-aware neural architecture search; its search cost is two orders of magnitude lower than that of previous approaches, yet models found by DNAS surpass state-of-the-art models designed both manually and automatically. 2) We present Synetgy, in which co-design of neural networks and FPGA accelerators achieves a 16.9x speedup over the previous state of the art.
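
A simplified Python sketch of the differentiable-search idea (the actual DNAS method uses a Gumbel-Softmax relaxation, a hardware-aware latency term, and a real training loop, all omitted here; the candidate operators below are stand-ins): each searchable layer computes a softmax-weighted mixture of candidate operators, so the architecture choice becomes a continuous parameter that gradient-based training could optimize alongside the weights. The sketch only shows the forward mixture.

    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    # Stand-in candidate operators for one searchable layer (real candidates
    # would be convolutions of different kernel sizes, expansion ratios, etc.).
    candidate_ops = [lambda x: x, lambda x: 0.5 * x, lambda x: 2.0 * x]
    alpha = np.zeros(len(candidate_ops))    # architecture parameters for this layer

    def layer(x, alpha):
        w = softmax(alpha)
        # The output is a differentiable mixture of all candidates, not a hard
        # choice, so gradients could flow into alpha during the search.
        return sum(wi * op(x) for wi, op in zip(w, candidate_ops))

    x = np.ones(3)
    print(layer(x, alpha))
    print("op chosen after search:", int(np.argmax(alpha)))   # read off by argmax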

Food: Zhihao


2/14: Partition Driven Auto-Parallelization

Time: Thursday, February 14, 2019, 12 noon - 1pm
Location: Gates 415

Speaker: Wonchan Lee

Abstract: Although data partitioning is required to enable data parallelism on distributed memory systems, data partitions are not first-class objects in most distributed programming models. Thus, auto-parallelizers targeting these programming models end up hard-coding a particular partitioning strategy in the parallelized output. As a result, auto-parallelized programs are often difficult to configure and compose. Proving that an auto-parallelized program is semantically equivalent to the original program is also difficult, as it reduces to a whole-program equivalence proof. We present a partition-driven approach to auto-parallelization. The auto-parallelizer in our approach transforms a sequential program into a parallelized form using first-class partitions and infers constraints that those partitions must satisfy to preserve the original semantics. The inferred constraints can be satisfied by auto-generated partitions or by partitions provided by the programmer. We demonstrate that our auto-parallelizer can produce programs that are composable and yet have scalability comparable to hand-optimized programs.
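
A toy Python sketch of partitions as first-class objects (this is not the talk's actual programming model; the covering constraint below is only an example of the kind of constraint that might be inferred): the parallelized code is written against an abstract partition plus a constraint it must satisfy to preserve the sequential semantics, and any partition meeting the constraint, whether auto-generated or user-provided, can be plugged in.

    import numpy as np

    def sequential(a, b):
        return a + b                         # original program: c[i] = a[i] + b[i]

    def parallelized(a, b, partition):
        # Inferred constraint: the pieces must cover every index exactly once.
        covered = sorted(np.concatenate(partition).tolist())
        assert covered == list(range(len(a))), "partition violates the constraint"
        c = np.empty_like(a)
        for piece in partition:              # each piece could run on a different node
            c[piece] = a[piece] + b[piece]
        return c

    a, b = np.arange(8.0), 10 * np.arange(8.0)
    blocks = [np.array([0, 1, 2, 3]), np.array([4, 5, 6, 7])]   # one admissible partition
    assert np.allclose(parallelized(a, b, blocks), sequential(a, b))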

Food: Jason


2/21: Syntax-Guided Synthesis in CVC4

Time: Thursday, February 21, 2019, 12 noon - 1pm
Location: Gates 415

Speaker: Andres Nötzli

Abstract: Syntax-guided synthesis (SyGuS) is a recent standard for program synthesis, successfully used for a number of applications in formal verification and programming languages. Most SyGuS solvers use a satisfiability modulo theories (SMT) solver to check potential solutions. CVC4 is an SMT solver that can itself act as an efficient synthesizer; it won four out of five tracks in last year's SyGuS competition.
In this talk, I will give a brief introduction to the SyGuS format and highlight some applications. Then, I'll present different techniques that we use in CVC4 to solve SyGuS problems efficiently. The emphasis will be on the different strategies that CVC4 uses to enumerate candidate solutions and how it chooses between them.
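
To make the enumeration idea concrete, here is a toy enumerative synthesizer in Python (CVC4's actual techniques are far more sophisticated; the grammar, the term-size ordering, and the examples below are invented for illustration): candidate terms are enumerated from a small grammar roughly by size and checked against a specification, here given as input/output examples rather than an SMT query.

    import itertools

    LEAVES = ["x", "y", "0", "1"]
    OPS = ["+", "-", "max"]

    def terms():
        by_size = {1: list(LEAVES)}
        yield from by_size[1]
        for size in itertools.count(2):              # enumerate larger terms lazily
            by_size[size] = []
            for lsize in range(1, size):
                for l, r in itertools.product(by_size[lsize], by_size[size - lsize]):
                    for op in OPS:
                        t = (op, l, r)
                        by_size[size].append(t)
                        yield t

    def evaluate(t, env):
        if isinstance(t, str):
            return env[t] if t in env else int(t)
        op, l, r = t
        lv, rv = evaluate(l, env), evaluate(r, env)
        return {"+": lv + rv, "-": lv - rv, "max": max(lv, rv)}[op]

    # Specification: find a term that equals max(x, y) on these examples.
    examples = [({"x": 2, "y": 5}, 5), ({"x": 7, "y": 3}, 7), ({"x": 4, "y": 4}, 4)]

    for t in terms():
        if all(evaluate(t, env) == out for env, out in examples):
            print("found:", t)                       # ('max', 'x', 'y')
            break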

Food: Todd


2/28: Exploring Approximations for Floating-Point Arithmetic using UppSAT

Time: Thursday, February 28, 2019, 12 noon - 1pm
Location: Gates 415

Speaker: Aleksandar Zeljic

Abstract: We consider the problem of solving floating-point constraints obtained from software verification. We present UppSAT, a new implementation of a systematic approximation-refinement framework as an abstract SMT solver. Provided with an approximation and a decision procedure (implemented in an off-the-shelf SMT solver), UppSAT yields an approximating SMT solver. Additionally, UppSAT includes a library of predefined approximation components that can be combined and extended to define new encodings, orderings, and solving strategies. We propose that UppSAT can be used as a sandbox for easy and flexible exploration of new approximations. To substantiate this, we explore several approximations of floating-point arithmetic into reduced-precision floating-point arithmetic, real arithmetic, and fixed-point arithmetic (in practice encoded into the theory of bit-vectors). In an experimental evaluation, we compare the performance of approximating solvers obtained by combining various encodings and decision procedures (based on existing state-of-the-art SMT solvers for floating-point, real, and bit-vector arithmetic).
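
A toy Python sketch of the approximation-refinement loop (this is not UppSAT's code; the constraint and the stand-in "solver", which simply rounds, are invented for illustration): solve the constraint in a reduced-precision approximation, check the candidate model against the full-precision constraint, and increase the precision whenever the check fails.

    def full_precision_sat(x):
        # The real constraint: 3*x must equal 1 up to a tight tolerance.
        return abs(3.0 * x - 1.0) < 1e-9

    def approx_solve(precision_bits):
        # Stand-in for running an off-the-shelf solver on the approximated
        # theory: return 1/3 rounded to the given number of fractional bits.
        scale = 2.0 ** precision_bits
        return round((1.0 / 3.0) * scale) / scale

    precision = 4
    while True:
        candidate = approx_solve(precision)
        if full_precision_sat(candidate):
            print("model found at precision", precision, ":", candidate)
            break
        precision += 4      # refinement: the approximate model failed the real check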

Food: Oded


3/7: Duality and complexity in abstract interpretation and induction (work in progress)

Time: Thursday, March 7, 2019, 12 noon - 1pm
Location: Gates 415

Speaker: Oded Padon

Abstract: Finding inductive invariants is a core problem for formal verification. I will discuss an ongoing attempt to attack this problem from three related directions. The talk will be about work in progress, so discussion and suggestions are most welcome. We'll hopefully discuss the following directions:

1. Trying to characterize a measure of complexity or learnability for inductive invariants. That is, trying to distinguish between a simple proof by induction and a complex one (e.g., induction "width" or "depth" vs. "length"), with the hope that a simple proof is also easier to discover. Some inspiration here is taken from learning theory and analysis of Boolean functions, where for example some classes of Boolean functions are known to be learnable in a precise sense (e.g., Fourier-sparse, CDNF, etc.).

2. Exploring a duality between counterexample traces and inductive invariants, and attempting to develop a "primal-dual" abstract interpretation algorithm. Viewing a proof by induction as an incremental sequence of "proof steps", we'll see there is a way to define a duality between error traces and inductive invariants that makes them appear symmetric. This may be an interesting way to view existing invariant search algorithms such as IC3/PDR, and may suggest new search algorithms. Inspiration here is taken from model checking approaches such as IC3, and from the way DPLL/CDCL performs a "primal-dual" search for models and proofs.

3. User interaction in abstract interpretation. Most state-of-the-art automated verification tools do not provide useful information to their users when they fail, i.e., diverge or terminate without either a proof or a counterexample. The above directions may suggest a model for user interaction, where an automated proof search procedure can present useful information to a user, and also receive user guidance, without requiring the user to understand the inner workings of the tool.
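
As a tiny illustration of the object being searched for (real tools such as IC3/PDR work symbolically rather than by enumeration; the transition system and candidate invariants below are invented): an invariant is inductive if it contains the initial states and is closed under the transition relation, which can be checked explicitly on a small finite system in Python.

    STATES = range(100)
    INIT = {0}

    def step(x):
        return {(x + 2) % 100}          # transition relation: add 2, wrapping at 100

    def is_inductive(inv):
        if not INIT <= inv:
            return False                # must contain the initial states
        return all(step(x) <= inv for x in inv)   # must be closed under transitions

    evens = {x for x in STATES if x % 2 == 0}     # candidate invariant: "x is even"
    below_50 = {x for x in STATES if x < 50}      # holds initially but is not preserved

    print(is_inductive(evens))          # True: proves x stays even forever
    print(is_inductive(below_50))       # False: the transition 48 -> 50 escapes the set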


3/14: Optimizing DNN Computation with Relaxed Graph Substitutions

Time: Thursday, March 14, 2019, 12 noon - 1pm
Location: Gates 415

Speaker: Zhihao Jia

Abstract: Existing deep learning frameworks optimize the computation graph of a DNN model by performing greedy rule-based graph transformations, which generally only consider transformations that strictly improve runtime performance. We propose relaxed graph substitutions, which enable the exploration of complex graph optimizations by relaxing the strict performance-improvement constraint; this greatly increases the space of semantically equivalent computation graphs that can be discovered by repeated application of a suitable set of graph transformations. We introduce a backtracking search algorithm over a set of relaxed graph substitutions to find optimized networks and use a flow-based graph split algorithm to recursively split a computation graph into smaller subgraphs, allowing efficient search. We implement relaxed graph substitutions in a system called MetaFlow and show that MetaFlow improves inference and training performance by 1.1-1.6× and 1.1-1.2×, respectively, over existing deep learning frameworks.
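
A schematic Python sketch of the relaxed search (MetaFlow operates on real DNN computation graphs with measured costs; here the "graph" is just a tuple of op names with made-up costs, and the substitution rules are invented): unlike greedy rewriting, the search also follows substitutions whose result is temporarily worse, as long as its cost stays within a relaxation factor alpha of the best cost found so far.

    import heapq

    def optimize(initial, substitutions, cost, alpha=1.05):
        best, best_cost = initial, cost(initial)
        frontier, seen = [(best_cost, initial)], {initial}
        while frontier:
            c, g = heapq.heappop(frontier)
            if c > alpha * best_cost:
                continue                     # prune graphs too far above the best
            if c < best_cost:
                best, best_cost = g, c
            for subst in substitutions:
                for h in subst(g):           # every graph reachable by this rewrite
                    if h not in seen:
                        seen.add(h)
                        heapq.heappush(frontier, (cost(h), h))
        return best

    # Toy usage: the fused operator "AB" is locally cheap, but un-fusing it (a
    # step that increases cost, which greedy rewriting would never take) enables
    # a better fusion of "B" with "C".
    COST = {"A": 3, "B": 3, "C": 3, "AB": 4, "BC": 2}

    def unfuse_ab(g):
        return [g[:i] + ("A", "B") + g[i + 1:] for i, op in enumerate(g) if op == "AB"]

    def fuse_bc(g):
        return [g[:i] + ("BC",) + g[i + 2:] for i in range(len(g) - 1) if g[i:i + 2] == ("B", "C")]

    def graph_cost(g):
        return sum(COST[op] for op in g)

    print(optimize(("AB", "C"), [unfuse_ab, fuse_bc], graph_cost, alpha=1.3))
    # ('A', 'BC') with cost 5; greedy rewriting would stay at ('AB', 'C') with cost 7.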

Food: yoni206@gmail.com