[ Software Research Lunch ]


The Stanford Software Research Lunch is a weekly event held on Thursdays where students and researchers present their latest work to peers. Talks are open to anyone, but regular attendees are expected to give a presentation on their own work.

Mailing list: software-research-lunch@lists.stanford.edu (subscribe via mailman)

Calendar: ical | Add to Google Calendar

Format: The lunch is held every week during fall, winter and spring quarter. The first week of every quarter is an organizational lunch where people can sign up to give a talk. If you'd like to give a talk, contact Andrew Wu.

Past quarters: Winter 2019, Fall 2018, Spring 2018, Winter 2018, Fall 2017, Spring 2017, Winter 2017, Fall 2016.

Ordering Food: Suggestions for those ordering food for the lunch are available here.


4/4: Organizational Lunch

Time: Thursday, April 4, 2019, 12 noon - 1pm
Location: Gates 415

Organizational lunch. Come enjoy food and sign up to give a talk during the quarter.

Food: Andrew


4/11: Guiding High-Performance SAT Solvers with Unsat-Core Predictions

Time: Thursday, April 11, 2019, 12 noon - 1pm
Location: Gates 415

Speaker: Daniel Selsam

Abstract: The NeuroSAT neural network architecture was introduced for predicting properties of propositional formulae. When trained to predict the satisfiability of toy problems, it was shown to find solutions and unsatisfiable cores on its own. However, the authors saw "no obvious path" to using the architecture to improve the state-of-the-art. In this work, we train a simplified NeuroSAT architecture to directly predict the unsatisfiable cores of real problems, and modify several state-of-the-art SAT solvers to periodically replace their variable activity scores with NeuroSAT's prediction of how likely the variables are to appear in an unsatisfiable core. The modified MiniSat solves 10% more problems on SAT-COMP 2018 within the standard 5,000 second timeout than the original does. The modified Glucose 4.1 solves 11% more problems than the original, while the modified Z3 solves 6% more. The gains are even greater when the training is specialized for a specific distribution of problems; on a benchmark of hard problems from a scheduling domain, the modified Glucose solves 20% more problems than the original does within a one-hour timeout. Our results demonstrate that NeuroSAT can provide effective guidance to high-performance SAT solvers on real problems.
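
As a rough illustration of the guidance loop described in the abstract (not the actual solver integration; `predict_core_probs` is a stand-in for the trained NeuroSAT model, and the clause set is a toy):

```python
import random

def predict_core_probs(clauses, num_vars):
    # Stand-in for the trained NeuroSAT model: in the real system this
    # returns, for each variable, the predicted probability that it
    # appears in an unsatisfiable core. Here we just return fixed
    # pseudo-random scores for illustration.
    random.seed(0)
    return [random.random() for _ in range(num_vars)]

def refresh_activities(activities, clauses, scale=100.0):
    # Periodically replace the solver's variable activity scores with
    # the model's unsat-core predictions, as the modified solvers do.
    probs = predict_core_probs(clauses, len(activities))
    return [p * scale for p in probs]

# Skeleton of a CDCL-style loop: every `period` conflicts, overwrite
# the activity scores so that branching favors likely-core variables.
activities = [0.0] * 5
clauses = [[1, -2], [2, 3], [-1, -3, 4], [-4, 5]]
period = 100
for conflict_count in range(1, 301):
    if conflict_count % period == 0:
        activities = refresh_activities(activities, clauses)
best_var = max(range(5), key=lambda v: activities[v])
print(best_var)
```

In the modified solvers this refresh happens inside the solver's restart logic; the sketch only shows the shape of the interaction.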

Food: Yoni


4/18: The Marabou Framework for Verification and Analysis of Deep Neural Networks

Time: Thursday, April 18, 2019, 12 noon - 1pm
Location: Gates 415

Speaker: Aleksandar Zeljic

Abstract: Deep neural networks are revolutionizing the way complex systems are designed. Consequently, there is a pressing need for tools and techniques for network analysis and certification. To help address that need, we present Marabou, a framework for verifying deep neural networks. Marabou is an SMT-based tool that can answer queries about a network's properties by transforming these queries into constraint satisfaction problems. It can accommodate networks with different activation functions and topologies, and it performs high-level reasoning on the network that can curtail the search space and improve performance. It also supports parallel execution to further enhance scalability. Marabou accepts multiple input formats, including protocol buffer files generated by the popular TensorFlow framework for neural networks. We describe the system architecture and main components, evaluate the technique, and discuss ongoing work.
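
To give a flavor of the case-splitting reasoning involved, here is a one-neuron toy (not Marabou itself, which handles whole networks and uses a simplex-based engine): verifying an output bound on y = relu(w*x + b) by splitting on the ReLU's phase, where each phase is linear and can be checked at interval endpoints.

```python
def verify_relu_bound(w, b, lo, hi, bound):
    # Query: does y = relu(w*x + b) satisfy y <= bound for all x in [lo, hi]?
    # Inactive phase: y == 0, so the bound must admit 0.
    if 0 > bound:
        return False
    # Active phase: restrict to the subinterval where w*x + b >= 0;
    # there y is linear in x, so its maximum is at an endpoint (or at
    # the phase boundary -b/w).
    candidates = [lo, hi]
    if w != 0:
        candidates.append(-b / w)
    feasible = [x for x in candidates if lo <= x <= hi and w * x + b >= 0]
    return all(w * x + b <= bound for x in feasible)

print(verify_relu_bound(2.0, -1.0, 0.0, 1.0, 1.0))  # relu(2x-1) <= 1 on [0,1]: True
print(verify_relu_bound(2.0, -1.0, 0.0, 2.0, 1.0))  # relu(2*2-1) = 3 > 1: False
```

Real verifiers must manage an exponential number of such splits across many neurons, which is where Marabou's high-level reasoning and parallelism come in.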

Food: Jason


4/25: Learning Representations of Source Code from Structure & Context

Time: Thursday, April 25, 2019, 12 noon - 1pm
Location: Gates 415

Speaker: Michele Catasta

Abstract: Inspired by the extremely successful TRANSFORMER architecture from Natural Language Processing, we propose a novel architecture that extends to source code and its underlying structure induced by the Abstract Syntax Tree representation. This model is able to efficiently compose local and global co-occurrence patterns to achieve deep contextual embeddings of both structural and contextual features of source code. Unlike other approaches in the field of machine learning on source code, we obtained state-of-the-art results on standard prediction tasks (such as VarNaming and MethodNaming) without leveraging any hand-crafted features or augmented representation over the standard AST. We argue that this advancement brings us one step closer to cracking the arduous problem of capturing the semantic similarity of code fragments across different programming languages. Joint work with Dylan Bourgeois and Jure Leskovec.
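
A minimal sketch of the kind of input such a model consumes: pairing each AST node (context) with its root-to-node path (structure). The traversal below uses Python's standard `ast` module; the pairing scheme is an illustration of exposing tree structure to a sequence model, not the authors' exact encoding.

```python
import ast

def node_structure_pairs(source):
    # Walk the AST and emit (node type, root-to-node path) pairs, a
    # simple way to present both content and tree structure as a
    # sequence that a transformer-style model could attend over.
    tree = ast.parse(source)
    pairs = []
    def walk(node, path):
        label = type(node).__name__
        pairs.append((label, "/".join(path + [label])))
        for child in ast.iter_child_nodes(node):
            walk(child, path + [label])
    walk(tree, [])
    return pairs

pairs = node_structure_pairs("def add(a, b):\n    return a + b")
for label, path in pairs[:4]:
    print(label, path)
```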

Food: Makai


5/2: Duality and complexity in abstract interpretation and induction

Time: Thursday, May 2, 2019, 12 noon - 1pm
Location: Gates 415

Speaker: Oded Padon

Abstract: Finding inductive invariants is a core problem for formal verification. I will continue my talk from last quarter and discuss an ongoing attempt to attack this problem from a new angle. The talk will be accessible for those who did not attend the previous talk, and will focus on duality inspired invariant search algorithms. This is a work in progress, so discussion and comments will be much appreciated.
The new angle is rooted in the following three related perspectives:
1. Trying to characterize a measure of complexity or learnability for inductive invariants. That is, trying to distinguish between a simple proof by induction and a complex one (e.g., induction "width" or "depth" vs. "length"), with the hope that a simple proof is also easier to discover. Some inspiration here is taken from learning theory and analysis of Boolean functions, where for example some classes of Boolean functions are known to be learnable in a precise sense (e.g., Fourier-sparse, CDNF, etc.).
2. Exploring a duality between counterexample traces and inductive invariants, and attempting to develop a "primal-dual" abstract interpretation algorithm. Viewing a proof by induction as an incremental sequence of "proof steps", we'll see there is a way to define a duality between error traces and inductive invariants that makes them appear symmetric. This may be an interesting way to view existing invariant search algorithms such as IC3/PDR, and may suggest new search algorithms. Inspiration here is taken from model checking approaches such as IC3, and from the way DPLL/CDCL performs a "primal-dual" search for models and proofs.
3. User interaction in abstract interpretation. Most state-of-the-art automated verification tools do not provide useful information to their users when they fail, i.e., diverge or terminate without either a proof or a counterexample. The above directions may suggest a model for user interaction, where an automated proof search procedure can present useful information to a user, and also receive user guidance, without requiring the user to understand the inner workings of the tool.

Food: Andrew


5/9: Separating First Order Structures with Quantified Formulas

Time: Thursday, May 9, 2019, 12 noon - 1pm
Location: Gates 415

Speaker: Jason Koenig

Abstract: We define the problem of quantified, first order separability between structures as finding a (possibly) quantified first order formula which is satisfied for a set of positive structures and not satisfied for another set of negative structures. This problem arises naturally in the context of invariant inference, where for example separators of reachable and unsafe states are candidate invariants of the system. We show this problem is NP-complete, by reduction to and from SAT. This reduction leads to a practical algorithm which is successful in inferring most invariants from real protocols, including those that contain mixed universal and existential quantifiers which are not handled by existing techniques. We also give an analysis of failure modes of this algorithm and possible future research directions to mitigate these failures.
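
To make the separability problem concrete, here is a toy version in which structures are finite relational structures and a candidate separator is checked by direct evaluation (a stand-in for the SAT-based search described in the talk; real first-order structures and the formula search space are much richer):

```python
from itertools import product

def holds(struct, formula):
    # formula: (quantifier, arity, predicate over an assignment tuple
    # and the relation). struct: (domain, relation as a set of tuples).
    quant, arity, pred = formula
    domain, rel = struct
    assignments = product(domain, repeat=arity)
    if quant == "forall":
        return all(pred(a, rel) for a in assignments)
    return any(pred(a, rel) for a in assignments)

def separates(formula, positives, negatives):
    # A separator holds on every positive structure and on no negative one.
    return (all(holds(s, formula) for s in positives)
            and not any(holds(s, formula) for s in negatives))

# Toy example: "exists x. R(x, x)" separates structures that have a
# reflexive edge from structures that do not.
phi = ("exists", 1, lambda a, rel: (a[0], a[0]) in rel)
positives = [({0, 1}, {(0, 0)})]
negatives = [({0, 1}, {(0, 1)})]
print(separates(phi, positives, negatives))
```

In invariant inference, the positive structures would be reachable states and the negative ones unsafe states, so any separator is a candidate invariant.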

Food: Florian


5/16: Cascade --- A Just-in-Time Compiler For Verilog

Time: Thursday, May 16, 2019, 12 noon - 1pm
Location: Gates 415

Speaker: Eric Schkufza

Abstract: FPGAs can exceed the performance of general-purpose CPUs by several orders of magnitude and offer dramatically lower cost and time to market than ASICs. While the benefits are substantial, programming an FPGA can be an extremely slow process. Trivial programs can take several minutes to compile using a traditional compiler, and complex designs can take hours or longer. Cascade is a novel solution to this problem, the world's first just-in-time compiler for Verilog. Cascade executes code immediately in a software simulator, and performs compilation in the background. When compilation is finished, the code is moved into hardware, and from the user’s perspective it simply gets faster over time. Cascade's ability to move code back and forth between software and hardware also makes it the first platform to provide generic support for the execution of unsynthesizable Verilog from hardware. The effects are substantial. Cascade encourages more frequent compilation, reduces the time required for developers to produce working hardware designs, and transforms HDL development into something which closely resembles writing JavaScript or Python. It takes the first steps towards bridging the gap between programming software and programming hardware.
In this talk, I will present highlights from our recent ASPLOS paper and current work on using Cascade as a mechanism for supporting general purpose FPGA virtualization.

Food: Andrew


5/23: Verifying Context-Free API Protocols

Time: Thursday, May 23, 2019, 12 noon - 1pm
Location: Gates 415

Speaker: Kostas Ferles

Abstract: This paper addresses the problem of verifying the correct usage of API protocols that are expressible in a context-free language. While prior work on typestate analysis has focused on API protocols that are expressible using a finite state automaton, such techniques cannot be used for verifying the correct usage of APIs that require matching calls to a pair of methods, such as the acquire and release functions exposed by a re-entrant lock API. Given a program P using API A and a context-free grammar specification for A, our method checks whether or not P uses A correctly. Our method is based on the paradigm of counterexample-guided abstraction refinement and uses a novel context-free grammar (CFG) abstraction of the program that over-approximates the sequence of API calls that are feasible in P. Our approach essentially reduces the problem of checking correct API usage to an inclusion check between two languages and lazily refines the program's CFG abstraction using nested sequence interpolants.
We have implemented the proposed approach in a tool called CFPChecker and evaluate it in two ways: First, we compare our method against state-of-the-art safety checkers on challenging microbenchmarks by manually instrumenting the program so that the protocol is obeyed iff the instrumented version is safe. Second, we evaluate our approach on realistic usage scenarios of real-world Java APIs. Our evaluation shows that CFPChecker expands the scope of API protocols that can be automatically verified and that it is expressive enough to prove the correct usage of Java APIs in realistic usage scenarios extracted from real-world clients.
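
The re-entrant lock example above is the canonical case: correct usage is the context-free language of balanced acquire/release calls, which no finite automaton can capture. A sketch of a runtime membership check (a far simpler dynamic analog of the static, CEGAR-based verification in the talk):

```python
def obeys_lock_protocol(calls):
    # Balanced acquire/release is a Dyck language: recognizable with a
    # counter (a one-symbol pushdown stack), but not with any finite
    # automaton, since the nesting depth is unbounded.
    depth = 0
    for call in calls:
        if call == "acquire":
            depth += 1
        elif call == "release":
            depth -= 1
            if depth < 0:       # release without a matching acquire
                return False
    return depth == 0           # every acquire eventually released

print(obeys_lock_protocol(["acquire", "acquire", "release", "release"]))  # True
print(obeys_lock_protocol(["acquire", "release", "release"]))             # False
```

CFPChecker works statically instead: it over-approximates the set of call sequences the program can produce as a context-free grammar and checks inclusion in the protocol language.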

Food: Andres


5/30: Optimizing Deep Learning Computation with Automatic Generation of Graph Substitutions

Time: Thursday, May 30, 2019, 12 noon - 1pm
Location: Gates 415

Speaker: Zhihao Jia

Abstract: Current deep learning frameworks optimize the computation graph of a deep neural network (DNN) by using graph transformations manually designed by human experts. This approach misses many possible graph optimizations and is difficult to scale, as new DNN operators are introduced on a regular basis. We propose XFlow, the first DNN computation graph optimizer that automatically generates graph substitutions. XFlow takes as input a list of operator specifications and automatically generates candidate graph substitutions using the given operators as basic building blocks. All generated graph substitutions are formally verified against the operator specifications using an automated theorem prover. To optimize a given DNN computation graph, XFlow performs a cost-based search, applying the graph substitutions to find an optimized computation graph, which can then be used by existing DNN frameworks. Our evaluation shows that XFlow outperforms existing frameworks by up to 3x, while requiring significantly less human effort. For example, TensorFlow currently contains 30,000 lines of manual optimization rules, while the operator specifications needed by XFlow are only 800 lines of code.
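
A toy cost-based rewriting loop in the spirit of the search described above: each substitution replaces a subsequence of operators with a cheaper equivalent, applied greedily until no rule fires. The operator names, costs, and the single fusion rule are made up for illustration; XFlow generates and formally verifies such rules automatically rather than hand-coding them, and operates on general graphs rather than linear chains.

```python
# Hypothetical per-operator costs (e.g., milliseconds on some device).
COST = {"conv": 10.0, "batchnorm": 2.0, "conv+bn": 10.5, "relu": 1.0}

# Rule: a conv followed by a batchnorm can be fused into one operator,
# a classic graph substitution.
RULES = [(("conv", "batchnorm"), ("conv+bn",))]

def total_cost(graph):
    return sum(COST[op] for op in graph)

def optimize(graph):
    changed = True
    while changed:
        changed = False
        for lhs, rhs in RULES:
            for i in range(len(graph) - len(lhs) + 1):
                window = tuple(graph[i:i + len(lhs)])
                if window == lhs and total_cost(rhs) < total_cost(lhs):
                    graph = graph[:i] + list(rhs) + graph[i + len(lhs):]
                    changed = True
                    break
    return graph

g = ["conv", "batchnorm", "relu", "conv", "batchnorm"]
print(optimize(g), total_cost(optimize(g)))
```

XFlow's actual search is backtracking rather than greedy, since applying one substitution can enable or block others.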


6/6: Towards Programmable Intelligence

Time: Thursday, June 6, 2019, 12 noon - 1pm
Location: Gates 415

Speaker: Osbert Bastani

Abstract: Machine learning has revolutionized our ability to solve challenging software problems involving perceptual inputs such as images, videos, and natural language. However, research has focused on developing learned components such as deep neural networks in isolation; as a consequence, integrating learned components into software systems largely remains an ad hoc process. In this talk, I will first introduce SkyQuery, a programming language that users can use to query video feeds from a fleet of small drones. In a case study on traffic analysis, we show how the user can write queries to identify car parking patterns or monitor pedestrian behavior. Furthermore, I will describe a number of design challenges that have arisen that we are working on addressing, including safe learning-based control, uncertainty quantification, and interactive query synthesis.


6/13: DRAT-based Bit-Vector Proofs in CVC4

Time: Thursday, June 13, 2019, 12 noon - 1pm
Location: Gates 415

Speaker: Alex Ozdemir

Abstract: Many state-of-the-art Satisfiability Modulo Theories (SMT) solvers for the theory of fixed-size bit-vectors employ an approach called bit-blasting, where a given formula is translated into a Boolean satisfiability (SAT) problem and delegated to a SAT solver. Consequently, producing bit-vector proofs in an SMT solver requires incorporating SAT proofs into its proof infrastructure. In this paper, we describe three approaches for integrating DRAT proofs generated by an off-the-shelf SAT solver into the proof infrastructure of the SMT solver CVC4 and explore their strengths and weaknesses. We implemented all three approaches using CryptoMiniSat as the SAT back-end for CVC4's bit-blasting engine and evaluated their performance in terms of proof production and proof checking.