[ Software Research Lunch ]


The Stanford Software Research Lunch is a weekly event held on Thursdays where students and researchers present their latest work to their peers. Talks are open to anyone, but regular attendees are expected to give a presentation on their own work.

Mailing list: software-research-lunch@lists.stanford.edu (subscribe via mailman)

Calendar: ical

Format: The lunch is held every week during the fall, winter, and spring quarters. The first week of each quarter is an organizational lunch where people can sign up to give a talk. If you'd like to give a talk, please contact Matthew Sotoudeh or Anjiang Wei.

Past quarters: Fall 2023, Spring 2023, Winter 2023, Fall 2022, Winter 2021, Fall 2020, Winter 2020, Fall 2019, Spring 2019, Winter 2019, Fall 2018, Spring 2018, Winter 2018, Fall 2017, Spring 2017, Winter 2017, Fall 2016.

Upcoming quarters: Spring 2024.

Ordering Food: Suggestions for those ordering food for the lunch are available here.


4/4: Computation-Centric Networking

Time: Thursday, April 4, 2024, 12 noon - 1pm
Location: Gates 415

Speaker: Akshay Srivatsan

Abstract: We propose putting computation at the center of what networked computers and cloud services do for their users. We envision a shared representation of a computation: a deterministic procedure, run in an environment of well-specified dependencies. This suggests an end-to-end argument for serverless computing, shifting the service model from “renting CPUs by the second” to “providing the unambiguously correct result of a computation.” Accountability to these higher-level abstractions could permit agility and innovation on other axes.

Food:


4/11: Deegen: a compiler-compiler for high performance VMs at low engineering cost

Time: Thursday, April 11, 2024, 12 noon - 1pm
Location: Gates 415

Speaker: Haoran Xu

Abstract: Building a high-performance VM for a dynamic language has traditionally required a huge amount of time, money, and expertise. To reduce this high engineering cost, we present Deegen, a compiler-compiler that automatically generates a high-performance VM from a semantic description of the bytecodes. The generated VM has three execution tiers, similar to state-of-the-art VMs such as V8 and JSC: an optimized interpreter, a baseline JIT, and an optimizing JIT (work on the optimizing JIT is still in progress). This allows users to get a high-performance VM for their own language at an engineering cost similar to that of writing an interpreter. To demonstrate Deegen's real-world capability, we implemented LuaJIT Remake (LJR), a standard-compliant VM for Lua 5.1. Across a variety of benchmarks, we demonstrate that LJR's interpreter significantly outperforms LuaJIT's interpreter, and that LJR's baseline JIT generates high-quality code with negligible compilation cost.

Food:


4/18: Zero-Knowledge, Maximum Security: Hardening Blockchain with Formal Methods

Time: Thursday, April 18, 2024, 12 noon - 1pm
Location: Gates 415

Speaker: Yu Feng

Abstract: Zero-knowledge proofs are powerful cryptographic protocols for enhancing privacy and scalability in blockchains. However, ensuring the correctness and security of zero-knowledge proofs is a challenging task due to their complex nature. This talk aims to address this challenge by leveraging formal methods in a pipeline of increasing confidence, ranging from a domain-specific solver for detecting under-constrained circuits (PLDI'23) to formal verification of functional correctness using refinement types (Oakland'24). By applying formal methods with complementary strengths, we have been developing rigorous techniques to detect vulnerabilities, verify correctness, and enhance the resilience of zero-knowledge proof systems against attacks.

Food:


4/25: TBD (Anjiang Wei)

Time: Thursday, April 25, 2024, 12 noon - 1pm
Location: Gates 415

Speaker: Anjiang Wei

Food:


5/2: Composing distributed computations through task and kernel fusion

Time: Thursday, May 2, 2024, 12 noon - 1pm
Location: Gates 415

Speaker: Rohan Yadav

Abstract: We introduce Diffuse, a system that dynamically performs task and kernel fusion in distributed, task-based runtime systems. The key component of Diffuse is an intermediate representation of distributed computation that enables the analyses necessary for fusing distributed tasks to be performed in a scalable manner. We pair task fusion with a JIT compiler to fuse together the kernels within fused tasks. We show empirically that Diffuse's intermediate representation is general enough to serve as a target for two real-world, task-based libraries (cuNumeric and Legate Sparse), letting Diffuse find optimization opportunities across function and library boundaries. Diffuse accelerates unmodified applications developed by composing task-based libraries by 1.86x on average (geo-mean), and by 0.93x–10.7x on up to 128 GPUs. Diffuse also finds optimization opportunities missed by the original application developers, enabling high-level Python programs to match or exceed the performance of an explicitly parallel MPI library.

Food:


5/9: TBD (Alexander J. Root)

Time: Thursday, May 9, 2024, 12 noon - 1pm
Location: Gates 415

Speaker: Alexander J. Root

Food:


5/16: TBD (Sophie Andrews)

Time: Thursday, May 16, 2024, 12 noon - 1pm
Location: Gates 415

Speaker: Sophie Andrews

Food:


5/23: TBD (Charles Yuan, tentative)

Time: Thursday, May 23, 2024, 12 noon - 1pm
Location: Gates 415

Speaker: Charles Yuan (tentative)

Food:


6/6: TBD (David Broman)

Time: Thursday, June 6, 2024, 12 noon - 1pm
Location: Gates 415

Speaker: David Broman

Food: