\documentclass[7x10]{TimesAPriori_MIT}%%7x10 % TODO: % \usepackage[utf8]{inputenc} %% \usepackage{setspace} %% \doublespacing \usepackage{listings} \usepackage{verbatim} \usepackage{amssymb} \usepackage{lmodern} % better typewriter font for code %\usepackage{wrapfig} \usepackage{multirow} \usepackage{tcolorbox} \usepackage{color} %\usepackage{ifthen} \usepackage{upquote} \usepackage[all]{xy} \usepackage{url} \definecolor{lightgray}{gray}{1} \newcommand{\black}[1]{{\color{black} #1}} %\newcommand{\gray}[1]{{\color{lightgray} #1}} \newcommand{\gray}[1]{{\color{gray} #1}} \def\racketEd{0} \def\pythonEd{1} \def\edition{1} % material that is specific to the Racket edition of the book \newcommand{\racket}[1]{{\if\edition\racketEd{#1}\fi}} % would like a command for: \if\edition\racketEd\color{olive} % and : \fi\color{black} %\newcommand{\pythonColor}[0]{\color{purple}} \newcommand{\pythonColor}[0]{} % material that is specific to the Python edition of the book \newcommand{\python}[1]{{\if\edition\pythonEd\pythonColor #1\fi}} \makeatletter \newcommand{\captionabove}[2][]{% \vskip-\abovecaptionskip \vskip+\belowcaptionskip \ifx\@nnil#1\@nnil \caption{#2}% \else \caption[#1]{#2}% \fi \vskip+\abovecaptionskip \vskip-\belowcaptionskip } %% For multiple indices: %\usepackage{multind} moved this to the file TimesAPriori_MIT.cls. -Jeremy \makeindex{subject} %\makeindex{authors} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \if\edition\racketEd \lstset{% language=Lisp, basicstyle=\ttfamily\small, morekeywords={lambda,match,goto,if,else,then,struct,Integer,Boolean,Vector,Void,Any,while,begin,define,public,override,class}, deletekeywords={read,mapping,vector}, escapechar=|, columns=flexible, %moredelim=[is][\color{red}]{~}{~}, showstringspaces=false } \fi \if\edition\pythonEd \lstset{% language=Python, basicstyle=\ttfamily\small, morekeywords={match,case,bool,int,let,begin,if,else,closure}, deletekeywords={}, escapechar=|, columns=flexible, %moredelim=[is][\color{red}]{~}{~}, showstringspaces=false } \fi %%% Any shortcut own defined macros place here %% sample of author macro: \input{defs} \newtheorem{exercise}[theorem]{Exercise} \numberwithin{theorem}{chapter} \numberwithin{definition}{chapter} \numberwithin{equation}{chapter} % Adjusted settings \setlength{\columnsep}{4pt} %% \begingroup %% \setlength{\intextsep}{0pt}% %% \setlength{\columnsep}{0pt}% %% \begin{wrapfigure}{r}{0.5\textwidth} %% \centering\includegraphics[width=\linewidth]{example-image-a} %% \caption{Basic layout} %% \end{wrapfigure} %% \lipsum[1] %% \endgroup \newbox\oiintbox \setbox\oiintbox=\hbox{$\lower2pt\hbox{\huge$\displaystyle\circ$} \hskip-13pt\displaystyle\int\hskip-7pt\int_{S}\ $} \def\oiint{\copy\oiintbox} \def\boldnabla{\hbox{\boldmath$\displaystyle\nabla$}} %\usepackage{showframe} \def\ShowFrameLinethickness{0.125pt} \addbibresource{book.bib} \if\edition\pythonEd \addbibresource{python.bib} \fi \begin{document} \frontmatter %\HalfTitle{Essentials of Compilation \\ An Incremental Approach in \python{Python}\racket{Racket}} \HalfTitle{Essentials of Compilation} \halftitlepage \clearemptydoublepage \Title{Essentials of Compilation} \Booksubtitle{An Incremental Approach in \python{Python}\racket{Racket}} %\edition{First Edition} \BookAuthor{Jeremy G. Siek} \imprint{The MIT Press\\ Cambridge, Massachusetts\\ London, England} \begin{copyrightpage} \textcopyright\ 2023 Jeremy G. Siek \\[2ex] This work is subject to a Creative Commons CC-BY-ND-NC license. \\[2ex] Subject to such license, all rights are reserved. 
\\[2ex] \includegraphics{CCBY-logo} The MIT Press would like to thank the anonymous peer reviewers who provided comments on drafts of this book. The generous work of academic experts is essential for establishing the authority and quality of our publications. We acknowledge with gratitude the contributions of these otherwise uncredited readers. This book was set in Times LT Std Roman by the author. Printed and bound in the United States of America. {\if\edition\racketEd Library of Congress Cataloging-in-Publication Data\\ \ \\ Names: Siek, Jeremy, author. \\ Title: Essentials of compilation : an incremental approach in Racket / Jeremy G. Siek. \\ Description: Cambridge, Massachusetts : The MIT Press, [2023] | Includes bibliographical references and index. \\ Identifiers: LCCN 2022015399 (print) | LCCN 2022015400 (ebook) | ISBN 9780262047760 (hardcover) | ISBN 9780262373272 (epub) | ISBN 9780262373289 (pdf) \\ Subjects: LCSH: Racket (Computer program language) | Compilers (Computer programs) \\ Classification: LCC QA76.73.R33 S54 2023 (print) | LCC QA76.73.R33 (ebook) | DDC 005.13/3--dc23/eng/20220705 \\ LC record available at https://lccn.loc.gov/2022015399\\ LC ebook record available at https://lccn.loc.gov/2022015400\\ \ \\ \fi} 10 9 8 7 6 5 4 3 2 1 %% Jeremy G. Siek. Available for free viewing %% or personal downloading under the %% \href{https://creativecommons.org/licenses/by-nc-nd/2.0/uk/}{CC-BY-NC-ND} %% license. %% Copyright in this monograph has been licensed exclusively to The MIT %% Press, \url{http://mitpress.mit.edu}, which will be releasing the final %% version to the public in 2022. All inquiries regarding rights should %% be addressed to The MIT Press, Rights and Permissions Department. %% \textcopyright\ [YEAR] Massachusetts Institute of Technology %% All rights reserved. No part of this book may be reproduced in any %% form by any electronic or mechanical means (including photocopying, %% recording, or information storage and retrieval) without permission in %% writing from the publisher. %% This book was set in LaTeX by Jeremy G. Siek. Printed and bound in the %% United States of America. %% Library of Congress Cataloging-in-Publication Data is available. %% ISBN: %% 10\quad9\quad8\quad7\quad6\quad5\quad4\quad3\quad2\quad1 \end{copyrightpage} \dedication{This book is dedicated to Katie, my partner in everything, my children, who grew up during the writing of this book, and the programming language students at Indiana University, whose thoughtful questions made this a better book.} %% \begin{epigraphpage} %% \epigraph{First Epigraph line goes here}{Mention author name if any, %% \textit{Book Name if any}} %% \epigraph{Second Epigraph line goes here}{Mention author name if any} %% \end{epigraphpage} \tableofcontents %\listoffigures %\listoftables %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \chapter*{Preface} \addcontentsline{toc}{fmbm}{Preface} There is a magical moment when a programmer presses the \emph{run} button and the software begins to execute. Somehow a program written in a high-level language is running on a computer that is capable only of shuffling bits. Here we reveal the wizardry that makes that moment possible. Beginning with the groundbreaking work of Backus and colleagues in the 1950s, computer scientists developed techniques for constructing programs called \emph{compilers} that automatically translate high-level programs into machine code. 
We take you on a journey through constructing your own compiler for a small but powerful language. Along the way we explain the essential concepts, algorithms, and data structures that underlie compilers. We develop your understanding of how programs are mapped onto computer hardware, which is helpful in reasoning about properties at the junction of hardware and software, such as execution time, software errors, and security vulnerabilities. For those interested in pursuing compiler construction as a career, our goal is to provide a stepping-stone to advanced topics such as just-in-time compilation, program analysis, and program optimization. For those interested in designing and implementing programming languages, we connect language design choices to their impact on the compiler and the generated code. A compiler is typically organized as a sequence of stages that progressively translate a program to the code that runs on hardware. We take this approach to the extreme by partitioning our compiler into a large number of \emph{nanopasses}, each of which performs a single task. This enables the testing of each pass in isolation and focuses our attention, making the compiler far easier to understand. The most familiar approach to describing compilers is to dedicate each chapter to one pass. The problem with that approach is that it obfuscates how language features motivate design choices in a compiler. We instead take an \emph{incremental} approach in which we build a complete compiler in each chapter, starting with a small input language that includes only arithmetic and variables. We add new language features in subsequent chapters, extending the compiler as necessary. Our choice of language features is designed to elicit fundamental concepts and algorithms used in compilers. \begin{itemize} \item We begin with integer arithmetic and local variables in chapters~\ref{ch:trees-recur} and \ref{ch:Lvar}, where we introduce the fundamental tools of compiler construction: \emph{abstract syntax trees} and \emph{recursive functions}. {\if\edition\pythonEd\pythonColor \item In chapter~\ref{ch:parsing} we learn how to use the Lark parser framework to create a parser for the language of integer arithmetic and local variables. We learn about the parsing algorithms inside Lark, including Earley and LALR(1). % \fi} \item In chapter~\ref{ch:register-allocation-Lvar} we apply \emph{graph coloring} to assign variables to machine registers. \item Chapter~\ref{ch:Lif} adds conditional expressions, which motivates an elegant recursive algorithm for translating them into conditional \code{goto} statements. \item Chapter~\ref{ch:Lwhile} adds loops\racket{ and mutable variables}. This elicits the need for \emph{dataflow analysis} in the register allocator. \item Chapter~\ref{ch:Lvec} adds heap-allocated tuples, motivating \emph{garbage collection}. \item Chapter~\ref{ch:Lfun} adds functions as first-class values without lexical scoping, similar to functions in the C programming language~\citep{Kernighan:1988nx}. The reader learns about the procedure call stack and \emph{calling conventions} and how they interact with register allocation and garbage collection. The chapter also describes how to generate efficient tail calls. \item Chapter~\ref{ch:Llambda} adds anonymous functions with lexical scoping, that is, \emph{lambda} expressions. The reader learns about \emph{closure conversion}, in which lambdas are translated into a combination of functions and tuples. % Chapter about classes and objects? 
\item Chapter~\ref{ch:Ldyn} adds \emph{dynamic typing}. Prior to this point, the input languages are statically typed. The reader extends the statically typed language with an \code{Any} type that serves as a target for compiling the dynamically typed language.
%% {\if\edition\pythonEd\pythonColor
%% \item Chapter~\ref{ch:Lobject} adds support for \emph{objects} and
%% \emph{classes}.
%% \fi}
\item Chapter~\ref{ch:Lgrad} uses the \code{Any} type introduced in chapter~\ref{ch:Ldyn} to implement a \emph{gradually typed language} in which different regions of a program may be statically or dynamically typed. The reader implements runtime support for \emph{proxies} that allow values to safely move between regions.
\item Chapter~\ref{ch:Lpoly} adds \emph{generics} with autoboxing, leveraging the \code{Any} type and type casts developed in chapters \ref{ch:Ldyn} and \ref{ch:Lgrad}.
\end{itemize}
There are many language features that we do not include. Our choices balance the incidental complexity of a feature against the fundamental concepts that it exposes. For example, we include tuples and not records because although they both elicit the study of heap allocation and garbage collection, records come with more incidental complexity.
Since 2009, drafts of this book have served as the textbook for sixteen-week compiler courses for upper-level undergraduates and first-year graduate students at the University of Colorado and Indiana University.
%
Students come into the course having learned the basics of programming, data structures and algorithms, and discrete mathematics.
%
At the beginning of the course, students form groups of two to four people. The groups complete approximately one chapter every two weeks, starting with chapter~\ref{ch:Lvar} and including chapters according to the students' interests while respecting the dependencies between chapters shown in figure~\ref{fig:chapter-dependences}. Chapter~\ref{ch:Lfun} (functions) depends on chapter~\ref{ch:Lvec} (tuples) only in the implementation of efficient tail calls.
%
The last two weeks of the course involve a final project in which students design and implement a compiler extension of their choosing. The last few chapters can be used in support of these projects. Many chapters include a challenge problem that we assign to the graduate students.
For compiler courses at universities on the quarter system (about ten weeks in length), we recommend completing the course through chapter~\ref{ch:Lvec} or chapter~\ref{ch:Lfun} and providing some scaffolding code to the students for each compiler pass.
%
The course can be adapted to emphasize functional languages by skipping chapter~\ref{ch:Lwhile} (loops) and including chapter~\ref{ch:Llambda} (lambda). The course can be adapted to dynamically typed languages by including chapter~\ref{ch:Ldyn}.
%
%% \python{A course that emphasizes object-oriented languages would
%% include Chapter~\ref{ch:Lobject}.}
This book has been used in compiler courses at California Polytechnic State University, Portland State University, Rose-Hulman Institute of Technology, University of Freiburg, University of Massachusetts Lowell, and the University of Vermont.
\begin{figure}[tp] \begin{tcolorbox}[colback=white] {\if\edition\racketEd \begin{tikzpicture}[baseline=(current bounding box.center)] \node (C1) at (0,1.5) {\small Ch.~\ref{ch:trees-recur} Preliminaries}; \node (C2) at (4,1.5) {\small Ch.~\ref{ch:Lvar} Variables}; \node (C3) at (8,1.5) {\small Ch.~\ref{ch:register-allocation-Lvar} Registers}; \node (C4) at (0,0) {\small Ch.~\ref{ch:Lif} Conditionals}; \node (C5) at (4,0) {\small Ch.~\ref{ch:Lvec} Tuples}; \node (C6) at (8,0) {\small Ch.~\ref{ch:Lfun} Functions}; \node (C9) at (0,-1.5) {\small Ch.~\ref{ch:Lwhile} Loops}; \node (C8) at (4,-1.5) {\small Ch.~\ref{ch:Ldyn} Dynamic}; \node (C7) at (8,-1.5) {\small Ch.~\ref{ch:Llambda} Lambda}; \node (C10) at (4,-3) {\small Ch.~\ref{ch:Lgrad} Gradual Typing}; \node (C11) at (8,-3) {\small Ch.~\ref{ch:Lpoly} Generics}; \path[->] (C1) edge [above] node {} (C2); \path[->] (C2) edge [above] node {} (C3); \path[->] (C3) edge [above] node {} (C4); \path[->] (C4) edge [above] node {} (C5); \path[->,style=dotted] (C5) edge [above] node {} (C6); \path[->] (C5) edge [above] node {} (C7); \path[->] (C6) edge [above] node {} (C7); \path[->] (C4) edge [above] node {} (C8); \path[->] (C4) edge [above] node {} (C9); \path[->] (C7) edge [above] node {} (C10); \path[->] (C8) edge [above] node {} (C10); \path[->] (C10) edge [above] node {} (C11); \end{tikzpicture} \fi} {\if\edition\pythonEd\pythonColor \begin{tikzpicture}[baseline=(current bounding box.center)] \node (Prelim) at (0,1.5) {\small Ch.~\ref{ch:trees-recur} Preliminaries}; \node (Var) at (4,1.5) {\small Ch.~\ref{ch:Lvar} Variables}; \node (Parse) at (8,1.5) {\small Ch.~\ref{ch:parsing} Parsing}; \node (Reg) at (0,0) {\small Ch.~\ref{ch:register-allocation-Lvar} Registers}; \node (Cond) at (4,0) {\small Ch.~\ref{ch:Lif} Conditionals}; \node (Loop) at (8,0) {\small Ch.~\ref{ch:Lwhile} Loops}; \node (Fun) at (0,-1.5) {\small Ch.~\ref{ch:Lfun} Functions}; \node (Tuple) at (4,-1.5) {\small Ch.~\ref{ch:Lvec} Tuples}; \node (Dyn) at (8,-1.5) {\small Ch.~\ref{ch:Ldyn} Dynamic}; % \node (CO) at (0,-3) {\small Ch.~\ref{ch:Lobject} Objects}; \node (Lam) at (0,-3) {\small Ch.~\ref{ch:Llambda} Lambda}; \node (Gradual) at (4,-3) {\small Ch.~\ref{ch:Lgrad} Gradual Typing}; \node (Generic) at (8,-3) {\small Ch.~\ref{ch:Lpoly} Generics}; \path[->] (Prelim) edge [above] node {} (Var); \path[->] (Var) edge [above] node {} (Reg); \path[->] (Var) edge [above] node {} (Parse); \path[->] (Reg) edge [above] node {} (Cond); \path[->] (Cond) edge [above] node {} (Tuple); \path[->,style=dotted] (Tuple) edge [above] node {} (Fun); \path[->] (Cond) edge [above] node {} (Fun); \path[->] (Tuple) edge [above] node {} (Lam); \path[->] (Fun) edge [above] node {} (Lam); \path[->] (Cond) edge [above] node {} (Dyn); \path[->] (Cond) edge [above] node {} (Loop); \path[->] (Lam) edge [above] node {} (Gradual); \path[->] (Dyn) edge [above] node {} (Gradual); % \path[->] (Dyn) edge [above] node {} (CO); \path[->] (Gradual) edge [above] node {} (Generic); \end{tikzpicture} \fi} \end{tcolorbox} \caption{Diagram of chapter dependencies.} \label{fig:chapter-dependences} \end{figure} \racket{We use the \href{https://racket-lang.org/}{Racket} language both for the implementation of the compiler and for the input language, so the reader should be proficient with Racket or Scheme. 
There are many excellent resources for learning Scheme and Racket~\citep{Dybvig:1987aa,Abelson:1996uq,Friedman:1996aa,Felleisen:2001aa,Felleisen:2013aa,Flatt:2014aa}.}
%
\python{This edition of the book uses \href{https://www.python.org/}{Python} both for the implementation of the compiler and for the input language, so the reader should be proficient with Python. There are many excellent resources for learning Python~\citep{Lutz:2013vp,Barry:2016vj,Sweigart:2019vn,Matthes:2019vs}.}%
%
The support code for this book is in the GitHub repository at the following location:
\begin{center}\small
\texttt{https://github.com/IUCompilerCourse/}
\end{center}
The compiler targets x86 assembly language~\citep{Intel:2015aa}, so it is helpful but not necessary for the reader to have taken a computer systems course~\citep{Bryant:2010aa}. We introduce the parts of x86-64 assembly language that are needed in the compiler.
%
We follow the System V calling conventions~\citep{Bryant:2005aa,Matz:2013aa}, so the assembly code that we generate works with the runtime system (written in C) when it is compiled using the GNU C compiler (\code{gcc}) on Linux and macOS operating systems on Intel hardware.
%
On the Windows operating system, \code{gcc} uses the Microsoft x64 calling convention~\citep{Microsoft:2018aa,Microsoft:2020aa}. So the assembly code that we generate does \emph{not} work with the runtime system on Windows. One workaround is to use a virtual machine with Linux as the guest operating system.

\section*{Acknowledgments}

The tradition of compiler construction at Indiana University goes back to research and courses on programming languages by Daniel Friedman in the 1970s and 1980s. One of his students, Kent Dybvig, implemented Chez Scheme~\citep{Dybvig:2006aa}, an efficient, production-quality compiler for Scheme. Throughout the 1990s and 2000s, Dybvig taught the compiler course and continued the development of Chez Scheme.
%
The compiler course evolved to incorporate novel pedagogical ideas while also including elements of real-world compilers. One of Friedman's ideas was to split the compiler into many small passes. Another idea, called ``the game,'' was to test the code generated by each pass using interpreters. Dybvig, with help from his students Dipanwita Sarkar and Andrew Keep, developed infrastructure to support this approach and evolved the course to use even smaller nanopasses~\citep{Sarkar:2004fk,Keep:2012aa}. Many of the compiler design decisions in this book are inspired by the assignment descriptions of \citet{Dybvig:2010aa}. In the mid-2000s, a student of Dybvig named Abdulaziz Ghuloum observed that the front-to-back organization of the course made it difficult for students to understand the rationale for the compiler design. Ghuloum proposed the incremental approach~\citep{Ghuloum:2006bh} on which this book is based.
I thank the many students who served as teaching assistants for the compiler course at IU, including Carl Factora, Ryan Scott, Cameron Swords, and Chris Wailes. I thank Andre Kuhlenschmidt for work on the garbage collector and x86 interpreter, Michael Vollmer for work on efficient tail calls, and Michael Vitousek for help with the first offering of the incremental compiler course at IU. I thank professors Bor-Yuh Chang, John Clements, Jay McCarthy, Joseph Near, Ryan Newton, Nate Nystrom, Peter Thiemann, Andrew Tolmach, and Michael Wollowski for teaching courses based on drafts of this book and for their feedback.
I thank the National Science Foundation for the grants that helped to support this work: Grant Numbers 1518844, 1763922, and 1814460. I thank Ronald Garcia for helping me survive Dybvig's compiler course in the early 2000s and especially for finding the bug that sent our garbage collector on a wild goose chase! \mbox{}\\ \noindent Jeremy G. Siek \\ Bloomington, Indiana \mainmatter %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \chapter{Preliminaries} \label{ch:trees-recur} \setcounter{footnote}{0} In this chapter we review the basic tools needed to implement a compiler. Programs are typically input by a programmer as text, that is, a sequence of characters. The program-as-text representation is called \emph{concrete syntax}. We use concrete syntax to concisely write down and talk about programs. Inside the compiler, we use \emph{abstract syntax trees} (ASTs) to represent programs in a way that efficiently supports the operations that the compiler needs to perform.\index{subject}{concrete syntax}\index{subject}{abstract syntax}\index{subject}{abstract syntax tree}\index{subject}{AST}\index{subject}{program} The process of translating concrete syntax to abstract syntax is called \emph{parsing}\index{subject}{parsing}\python{\ and is studied in chapter~\ref{ch:parsing}}. \racket{This book does not cover the theory and implementation of parsing. We refer the readers interested in parsing to the thorough treatment of parsing by \citet{Aho:2006wb}.}% % \racket{A parser is provided in the support code for translating from concrete to abstract syntax.}% % \python{For now we use Python's \code{ast} module to translate from concrete to abstract syntax.} ASTs can be represented inside the compiler in many different ways, depending on the programming language used to write the compiler. % \racket{We use Racket's \href{https://docs.racket-lang.org/guide/define-struct.html}{\code{struct}} feature to represent ASTs (section~\ref{sec:ast}).} % \python{We use Python classes and objects to represent ASTs, especially the classes defined in the standard \code{ast} module for the Python source language.} % We use grammars to define the abstract syntax of programming languages (section~\ref{sec:grammar}) and pattern matching to inspect individual nodes in an AST (section~\ref{sec:pattern-matching}). We use recursive functions to construct and deconstruct ASTs (section~\ref{sec:recursion}). This chapter provides a brief introduction to these components. \racket{\index{subject}{struct}} \python{\index{subject}{class}\index{subject}{object}} \section{Abstract Syntax Trees} \label{sec:ast} Compilers use abstract syntax trees to represent programs because they often need to ask questions such as, for a given part of a program, what kind of language feature is it? What are its subparts? Consider the program on the left and the diagram of its AST on the right~\eqref{eq:arith-prog}. This program is an addition operation that has two subparts, a \racket{read}\python{input} operation and a negation. The negation has another subpart, the integer constant \code{8}. By using a tree to represent the program, we can easily follow the links to go from one part of a program to its subparts. 
\begin{center} \begin{minipage}{0.4\textwidth} {\if\edition\racketEd \begin{lstlisting} (+ (read) (- 8)) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} input_int() + -8 \end{lstlisting} \fi} \end{minipage} \begin{minipage}{0.4\textwidth} \begin{equation} \begin{tikzpicture} \node[draw] (plus) at (0 , 0) {\key{+}}; \node[draw] (read) at (-1, -1) {\racket{\footnotesize\key{read}}\python{\key{input\_int()}}}; \node[draw] (minus) at (1 , -1) {$\key{-}$}; \node[draw] (8) at (1 , -2) {\key{8}}; \draw[->] (plus) to (read); \draw[->] (plus) to (minus); \draw[->] (minus) to (8); \end{tikzpicture} \label{eq:arith-prog} \end{equation} \end{minipage} \end{center} We use the standard terminology for trees to describe ASTs: each rectangle above is called a \emph{node}. The arrows connect a node to its \emph{children}, which are also nodes. The top-most node is the \emph{root}. Every node except for the root has a \emph{parent} (the node of which it is the child). If a node has no children, it is a \emph{leaf} node; otherwise it is an \emph{internal} node. \index{subject}{node} \index{subject}{children} \index{subject}{root} \index{subject}{parent} \index{subject}{leaf} \index{subject}{internal node} %% Recall that an \emph{symbolic expression} (S-expression) is either %% \begin{enumerate} %% \item an atom, or %% \item a pair of two S-expressions, written $(e_1 \key{.} e_2)$, %% where $e_1$ and $e_2$ are each an S-expression. %% \end{enumerate} %% An \emph{atom} can be a symbol, such as \code{`hello}, a number, the %% null value \code{'()}, etc. We can create an S-expression in Racket %% simply by writing a backquote (called a quasi-quote in Racket) %% followed by the textual representation of the S-expression. It is %% quite common to use S-expressions to represent a list, such as $a, b %% ,c$ in the following way: %% \begin{lstlisting} %% `(a . (b . (c . ()))) %% \end{lstlisting} %% Each element of the list is in the first slot of a pair, and the %% second slot is either the rest of the list or the null value, to mark %% the end of the list. Such lists are so common that Racket provides %% special notation for them that removes the need for the periods %% and so many parenthesis: %% \begin{lstlisting} %% `(a b c) %% \end{lstlisting} %% The following expression creates an S-expression that represents AST %% \eqref{eq:arith-prog}. %% \begin{lstlisting} %% `(+ (read) (- 8)) %% \end{lstlisting} %% When using S-expressions to represent ASTs, the convention is to %% represent each AST node as a list and to put the operation symbol at %% the front of the list. The rest of the list contains the children. So %% in the above case, the root AST node has operation \code{`+} and its %% two children are \code{`(read)} and \code{`(- 8)}, just as in the %% diagram \eqref{eq:arith-prog}. %% To build larger S-expressions one often needs to splice together %% several smaller S-expressions. Racket provides the comma operator to %% splice an S-expression into a larger one. For example, instead of %% creating the S-expression for AST \eqref{eq:arith-prog} all at once, %% we could have first created an S-expression for AST %% \eqref{eq:arith-neg8} and then spliced that into the addition %% S-expression. %% \begin{lstlisting} %% (define ast1.4 `(- 8)) %% (define ast1_1 `(+ (read) ,ast1.4)) %% \end{lstlisting} %% In general, the Racket expression that follows the comma (splice) %% can be any expression that produces an S-expression. 
{\if\edition\racketEd We define a Racket \code{struct} for each kind of node. For this chapter we require just two kinds of nodes: one for integer constants (aka literals\index{subject}{literals}) and one for primitive operations. The following is the \code{struct} definition for integer constants.\footnote{All the AST structures are defined in the file \code{utilities.rkt} in the support code.} \begin{lstlisting} (struct Int (value)) \end{lstlisting} An integer node contains just one thing: the integer value. We establish the convention that \code{struct} names, such as \code{Int}, are capitalized. To create an AST node for the integer $8$, we write \INT{8}. \begin{lstlisting} (define eight (Int 8)) \end{lstlisting} We say that the value created by \INT{8} is an \emph{instance} of the \code{Int} structure. The following is the \code{struct} definition for primitive operations. \begin{lstlisting} (struct Prim (op args)) \end{lstlisting} A primitive operation node includes an operator symbol \code{op} and a list of child arguments called \code{args}. For example, to create an AST that negates the number $8$, we write the following. \begin{lstlisting} (define neg-eight (Prim '- (list eight))) \end{lstlisting} Primitive operations may have zero or more children. The \code{read} operator has zero: \begin{lstlisting} (define rd (Prim 'read '())) \end{lstlisting} The addition operator has two children: \begin{lstlisting} (define ast1_1 (Prim '+ (list rd neg-eight))) \end{lstlisting} We have made a design choice regarding the \code{Prim} structure. Instead of using one structure for many different operations (\code{read}, \code{+}, and \code{-}), we could have instead defined a structure for each operation, as follows: \begin{lstlisting} (struct Read ()) (struct Add (left right)) (struct Neg (value)) \end{lstlisting} The reason that we choose to use just one structure is that many parts of the compiler can use the same code for the different primitive operators, so we might as well just write that code once by using a single structure. % \fi} {\if\edition\pythonEd\pythonColor We use a Python \code{class} for each kind of node. The following is the class definition for constants (aka literals\index{subject}{literals}) from the Python \code{ast} module. \begin{lstlisting} class Constant: def __init__(self, value): self.value = value \end{lstlisting} An integer constant node includes just one thing: the integer value. To create an AST node for the integer $8$, we write \INT{8}. \begin{lstlisting} eight = Constant(8) \end{lstlisting} We say that the value created by \INT{8} is an \emph{instance} of the \code{Constant} class. The following is the class definition for unary operators. \begin{lstlisting} class UnaryOp: def __init__(self, op, operand): self.op = op self.operand = operand \end{lstlisting} The specific operation is specified by the \code{op} parameter. For example, the class \code{USub} is for unary subtraction. (More unary operators are introduced in later chapters.) To create an AST that negates the number $8$, we write the following. \begin{lstlisting} neg_eight = UnaryOp(USub(), eight) \end{lstlisting} The call to the \code{input\_int} function is represented by the \code{Call} and \code{Name} classes. 
\begin{lstlisting} class Call: def __init__(self, func, args): self.func = func self.args = args class Name: def __init__(self, id): self.id = id \end{lstlisting} To create an AST node that calls \code{input\_int}, we write \begin{lstlisting} read = Call(Name('input_int'), []) \end{lstlisting} Finally, to represent the addition in \eqref{eq:arith-prog}, we use the \code{BinOp} class for binary operators. \begin{lstlisting} class BinOp: def __init__(self, left, op, right): self.op = op self.left = left self.right = right \end{lstlisting} Similar to \code{UnaryOp}, the specific operation is specified by the \code{op} parameter, which for now is just an instance of the \code{Add} class. So to create the AST node that adds negative eight to some user input, we write the following. \begin{lstlisting} ast1_1 = BinOp(read, Add(), neg_eight) \end{lstlisting} \fi} To compile a program such as \eqref{eq:arith-prog}, we need to know that the operation associated with the root node is addition and we need to be able to access its two children. \racket{Racket}\python{Python} provides pattern matching to support these kinds of queries, as we see in section~\ref{sec:pattern-matching}. We often write down the concrete syntax of a program even when we actually have in mind the AST, because the concrete syntax is more concise. We recommend that you always think of programs as abstract syntax trees. \section{Grammars} \label{sec:grammar} \index{subject}{integer} %\index{subject}{constant} A programming language can be thought of as a \emph{set} of programs. The set is infinite (that is, one can always create larger programs), so one cannot simply describe a language by listing all the programs in the language. Instead we write down a set of rules, a \emph{context-free grammar}, for building programs. Grammars are often used to define the concrete syntax of a language, but they can also be used to describe the abstract syntax. We write our rules in a variant of Backus-Naur form (BNF)~\citep{Backus:1960aa,Knuth:1964aa}. \index{subject}{Backus-Naur form}\index{subject}{BNF} As an example, we describe a small language, named \LangInt{}, that consists of integers and arithmetic operations.\index{subject}{grammar} \index{subject}{context-free grammar} The first grammar rule for the abstract syntax of \LangInt{} says that an instance of the \racket{\code{Int} structure}\python{\code{Constant} class} is an expression: \begin{equation} \Exp ::= \INT{\Int} \label{eq:arith-int} \end{equation} % Each rule has a left-hand side and a right-hand side. If you have an AST node that matches the right-hand side, then you can categorize it according to the left-hand side. % Symbols in typewriter font, such as \racket{\code{Int}}\python{\code{Constant}}, are \emph{terminal} symbols and must literally appear in the program for the rule to be applicable.\index{subject}{terminal} % Our grammars do not mention \emph{white space}, that is, delimiter characters like spaces, tabs, and new lines. White space may be inserted between symbols for disambiguation and to improve readability. \index{subject}{white space} % A name such as $\Exp$ that is defined by the grammar rules is a \emph{nonterminal}. \index{subject}{nonterminal} % The name $\Int$ is also a nonterminal, but instead of defining it with a grammar rule, we define it with the following explanation. 
An $\Int$ is a sequence of decimals ($0$ to $9$), possibly starting with $-$ (for negative integers), such that the sequence of decimals % \racket{represents an integer in the range $-2^{62}$ to $2^{62}-1$. This enables the representation of integers using 63 bits, which simplifies several aspects of compilation. % Thus, these integers correspond to the Racket \texttt{fixnum} datatype on a 64-bit machine.} % \python{represents an integer in the range $-2^{63}$ to $2^{63}-1$. This enables the representation of integers using 64 bits, which simplifies several aspects of compilation. In contrast, integers in Python have unlimited precision, but the techniques needed to handle unlimited precision fall outside the scope of this book.} The second grammar rule is the \READOP{} operation, which receives an input integer from the user of the program. \begin{equation} \Exp ::= \READ{} \label{eq:arith-read} \end{equation} The third rule categorizes the negation of an $\Exp$ node as an $\Exp$. \begin{equation} \Exp ::= \NEG{\Exp} \label{eq:arith-neg} \end{equation} We can apply these rules to categorize the ASTs that are in the \LangInt{} language. For example, by rule \eqref{eq:arith-int}, \INT{8} is an $\Exp$, and then by rule \eqref{eq:arith-neg} the following AST is an $\Exp$. \begin{center} \begin{minipage}{0.5\textwidth} \NEG{\INT{\code{8}}} \end{minipage} \begin{minipage}{0.25\textwidth} \begin{equation} \begin{tikzpicture} \node[draw, circle] (minus) at (0, 0) {$\text{--}$}; \node[draw, circle] (8) at (0, -1.2) {$8$}; \draw[->] (minus) to (8); \end{tikzpicture} \label{eq:arith-neg8} \end{equation} \end{minipage} \end{center} The next two grammar rules are for addition and subtraction expressions: \begin{align} \Exp &::= \ADD{\Exp}{\Exp} \label{eq:arith-add}\\ \Exp &::= \SUB{\Exp}{\Exp} \label{eq:arith-sub} \end{align} We can now justify that the AST \eqref{eq:arith-prog} is an $\Exp$ in \LangInt{}. We know that \READ{} is an $\Exp$ by rule \eqref{eq:arith-read}, and we have already categorized \NEG{\INT{\code{8}}} as an $\Exp$, so we apply rule \eqref{eq:arith-add} to show that \[ \ADD{\READ{}}{\NEG{\INT{\code{8}}}} \] is an $\Exp$ in the \LangInt{} language. If you have an AST for which these rules do not apply, then the AST is not in \LangInt{}. For example, the program \racket{\code{(* (read) 8)}} \python{\code{input\_int() * 8}} is not in \LangInt{} because there is no rule for the \key{*} operator. Whenever we define a language with a grammar, the language includes only those programs that are justified by the grammar rules. {\if\edition\pythonEd\pythonColor The language \LangInt{} includes a second nonterminal $\Stmt$ for statements. There is a statement for printing the value of an expression \[ \Stmt{} ::= \PRINT{\Exp} \] and a statement that evaluates an expression but ignores the result. \[ \Stmt{} ::= \EXPR{\Exp} \] \fi} {\if\edition\racketEd The last grammar rule for \LangInt{} states that there is a \code{Program} node to mark the top of the whole program: \[ \LangInt{} ::= \PROGRAM{\code{\textquotesingle()}}{\Exp} \] The \code{Program} structure is defined as follows: \begin{lstlisting} (struct Program (info body)) \end{lstlisting} where \code{body} is an expression. In further chapters, the \code{info} part is used to store auxiliary information, but for now it is just the empty list. 
\fi} {\if\edition\pythonEd\pythonColor The last grammar rule for \LangInt{} states that there is a \code{Module} node to mark the top of the whole program: \[ \LangInt{} ::= \PROGRAM{}{\Stmt^{*}} \] The asterisk $*$ indicates a list of the preceding grammar item, in this case a list of statements. % The \code{Module} class is defined as follows: \begin{lstlisting} class Module: def __init__(self, body): self.body = body \end{lstlisting} where \code{body} is a list of statements. \fi} It is common to have many grammar rules with the same left-hand side but different right-hand sides, such as the rules for $\Exp$ in the grammar of \LangInt{}. As shorthand, a vertical bar can be used to combine several right-hand sides into a single rule. The concrete syntax for \LangInt{} is shown in figure~\ref{fig:r0-concrete-syntax} and the abstract syntax for \LangInt{} is shown in figure~\ref{fig:r0-syntax}.% % \racket{The \code{read-program} function provided in \code{utilities.rkt} of the support code reads a program from a file (the sequence of characters in the concrete syntax of Racket) and parses it into an abstract syntax tree. Refer to the description of \code{read-program} in appendix~\ref{appendix:utilities} for more details.} % \python{The \code{parse} function in Python's \code{ast} module converts the concrete syntax (represented as a string) into an abstract syntax tree.} \newcommand{\LintGrammarRacket}{ \begin{array}{rcl} \Type &::=& \key{Integer} \\ \Exp{} &::=& \Int{} \MID \CREAD \MID \CNEG{\Exp} \MID \CADD{\Exp}{\Exp} \MID \CSUB{\Exp}{\Exp} \end{array} } \newcommand{\LintASTRacket}{ \begin{array}{rcl} \Type &::=& \key{Integer} \\ \Exp{} &::=& \INT{\Int} \MID \READ{} \\ &\MID& \NEG{\Exp} \MID \ADD{\Exp}{\Exp} \MID \SUB{\Exp}{\Exp} \end{array} } \newcommand{\LintGrammarPython}{ \begin{array}{rcl} \Exp &::=& \Int \MID \key{input\_int}\LP\RP \MID \key{-}\;\Exp \MID \Exp \; \key{+} \; \Exp \MID \Exp \; \key{-} \; \Exp \MID \LP\Exp\RP \\ \Stmt &::=& \key{print}\LP \Exp \RP \MID \Exp \end{array} } \newcommand{\LintASTPython}{ \begin{array}{rcl} \Exp{} &::=& \INT{\Int} \MID \READ{} \\ &\MID& \UNIOP{\key{USub()}}{\Exp} \MID \BINOP{\Exp}{\key{Add()}}{\Exp}\\ &\MID& \BINOP{\Exp}{\key{Sub()}}{\Exp}\\ \Stmt{} &::=& \PRINT{\Exp} \MID \EXPR{\Exp} \end{array} } \begin{figure}[tp] \begin{tcolorbox}[colback=white] {\if\edition\racketEd \[ \begin{array}{l} \LintGrammarRacket \\ \begin{array}{rcl} \LangInt{} &::=& \Exp \end{array} \end{array} \] \fi} {\if\edition\pythonEd\pythonColor \[ \begin{array}{l} \LintGrammarPython \\ \begin{array}{rcl} \LangInt{} &::=& \Stmt^{*} \end{array} \end{array} \] \fi} \end{tcolorbox} \caption{The concrete syntax of \LangInt{}.} \label{fig:r0-concrete-syntax} \end{figure} \begin{figure}[tp] \begin{tcolorbox}[colback=white] {\if\edition\racketEd \[ \begin{array}{l} \LintASTRacket{} \\ \begin{array}{rcl} \LangInt{} &::=& \PROGRAM{\code{'()}}{\Exp} \end{array} \end{array} \] \fi} {\if\edition\pythonEd\pythonColor \[ \begin{array}{l} \LintASTPython\\ \begin{array}{rcl} \LangInt{} &::=& \PROGRAM{}{\Stmt^{*}} \end{array} \end{array} \] \fi} \end{tcolorbox} \python{ \index{subject}{Constant@\texttt{Constant}} \index{subject}{UnaryOp@\texttt{UnaryOp}} \index{subject}{USub@\texttt{USub}} \index{subject}{inputint@\texttt{input\_int}} \index{subject}{Call@\texttt{Call}} \index{subject}{Name@\texttt{Name}} \index{subject}{BinOp@\texttt{BinOp}} \index{subject}{Add@\texttt{Add}} \index{subject}{Sub@\texttt{Sub}} \index{subject}{print@\texttt{print}} \index{subject}{Expr@\texttt{Expr}} 
\index{subject}{Module@\texttt{Module}} } \caption{The abstract syntax of \LangInt{}.} \label{fig:r0-syntax} \end{figure} \section{Pattern Matching} \label{sec:pattern-matching} As mentioned in section~\ref{sec:ast}, compilers often need to access the parts of an AST node. \racket{Racket}\python{As of version 3.10, Python} provides the \texttt{match} feature to access the parts of a value. Consider the following example: \index{subject}{match} \index{subject}{pattern matching} \begin{center} \begin{minipage}{1.0\textwidth} {\if\edition\racketEd \begin{lstlisting} (match ast1_1 [(Prim op (list child1 child2)) (print op)]) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} match ast1_1: case BinOp(child1, op, child2): print(op) \end{lstlisting} \fi} \end{minipage} \end{center} {\if\edition\racketEd % In this example, the \texttt{match} form checks whether the AST \eqref{eq:arith-prog} is a binary operator and binds its parts to the three pattern variables \texttt{op}, \texttt{child1}, and \texttt{child2}. In general, a match clause consists of a \emph{pattern} and a \emph{body}.\index{subject}{pattern} Patterns are recursively defined to be a pattern variable, a structure name followed by a pattern for each of the structure's arguments, or an S-expression (a symbol, list, etc.). (See chapter 12 of The Racket Guide\footnote{See \url{https://docs.racket-lang.org/guide/match.html}.} and chapter 9 of The Racket Reference\footnote{See \url{https://docs.racket-lang.org/reference/match.html}.} for complete descriptions of \code{match}.) % The body of a match clause may contain arbitrary Racket code. The pattern variables can be used in the scope of the body, such as \code{op} in \code{(print op)}. % \fi} % % {\if\edition\pythonEd\pythonColor % In the example above, the \texttt{match} form checks whether the AST \eqref{eq:arith-prog} is a binary operator and binds its parts to the three pattern variables (\texttt{child1}, \texttt{op}, and \texttt{child2}). In general, each \code{case} consists of a \emph{pattern} and a \emph{body}.\index{subject}{pattern} Patterns are recursively defined to be one of the following: a pattern variable, a class name followed by a pattern for each of its constructor's arguments, or other literals\index{subject}{literals} such as strings or lists. % The body of each \code{case} may contain arbitrary Python code. The pattern variables can be used in the body, such as \code{op} in \code{print(op)}. % \fi} A \code{match} form may contain several clauses, as in the following function \code{leaf} that recognizes when an \LangInt{} node is a leaf in the AST. The \code{match} proceeds through the clauses in order, checking whether the pattern can match the input AST. The body of the first clause that matches is executed. 
The output of \code{leaf} for several ASTs is shown on the right side of the following: \begin{center} \begin{minipage}{0.6\textwidth} {\if\edition\racketEd \begin{lstlisting} (define (leaf arith) (match arith [(Int n) #t] [(Prim 'read '()) #t] [(Prim '- (list e1)) #f] [(Prim '+ (list e1 e2)) #f] [(Prim '- (list e1 e2)) #f])) (leaf (Prim 'read '())) (leaf (Prim '- (list (Int 8)))) (leaf (Int 8)) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} def leaf(arith): match arith: case Constant(n): return True case Call(Name('input_int'), []): return True case UnaryOp(USub(), e1): return False case BinOp(e1, Add(), e2): return False case BinOp(e1, Sub(), e2): return False print(leaf(Call(Name('input_int'), []))) print(leaf(UnaryOp(USub(), eight))) print(leaf(Constant(8))) \end{lstlisting} \fi} \end{minipage} \vrule \begin{minipage}{0.25\textwidth} {\if\edition\racketEd \begin{lstlisting} #t #f #t \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} True False True \end{lstlisting} \fi} \end{minipage} \index{subject}{True@\TRUE{}} \index{subject}{False@\FALSE{}} \end{center} When constructing a \code{match} expression, we refer to the grammar definition to identify which nonterminal we are expecting to match against, and then we make sure that (1) we have one \racket{clause}\python{case} for each alternative of that nonterminal and (2) the pattern in each \racket{clause}\python{case} corresponds to the corresponding right-hand side of a grammar rule. For the \code{match} in the \code{leaf} function, we refer to the grammar for \LangInt{} shown in figure~\ref{fig:r0-syntax}. The $\Exp$ nonterminal has five alternatives, so the \code{match} has five \racket{clauses}\python{cases}. The pattern in each \racket{clause}\python{case} corresponds to the right-hand side of a grammar rule. For example, the pattern \ADDP{\code{e1}}{\code{e2}} corresponds to the right-hand side $\ADD{\Exp}{\Exp}$. When translating from grammars to patterns, replace nonterminals such as $\Exp$ with pattern variables of your choice (such as \code{e1} and \code{e2}). \section{Recursive Functions} \label{sec:recursion} \index{subject}{recursive function} Programs are inherently recursive. For example, an expression is often made of smaller expressions. Thus, the natural way to process an entire program is to use a recursive function. As a first example of such a recursive function, we define the function \code{is\_exp} as shown in figure~\ref{fig:exp-predicate}, to take an arbitrary value and determine whether or not it is an expression in \LangInt{}. % We say that a function is defined by \emph{structural recursion} if it is defined using a sequence of match \racket{clauses}\python{cases} that correspond to a grammar and the body of each \racket{clause}\python{case} makes a recursive call on each child node.\footnote{This principle of structuring code according to the data definition is advocated in the book \emph{How to Design Programs} by \citet{Felleisen:2001aa}.} \python{We define a second function, named \code{stmt}, that recognizes whether a value is a \LangInt{} statement.} \python{Finally, } figure~\ref{fig:exp-predicate} \racket{also} contains the definition of \code{is\_Lint}, which determines whether an AST is a program in \LangInt{}. In general, we can write one recursive function to handle each nonterminal in a grammar.\index{subject}{structural recursion} Of the two examples at the bottom of the figure, the first is in \LangInt{} and the second is not. 
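{\if\edition\pythonEd\pythonColor
As another small illustration of structural recursion (a sketch of our own, not part of the support code), the following function computes the height of an \LangInt{} expression, with one \code{case} per grammar rule for the $\Exp$ nonterminal and a recursive call on each child.
\begin{lstlisting}
def height(e):
    match e:
        case Constant(n):
            return 1
        case Call(Name('input_int'), []):
            return 1
        case UnaryOp(USub(), e1):
            return 1 + height(e1)
        case BinOp(e1, Add(), e2):
            return 1 + max(height(e1), height(e2))
        case BinOp(e1, Sub(), e2):
            return 1 + max(height(e1), height(e2))

print(height(ast1_1))   # prints 3
\end{lstlisting}
\fi}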
\begin{figure}[tp] \begin{tcolorbox}[colback=white] {\if\edition\racketEd \begin{lstlisting} (define (is_exp ast) (match ast [(Int n) #t] [(Prim 'read '()) #t] [(Prim '- (list e)) (is_exp e)] [(Prim '+ (list e1 e2)) (and (is_exp e1) (is_exp e2))] [(Prim '- (list e1 e2)) (and (is_exp e1) (is_exp e2))] [else #f])) (define (is_Lint ast) (match ast [(Program '() e) (is_exp e)] [else #f])) (is_Lint (Program '() ast1_1) (is_Lint (Program '() (Prim '* (list (Prim 'read '()) (Prim '+ (list (Int 8))))))) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} def is_exp(e): match e: case Constant(n): return True case Call(Name('input_int'), []): return True case UnaryOp(USub(), e1): return is_exp(e1) case BinOp(e1, Add(), e2): return is_exp(e1) and is_exp(e2) case BinOp(e1, Sub(), e2): return is_exp(e1) and is_exp(e2) case _: return False def stmt(s): match s: case Expr(Call(Name('print'), [e])): return is_exp(e) case Expr(e): return is_exp(e) case _: return False def is_Lint(p): match p: case Module(body): return all([stmt(s) for s in body]) case _: return False print(is_Lint(Module([Expr(ast1_1)]))) print(is_Lint(Module([Expr(BinOp(read, Sub(), UnaryOp(Add(), Constant(8))))]))) \end{lstlisting} \fi} \end{tcolorbox} \caption{Example of recursive functions for \LangInt{}. These functions recognize whether an AST is in \LangInt{}.} \label{fig:exp-predicate} \end{figure} %% You may be tempted to merge the two functions into one, like this: %% \begin{center} %% \begin{minipage}{0.5\textwidth} %% \begin{lstlisting} %% (define (Lint ast) %% (match ast %% [(Int n) #t] %% [(Prim 'read '()) #t] %% [(Prim '- (list e)) (Lint e)] %% [(Prim '+ (list e1 e2)) (and (Lint e1) (Lint e2))] %% [(Program '() e) (Lint e)] %% [else #f])) %% \end{lstlisting} %% \end{minipage} %% \end{center} %% % %% Sometimes such a trick will save a few lines of code, especially when %% it comes to the \code{Program} wrapper. Yet this style is generally %% \emph{not} recommended because it can get you into trouble. %% % %% For example, the above function is subtly wrong: %% \lstinline{(Lint (Program '() (Program '() (Int 3))))} %% returns true when it should return false. \section{Interpreters} \label{sec:interp_Lint} \index{subject}{interpreter} The behavior of a program is defined by the specification of the programming language. % \racket{For example, the Scheme language is defined in the report by \citet{SPERBER:2009aa}. The Racket language is defined in its reference manual~\citep{plt-tr}.} % \python{For example, the Python language is defined in the Python language reference~\citep{PSF21:python_ref} and the CPython interpreter~\citep{PSF21:cpython}.} % In this book we use interpreters to specify each language that we consider. An interpreter that is designated as the definition of a language is called a \emph{definitional interpreter}~\citep{reynolds72:_def_interp}. \index{subject}{definitional interpreter} We warm up by creating a definitional interpreter for the \LangInt{} language. This interpreter serves as a second example of structural recursion. The definition of the \code{interp\_Lint} function is shown in figure~\ref{fig:interp_Lint}. % \racket{The body of the function is a match on the input program followed by a call to the \lstinline{interp_exp} auxiliary function, which in turn has one match clause per grammar rule for \LangInt{} expressions.} % \python{The body of the function matches on the \code{Module} AST node and then invokes \code{interp\_stmt} on each statement in the module. 
The \code{interp\_stmt} function includes a case for each grammar rule of the \Stmt{} nonterminal, and it calls \code{interp\_exp} on each subexpression. The \code{interp\_exp} function includes a case for each grammar rule of the \Exp{} nonterminal. We use several auxiliary functions such as \code{add64} and \code{input\_int} that are defined in the support code for this book.} \begin{figure}[tp] \begin{tcolorbox}[colback=white] {\if\edition\racketEd \begin{lstlisting} (define (interp_exp e) (match e [(Int n) n] [(Prim 'read '()) (define r (read)) (cond [(fixnum? r) r] [else (error 'interp_exp "read expected an integer" r)])] [(Prim '- (list e)) (define v (interp_exp e)) (fx- 0 v)] [(Prim '+ (list e1 e2)) (define v1 (interp_exp e1)) (define v2 (interp_exp e2)) (fx+ v1 v2)] [(Prim '- (list e1 e2)) (define v1 (interp_exp e1)) (define v2 (interp_exp e2)) (fx- v1 v2)])) (define (interp_Lint p) (match p [(Program '() e) (interp_exp e)])) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} def interp_exp(e): match e: case BinOp(left, Add(), right): l = interp_exp(left); r = interp_exp(right) return add64(l, r) case BinOp(left, Sub(), right): l = interp_exp(left); r = interp_exp(right) return sub64(l, r) case UnaryOp(USub(), v): return neg64(interp_exp(v)) case Constant(value): return value case Call(Name('input_int'), []): return input_int() def interp_stmt(s): match s: case Expr(Call(Name('print'), [arg])): print(interp_exp(arg)) case Expr(value): interp_exp(value) def interp_Lint(p): match p: case Module(body): for s in body: interp_stmt(s) \end{lstlisting} \fi} \end{tcolorbox} \caption{Interpreter for the \LangInt{} language.} \label{fig:interp_Lint} \end{figure} Let us consider the result of interpreting a few \LangInt{} programs. The following program adds two integers: {\if\edition\racketEd \begin{lstlisting} (+ 10 32) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} print(10 + 32) \end{lstlisting} \fi} % \noindent The result is \key{42}, the answer to life, the universe, and everything: \code{42}!\footnote{\emph{The Hitchhiker's Guide to the Galaxy} by Douglas Adams.} % We wrote this program in concrete syntax, whereas the parsed abstract syntax is {\if\edition\racketEd \begin{lstlisting} (Program '() (Prim '+ (list (Int 10) (Int 32)))) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} Module([Expr(Call(Name('print'), [BinOp(Constant(10), Add(), Constant(32))]))]) \end{lstlisting} \fi} The following program demonstrates that expressions may be nested within each other, in this case nesting several additions and negations. {\if\edition\racketEd \begin{lstlisting} (+ 10 (- (+ 12 20))) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} print(10 + -(12 + 20)) \end{lstlisting} \fi} % \noindent What is the result of this program? {\if\edition\racketEd As mentioned previously, the \LangInt{} language does not support arbitrarily large integers but only $63$-bit integers, so we interpret the arithmetic operations of \LangInt{} using fixnum arithmetic in Racket. Suppose that \[ n = 999999999999999999 \] which indeed fits in $63$ bits. What happens when we run the following program in our interpreter? 
\begin{lstlisting}
(+ (+ (+ |$n$| |$n$|) (+ |$n$| |$n$|))
   (+ (+ |$n$| |$n$|) (+ |$n$| |$n$|)))
\end{lstlisting}
It produces the following error:
\begin{lstlisting}
fx+: result is not a fixnum
\end{lstlisting}
We establish the convention that if running the definitional interpreter on a program produces an error, then the meaning of that program is \emph{unspecified}\index{subject}{unspecified behavior} unless the error is a \code{trapped-error}. A compiler for the language is under no obligation regarding programs with unspecified behavior; it does not have to produce an executable, and if it does, that executable can do anything. On the other hand, if the error is a \code{trapped-error}, then the compiler must produce an executable and it is required to report that an error occurred. To signal an error, exit with a return code of \code{255}. The interpreters in chapters \ref{ch:Ldyn} and \ref{ch:Lgrad} and in section \ref{sec:arrays} use \code{trapped-error}.
\fi}
% TODO: how to deal with too-large integers in the Python interpreter?
%% This convention applies to the languages defined in this
%% book, as a way to simplify the student's task of implementing them,
%% but this convention is not applicable to all programming languages.
%%
The last feature of the \LangInt{} language, the \READOP{} operation, prompts the user of the program for an integer. Recall that program \eqref{eq:arith-prog} requests an integer input and then subtracts \code{8}. So, if we run
{\if\edition\racketEd
\begin{lstlisting}
(interp_Lint (Program '() ast1_1))
\end{lstlisting}
\fi}
{\if\edition\pythonEd\pythonColor
\begin{lstlisting}
interp_Lint(Module([Expr(Call(Name('print'), [ast1_1]))]))
\end{lstlisting}
\fi}
\noindent and if the input is \code{50}, the result is \code{42}. We include the \READOP{} operation in \LangInt{} so that a clever student cannot implement a compiler for \LangInt{} that simply runs the interpreter during compilation to obtain the output and then generates the trivial code to produce the output.\footnote{Yes, a clever student did this in the first instance of this course!}
The job of a compiler is to translate a program in one language into a program in another language so that the output program behaves the same way as the input program. This idea is depicted in the following diagram. Suppose we have two languages, $\mathcal{L}_1$ and $\mathcal{L}_2$, and a definitional interpreter for each language. Given a compiler that translates from language $\mathcal{L}_1$ to $\mathcal{L}_2$ and given any program $P_1$ in $\mathcal{L}_1$, the compiler must translate it into some program $P_2$ such that interpreting $P_1$ and $P_2$ on their respective interpreters with the same input $i$ yields the same output $o$.
\begin{equation} \label{eq:compile-correct}
\begin{tikzpicture}[baseline=(current bounding box.center)]
\node (p1) at (0, 0) {$P_1$};
\node (p2) at (3, 0) {$P_2$};
\node (o) at (3, -2.5) {$o$};
\path[->] (p1) edge [above] node {compile} (p2);
\path[->] (p2) edge [right] node {interp\_$\mathcal{L}_2$($i$)} (o);
\path[->] (p1) edge [left] node {interp\_$\mathcal{L}_1$($i$)} (o);
\end{tikzpicture}
\end{equation}
\python{We establish the convention that if running the definitional interpreter on a program produces an error, then the meaning of that program is \emph{unspecified}\index{subject}{unspecified behavior} unless the exception raised is a \code{TrappedError}.
A compiler for the language is under no obligation regarding programs with unspecified behavior; it does not have to produce an executable, and if it does, that executable can do anything. On the other hand, if the error is a \code{TrappedError}, then the compiler must produce an executable and it is required to report that an error occurred. To signal an error, exit with a return code of \code{255}. The interpreters in chapters \ref{ch:Ldyn} and \ref{ch:Lgrad} and in section \ref{sec:arrays} use \code{TrappedError}.} In the next section we see our first example of a compiler. \section{Example Compiler: A Partial Evaluator} \label{sec:partial-evaluation} In this section we consider a compiler that translates \LangInt{} programs into \LangInt{} programs that may be more efficient. The compiler eagerly computes the parts of the program that do not depend on any inputs, a process known as \emph{partial evaluation}~\citep{Jones:1993uq}.\index{subject}{partialevaluation@partial evaluation} For example, given the following program {\if\edition\racketEd \begin{lstlisting} (+ (read) (- (+ 5 3))) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} print(input_int() + -(5 + 3) ) \end{lstlisting} \fi} \noindent our compiler translates it into the program {\if\edition\racketEd \begin{lstlisting} (+ (read) -8) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} print(input_int() + -8) \end{lstlisting} \fi} Figure~\ref{fig:pe-arith} gives the code for a simple partial evaluator for the \LangInt{} language. The output of the partial evaluator is a program in \LangInt{}. In figure~\ref{fig:pe-arith}, the structural recursion over $\Exp$ is captured in the \code{pe\_exp} function, whereas the code for partially evaluating the negation and addition operations is factored into three auxiliary functions: \code{pe\_neg}, \code{pe\_add} and \code{pe\_sub}. The input to these functions is the output of partially evaluating the children. The \code{pe\_neg}, \code{pe\_add} and \code{pe\_sub} functions check whether their arguments are integers and if they are, perform the appropriate arithmetic. Otherwise, they create an AST node for the arithmetic operation. 
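
{\if\edition\pythonEd\pythonColor
As a quick sanity check, we can run the partial evaluator of figure~\ref{fig:pe-arith} on the example program shown above and inspect the residual program. The following sketch uses \code{parse} and \code{dump} from Python's \code{ast} module, which provides the AST classes used here; the support code may provide its own testing utilities, so treat this only as an illustration.
\begin{lstlisting}
from ast import parse, dump

# The example program from above, in concrete syntax.
prog = parse('print(input_int() + -(5 + 3))')

# Partially evaluate it; the residual abstract syntax tree corresponds
# to the program  print(input_int() + -8).
residual = pe_P_int(prog)
print(dump(residual))
\end{lstlisting}
\fi}
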
\begin{figure}[tp] \begin{tcolorbox}[colback=white] {\if\edition\racketEd \begin{lstlisting} (define (pe_neg r) (match r [(Int n) (Int (fx- 0 n))] [else (Prim '- (list r))])) (define (pe_add r1 r2) (match* (r1 r2) [((Int n1) (Int n2)) (Int (fx+ n1 n2))] [(_ _) (Prim '+ (list r1 r2))])) (define (pe_sub r1 r2) (match* (r1 r2) [((Int n1) (Int n2)) (Int (fx- n1 n2))] [(_ _) (Prim '- (list r1 r2))])) (define (pe_exp e) (match e [(Int n) (Int n)] [(Prim 'read '()) (Prim 'read '())] [(Prim '- (list e1)) (pe_neg (pe_exp e1))] [(Prim '+ (list e1 e2)) (pe_add (pe_exp e1) (pe_exp e2))] [(Prim '- (list e1 e2)) (pe_sub (pe_exp e1) (pe_exp e2))])) (define (pe_Lint p) (match p [(Program '() e) (Program '() (pe_exp e))])) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} def pe_neg(r): match r: case Constant(n): return Constant(neg64(n)) case _: return UnaryOp(USub(), r) def pe_add(r1, r2): match (r1, r2): case (Constant(n1), Constant(n2)): return Constant(add64(n1, n2)) case _: return BinOp(r1, Add(), r2) def pe_sub(r1, r2): match (r1, r2): case (Constant(n1), Constant(n2)): return Constant(sub64(n1, n2)) case _: return BinOp(r1, Sub(), r2) def pe_exp(e): match e: case BinOp(left, Add(), right): return pe_add(pe_exp(left), pe_exp(right)) case BinOp(left, Sub(), right): return pe_sub(pe_exp(left), pe_exp(right)) case UnaryOp(USub(), v): return pe_neg(pe_exp(v)) case Constant(value): return e case Call(Name('input_int'), []): return e def pe_stmt(s): match s: case Expr(Call(Name('print'), [arg])): return Expr(Call(Name('print'), [pe_exp(arg)])) case Expr(value): return Expr(pe_exp(value)) def pe_P_int(p): match p: case Module(body): new_body = [pe_stmt(s) for s in body] return Module(new_body) \end{lstlisting} \fi} \end{tcolorbox} \caption{A partial evaluator for \LangInt{}.} \label{fig:pe-arith} \end{figure} To gain some confidence that the partial evaluator is correct, we can test whether it produces programs that produce the same result as the input programs. That is, we can test whether it satisfies the diagram of \eqref{eq:compile-correct}. % {\if\edition\racketEd The following code runs the partial evaluator on several examples and tests the output program. The \texttt{parse-program} and \texttt{assert} functions are defined in appendix~\ref{appendix:utilities}.\\ \begin{minipage}{1.0\textwidth} \begin{lstlisting} (define (test_pe p) (assert "testing pe_Lint" (equal? (interp_Lint p) (interp_Lint (pe_Lint p))))) (test_pe (parse-program `(program () (+ 10 (- (+ 5 3)))))) (test_pe (parse-program `(program () (+ 1 (+ 3 1))))) (test_pe (parse-program `(program () (- (+ 3 (- 5)))))) \end{lstlisting} \end{minipage} \fi} % TODO: python version of testing the PE \begin{exercise}\normalfont\normalsize Create three programs in the \LangInt{} language and test whether partially evaluating them with \code{pe\_Lint} and then interpreting them with \code{interp\_Lint} gives the same result as directly interpreting them with \code{interp\_Lint}. \end{exercise} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \chapter{Integers and Variables} \label{ch:Lvar} \setcounter{footnote}{0} This chapter covers compiling a subset of \racket{Racket}\python{Python} to x86-64 assembly code~\citep{Intel:2015aa}. The subset, named \LangVar{}, includes integer arithmetic and local variables. We often refer to x86-64 simply as x86. The chapter first describes the \LangVar{} language (section~\ref{sec:s0}) and then introduces x86 assembly (section~\ref{sec:x86}). 
Because x86 assembly language is large, we discuss only the instructions needed for compiling \LangVar{}. We introduce more x86 instructions in subsequent chapters. After introducing \LangVar{} and x86, we reflect on their differences and create a plan to break down the translation from \LangVar{} to x86 into a handful of steps (section~\ref{sec:plan-s0-x86}). The rest of the chapter gives detailed hints regarding each step. We aim to give enough hints that the well-prepared reader, together with a few friends, can implement a compiler from \LangVar{} to x86 in a short time. To suggest the scale of this first compiler, we note that the instructor solution for the \LangVar{} compiler is approximately \racket{500}\python{300} lines of code. \section{The \LangVar{} Language} \label{sec:s0} \index{subject}{variable} The \LangVar{} language extends the \LangInt{} language with variables. The concrete syntax of the \LangVar{} language is defined by the grammar presented in figure~\ref{fig:Lvar-concrete-syntax}, and the abstract syntax is presented in figure~\ref{fig:Lvar-syntax}. The nonterminal \Var{} may be any \racket{Racket}\python{Python} identifier. As in \LangInt{}, \READOP{} is a nullary operator, \key{-} is a unary operator, and \key{+} is a binary operator. Similarly to \LangInt{}, the abstract syntax of \LangVar{} includes the \racket{\key{Program} struct}\python{\key{Module} instance} to mark the top of the program. %% The $\itm{info}$ %% field of the \key{Program} structure contains an \emph{association %% list} (a list of key-value pairs) that is used to communicate %% auxiliary data from one compiler pass the next. Despite the simplicity of the \LangVar{} language, it is rich enough to exhibit several compilation techniques. \newcommand{\LvarGrammarRacket}{ \begin{array}{rcl} \Exp &::=& \Var \MID \CLET{\Var}{\Exp}{\Exp} \end{array} } \newcommand{\LvarASTRacket}{ \begin{array}{rcl} \Exp &::=& \VAR{\Var} \MID \LET{\Var}{\Exp}{\Exp} \end{array} } \newcommand{\LvarGrammarPython}{ \begin{array}{rcl} \Exp &::=& \Var{} \\ \Stmt &::=& \Var\mathop{\key{=}}\Exp \end{array} } \newcommand{\LvarASTPython}{ \begin{array}{rcl} \Exp{} &::=& \VAR{\Var{}} \\ \Stmt{} &::=& \ASSIGN{\VAR{\Var}}{\Exp} \end{array} } \begin{figure}[tp] \centering \begin{tcolorbox}[colback=white] {\if\edition\racketEd \[ \begin{array}{l} \gray{\LintGrammarRacket{}} \\ \hline \LvarGrammarRacket{} \\ \begin{array}{rcl} \LangVarM{} &::=& \Exp \end{array} \end{array} \] \fi} {\if\edition\pythonEd\pythonColor \[ \begin{array}{l} \gray{\LintGrammarPython} \\ \hline \LvarGrammarPython \\ \begin{array}{rcl} \LangVarM{} &::=& \Stmt^{*} \end{array} \end{array} \] \fi} \end{tcolorbox} \caption{The concrete syntax of \LangVar{}.} \label{fig:Lvar-concrete-syntax} \end{figure} \begin{figure}[tp] \centering \begin{tcolorbox}[colback=white] {\if\edition\racketEd \[ \begin{array}{l} \gray{\LintASTRacket{}} \\ \hline \LvarASTRacket \\ \begin{array}{rcl} \LangVarM{} &::=& \PROGRAM{\code{'()}}{\Exp} \end{array} \end{array} \] \fi} {\if\edition\pythonEd\pythonColor \[ \begin{array}{l} \gray{\LintASTPython}\\ \hline \LvarASTPython \\ \begin{array}{rcl} \LangVarM{} &::=& \PROGRAM{}{\Stmt^{*}} \end{array} \end{array} \] \fi} \end{tcolorbox} \caption{The abstract syntax of \LangVar{}.} \label{fig:Lvar-syntax} \end{figure} {\if\edition\racketEd Let us dive further into the syntax and semantics of the \LangVar{} language. The \key{let} feature defines a variable for use within its body and initializes the variable with the value of an expression. 
The abstract syntax for \key{let} is shown in figure~\ref{fig:Lvar-syntax}. The concrete syntax for \key{let} is \begin{lstlisting} (let ([|$\itm{var}$| |$\itm{exp}$|]) |$\itm{exp}$|) \end{lstlisting} For example, the following program initializes \code{x} to $32$ and then evaluates the body \code{(+ 10 x)}, producing $42$. \begin{lstlisting} (let ([x (+ 12 20)]) (+ 10 x)) \end{lstlisting} \fi} % {\if\edition\pythonEd\pythonColor % The \LangVar{} language includes an assignment statement, which defines a variable for use in later statements and initializes the variable with the value of an expression. The abstract syntax for assignment is defined in figure~\ref{fig:Lvar-syntax}. The concrete syntax for assignment is \index{subject}{Assign@\texttt{Assign}} \begin{lstlisting} |$\itm{var}$| = |$\itm{exp}$| \end{lstlisting} For example, the following program initializes the variable \code{x} to $32$ and then prints the result of \code{10 + x}, producing $42$. \begin{lstlisting} x = 12 + 20 print(10 + x) \end{lstlisting} \fi} {\if\edition\racketEd % When there are multiple \key{let}s for the same variable, the closest enclosing \key{let} is used. That is, variable definitions overshadow prior definitions. Consider the following program with two \key{let}s that define two variables named \code{x}. Can you figure out the result? \begin{lstlisting} (let ([x 32]) (+ (let ([x 10]) x) x)) \end{lstlisting} For the purposes of depicting which variable occurrences correspond to which definitions, the following shows the \code{x}'s annotated with subscripts to distinguish them. Double-check that your answer for the previous program is the same as your answer for this annotated version of the program. \begin{lstlisting} (let ([x|$_1$| 32]) (+ (let ([x|$_2$| 10]) x|$_2$|) x|$_1$|)) \end{lstlisting} The initializing expression is always evaluated before the body of the \key{let}, so in the following, the \key{read} for \code{x} is performed before the \key{read} for \code{y}. Given the input $52$ then $10$, the following produces $42$ (not $-42$). \begin{lstlisting} (let ([x (read)]) (let ([y (read)]) (+ x (- y)))) \end{lstlisting} \fi} \subsection{Extensible Interpreters via Method Overriding} \label{sec:extensible-interp} \index{subject}{method overriding} To prepare for discussing the interpreter of \LangVar{}, we explain why we implement it in an object-oriented style. Throughout this book we define many interpreters, one for each language that we study. Because each language builds on the prior one, there is a lot of commonality between these interpreters. We want to write down the common parts just once instead of many times. A naive interpreter for \LangVar{} would handle the \racket{cases for variables and \code{let}} \python{case for variables} but dispatch to an interpreter for \LangInt{} in the rest of the cases. The following code sketches this idea. (We explain the \code{env} parameter in section~\ref{sec:interp-Lvar}.) 
\begin{center} {\if\edition\racketEd \begin{minipage}{0.45\textwidth} \begin{lstlisting} (define ((interp_Lint env) e) (match e [(Prim '- (list e1)) (fx- 0 ((interp_Lint env) e1))] ...)) \end{lstlisting} \end{minipage} \begin{minipage}{0.45\textwidth} \begin{lstlisting} (define ((interp_Lvar env) e) (match e [(Var x) (dict-ref env x)] [(Let x e body) (define v ((interp_Lvar env) e)) (define env^ (dict-set env x v)) ((interp_Lvar env^) body)] [else ((interp_Lint env) e)])) \end{lstlisting} \end{minipage} \fi} {\if\edition\pythonEd\pythonColor \begin{minipage}{0.45\textwidth} \begin{lstlisting} def interp_Lint(e, env): match e: case UnaryOp(USub(), e1): return - interp_Lint(e1, env) ... \end{lstlisting} \end{minipage} \begin{minipage}{0.45\textwidth} \begin{lstlisting} def interp_Lvar(e, env): match e: case Name(id): return env[id] case _: return interp_Lint(e, env) \end{lstlisting} \end{minipage} \fi} \end{center} The problem with this naive approach is that it does not handle situations in which an \LangVar{} feature, such as a variable, is nested inside an \LangInt{} feature, such as the \code{-} operator, as in the following program. {\if\edition\racketEd \begin{lstlisting} (Let 'y (Int 10) (Prim '- (list (Var 'y)))) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{minipage}{1.0\textwidth} \begin{lstlisting} y = 10 print(-y) \end{lstlisting} \end{minipage} \fi} \noindent If we invoke \code{interp\_Lvar} on this program, it dispatches to \code{interp\_Lint} to handle the \code{-} operator, but then it recursively calls \code{interp\_Lint} again on its argument. Because there is no case for \code{Var} in \code{interp\_Lint}, we get an error! To make our interpreters extensible we need something called \emph{open recursion}\index{subject}{open recursion}, in which the tying of the recursive knot is delayed until the functions are composed. Object-oriented languages provide open recursion via method overriding. The following code uses method overriding to interpret \LangInt{} and \LangVar{} using % \racket{the \href{https://docs.racket-lang.org/guide/classes.html}{\code{class}} \index{subject}{class} feature of Racket.}% % \python{a Python \code{class} definition.} % We define one class for each language and define a method for interpreting expressions inside each class. The class for \LangVar{} inherits from the class for \LangInt{}, and the method \code{interp\_exp} in \LangVar{} overrides the \code{interp\_exp} in \LangInt{}. Note that the default case of \code{interp\_exp} in \LangVar{} uses \code{super} to invoke \code{interp\_exp}, and because \LangVar{} inherits from \LangInt{}, that dispatches to the \code{interp\_exp} in \LangInt{}. \begin{center} \hspace{-20pt} {\if\edition\racketEd \begin{minipage}{0.45\textwidth} \begin{lstlisting} (define interp-Lint-class (class object% (define/public ((interp_exp env) e) (match e [(Prim '- (list e)) (fx- 0 ((interp_exp env) e))] ...)) ...)) \end{lstlisting} \end{minipage} \begin{minipage}{0.45\textwidth} \begin{lstlisting} (define interp-Lvar-class (class interp-Lint-class (define/override ((interp_exp env) e) (match e [(Var x) (dict-ref env x)] [(Let x e body) (define v ((interp_exp env) e)) (define env^ (dict-set env x v)) ((interp_exp env^) body)] [else (super (interp_exp env) e)])) ... )) \end{lstlisting} \end{minipage} \fi} {\if\edition\pythonEd\pythonColor \begin{minipage}{0.45\textwidth} \begin{lstlisting} class InterpLint: def interp_exp(e): match e: case UnaryOp(USub(), e1): return neg64(self.interp_exp(e1)) ... ... 
\end{lstlisting}
\end{minipage}
\begin{minipage}{0.45\textwidth}
\begin{lstlisting}
class InterpLvar(InterpLint):
  def interp_exp(e):
    match e:
      case Name(id):
        return env[id]
      case _:
        return super().interp_exp(e)
  ...
\end{lstlisting}
\end{minipage}
\fi}
\end{center}

We return to the troublesome example, repeated here:
{\if\edition\racketEd
\begin{lstlisting}
(Let 'y (Int 10) (Prim '- (list (Var 'y))))
\end{lstlisting}
\fi}
{\if\edition\pythonEd\pythonColor
\begin{lstlisting}
y = 10
print(-y)
\end{lstlisting}
\fi}
\noindent We can invoke the \code{interp\_exp} method for \LangVar{}%
\racket{on this expression,} \python{on the \code{-y} expression,} %
which we call \code{e0}, by creating an object of the \LangVar{} class and calling the \code{interp\_exp} method
{\if\edition\racketEd
\begin{lstlisting}
((send (new interp-Lvar-class) interp_exp '()) e0)
\end{lstlisting}
\fi}
{\if\edition\pythonEd\pythonColor
\begin{lstlisting}
InterpLvar().interp_exp(e0)
\end{lstlisting}
\fi}
\noindent To process the \code{-} operator, the default case of \code{interp\_exp} in \LangVar{} dispatches to the \code{interp\_exp} method in \LangInt{}. But then for the recursive method call, it dispatches to \code{interp\_exp} in \LangVar{}, where the \code{Var} node is handled correctly. Thus, method overriding gives us the open recursion that we need to implement our interpreters in an extensible way.

\subsection{Definitional Interpreter for \LangVar{}}
\label{sec:interp-Lvar}

Having justified the use of classes and methods to implement interpreters, we revisit the definitional interpreter for \LangInt{} shown in figure~\ref{fig:interp-Lint-class} and then extend it to create an interpreter for \LangVar{}, shown in figure~\ref{fig:interp-Lvar}.
%
\python{We change the \code{interp\_stmt} method in the interpreter for \LangInt{} to take two extra parameters named \code{env}, which we discuss in the next paragraph, and \code{cont} for \emph{continuation}, which is the technical name for what comes after a particular point in a program. The \code{cont} parameter is the list of statements that follow the current statement. Note that \code{interp\_stmts} invokes \code{interp\_stmt} on the first statement and passes the rest of the statements as the argument for \code{cont}. This organization enables each statement to decide what, if anything, should be evaluated after it, for example, allowing a \code{return} statement to exit early from a function (see chapter~\ref{ch:Lfun}).}
The interpreter for \LangVar{} adds two new cases for variables and \racket{\key{let}}\python{assignment}. For \racket{\key{let}}\python{assignment}, we need a way to communicate the value bound to a variable to all the uses of the variable. To accomplish this, we maintain a mapping from variables to values called an \emph{environment}\index{subject}{environment}.
%
We use
%
\racket{an association list (alist) }%
%
\python{a Python \href{https://docs.python.org/3.10/library/stdtypes.html\#mapping-types-dict}{dictionary} }%
%
to represent the environment.
%
\racket{Figure~\ref{fig:alist} gives a brief introduction to alists and the \code{racket/dict} package.}
%
The \code{interp\_exp} function takes the current environment, \code{env}, as an extra parameter. When the interpreter encounters a variable, it looks up the corresponding value in the environment. If the variable is not in the environment (because the variable was not defined), then the lookup will fail and the interpreter will halt with an error.
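
{\if\edition\pythonEd\pythonColor
To make the role of the environment concrete, the following sketch interprets the example program shown earlier in this section and notes how the dictionary evolves. It assumes the \code{interp\_Lvar} function of figure~\ref{fig:interp-Lvar} and uses \code{parse} from Python's \code{ast} module to build the abstract syntax tree; it is meant only as an illustration.
\begin{lstlisting}
from ast import parse

# The environment starts out empty; the assignment adds the binding
# {'x': 32}, which the occurrence of x in the next statement looks up.
interp_Lvar(parse('x = 12 + 20\nprint(10 + x)'))   # prints 42

# In contrast, interpreting  print(y)  halts with an error because the
# lookup env['y'] fails (a KeyError): y was never defined.
\end{lstlisting}
\fi}
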
Recall that the compiler is not obligated to compile such programs (Section~\ref{sec:interp_Lint}).\footnote{In Chapter~\ref{ch:Lif} we introduce type checking rules that prohibit access to undefined variables.} % \racket{When the interpreter encounters a \key{Let}, it evaluates the initializing expression, extends the environment with the result value bound to the variable, using \code{dict-set}, then evaluates the body of the \key{Let}.} % \python{When the interpreter encounters an assignment, it evaluates the initializing expression and then associates the resulting value with the variable in the environment.} \begin{figure}[tp] \begin{tcolorbox}[colback=white] {\if\edition\racketEd \begin{lstlisting} (define interp-Lint-class (class object% (super-new) (define/public ((interp_exp env) e) (match e [(Int n) n] [(Prim 'read '()) (define r (read)) (cond [(fixnum? r) r] [else (error 'interp_exp "expected an integer" r)])] [(Prim '- (list e)) (fx- 0 ((interp_exp env) e))] [(Prim '+ (list e1 e2)) (fx+ ((interp_exp env) e1) ((interp_exp env) e2))] [(Prim '- (list e1 e2)) (fx- ((interp_exp env) e1) ((interp_exp env) e2))])) (define/public (interp_program p) (match p [(Program '() e) ((interp_exp '()) e)])) )) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} class InterpLint: def interp_exp(self, e, env): match e: case BinOp(left, Add(), right): l = self.interp_exp(left, env) r = self.interp_exp(right, env) return add64(l, r) case BinOp(left, Sub(), right): l = self.interp_exp(left, env) r = self.interp_exp(right, env) return sub64(l, r) case UnaryOp(USub(), v): return neg64(self.interp_exp(v, env)) case Constant(value): return value case Call(Name('input_int'), []): return int(input()) def interp_stmt(self, s, env, cont): match s: case Expr(Call(Name('print'), [arg])): val = self.interp_exp(arg, env) print(val, end='') return self.interp_stmts(cont, env) case Expr(value): self.interp_exp(value, env) return self.interp_stmts(cont, env) case _: raise Exception('error in interp_stmt, unexpected ' + repr(s)) def interp_stmts(self, ss, env): match ss: case []: return 0 case [s, *ss]: return self.interp_stmt(s, env, ss) def interp(self, p): match p: case Module(body): self.interp_stmts(body, {}) def interp_Lint(p): return InterpLint().interp(p) \end{lstlisting} \fi} \end{tcolorbox} \caption{Interpreter for \LangInt{} as a class.} \label{fig:interp-Lint-class} \end{figure} \begin{figure}[tp] \begin{tcolorbox}[colback=white] {\if\edition\racketEd \begin{lstlisting} (define interp-Lvar-class (class interp-Lint-class (super-new) (define/override ((interp_exp env) e) (match e [(Var x) (dict-ref env x)] [(Let x e body) (define new-env (dict-set env x ((interp_exp env) e))) ((interp_exp new-env) body)] [else ((super interp_exp env) e)])) )) (define (interp_Lvar p) (send (new interp-Lvar-class) interp_program p)) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} class InterpLvar(InterpLint): def interp_exp(self, e, env): match e: case Name(id): return env[id] case _: return super().interp_exp(e, env) def interp_stmt(self, s, env, cont): match s: case Assign([lhs], value): env[lhs.id] = self.interp_exp(value, env) return self.interp_stmts(cont, env) case _: return super().interp_stmt(s, env, cont) def interp_Lvar(p): return InterpLvar().interp(p) \end{lstlisting} \fi} \end{tcolorbox} \caption{Interpreter for the \LangVar{} language.} \label{fig:interp-Lvar} \end{figure} {\if\edition\racketEd \begin{figure}[tp] 
%\begin{wrapfigure}[26]{r}[0.75in]{0.55\textwidth} \small \begin{tcolorbox}[title=Association Lists as Dictionaries] An \emph{association list} (called an alist) is a list of key-value pairs. For example, we can map people to their ages with an alist \index{subject}{alist}\index{subject}{association list} \begin{lstlisting}[basicstyle=\ttfamily] (define ages '((jane . 25) (sam . 24) (kate . 45))) \end{lstlisting} The \emph{dictionary} interface is for mapping keys to values. Every alist implements this interface. \index{subject}{dictionary} The package \href{https://docs.racket-lang.org/reference/dicts.html}{\code{racket/dict}} provides many functions for working with dictionaries, such as \begin{description} \item[$\LP\key{dict-ref}\,\itm{dict}\,\itm{key}\RP$] returns the value associated with the given $\itm{key}$. \item[$\LP\key{dict-set}\,\itm{dict}\,\itm{key}\,\itm{val}\RP$] returns a new dictionary that maps $\itm{key}$ to $\itm{val}$ and otherwise is the same as $\itm{dict}$. \item[$\LP\code{in-dict}\,\itm{dict}\RP$] returns the \href{https://docs.racket-lang.org/reference/sequences.html}{sequence} of keys and values in $\itm{dict}$. For example, the following creates a new alist in which the ages are incremented: \end{description} \vspace{-10pt} \begin{lstlisting}[basicstyle=\ttfamily] (for/list ([(k v) (in-dict ages)]) (cons k (add1 v))) \end{lstlisting} \end{tcolorbox} %\end{wrapfigure} \caption{Association lists implement the dictionary interface.} \label{fig:alist} \end{figure} \fi} The goal for this chapter is to implement a compiler that translates any program $P_1$ written in the \LangVar{} language into an x86 assembly program $P_2$ such that $P_2$ exhibits the same behavior when run on a computer as the $P_1$ program interpreted by \code{interp\_Lvar}. That is, they output the same integer $n$. We depict this correctness criteria in the following diagram: \[ \begin{tikzpicture}[baseline=(current bounding box.center)] \node (p1) at (0, 0) {$P_1$}; \node (p2) at (4, 0) {$P_2$}; \node (o) at (4, -2) {$n$}; \path[->] (p1) edge [above] node {\footnotesize compile} (p2); \path[->] (p1) edge [left] node {\footnotesize\code{interp\_Lvar}} (o); \path[->] (p2) edge [right] node {\footnotesize\code{interp\_x86int}} (o); \end{tikzpicture} \] Next we introduce the \LangXInt{} subset of x86 that suffices for compiling \LangVar{}. \section{The \LangXInt{} Assembly Language} \label{sec:x86} \index{subject}{x86} Figure~\ref{fig:x86-int-concrete} defines the concrete syntax for \LangXInt{}. We use the AT\&T syntax expected by the GNU assembler. % A program begins with a \code{main} label followed by a sequence of instructions. The \key{globl} directive makes the \key{main} procedure externally visible so that the operating system can call it. % An x86 program is stored in the computer's memory. For our purposes, the computer's memory is a mapping of 64-bit addresses to 64-bit values. The computer has a \emph{program counter} (PC)\index{subject}{program counter}\index{subject}{PC} stored in the \code{rip} register that points to the address of the next instruction to be executed. For most instructions, the program counter is incremented after the instruction is executed so that it points to the next instruction in memory. Most x86 instructions take two operands, each of which is an integer constant (called an \emph{immediate value}\index{subject}{immediate value}), a \emph{register}\index{subject}{register}, or a memory location. 
\newcommand{\allregisters}{\key{rsp} \MID \key{rbp} \MID \key{rax} \MID \key{rbx} \MID \key{rcx} \MID \key{rdx} \MID \key{rsi} \MID \key{rdi} \MID \\ && \key{r8} \MID \key{r9} \MID \key{r10} \MID \key{r11} \MID \key{r12} \MID \key{r13} \MID \key{r14} \MID \key{r15}} \newcommand{\GrammarXInt}{ \begin{array}{rcl} \Reg &::=& \allregisters{} \\ \Arg &::=& \key{\$}\Int \MID \key{\%}\Reg \MID \Int\key{(}\key{\%}\Reg\key{)}\\ \Instr &::=& \key{addq} \; \Arg\key{,} \Arg \MID \key{subq} \; \Arg\key{,} \Arg \MID \key{negq} \; \Arg \MID \key{movq} \; \Arg\key{,} \Arg \MID \\ && \key{pushq}\;\Arg \MID \key{popq}\;\Arg \MID \key{callq} \; \mathit{label} \MID \key{retq} \MID \key{jmp}\,\itm{label} \MID \\ && \itm{label}\key{:}\; \Instr \end{array} } \begin{figure}[tp] \begin{tcolorbox}[colback=white] {\if\edition\racketEd \[ \begin{array}{l} \GrammarXInt \\ \begin{array}{lcl} \LangXIntM{} &::= & \key{.globl main}\\ & & \key{main:} \; \Instr\ldots \end{array} \end{array} \] \fi} {\if\edition\pythonEd\pythonColor \[ \begin{array}{lcl} \Reg &::=& \allregisters{} \\ \Arg &::=& \key{\$}\Int \MID \key{\%}\Reg \MID \Int\key{(}\key{\%}\Reg\key{)}\\ \Instr &::=& \key{addq} \; \Arg\key{,} \Arg \MID \key{subq} \; \Arg\key{,} \Arg \MID \key{negq} \; \Arg \MID \key{movq} \; \Arg\key{,} \Arg \MID \\ && \key{callq} \; \mathit{label} \MID \key{pushq}\;\Arg \MID \key{popq}\;\Arg \MID \key{retq} \\ \LangXIntM{} &::= & \key{.globl main}\\ & & \key{main:} \; \Instr^{*} \end{array} \] \fi} \end{tcolorbox} \caption{The syntax of the \LangXInt{} assembly language (AT\&T syntax).} \label{fig:x86-int-concrete} \end{figure} A register is a special kind of variable that holds a 64-bit value. There are 16 general-purpose registers in the computer; their names are given in figure~\ref{fig:x86-int-concrete}. A register is written with a percent sign, \key{\%}, followed by the register name, for example \key{\%rax}. An immediate value is written using the notation \key{\$}$n$ where $n$ is an integer. % % An access to memory is specified using the syntax $n(\key{\%}r)$, which obtains the address stored in register $r$ and then adds $n$ bytes to the address. The resulting address is used to load or to store to memory depending on whether it occurs as a source or destination argument of an instruction. An arithmetic instruction such as $\key{addq}\,s\key{,}\,d$ reads from the source $s$ and destination $d$, applies the arithmetic operation, and then writes the result to the destination $d$. \index{subject}{instruction} % The move instruction $\key{movq}\,s\key{,}\,d$ reads from $s$ and stores the result in $d$. % The $\key{callq}\,\itm{label}$ instruction jumps to the procedure specified by the label, and $\key{retq}$ returns from a procedure to its caller. % We discuss procedure calls in more detail further in this chapter and in chapter~\ref{ch:Lfun}. % The last letter \key{q} indicates that these instructions operate on quadwords, which are 64-bit values. % \racket{The instruction $\key{jmp}\,\itm{label}$ updates the program counter to the address of the instruction immediately after the specified label.} Appendix~\ref{sec:x86-quick-reference} contains a reference for all the x86 instructions used in this book. Figure~\ref{fig:p0-x86} depicts an x86 program that computes \racket{\code{(+ 10 32)}}\python{10 + 32}. The instruction \lstinline{movq $10, %rax} puts $10$ into register \key{rax}, and then \lstinline{addq $32, %rax} adds $32$ to the $10$ in \key{rax} and puts the result, $42$, into \key{rax}. 
% The last instruction \key{retq} finishes the \key{main} function by returning the integer in \key{rax} to the operating system. The operating system interprets this integer as the program's exit code. By convention, an exit code of 0 indicates that a program has completed successfully, and all other exit codes indicate various errors. % \racket{However, in this book we return the result of the program as the exit code.} \begin{figure}[tbp] \begin{minipage}{0.45\textwidth} \begin{tcolorbox}[colback=white] \begin{lstlisting} .globl main main: movq $10, %rax addq $32, %rax retq \end{lstlisting} \end{tcolorbox} \end{minipage} \caption{An x86 program that computes \racket{\code{(+ 10 32)}}\python{10 + 32}.} \label{fig:p0-x86} \end{figure} We exhibit the use of memory for storing intermediate results in the next example. Figure~\ref{fig:p1-x86} lists an x86 program that computes \racket{\code{(+ 52 (- 10))}}\python{52 + -10}. This program uses a region of memory called the \emph{procedure call stack} (\emph{stack} for short). \index{subject}{stack}\index{subject}{procedure call stack} The stack consists of a separate \emph{frame}\index{subject}{frame} for each procedure call. The memory layout for an individual frame is shown in figure~\ref{fig:frame}. The register \key{rsp} is called the \emph{stack pointer}\index{subject}{stack pointer} and contains the address of the item at the top of the stack. In general, we use the term \emph{pointer}\index{subject}{pointer} for something that contains an address. The stack grows downward in memory, so we increase the size of the stack by subtracting from the stack pointer. In the context of a procedure call, the \emph{return address}\index{subject}{return address} is the location of the instruction that immediately follows the call instruction on the caller side. The function call instruction, \code{callq}, pushes the return address onto the stack prior to jumping to the procedure. The register \key{rbp} is the \emph{base pointer}\index{subject}{base pointer} and is used to access variables that are stored in the frame of the current procedure call. The base pointer of the caller is stored immediately after the return address. Figure~\ref{fig:frame} shows the memory layout of a frame with storage for $n$ variables, which are numbered from $1$ to $n$. Variable $1$ is stored at address $-8\key{(\%rbp)}$, variable $2$ at $-16\key{(\%rbp)}$, and so on. 
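
{\if\edition\pythonEd\pythonColor
The address arithmetic for this layout is worth spelling out because it reappears when we assign variables to stack locations later in this chapter. The following sketch is only an illustration; the helper names are ours and are not part of the support code.
\begin{lstlisting}
# Variable i lives at offset -8 * i from the base pointer, and the
# space reserved for variables is rounded up to a multiple of 16 bytes
# (the alignment requirement discussed below).
def variable_offset(i):
    return -8 * i                  # variable 1 -> -8(%rbp), 2 -> -16(%rbp)

def variable_space(num_variables):
    return ((8 * num_variables + 15) // 16) * 16   # 1 variable -> 16 bytes

print(variable_offset(2), variable_space(1))       # prints: -16 16
\end{lstlisting}
\fi}
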
\begin{figure}[tbp] \begin{minipage}{0.66\textwidth} \begin{tcolorbox}[colback=white] {\if\edition\racketEd \begin{lstlisting} start: movq $10, -8(%rbp) negq -8(%rbp) movq -8(%rbp), %rax addq $52, %rax jmp conclusion .globl main main: pushq %rbp movq %rsp, %rbp subq $16, %rsp jmp start conclusion: addq $16, %rsp popq %rbp retq \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} .globl main main: pushq %rbp movq %rsp, %rbp subq $16, %rsp movq $10, -8(%rbp) negq -8(%rbp) movq -8(%rbp), %rax addq $52, %rax addq $16, %rsp popq %rbp retq \end{lstlisting} \fi} \end{tcolorbox} \end{minipage} \caption{An x86 program that computes \racket{\code{(+ 52 (- 10))}}\python{52 + -10}.} \label{fig:p1-x86} \end{figure} \begin{figure}[tbp] \begin{minipage}{0.66\textwidth} \begin{tcolorbox}[colback=white] \centering \begin{tabular}{|r|l|} \hline Position & Contents \\ \hline $8$(\key{\%rbp}) & return address \\ $0$(\key{\%rbp}) & old \key{rbp} \\ $-8$(\key{\%rbp}) & variable $1$ \\ $-16$(\key{\%rbp}) & variable $2$ \\ \ldots & \ldots \\ $0$(\key{\%rsp}) & variable $n$\\ \hline \end{tabular} \end{tcolorbox} \end{minipage} \caption{Memory layout of a frame.} \label{fig:frame} \end{figure} In the program shown in figure~\ref{fig:p1-x86}, consider how control is transferred from the operating system to the \code{main} function. The operating system issues a \code{callq main} instruction that pushes its return address on the stack and then jumps to \code{main}. In x86-64, the stack pointer \code{rsp} must be divisible by 16 bytes prior to the execution of any \code{callq} instruction, so that when control arrives at \code{main}, the \code{rsp} is 8 bytes out of alignment (because the \code{callq} pushed the return address). The first three instructions are the typical \emph{prelude}\index{subject}{prelude} for a procedure. The instruction \code{pushq \%rbp} first subtracts $8$ from the stack pointer \code{rsp} and then saves the base pointer of the caller at address \code{rsp} on the stack. The next instruction \code{movq \%rsp, \%rbp} sets the base pointer to the current stack pointer, which is pointing to the location of the old base pointer. The instruction \code{subq \$16, \%rsp} moves the stack pointer down to make enough room for storing variables. This program needs one variable ($8$ bytes), but we round up to 16 bytes so that \code{rsp} is 16-byte-aligned, and then we are ready to make calls to other functions. \racket{The last instruction of the prelude is \code{jmp start}, which transfers control to the instructions that were generated from the expression \racket{\code{(+ 52 (- 10))}}\python{52 + -10}.} \racket{The first instruction under the \code{start} label is} % \python{The first instruction after the prelude is} % \code{movq \$10, -8(\%rbp)}, which stores $10$ in variable $1$. % The instruction \code{negq -8(\%rbp)} changes the contents of variable $1$ to $-10$. % The next instruction moves the $-10$ from variable $1$ into the \code{rax} register. Finally, \code{addq \$52, \%rax} adds $52$ to the value in \code{rax}, updating its contents to $42$. \racket{The three instructions under the label \code{conclusion} are the typical \emph{conclusion}\index{subject}{conclusion} of a procedure.} % \python{The \emph{conclusion}\index{subject}{conclusion} of the \code{main} function consists of the last three instructions.} % The first two restore the \code{rsp} and \code{rbp} registers to their states at the beginning of the procedure. 
In particular, \key{addq \$16, \%rsp} moves the stack pointer to point to the old base pointer. Then \key{popq \%rbp} restores the old base pointer to \key{rbp} and adds $8$ to the stack pointer. The last instruction, \key{retq}, jumps back to the procedure that called this one and adds $8$ to the stack pointer. Our compiler needs a convenient representation for manipulating x86 programs, so we define an abstract syntax for x86, shown in figure~\ref{fig:x86-int-ast}. We refer to this language as \LangXInt{}. % {\if\edition\pythonEd\pythonColor% The main difference between this and the concrete syntax of \LangXInt{} (figure~\ref{fig:x86-int-concrete}) is that labels, instruction names, and register names are explicitly represented by strings. \fi} % {\if\edition\racketEd The main difference between this and the concrete syntax of \LangXInt{} (figure~\ref{fig:x86-int-concrete}) is that labels are not allowed in front of every instruction. Instead instructions are grouped into \emph{basic blocks}\index{subject}{basic block} with a label associated with every basic block; this is why the \key{X86Program} struct includes an alist mapping labels to basic blocks. The reason for this organization becomes apparent in chapter~\ref{ch:Lif} when we introduce conditional branching. The \code{Block} structure includes an $\itm{info}$ field that is not needed in this chapter but becomes useful in chapter~\ref{ch:register-allocation-Lvar}. For now, the $\itm{info}$ field should contain an empty list. \fi} % Regarding the abstract syntax for \code{callq}, the \code{Callq} AST node includes an integer for representing the arity of the function, that is, the number of arguments, which is helpful to know during register allocation (chapter~\ref{ch:register-allocation-Lvar}). 
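
{\if\edition\pythonEd\pythonColor
As a small example, the following sketch constructs the abstract syntax for the program of figure~\ref{fig:p0-x86}. It assumes x86 AST classes named \code{X86Program}, \code{Instr}, \code{Immediate}, \code{Reg}, and \code{Retq} in the style of figure~\ref{fig:x86-int-ast}; consult the support code for the exact module, class names, and constructors.
\begin{lstlisting}
# A sketch assuming the x86 AST classes provided by the support code
# (we assume a module named x86_ast; the exact names may differ).
from x86_ast import X86Program, Instr, Immediate, Reg, Retq

# Instruction names and register names are strings, as noted above.
prog = X86Program([
    Instr('movq', [Immediate(10), Reg('rax')]),   # movq $10, %rax
    Instr('addq', [Immediate(32), Reg('rax')]),   # addq $32, %rax
    Retq(),                                       # retq
])
\end{lstlisting}
\fi}
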
\newcommand{\allastregisters}{\skey{rsp} \MID \skey{rbp} \MID \skey{rax} \MID \skey{rbx} \MID \skey{rcx} \MID \skey{rdx} \MID \skey{rsi} \MID \skey{rdi} \MID \\ && \skey{r8} \MID \skey{r9} \MID \skey{r10} \MID \skey{r11} \MID \skey{r12} \MID \skey{r13} \MID \skey{r14} \MID \skey{r15}} \newcommand{\ASTXIntRacket}{ \begin{array}{lcl} \Reg &::=& \allregisters{} \\ \Arg &::=& \IMM{\Int} \MID \REG{\Reg} \MID \DEREF{\Reg}{\Int} \\ \Instr &::=& \BININSTR{\code{addq}}{\Arg}{\Arg} \MID \BININSTR{\code{subq}}{\Arg}{\Arg}\\ &\MID& \UNIINSTR{\code{negq}}{\Arg} \MID \BININSTR{\code{movq}}{\Arg}{\Arg}\\ &\MID& \PUSHQ{\Arg} \MID \POPQ{\Arg} \\ &\MID& \CALLQ{\itm{label}}{\itm{int}} \MID \RETQ{} \MID \JMP{\itm{label}} \\ \Block &::= & \BLOCK{\itm{info}}{\LP\Instr\ldots\RP} \end{array} } \newcommand{\ASTXIntPython}{ \begin{array}{lcl} \Reg &::=& \allregisters{} \\ \Arg &::=& \IMM{\Int} \MID \REG{\Reg} \MID \DEREF{\Reg}{\Int} \\ \Instr &::=& \BININSTR{\skey{addq}}{\Arg}{\Arg} \MID \BININSTR{\skey{subq}}{\Arg}{\Arg}\\ &\MID& \UNIINSTR{\skey{negq}}{\Arg} \MID \BININSTR{\skey{movq}}{\Arg}{\Arg}\\ &\MID& \PUSHQ{\Arg} \MID \POPQ{\Arg} \\ &\MID& \CALLQ{\itm{label}}{\itm{int}} \MID \RETQ{} \MID \JMP{\itm{label}} \\ \Block &::= & \Instr^{+} \end{array} } \begin{figure}[tp] \begin{tcolorbox}[colback=white] \small {\if\edition\racketEd \[\arraycolsep=3pt \begin{array}{l} \ASTXIntRacket \\ \begin{array}{lcl} \LangXIntM{} &::= & \XPROGRAM{\itm{info}}{\LP\LP\itm{label} \,\key{.}\, \Block \RP\ldots\RP} \end{array} \end{array} \] \fi} {\if\edition\pythonEd\pythonColor \[ \begin{array}{lcl} \Reg &::=& \allastregisters{} \\ \Arg &::=& \IMM{\Int} \MID \REG{\Reg} \MID \DEREF{\Reg}{\Int} \\ \Instr &::=& \BININSTR{\scode{addq}}{\Arg}{\Arg} \MID \BININSTR{\scode{subq}}{\Arg}{\Arg} \\ &\MID& \BININSTR{\scode{movq}}{\Arg}{\Arg} \MID \UNIINSTR{\scode{negq}}{\Arg}\\ &\MID& \PUSHQ{\Arg} \MID \POPQ{\Arg} \\ &\MID& \CALLQ{\itm{label}}{\itm{int}} \MID \RETQ{} \MID \JMP{\itm{label}} \\ \LangXIntM{} &::= & \XPROGRAM{}{\Instr^{*}}{} \end{array} \] \fi} \end{tcolorbox} \caption{The abstract syntax of \LangXInt{} assembly.} \label{fig:x86-int-ast} \end{figure} \section{Planning the Trip to x86} \label{sec:plan-s0-x86} To compile one language to another, it helps to focus on the differences between the two languages because the compiler will need to bridge those differences. What are the differences between \LangVar{} and x86 assembly? Here are some of the most important ones: \begin{enumerate} \item x86 arithmetic instructions typically have two arguments and update the second argument in place. In contrast, \LangVar{} arithmetic operations take two arguments and produce a new value. An x86 instruction may have at most one memory-accessing argument. Furthermore, some x86 instructions place special restrictions on their arguments. \item An argument of an \LangVar{} operator can be a deeply nested expression, whereas x86 instructions restrict their arguments to be integer constants, registers, and memory locations. {\if\edition\racketEd \item The order of execution in x86 is explicit in the syntax, which is a sequence of instructions and jumps to labeled positions, whereas in \LangVar{} the order of evaluation is a left-to-right depth-first traversal of the abstract syntax tree. \fi} \item A program in \LangVar{} can have any number of variables, whereas x86 has 16 registers and the procedure call stack. {\if\edition\racketEd \item Variables in \LangVar{} can shadow other variables with the same name. 
In x86, registers have unique names, and memory locations have unique addresses. \fi} \end{enumerate} We ease the challenge of compiling from \LangVar{} to x86 by breaking down the problem into several steps, which deal with these differences one at a time. Each of these steps is called a \emph{pass} of the compiler.\index{subject}{pass}\index{subject}{compiler pass} % This term indicates that each step passes over, or traverses, the AST of the program. % Furthermore, we follow the nanopass approach, which means that we strive for each pass to accomplish one clear objective rather than two or three at the same time. % We begin by sketching how we might implement each pass and give each pass a name. We then figure out an ordering of the passes and the input/output language for each pass. The very first pass has \LangVar{} as its input language, and the last pass has \LangXInt{} as its output language. In between these two passes, we can choose whichever language is most convenient for expressing the output of each pass, whether that be \LangVar{}, \LangXInt{}, or a new \emph{intermediate language} of our own design. Finally, to implement each pass we write one recursive function per nonterminal in the grammar of the input language of the pass. \index{subject}{intermediate language} Our compiler for \LangVar{} consists of the following passes: % \begin{description} {\if\edition\racketEd \item[\key{uniquify}] deals with the shadowing of variables by renaming every variable to a unique name. \fi} \item[\key{remove\_complex\_operands}] ensures that each subexpression of a primitive operation or function call is a variable or integer, that is, an \emph{atomic} expression. We refer to nonatomic expressions as \emph{complex}. This pass introduces temporary variables to hold the results of complex subexpressions.\index{subject}{atomic expression}\index{subject}{complex expression}% {\if\edition\racketEd \item[\key{explicate\_control}] makes the execution order of the program explicit. It converts the abstract syntax tree representation into a graph in which each node is a labeled sequence of statements and the edges are \code{goto} statements. \fi} \item[\key{select\_instructions}]\index{subject}{select instructions} handles the difference between \LangVar{} operations and x86 instructions. This pass converts each \LangVar{} operation to a short sequence of instructions that accomplishes the same task. \item[\key{assign\_homes}] replaces variables with registers or stack locations. \end{description} % {\if\edition\racketEd % Our treatment of \code{remove\_complex\_operands} and \code{explicate\_control} as separate passes is an example of the nanopass approach.\footnote{For analogous decompositions of the translation into continuation passing style, see the work of \citet{Lawall:1993} and \citet{Hatcliff:1994ea}.} The traditional approach is to combine them into a single step~\citep{Aho:2006wb}. % \fi} The next question is, in what order should we apply these passes? This question can be challenging because it is difficult to know ahead of time which orderings will be better (that is, will be easier to implement, produce more efficient code, and so on), and therefore ordering often involves trial and error. Nevertheless, we can plan ahead and make educated choices regarding the ordering. \racket{What should be the ordering of \key{explicate\_control} with respect to \key{uniquify}? 
The \key{uniquify} pass should come first because \key{explicate\_control} changes all the \key{let}-bound variables to become local variables whose scope is the entire program, which would confuse variables with the same name.} % \racket{We place \key{remove\_complex\_operands} before \key{explicate\_control} because the later removes the \key{let} form, but it is convenient to use \key{let} in the output of \key{remove\_complex\_operands}.} % \racket{The ordering of \key{uniquify} with respect to \key{remove\_complex\_operands} does not matter, so we arbitrarily choose \key{uniquify} to come first.} The \key{select\_instructions} and \key{assign\_homes} passes are intertwined. % In chapter~\ref{ch:Lfun} we learn that in x86, registers are used for passing arguments to functions and that it is preferable to assign parameters to their corresponding registers. This suggests that it would be better to start with the \key{select\_instructions} pass, which generates the instructions for argument passing, before performing register allocation. % On the other hand, by selecting instructions first we may run into a dead end in \key{assign\_homes}. Recall that only one argument of an x86 instruction may be a memory access, but \key{assign\_homes} might be forced to assign both arguments to memory locations. % A sophisticated approach is to repeat the two passes until a solution is found. However, to reduce implementation complexity we recommend placing \key{select\_instructions} first, followed by the \key{assign\_homes}, and then a third pass named \key{patch\_instructions} that uses a reserved register to fix outstanding problems. \begin{figure}[tbp] \begin{tcolorbox}[colback=white] {\if\edition\racketEd \begin{tikzpicture}[baseline=(current bounding box.center),scale=0.90] \node (Lvar) at (0,2) {\large \LangVar{}}; \node (Lvar-2) at (3,2) {\large \LangVar{}}; \node (Lvar-3) at (7,2) {\large \LangVarANF{}}; %\node (Cvar-1) at (6,0) {\large \LangCVar{}}; \node (Cvar-2) at (0,0) {\large \LangCVar{}}; \node (x86-2) at (0,-2) {\large \LangXVar{}}; \node (x86-3) at (3,-2) {\large \LangXVar{}}; \node (x86-4) at (7,-2) {\large \LangXInt{}}; \node (x86-5) at (11,-2) {\large \LangXInt{}}; \path[->,bend left=15] (Lvar) edge [above] node {\ttfamily\footnotesize uniquify} (Lvar-2); \path[->,bend left=15] (Lvar-2) edge [above] node {\ttfamily\footnotesize remove\_complex\_operands} (Lvar-3); \path[->,bend left=15] (Lvar-3) edge [right] node {\ttfamily\footnotesize\ \ explicate\_control} (Cvar-2); \path[->,bend right=15] (Cvar-2) edge [right] node {\ttfamily\footnotesize select\_instructions} (x86-2); \path[->,bend right=15] (x86-2) edge [below] node {\ttfamily\footnotesize assign\_homes} (x86-3); \path[->,bend left=15] (x86-3) edge [above] node {\ttfamily\footnotesize patch\_instructions} (x86-4); \path[->,bend left=15] (x86-4) edge [above] node {\ttfamily\footnotesize prelude\_and\_conclusion} (x86-5); \end{tikzpicture} \fi} {\if\edition\pythonEd\pythonColor \begin{tikzpicture}[baseline=(current bounding box.center),scale=0.85] \node (Lvar) at (0,2) {\large \LangVar{}}; \node (Lvar-2) at (4,2) {\large \LangVarANF{}}; \node (x86-1) at (0,0) {\large \LangXVar{}}; \node (x86-2) at (4,0) {\large \LangXVar{}}; \node (x86-3) at (8,0) {\large \LangXInt{}}; \node (x86-4) at (12,0) {\large \LangXInt{}}; \path[->,bend left=15] (Lvar) edge [above] node {\ttfamily\footnotesize remove\_complex\_operands} (Lvar-2); \path[->,bend left=15] (Lvar-2) edge [left] node {\ttfamily\footnotesize select\_instructions\ \ } (x86-1); 
\path[->,bend right=15] (x86-1) edge [below] node {\ttfamily\footnotesize assign\_homes} (x86-2); \path[->,bend left=15] (x86-2) edge [above] node {\ttfamily\footnotesize patch\_instructions} (x86-3); \path[->,bend right=15] (x86-3) edge [below] node {\ttfamily\footnotesize prelude\_and\_conclusion} (x86-4); \end{tikzpicture} \fi} \end{tcolorbox} \caption{Diagram of the passes for compiling \LangVar{}. } \label{fig:Lvar-passes} \end{figure} Figure~\ref{fig:Lvar-passes} presents the ordering of the compiler passes and identifies the input and output language of each pass. % The output of the \key{select\_instructions} pass is the \LangXVar{} language, which extends \LangXInt{} with an unbounded number of program-scope variables and removes the restrictions regarding instruction arguments. % The last pass, \key{prelude\_and\_conclusion}, places the program instructions inside a \code{main} function with instructions for the prelude and conclusion. % \racket{In the next section we discuss the \LangCVar{} intermediate language that serves as the output of \code{explicate\_control}.} % The remainder of this chapter provides guidance on the implementation of each of the compiler passes represented in figure~\ref{fig:Lvar-passes}. %% The output of \key{uniquify} and \key{remove-complex-operands} %% are programs that are still in the \LangVar{} language, though the %% output of the later is a subset of \LangVar{} named \LangVarANF{} %% (section~\ref{sec:remove-complex-opera-Lvar}). %% % %% The output of \code{explicate\_control} is in an intermediate language %% \LangCVar{} designed to make the order of evaluation explicit in its %% syntax, which we introduce in the next section. The %% \key{select-instruction} pass translates from \LangCVar{} to %% \LangXVar{}. The \key{assign-homes} and %% \key{patch-instructions} %% passes input and output variants of x86 assembly. \newcommand{\CvarGrammarRacket}{ \begin{array}{lcl} \Atm &::=& \Int \MID \Var \\ \Exp &::=& \Atm \MID \CREAD{} \MID \CNEG{\Atm} \MID \CADD{\Atm}{\Atm} \MID \CSUB{\Atm}{\Atm}\\ \Stmt &::=& \CASSIGN{\Var}{\Exp} \\ \Tail &::= & \CRETURN{\Exp} \MID \Stmt~\Tail \end{array} } \newcommand{\CvarASTRacket}{ \begin{array}{lcl} \Atm &::=& \INT{\Int} \MID \VAR{\Var} \\ \Exp &::=& \Atm \MID \READ{} \MID \NEG{\Atm} \\ &\MID& \ADD{\Atm}{\Atm} \MID \SUB{\Atm}{\Atm}\\ \Stmt &::=& \ASSIGN{\VAR{\Var}}{\Exp} \\ \Tail &::= & \RETURN{\Exp} \MID \SEQ{\Stmt}{\Tail} \end{array} } {\if\edition\racketEd \subsection{The \LangCVar{} Intermediate Language} The output of \code{explicate\_control} is similar to the C language~\citep{Kernighan:1988nx} in that it has separate syntactic categories for expressions and statements, so we name it \LangCVar{}. This style of intermediate language is also known as \emph{three-address code}, to emphasize that the typical form of a statement such as \CASSIGN{\key{x}}{\CADD{\key{y}}{\key{z}}} involves three addresses: \code{x}, \code{y}, and \code{z}~\citep{Aho:2006wb}. The concrete syntax for \LangCVar{} is shown in figure~\ref{fig:c0-concrete-syntax}, and the abstract syntax for \LangCVar{} is shown in figure~\ref{fig:c0-syntax}. % The \LangCVar{} language supports the same operators as \LangVar{} but the arguments of operators are restricted to atomic expressions. Instead of \key{let} expressions, \LangCVar{} has assignment statements that can be executed in sequence using the \key{Seq} form. A sequence of statements always ends with \key{Return}, a guarantee that is baked into the grammar rules for \itm{tail}. 
The naming of this nonterminal comes from the term \emph{tail position}\index{subject}{tail position}, which refers to an expression that is the last one to execute within a function or program. A \LangCVar{} program consists of an alist mapping labels to tails. This is more general than necessary for the present chapter, as we do not yet introduce \key{goto} for jumping to labels, but it saves us from having to change the syntax in chapter~\ref{ch:Lif}. For now there is just one label, \key{start}, and the whole program is its tail. % The $\itm{info}$ field of the \key{CProgram} form, after the \code{explicate\_control} pass, contains an alist that associates the symbol \key{locals} with a list of all the variables used in the program. At the start of the program, these variables are uninitialized; they become initialized on their first assignment. \begin{figure}[tbp] \begin{tcolorbox}[colback=white] \[ \begin{array}{l} \CvarGrammarRacket \\ \begin{array}{lcl} \LangCVarM{} & ::= & (\itm{label}\key{:}~ \Tail)\ldots \end{array} \end{array} \] \end{tcolorbox} \caption{The concrete syntax of the \LangCVar{} intermediate language.} \label{fig:c0-concrete-syntax} \end{figure} \begin{figure}[tbp] \begin{tcolorbox}[colback=white] \[ \begin{array}{l} \CvarASTRacket \\ \begin{array}{lcl} \LangCVarM{} & ::= & \CPROGRAM{\itm{info}}{\LP\LP\itm{label}\,\key{.}\,\Tail\RP\ldots\RP} \end{array} \end{array} \] \end{tcolorbox} \caption{The abstract syntax of the \LangCVar{} intermediate language.} \label{fig:c0-syntax} \end{figure} The definitional interpreter for \LangCVar{} is in the support code, in the file \code{interp-Cvar.rkt}. \fi} {\if\edition\racketEd \section{Uniquify Variables} \label{sec:uniquify-Lvar} The \code{uniquify} pass replaces the variable bound by each \key{let} with a unique name. Both the input and output of the \code{uniquify} pass is the \LangVar{} language. For example, the \code{uniquify} pass should translate the program on the left into the program on the right. \begin{transformation} \begin{lstlisting} (let ([x 32]) (+ (let ([x 10]) x) x)) \end{lstlisting} \compilesto \begin{lstlisting} (let ([x.1 32]) (+ (let ([x.2 10]) x.2) x.1)) \end{lstlisting} \end{transformation} The following is another example translation, this time of a program with a \key{let} nested inside the initializing expression of another \key{let}. \begin{transformation} \begin{lstlisting} (let ([x (let ([x 4]) (+ x 1))]) (+ x 2)) \end{lstlisting} \compilesto \begin{lstlisting} (let ([x.2 (let ([x.1 4]) (+ x.1 1))]) (+ x.2 2)) \end{lstlisting} \end{transformation} We recommend implementing \code{uniquify} by creating a structurally recursive function named \code{uniquify\_exp} that does little other than copy an expression. However, when encountering a \key{let}, it should generate a unique name for the variable and associate the old name with the new name in an alist.\footnote{The Racket function \code{gensym} is handy for generating unique variable names.} The \code{uniquify\_exp} function needs to access this alist when it gets to a variable reference, so we add a parameter to \code{uniquify\_exp} for the alist. The skeleton of the \code{uniquify\_exp} function is shown in figure~\ref{fig:uniquify-Lvar}. %% The function is curried so that it is %% convenient to partially apply it to an alist and then apply it to %% different expressions, as in the last case for primitive operations in %% figure~\ref{fig:uniquify-Lvar}. 
The % \href{https://docs.racket-lang.org/reference/for.html#%28form._%28%28lib._racket%2Fprivate%2Fbase..rkt%29._for%2Flist%29%29}{\key{for/list}} % form of Racket is useful for transforming the element of a list to produce a new list.\index{subject}{for/list} \begin{figure}[tbp] \begin{tcolorbox}[colback=white] \begin{lstlisting} (define (uniquify_exp env) (lambda (e) (match e [(Var x) ___] [(Int n) (Int n)] [(Let x e body) ___] [(Prim op es) (Prim op (for/list ([e es]) ((uniquify_exp env) e)))]))) (define (uniquify p) (match p [(Program '() e) (Program '() ((uniquify_exp '()) e))])) \end{lstlisting} \end{tcolorbox} \caption{Skeleton for the \key{uniquify} pass.} \label{fig:uniquify-Lvar} \end{figure} \begin{exercise} \normalfont\normalsize % I don't like the italics for exercises. -Jeremy Complete the \code{uniquify} pass by filling in the blanks in figure~\ref{fig:uniquify-Lvar}; that is, implement the cases for variables and for the \key{let} form in the file \code{compiler.rkt} in the support code. \end{exercise} \begin{exercise} \normalfont\normalsize \label{ex:Lvar} Create five \LangVar{} programs that exercise the most interesting parts of the \key{uniquify} pass; that is, the programs should include \key{let} forms, variables, and variables that shadow each other. The five programs should be placed in the subdirectory named \key{tests}, and the file names should start with \code{var\_test\_} followed by a unique integer and end with the file extension \key{.rkt}. % The \key{run-tests.rkt} script in the support code checks whether the output programs produce the same result as the input programs. The script uses the \key{interp-tests} function (appendix~\ref{appendix:utilities}) from \key{utilities.rkt} to test your \key{uniquify} pass on the example programs. The \code{passes} parameter of \key{interp-tests} is a list that should have one entry for each pass in your compiler. For now, define \code{passes} to contain just one entry for \code{uniquify} as follows: \begin{lstlisting} (define passes (list (list "uniquify" uniquify interp_Lvar type-check-Lvar))) \end{lstlisting} Run the \key{run-tests.rkt} script in the support code to check whether the output programs produce the same result as the input programs. \end{exercise} \fi} \section{Remove Complex Operands} \label{sec:remove-complex-opera-Lvar} The \code{remove\_complex\_operands} pass compiles \LangVar{} programs into a restricted form in which the arguments of operations are atomic expressions. Put another way, this pass removes complex operands\index{subject}{complex operand}, such as the expression \racket{\code{(- 10)}}\python{\code{-10}} in the following program. This is accomplished by introducing a new temporary variable, assigning the complex operand to the new variable, and then using the new variable in place of the complex operand, as shown in the output of \code{remove\_complex\_operands} on the right. 
{\if\edition\racketEd \begin{transformation} % var_test_19.rkt \begin{lstlisting} (let ([x (+ 42 (- 10))]) (+ x 10)) \end{lstlisting} \compilesto \begin{lstlisting} (let ([x (let ([tmp.1 (- 10)]) (+ 42 tmp.1))]) (+ x 10)) \end{lstlisting} \end{transformation} \fi} {\if\edition\pythonEd\pythonColor \begin{transformation} \begin{lstlisting} x = 42 + -10 print(x + 10) \end{lstlisting} \compilesto \begin{lstlisting} tmp_0 = -10 x = 42 + tmp_0 tmp_1 = x + 10 print(tmp_1) \end{lstlisting} \end{transformation} \fi} \newcommand{\LvarMonadASTRacket}{ \begin{array}{rcl} \Atm &::=& \INT{\Int} \MID \VAR{\Var} \\ \Exp &::=& \Atm \MID \READ{} \\ &\MID& \NEG{\Atm} \MID \ADD{\Atm}{\Atm} \MID \SUB{\Atm}{\Atm} \\ &\MID& \LET{\Var}{\Exp}{\Exp} \\ \end{array} } \newcommand{\LvarMonadASTPython}{ \begin{array}{rcl} \Atm &::=& \INT{\Int} \MID \VAR{\Var} \\ \Exp{} &::=& \Atm \MID \READ{} \\ &\MID& \UNIOP{\key{USub()}}{\Atm} \MID \BINOP{\Atm}{\key{Add()}}{\Atm} \\ &\MID& \BINOP{\Atm}{\key{Sub()}}{\Atm} \\ \Stmt{} &::=& \PRINT{\Atm} \MID \EXPR{\Exp} \\ &\MID& \ASSIGN{\VAR{\Var}}{\Exp} \end{array} } \begin{figure}[tp] \centering \begin{tcolorbox}[colback=white] {\if\edition\racketEd \[ \begin{array}{l} \LvarMonadASTRacket \\ \begin{array}{rcl} \LangVarANFM{} &::=& \PROGRAM{\code{'()}}{\Exp} \end{array} \end{array} \] \fi} {\if\edition\pythonEd\pythonColor \[ \begin{array}{l} \LvarMonadASTPython \\ \begin{array}{rcl} \LangVarANFM{} &::=& \PROGRAM{}{\Stmt^{*}} \end{array} \end{array} \] \fi} \end{tcolorbox} \caption{\LangVarANF{} is \LangVar{} with operands restricted to atomic expressions.} \label{fig:Lvar-anf-syntax} \end{figure} Figure~\ref{fig:Lvar-anf-syntax} presents the grammar for the output of this pass, the language \LangVarANF{}. The only difference is that operator arguments are restricted to be atomic expressions that are defined by the \Atm{} nonterminal. In particular, integer constants and variables are atomic. The atomic expressions are pure (they do not cause or depend on side effects) whereas complex expressions may have side effects, such as \READ{}. A language with this separation between pure expressions versus expressions with side effects is said to be in monadic normal form~\citep{Moggi:1991in,Danvy:2003fk}, which explains the \textit{mon} in the name \LangVarANF{}. An important invariant of the \code{remove\_complex\_operands} pass is that the relative ordering among complex expressions is not changed, but the relative ordering between atomic expressions and complex expressions can change and often does. The reason that these changes are behavior preserving is that the atomic expressions are pure. {\if\edition\racketEd Another well-known form for intermediate languages is the \emph{administrative normal form} (ANF)~\citep{Danvy:1991fk,Flanagan:1993cg}. \index{subject}{administrative normal form} \index{subject}{ANF} % The \LangVarANF{} language is not quite in ANF because it allows the right-hand side of a \code{let} to be a complex expression, such as another \code{let}. The flattening of nested \code{let} expressions is instead one of the responsibilities of the \code{explicate\_control} pass. \fi} {\if\edition\racketEd We recommend implementing this pass with two mutually recursive functions, \code{rco\_atom} and \code{rco\_exp}. The idea is to apply \code{rco\_atom} to subexpressions that need to become atomic and to apply \code{rco\_exp} to subexpressions that do not. Both functions take an \LangVar{} expression as input. The \code{rco\_exp} function returns an expression. 
The \code{rco\_atom} function returns two things: an atomic expression and an alist mapping temporary variables to complex subexpressions. You can return multiple things from a function using Racket's \key{values} form, and you can receive multiple things from a function call using the \key{define-values} form. \fi} % {\if\edition\pythonEd\pythonColor % We recommend implementing this pass with an auxiliary method named \code{rco\_exp} with two parameters: an \LangVar{} expression and a Boolean that specifies whether the expression needs to become atomic or not. The \code{rco\_exp} method should return a pair consisting of the new expression and a list of pairs, associating new temporary variables with their initializing expressions. % \fi} {\if\edition\racketEd % In the example program with the expression \code{(+ 42 (- 10))}, the subexpression \code{(- 10)} should be processed using the \code{rco\_atom} function because it is an argument of the \code{+} operator and therefore needs to become atomic. The output of \code{rco\_atom} applied to \code{(- 10)} is as follows: \begin{transformation} \begin{lstlisting} (- 10) \end{lstlisting} \compilesto \begin{lstlisting} tmp.1 ((tmp.1 . (- 10))) \end{lstlisting} \end{transformation} \fi} % {\if\edition\pythonEd\pythonColor % Returning to the example program with the expression \code{42 + -10}, the subexpression \code{-10} should be processed using the \code{rco\_exp} function with \code{True} as the second argument, because \code{-10} is an argument of the \code{+} operator and therefore needs to become atomic. The output of \code{rco\_exp} applied to \code{-10} is as follows. \begin{transformation} \begin{lstlisting} -10 \end{lstlisting} \compilesto \begin{lstlisting} tmp_1 [(tmp_1, -10)] \end{lstlisting} \end{transformation} % \fi} Take special care of programs, such as the following, that % \racket{bind a variable to an atomic expression.} % \python{assign an atomic expression to a variable.} % You should leave such \racket{variable bindings}\python{assignments} unchanged, as shown in the program on the right:\\ % {\if\edition\racketEd \begin{transformation} % var_test_20.rkt \begin{lstlisting} (let ([a 42]) (let ([b a]) b)) \end{lstlisting} \compilesto \begin{lstlisting} (let ([a 42]) (let ([b a]) b)) \end{lstlisting} \end{transformation} \fi} {\if\edition\pythonEd\pythonColor \begin{transformation} \begin{lstlisting} a = 42 b = a print(b) \end{lstlisting} \compilesto \begin{lstlisting} a = 42 b = a print(b) \end{lstlisting} \end{transformation} \fi} % \noindent A careless implementation might produce the following output with unnecessary temporary variables. \begin{center} \begin{minipage}{0.4\textwidth} {\if\edition\racketEd \begin{lstlisting} (let ([tmp.1 42]) (let ([a tmp.1]) (let ([tmp.2 a]) (let ([b tmp.2]) b)))) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} tmp_1 = 42 a = tmp_1 tmp_2 = a b = tmp_2 print(b) \end{lstlisting} \fi} \end{minipage} \end{center} \begin{exercise} \normalfont\normalsize {\if\edition\racketEd Implement the \code{remove\_complex\_operands} function in \code{compiler.rkt}. % Create three new \LangVar{} programs that exercise the interesting code in the \code{remove\_complex\_operands} pass. Follow the guidelines regarding file names described in exercise~\ref{ex:Lvar}. % In the \code{run-tests.rkt} script, add the following entry to the list of \code{passes}, and then run the script to test your compiler. 
\begin{lstlisting} (list "remove-complex" remove_complex_operands interp_Lvar type-check-Lvar) \end{lstlisting} In debugging your compiler, it is often useful to see the intermediate programs that are output from each pass. To print the intermediate programs, place \lstinline{(debug-level 1)} before the call to \code{interp-tests} in \code{run-tests.rkt}. \fi} % {\if\edition\pythonEd\pythonColor Implement the \code{remove\_complex\_operands} pass in \code{compiler.py}, creating auxiliary functions for each nonterminal in the grammar, that is, \code{rco\_exp} and \code{rco\_stmt}. We recommend that you use the function \code{utils.generate\_name()} to generate fresh names from a stub string. \fi} \end{exercise} {\if\edition\pythonEd\pythonColor \begin{exercise} \normalfont\normalsize \label{ex:Lvar} Create five \LangVar{} programs that exercise the most interesting parts of the \code{remove\_complex\_operands} pass. The five programs should be placed in the subdirectory named \key{tests}, and the file names should start with \code{var\_test\_} followed by a unique integer and end with the file extension \key{.py}. %% The \key{run-tests.rkt} script in the support code checks whether the %% output programs produce the same result as the input programs. The %% script uses the \key{interp-tests} function %% (Appendix~\ref{appendix:utilities}) from \key{utilities.rkt} to test %% your \key{uniquify} pass on the example programs. The \code{passes} %% parameter of \key{interp-tests} is a list that should have one entry %% for each pass in your compiler. For now, define \code{passes} to %% contain just one entry for \code{uniquify} as shown below. %% \begin{lstlisting} %% (define passes %% (list (list "uniquify" uniquify interp_Lvar type-check-Lvar))) %% \end{lstlisting} Run the \key{run-tests.py} script in the support code to check whether the output programs produce the same result as the input programs. \end{exercise} \fi} {\if\edition\racketEd \section{Explicate Control} \label{sec:explicate-control-Lvar} The \code{explicate\_control} pass compiles \LangVar{} programs into \LangCVar{} programs that make the order of execution explicit in their syntax. For now this amounts to flattening \key{let} constructs into a sequence of assignment statements. For example, consider the following \LangVar{} program:\\ % var_test_11.rkt \begin{minipage}{0.96\textwidth} \begin{lstlisting} (let ([y (let ([x 20]) (+ x (let ([x 22]) x)))]) y) \end{lstlisting} \end{minipage}\\ % The output of the previous pass is shown next, on the left, and the output of \code{explicate\_control} is on the right. Recall that the right-hand side of a \key{let} executes before its body, so that the order of evaluation for this program is to assign \code{20} to \code{x.1}, \code{22} to \code{x.2}, and \code{(+ x.1 x.2)} to \code{y}, and then to return \code{y}. Indeed, the output of \code{explicate\_control} makes this ordering explicit. 
\begin{transformation}
\begin{lstlisting}
(let ([y (let ([x.1 20])
           (let ([x.2 22])
             (+ x.1 x.2)))])
  y)
\end{lstlisting}
\compilesto
\begin{lstlisting}[language=C]
start:
    x.1 = 20;
    x.2 = 22;
    y = (+ x.1 x.2);
    return y;
\end{lstlisting}
\end{transformation}

\begin{figure}[tbp]
\begin{tcolorbox}[colback=white]
\begin{lstlisting}
(define (explicate_tail e)
  (match e
    [(Var x) ___]
    [(Int n) (Return (Int n))]
    [(Let x rhs body) ___]
    [(Prim op es) ___]
    [else (error "explicate_tail unhandled case" e)]))

(define (explicate_assign e x cont)
  (match e
    [(Var x) ___]
    [(Int n) (Seq (Assign (Var x) (Int n)) cont)]
    [(Let y rhs body) ___]
    [(Prim op es) ___]
    [else (error "explicate_assign unhandled case" e)]))

(define (explicate_control p)
  (match p
    [(Program info body) ___]))
\end{lstlisting}
\end{tcolorbox}
\caption{Skeleton for the \code{explicate\_control} pass.}
\label{fig:explicate-control-Lvar}
\end{figure}

The organization of this pass depends on the notion of tail position
to which we have alluded. Here is the definition.

\begin{definition}\normalfont
The following rules define when an expression is in \emph{tail
  position}\index{subject}{tail position} for the language \LangVar{}.
\begin{enumerate}
\item In $\PROGRAM{\code{()}}{e}$, expression $e$ is in tail position.
\item If $\LET{x}{e_1}{e_2}$ is in tail position, then so is $e_2$.
\end{enumerate}
\end{definition}

We recommend implementing \code{explicate\_control} using two
recursive functions, \code{explicate\_tail} and
\code{explicate\_assign}, as suggested in the skeleton code shown in
figure~\ref{fig:explicate-control-Lvar}. The \code{explicate\_tail}
function should be applied to expressions in tail position, whereas
the \code{explicate\_assign} function should be applied to expressions
that occur on the right-hand side of a \key{let}.
%
The \code{explicate\_tail} function takes an \Exp{} in \LangVar{} as
input and produces a \Tail{} in \LangCVar{} (see
figure~\ref{fig:c0-syntax}).
%
The \code{explicate\_assign} function takes an \Exp{} in \LangVar{},
the variable to which it is to be assigned, and a \Tail{} in
\LangCVar{} for the code that comes after the assignment. The
\code{explicate\_assign} function returns a $\Tail$ in \LangCVar{}.

The \code{explicate\_assign} function is in accumulator-passing style:
the \code{cont} parameter is used for accumulating the output. This
accumulator-passing style plays an important role in the way that we
generate high-quality code for conditional expressions in
chapter~\ref{ch:Lif}. The abbreviation \code{cont} is short for
continuation because it contains the generated code that should come
after the current assignment. This code organization is also related
to continuation-passing style, except that \code{cont} is not what
happens next during compilation but is what happens next in the
generated code.

\begin{exercise}\normalfont\normalsize
%
Implement the \code{explicate\_control} function in
\code{compiler.rkt}. Create three new \LangVar{} programs that
exercise the code in \code{explicate\_control}.
%
In the \code{run-tests.rkt} script, add the following entry to the
list of \code{passes} and then run the script to test your compiler.
\begin{lstlisting}
(list "explicate control" explicate_control interp_Cvar type-check-Cvar)
\end{lstlisting}
\end{exercise}
\fi}
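
{\if\edition\pythonEd\pythonColor
To make the recommended structure of \code{remove\_complex\_operands}
more concrete, the following sketch shows one possible shape for
\code{rco\_exp}. It is only a sketch, not the reference solution: the
two-result convention and the \code{need\_atomic} flag are one design
among several, and \code{rco\_stmt} is left to you. The function
returns the new expression together with a list of pairs that
associate fresh temporary variables with the complex expressions they
replace.
\begin{lstlisting}[escapechar=$]
from ast import BinOp, UnaryOp, Call, Constant, Name
from utils import generate_name   # support code: fresh names from a stub

def rco_exp(e, need_atomic):
    # Returns (new_exp, temporaries), where temporaries is a list of
    # (name, exp) pairs that must be assigned before new_exp is used.
    match e:
        case Constant(_) | Name(_):
            return e, []                      # already atomic
        case UnaryOp(op, operand):
            atm, temps = rco_exp(operand, True)
            new_e = UnaryOp(op, atm)
        case BinOp(left, op, right):
            l, ts1 = rco_exp(left, True)
            r, ts2 = rco_exp(right, True)
            new_e, temps = BinOp(l, op, r), ts1 + ts2
        case Call(Name('input_int'), []):
            new_e, temps = e, []
        case _:
            raise Exception('rco_exp unexpected ' + repr(e))
    if need_atomic:
        tmp = generate_name('tmp')
        return Name(tmp), temps + [(tmp, new_e)]
    return new_e, temps
\end{lstlisting}
\fi}
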
\section{Select Instructions}
\label{sec:select-Lvar}
\index{subject}{select instructions}

In the \code{select\_instructions} pass we begin the work of
translating \racket{from \LangCVar{}} to \LangXVar{}. The target
language of this pass is a variant of x86 that still uses variables,
so we add an AST node of the form $\VAR{\itm{var}}$ to the \Arg{}
nonterminal of the \LangXInt{} abstract syntax
(figure~\ref{fig:x86-int-ast}).
\racket{We recommend implementing \code{select\_instructions} with
  three auxiliary functions, one for each of the nonterminals of
  \LangCVar{}: $\Atm$, $\Stmt$, and $\Tail$.}
\python{We recommend implementing an auxiliary function named
  \code{select\_stmt} for the $\Stmt$ nonterminal.}
\racket{The cases for $\Atm$ are straightforward; variables stay the
  same and integer constants change to immediates; that is, $\INT{n}$
  changes to $\IMM{n}$.}

Next consider the cases for the $\Stmt$ nonterminal, starting with
arithmetic operations. For example, consider the following addition
operation, on the left side. (Let $\Arg_1$ and $\Arg_2$ be the
translations of $\Atm_1$ and $\Atm_2$, respectively.) There is an
\key{addq} instruction in x86, but it performs an in-place update.
%
So, we could move $\Arg_1$ into the \code{rax} register, then add
$\Arg_2$ to \code{rax}, and then finally move \code{rax} into the
left-hand \itm{var}.
\begin{transformation}
{\if\edition\racketEd
\begin{lstlisting}
|$\itm{var}$| = (+ |$\Atm_1$| |$\Atm_2$|);
\end{lstlisting}
\fi}
{\if\edition\pythonEd\pythonColor
\begin{lstlisting}
|$\itm{var}$| = |$\Atm_1$| + |$\Atm_2$|
\end{lstlisting}
\fi}
\compilesto
\begin{lstlisting}
movq |$\Arg_1$|, %rax
addq |$\Arg_2$|, %rax
movq %rax, |$\itm{var}$|
\end{lstlisting}
\end{transformation}
%
However, with some care we can generate shorter sequences of
instructions. Suppose that one or more of the arguments of the
addition is the same variable as the left-hand side of the
assignment. Then the assignment statement can be translated into a
single \key{addq} instruction, as follows.
\begin{transformation}
{\if\edition\racketEd
\begin{lstlisting}
|$\itm{var}$| = (+ |$\Atm_1$| |$\itm{var}$|);
\end{lstlisting}
\fi}
{\if\edition\pythonEd\pythonColor
\begin{lstlisting}
|$\itm{var}$| = |$\Atm_1$| + |$\itm{var}$|
\end{lstlisting}
\fi}
\compilesto
\begin{lstlisting}
addq |$\Arg_1$|, |$\itm{var}$|
\end{lstlisting}
\end{transformation}
%
On the other hand, if $\Atm_2$ is not the same variable as the
left-hand side, then we can move $\Arg_1$ into the left-hand \itm{var}
and then add $\Arg_2$ to \itm{var}.
%
\begin{transformation}
{\if\edition\racketEd
\begin{lstlisting}
|$\itm{var}$| = (+ |$\Atm_1$| |$\Atm_2$|);
\end{lstlisting}
\fi}
{\if\edition\pythonEd\pythonColor
\begin{lstlisting}
|$\itm{var}$| = |$\Atm_1$| + |$\Atm_2$|
\end{lstlisting}
\fi}
\compilesto
\begin{lstlisting}
movq |$\Arg_1$|, |$\itm{var}$|
addq |$\Arg_2$|, |$\itm{var}$|
\end{lstlisting}
\end{transformation}

The \READOP{} operation does not have a direct counterpart in x86
assembly, so we provide this functionality with the function
\code{read\_int} in the file \code{runtime.c}, written in
C~\citep{Kernighan:1988nx}. In general, we refer to all the
functionality in this file as the \emph{runtime
  system}\index{subject}{runtime system}, or simply the \emph{runtime}
for short. When compiling your generated x86 assembly code, you need
to compile \code{runtime.c} to \code{runtime.o} (an \emph{object
  file}, using \code{gcc} with option \code{-c}) and link it into the
executable. For our purposes of code generation, all you need to do is
translate an assignment of \READOP{} into a call to the
\code{read\_int} function followed by a move from \code{rax} to the
left-hand side variable.
(Recall that the return value of a function goes into \code{rax}.) \begin{transformation} {\if\edition\racketEd \begin{lstlisting} |$\itm{var}$| = (read); \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} |$\itm{var}$| = input_int(); \end{lstlisting} \fi} \compilesto \begin{lstlisting} callq read_int movq %rax, |$\itm{var}$| \end{lstlisting} \end{transformation} {\if\edition\pythonEd\pythonColor % Similarly, we translate the \code{print} operation, shown below, into a call to the \code{print\_int} function defined in \code{runtime.c}. In x86, the first six arguments to functions are passed in registers, with the first argument passed in register \code{rdi}. So we move the $\Arg$ into \code{rdi} and then call \code{print\_int} using the \code{callq} instruction. \begin{transformation} \begin{lstlisting} print(|$\Atm$|) \end{lstlisting} \compilesto \begin{lstlisting} movq |$\Arg$|, %rdi callq print_int \end{lstlisting} \end{transformation} % \fi} {\if\edition\racketEd There are two cases for the $\Tail$ nonterminal: \key{Return} and \key{Seq}. Regarding \key{Return}, we recommend treating it as an assignment to the \key{rax} register followed by a jump to the conclusion of the program (so the conclusion needs to be labeled). For $\SEQ{s}{t}$, you can translate the statement $s$ and tail $t$ recursively and then append the resulting instructions. \fi} {\if\edition\pythonEd\pythonColor We recommend that you use the function \code{utils.label\_name()} to transform strings into labels, for example, in the target of the \code{callq} instruction. This practice makes your compiler portable across Linux and Mac OS X, which requires an underscore prefixed to all labels. \fi} \begin{exercise} \normalfont\normalsize {\if\edition\racketEd Implement the \code{select\_instructions} pass in \code{compiler.rkt}. Create three new example programs that are designed to exercise all the interesting cases in this pass. % In the \code{run-tests.rkt} script, add the following entry to the list of \code{passes} and then run the script to test your compiler. \begin{lstlisting} (list "instruction selection" select_instructions interp_pseudo-x86-0) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor Implement the \key{select\_instructions} pass in \code{compiler.py}. Create three new example programs that are designed to exercise all the interesting cases in this pass. Run the \code{run-tests.py} script to check whether the output programs produce the same result as the input programs. \fi} \end{exercise} \section{Assign Homes} \label{sec:assign-Lvar} The \code{assign\_homes} pass compiles \LangXVar{} programs to \LangXVar{} programs that no longer use program variables. Thus, the \code{assign\_homes} pass is responsible for placing all the program variables in registers or on the stack. For runtime efficiency, it is better to place variables in registers, but because there are only sixteen registers, some programs must necessarily resort to placing some variables on the stack. In this chapter we focus on the mechanics of placing variables on the stack. We study an algorithm for placing variables in registers in chapter~\ref{ch:register-allocation-Lvar}. 
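
{\if\edition\pythonEd\pythonColor
The following sketch (a stand-alone helper, not part of the support
code) illustrates the bookkeeping that \code{assign\_homes} performs:
each variable receives the next 8-byte slot below the base pointer,
and the total is rounded up to a multiple of 16 to obtain the frame
size.
\begin{lstlisting}
def assign_stack_homes(variables):
    # The first variable goes to -8(%rbp), the second to -16(%rbp), etc.
    homes = {x: -8 * (i + 1) for i, x in enumerate(variables)}
    # Round the frame size up to the next multiple of 16 bytes.
    frame_size = (8 * len(variables) + 15) & ~15
    return homes, frame_size

print(assign_stack_homes(['a', 'b']))   # ({'a': -8, 'b': -16}, 16)
\end{lstlisting}
\fi}
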
Consider again the following \LangVar{} program from section~\ref{sec:remove-complex-opera-Lvar}:\\ % var_test_20.rkt \begin{minipage}{0.96\textwidth} {\if\edition\racketEd \begin{lstlisting} (let ([a 42]) (let ([b a]) b)) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} a = 42 b = a print(b) \end{lstlisting} \fi} \end{minipage}\\ % The output of \code{select\_instructions} is shown next, on the left, and the output of \code{assign\_homes} is on the right. In this example, we assign variable \code{a} to stack location \code{-8(\%rbp)} and variable \code{b} to location \code{-16(\%rbp)}. \begin{transformation} \begin{lstlisting}[basicstyle=\ttfamily\footnotesize] movq $42, a movq a, b movq b, %rax \end{lstlisting} \compilesto %stack-space: 16 \begin{lstlisting}[basicstyle=\ttfamily\footnotesize] movq $42, -8(%rbp) movq -8(%rbp), -16(%rbp) movq -16(%rbp), %rax \end{lstlisting} \end{transformation} \racket{ The \code{assign\_homes} pass should replace all variables with stack locations. The list of variables can be obtained from the \code{locals-types} entry in the $\itm{info}$ of the \code{X86Program} node. The \code{locals-types} entry is an alist mapping all the variables in the program to their types (for now, just \code{Integer}). As an aside, the \code{locals-types} entry is computed by \code{type-check-Cvar} in the support code, which installs it in the $\itm{info}$ field of the \code{CProgram} node, which you should propagate to the \code{X86Program} node.} % \python{The \code{assign\_homes} pass should replace all uses of variables with stack locations.} % In the process of assigning variables to stack locations, it is convenient for you to compute and store the size of the frame (in bytes) in \racket{the $\itm{info}$ field of the \key{X86Program} node, with the key \code{stack-space},} % \python{the field \code{stack\_space} of the \key{X86Program} node,} % which is needed later to generate the conclusion of the \code{main} procedure. The x86-64 standard requires the frame size to be a multiple of 16 bytes.\index{subject}{frame} % TODO: store the number of variables instead? -Jeremy \begin{exercise}\normalfont\normalsize Implement the \code{assign\_homes} pass in \racket{\code{compiler.rkt}}\python{\code{compiler.py}}, defining auxiliary functions for each of the nonterminals in the \LangXVar{} grammar. We recommend that the auxiliary functions take an extra parameter that maps variable names to homes (stack locations for now). % {\if\edition\racketEd In the \code{run-tests.rkt} script, add the following entry to the list of \code{passes} and then run the script to test your compiler. \begin{lstlisting} (list "assign homes" assign-homes interp_x86-0) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor Run the \code{run-tests.py} script to check whether the output programs produce the same result as the input programs. \fi} \end{exercise} \section{Patch Instructions} \label{sec:patch-s0} The \code{patch\_instructions} pass compiles from \LangXVar{} to \LangXInt{} by making sure that each instruction adheres to the restriction that at most one argument of an instruction may be a memory reference. We return to the following example.\\ \begin{minipage}{0.5\textwidth} % var_test_20.rkt {\if\edition\racketEd \begin{lstlisting} (let ([a 42]) (let ([b a]) b)) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} a = 42 b = a print(b) \end{lstlisting} \fi} \end{minipage}\\ The \code{assign\_homes} pass produces the following translation. 
\\ \begin{minipage}{0.5\textwidth} {\if\edition\racketEd \begin{lstlisting} movq $42, -8(%rbp) movq -8(%rbp), -16(%rbp) movq -16(%rbp), %rax \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} movq $42, -8(%rbp) movq -8(%rbp), -16(%rbp) movq -16(%rbp), %rdi callq print_int \end{lstlisting} \fi} \end{minipage}\\ The second \key{movq} instruction is problematic because both arguments are stack locations. We suggest fixing this problem by moving from the source location to the register \key{rax} and then from \key{rax} to the destination location, as follows. \begin{lstlisting} movq -8(%rbp), %rax movq %rax, -16(%rbp) \end{lstlisting} There is a similar corner case that also needs to be dealt with. If one argument is an immediate integer larger than $2^{16}$ and the other is a memory reference, then the instruction is invalid. One can fix this, for example, by first moving the immediate integer into \key{rax} and then using \key{rax} in place of the integer. \begin{exercise} \normalfont\normalsize Implement the \key{patch\_instructions} pass in \racket{\code{compiler.rkt}}\python{\code{compiler.py}}. Create three new example programs that are designed to exercise all the interesting cases in this pass. % {\if\edition\racketEd In the \code{run-tests.rkt} script, add the following entry to the list of \code{passes} and then run the script to test your compiler. \begin{lstlisting} (list "patch instructions" patch_instructions interp_x86-0) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor Run the \code{run-tests.py} script to check whether the output programs produce the same result as the input programs. \fi} \end{exercise} \section{Generate Prelude and Conclusion} \label{sec:print-x86} \index{subject}{prelude}\index{subject}{conclusion} The last step of the compiler from \LangVar{} to x86 is to generate the \code{main} function with a prelude and conclusion wrapped around the rest of the program, as shown in figure~\ref{fig:p1-x86} and discussed in section~\ref{sec:x86}. When running on Mac OS X, your compiler should prefix an underscore to all labels (for example, changing \key{main} to \key{\_main}). % \racket{The Racket call \code{(system-type 'os)} is useful for determining which operating system the compiler is running on. It returns \code{'macosx}, \code{'unix}, or \code{'windows}.} % \python{The Python \code{platform} library includes a \code{system()} function that returns \code{\textquotesingle Linux\textquotesingle}, \code{\textquotesingle Windows\textquotesingle}, or \code{\textquotesingle Darwin\textquotesingle} (for Mac).} \begin{exercise}\normalfont\normalsize % Implement the \key{prelude\_and\_conclusion} pass in \racket{\code{compiler.rkt}}\python{\code{compiler.py}}. % {\if\edition\racketEd In the \code{run-tests.rkt} script, add the following entry to the list of \code{passes} and then run the script to test your compiler. \begin{lstlisting} (list "prelude and conclusion" prelude-and-conclusion interp_x86-0) \end{lstlisting} % Uncomment the call to the \key{compiler-tests} function (appendix~\ref{appendix:utilities}), which tests your complete compiler by executing the generated x86 code. It translates the x86 AST that you produce into a string by invoking the \code{print-x86} method of the \code{print-x86-class} in \code{utilities.rkt}. Compile the provided \key{runtime.c} file to \key{runtime.o} using \key{gcc}. Run the script to test your compiler. 
% \fi} {\if\edition\pythonEd\pythonColor % Run the \code{run-tests.py} script to check whether the output programs produce the same result as the input programs. That script translates the x86 AST that you produce into a string by invoking the \code{repr} method that is implemented by the x86 AST classes in \code{x86\_ast.py}. % \fi} \end{exercise} \section{Challenge: Partial Evaluator for \LangVar{}} \label{sec:pe-Lvar} \index{subject}{partialevaluation@partial evaluation} This section describes two optional challenge exercises that involve adapting and improving the partial evaluator for \LangInt{} that was introduced in section~\ref{sec:partial-evaluation}. \begin{exercise}\label{ex:pe-Lvar} \normalfont\normalsize Adapt the partial evaluator from section~\ref{sec:partial-evaluation} (figure~\ref{fig:pe-arith}) so that it applies to \LangVar{} programs instead of \LangInt{} programs. Recall that \LangVar{} adds variables and % \racket{\key{let} binding}\python{assignment} % to the \LangInt{} language, so you will need to add cases for them in the \code{pe\_exp} % \racket{function.} % \python{and \code{pe\_stmt} functions.} % Once complete, add the partial evaluation pass to the front of your compiler, and make sure that your compiler still passes all the tests. \end{exercise} \begin{exercise} \normalfont\normalsize Improve on the partial evaluator by replacing the \code{pe\_neg} and \code{pe\_add} auxiliary functions with functions that know more about arithmetic. For example, your partial evaluator should translate {\if\edition\racketEd \[ \code{(+ 1 (+ (read) 1))} \qquad \text{into} \qquad \code{(+ 2 (read))} \] \fi} {\if\edition\pythonEd\pythonColor \[ \code{1 + (input\_int() + 1)} \qquad \text{into} \qquad \code{2 + input\_int()} \] \fi} % To accomplish this, the \code{pe\_exp} function should produce output in the form of the $\itm{residual}$ nonterminal of the following grammar. The idea is that when processing an addition expression, we can always produce one of the following: (1) an integer constant, (2) an addition expression with an integer constant on the left-hand side but not the right-hand side, or (3) an addition expression in which neither subexpression is a constant. % {\if\edition\racketEd \[ \begin{array}{lcl} \itm{inert} &::=& \Var \MID \LP\key{read}\RP \MID \LP\key{-} ~\Var\RP \MID \LP\key{-} ~\LP\key{read}\RP\RP \MID \LP\key{+} ~ \itm{inert} ~ \itm{inert}\RP\\ &\MID& \LP\key{let}~\LP\LS\Var~\itm{residual}\RS\RP~ \itm{residual} \RP \\ \itm{residual} &::=& \Int \MID \LP\key{+}~ \Int~ \itm{inert}\RP \MID \itm{inert} \end{array} \] \fi} {\if\edition\pythonEd\pythonColor \[ \begin{array}{lcl} \itm{inert} &::=& \Var \MID \key{input\_int}\LP\RP \MID \key{-} \Var \MID \key{-} \key{input\_int}\LP\RP \MID \itm{inert} ~ \key{+} ~ \itm{inert}\\ \itm{residual} &::=& \Int \MID \Int ~ \key{+} ~ \itm{inert} \MID \itm{inert} \end{array} \] \fi} The \code{pe\_add} and \code{pe\_neg} functions may assume that their inputs are $\itm{residual}$ expressions and they should return $\itm{residual}$ expressions. Once the improvements are complete, make sure that your compiler still passes all the tests. After all, fast code is useless if it produces incorrect results! 
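
{\if\edition\pythonEd\pythonColor
To give a sense of the case analysis involved, here is a sketch of one
possible \code{pe\_add} over $\itm{residual}$ expressions (using the
\code{ast} classes \code{Constant}, \code{Add}, and \code{BinOp}); it
is a starting point, not the reference solution.
\begin{lstlisting}
from ast import BinOp, Add, Constant

def pe_add(r1, r2):
    # r1 and r2 are residuals: a constant, a constant plus an inert
    # expression, or an inert expression.
    match (r1, r2):
        case (Constant(n1), Constant(n2)):
            return Constant(n1 + n2)
        case (Constant(n1), BinOp(Constant(n2), Add(), inert)):
            return BinOp(Constant(n1 + n2), Add(), inert)
        case (BinOp(Constant(n1), Add(), inert), Constant(n2)):
            return BinOp(Constant(n1 + n2), Add(), inert)
        case (BinOp(Constant(n1), Add(), i1), BinOp(Constant(n2), Add(), i2)):
            return BinOp(Constant(n1 + n2), Add(), BinOp(i1, Add(), i2))
        case (Constant(_), _):
            return BinOp(r1, Add(), r2)
        case (_, Constant(_)):
            return BinOp(r2, Add(), r1)
        case _:
            return BinOp(r1, Add(), r2)
\end{lstlisting}
\fi}
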
\end{exercise}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

{\if\edition\pythonEd\pythonColor
\chapter{Parsing}
\label{ch:parsing}
\setcounter{footnote}{0}
\index{subject}{parsing}

In this chapter we learn how to use the Lark parser
framework~\citep{shinan20:_lark_docs} to translate the concrete syntax
of \LangInt{} (a sequence of characters) into an abstract syntax
tree. You will then be asked to use Lark to create a parser for
\LangVar{}. We also describe the parsing algorithms used inside Lark,
studying the \citet{Earley:1970ly} and LALR(1)
algorithms~\citep{DeRemer69,Anderson73}.

A parser framework such as Lark takes in a specification of the
concrete syntax and an input program and produces a parse tree. Even
though a parser framework does most of the work for us, using one
properly requires some knowledge. In particular, we must learn about
its specification languages and we must learn how to deal with
ambiguity in our language specifications. Also, some algorithms, such
as LALR(1), place restrictions on the grammars they can handle, in
which case knowing the algorithm helps with deciphering the error
messages.

The process of parsing is traditionally subdivided into two phases:
\emph{lexical analysis} (also called scanning) and \emph{syntax
  analysis} (also called parsing). The lexical analysis phase
translates the sequence of characters into a sequence of
\emph{tokens}, that is, words consisting of several characters. The
parsing phase organizes the tokens into a \emph{parse tree} that
captures how the tokens were matched by rules in the grammar of the
language. The reason for the subdivision into two phases is to enable
the use of a faster but less powerful algorithm for lexical analysis
and the use of a slower but more powerful algorithm for parsing.
%
%% Likewise, parser generators typical come in pairs, with separate
%% generators for the lexical analyzer (or lexer for short) and for the
%% parser. A particularly influential pair of generators were
%% \texttt{lex} and \texttt{yacc}. The \texttt{lex} generator was written
%% by \citet{Lesk:1975uq} at Bell Labs. The \texttt{yacc} generator was
%% written by \citet{Johnson:1979qy} at AT\&T and stands for Yet Another
%% Compiler Compiler.
%
The Lark parser framework that we use in this chapter includes both
lexical analyzers and parsers. The next section discusses lexical
analysis, and the remainder of the chapter discusses parsing.

\section{Lexical Analysis and Regular Expressions}
\label{sec:lex}

The lexical analyzers produced by Lark turn a sequence of characters
(a string) into a sequence of token objects. For example, a
Lark-generated lexer for \LangInt{} converts the string
\begin{lstlisting}
'print(1 + 3)'
\end{lstlisting}
\noindent into the following sequence of token objects:
\begin{center}
\begin{minipage}{0.95\textwidth}
\begin{lstlisting}
Token('PRINT', 'print')
Token('LPAR', '(')
Token('INT', '1')
Token('PLUS', '+')
Token('INT', '3')
Token('RPAR', ')')
Token('NEWLINE', '\n')
\end{lstlisting}
\end{minipage}
\end{center}
Each token includes a field for its \code{type}, such as \skey{INT},
and a field for its \code{value}, such as \skey{1}. Following in the
tradition of \code{lex}~\citep{Lesk:1975uq}, the specification
language for Lark's lexer is one regular expression for each type of
token. The term \emph{regular} comes from the term \emph{regular
  languages}, which are the languages that can be recognized by a
finite state machine.

A \emph{regular expression} is a pattern formed of the following core
elements:\index{subject}{regular expression}\footnote{Regular
  expressions traditionally include the empty regular expression that
  matches any zero-length part of a string, but Lark does not support
  the empty regular expression.}
\begin{itemize}
\item A single character $c$ is a regular expression, and it matches
  only itself. For example, the regular expression \code{a} matches
  only the string \skey{a}.
\item Two regular expressions separated by a vertical bar $R_1 \ttm{|}
  R_2$ form a regular expression that matches any string that matches
  $R_1$ or $R_2$. For example, the regular expression \code{a|c}
  matches the string \skey{a} and the string \skey{c}.
\item Two regular expressions in sequence $R_1 R_2$ form a regular
  expression that matches any string that can be formed by
  concatenating two strings, where the first string matches $R_1$ and
  the second string matches $R_2$. For example, the regular expression
  \code{(a|c)b} matches the strings \skey{ab} and \skey{cb}.
  (Parentheses can be used to control the grouping of operators within
  a regular expression.)
\item A regular expression followed by an asterisk $R\ttm{*}$ (called
  Kleene closure) is a regular expression that matches any string that
  can be formed by concatenating zero or more strings that each match
  the regular expression $R$. For example, the regular expression
  \code{((a|c)b)*} matches the string \skey{abcbab} but not
  \skey{abc}.
\end{itemize}

For our convenience, Lark also accepts the following extended set of
regular expressions that are automatically translated into the core
regular expressions.
\begin{itemize}
\item A set of characters enclosed in square brackets $[c_1 c_2 \ldots
  c_n]$ is a regular expression that matches any one of the
  characters. So, $[c_1 c_2 \ldots c_n]$ is equivalent to the regular
  expression $c_1\mid c_2\mid \ldots \mid c_n$.
\item A range of characters enclosed in square brackets
  $[c_1\ttm{-}c_2]$ is a regular expression that matches any character
  between $c_1$ and $c_2$, inclusive. For example, \code{[a-z]}
  matches any lowercase letter in the alphabet.
\item A regular expression followed by the plus symbol $R\ttm{+}$ is a
  regular expression that matches any string that can be formed by
  concatenating one or more strings that each match $R$. So $R+$ is
  equivalent to $R(R*)$. For example, \code{[a-z]+} matches \skey{b}
  and \skey{bzca}.
\item A regular expression followed by a question mark $R\ttm{?}$ is a
  regular expression that matches any string that either matches $R$
  or is the empty string. For example, \code{a?b} matches both
  \skey{ab} and \skey{b}.
\end{itemize}

In a Lark grammar file, each kind of token is specified by a
\emph{terminal}\index{subject}{terminal}, which is defined by a rule
that consists of the name of the terminal followed by a colon followed
by a sequence of literals. The literals include strings such as
\code{"abc"}, regular expressions surrounded by \code{/} characters,
terminal names, and literals composed using the regular expression
operators ($+$, $*$, etc.). For example, the \code{DIGIT}, \code{INT},
and \code{NEWLINE} terminals are specified as follows:
\begin{center}
\begin{minipage}{0.95\textwidth}
\begin{lstlisting}
DIGIT: /[0-9]/
INT: "-"? DIGIT+
NEWLINE: (/\r/? /\n/)+
\end{lstlisting}
\end{minipage}
\end{center}
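
The patterns on the right-hand sides of these terminal definitions are
ordinary regular expressions, so you can experiment with them directly
using Python's \code{re} module. For example, the following snippet
(not part of the support code) checks a few strings against the
regular expression for \code{INT}.
\begin{center}
\begin{minipage}{0.95\textwidth}
\begin{lstlisting}
# The INT terminal "-"? DIGIT+ written as a Python regular expression.
import re
INT = re.compile(r'-?[0-9]+')
print(INT.fullmatch('-42'))    # a match object: '-42' is an INT
print(INT.fullmatch('1.5'))    # None: '.' is not a DIGIT
\end{lstlisting}
\end{minipage}
\end{center}
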
\section{Grammars and Parse Trees}
\label{sec:CFG}

In section~\ref{sec:grammar} we learned how to use grammar rules to
specify the abstract syntax of a language. We now take a closer look
at using grammar rules to specify the concrete syntax. Recall that
each rule has a left-hand side and a right-hand side, where the
left-hand side is a nonterminal and the right-hand side is a pattern
that defines what can be parsed as that nonterminal. For concrete
syntax, each right-hand side expresses a pattern for a string instead
of a pattern for an abstract syntax tree. In particular, each
right-hand side is a sequence of \emph{symbols}\index{subject}{symbol},
where a symbol is either a terminal or a nonterminal. The nonterminals
play the same role as in the abstract syntax, defining categories of
syntax. The symbols of a grammar include the tokens defined in the
lexer and all the nonterminals defined by the grammar rules.

As an example, let us take a closer look at the concrete syntax of the
\LangInt{} language, repeated here.
\[
\begin{array}{l}
  \LintGrammarPython \\
  \begin{array}{rcl}
     \LangInt{} &::=& \Stmt^{*}
  \end{array}
\end{array}
\]
The Lark syntax for grammar rules differs slightly from the variant of
BNF that we use in this book. In particular, the notation $::=$ is
replaced by a single colon, and the use of typewriter font for string
literals is replaced by quotation marks. The following grammar serves
as a first draft of a Lark grammar for \LangInt{}.
\begin{center}
\begin{minipage}{0.95\textwidth}
\begin{lstlisting}[escapechar=$]
exp: INT
   | "input_int" "(" ")"
   | "-" exp
   | exp "+" exp
   | exp "-" exp
   | "(" exp ")"
stmt: "print" "(" exp ")"
   | exp
stmt_list:
   | stmt NEWLINE stmt_list
lang_int: stmt_list
\end{lstlisting}
\end{minipage}
\end{center}

Let us begin by discussing the rule \code{exp: INT}, which says that
if the lexer matches a string to \code{INT}, then the parser also
categorizes the string as an \code{exp}. Recall that in
section~\ref{sec:grammar} we defined the corresponding \Int{}
nonterminal with a sentence in English. Here we specify \code{INT}
more formally using a type of token \code{INT} and its regular
expression \code{"-"? DIGIT+}.

The rule \code{exp: exp "+" exp} says that any string that matches
\code{exp}, followed by the \code{+} character, followed by another
string that matches \code{exp}, is itself an \code{exp}. For example,
the string \lstinline{'1+3'} is an \code{exp} because \lstinline{'1'}
and \lstinline{'3'} are both \code{exp} by the rule \code{exp: INT},
and then the rule for addition applies to categorize \lstinline{'1+3'}
as an \code{exp}. We can visualize the application of grammar rules to
parse a string using a \emph{parse tree}\index{subject}{parse
  tree}. Each internal node in the tree is an application of a grammar
rule and is labeled with its left-hand side nonterminal. Each leaf
node is a substring of the input program. The parse tree for
\lstinline{'1+3'} is shown in figure~\ref{fig:simple-parse-tree}.

\begin{figure}[tbp]
\begin{tcolorbox}[colback=white]
\centering
\includegraphics[width=1.9in]{figs/simple-parse-tree}
\end{tcolorbox}
\caption{The parse tree for \lstinline{'1+3'}.}
\label{fig:simple-parse-tree}
\end{figure}

The result of parsing \lstinline{'1+3'} with this Lark grammar is the
following parse tree as represented by \code{Tree} and \code{Token}
objects.
\begin{lstlisting}
Tree('lang_int',
 [Tree('stmt',
   [Tree('exp',
     [Tree('exp', [Token('INT', '1')]),
      Tree('exp', [Token('INT', '3')])])]),
  Token('NEWLINE', '\n')])
\end{lstlisting}
The nodes that come from the lexer are \code{Token} objects, whereas
the nodes from the parser are \code{Tree} objects.
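
If you would like to try this for yourself, the following sketch (not
part of the support code; it omits details such as ignoring
whitespace) combines the terminal definitions from the previous
section with the draft grammar and parses the string
\lstinline{'1+3'}.
\begin{center}
\begin{minipage}{0.95\textwidth}
\begin{lstlisting}[escapechar=$]
from lark import Lark

grammar = r"""
DIGIT: /[0-9]/
INT: "-"? DIGIT+
NEWLINE: (/\r/? /\n/)+
exp: INT | "input_int" "(" ")" | "-" exp
   | exp "+" exp | exp "-" exp | "(" exp ")"
stmt: "print" "(" exp ")" | exp
stmt_list: | stmt NEWLINE stmt_list
lang_int: stmt_list
"""
parser = Lark(grammar, start='lang_int')
print(parser.parse('1+3\n'))  # a tree of Tree and Token objects
\end{lstlisting}
\end{minipage}
\end{center}
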
Each \code{Tree} object has a \code{data} field containing the name of
the nonterminal for the grammar rule that was applied. Each
\code{Tree} object also has a \code{children} field that is a list
containing trees and/or tokens. Note that Lark does not produce nodes
for string literals in the grammar. For example, the \code{Tree} node
for the addition expression has only two children for the two integers
but is missing its middle child for the \code{"+"} terminal. This
would be problematic except that Lark provides a mechanism for
customizing the \code{data} field of each \code{Tree} node on the
basis of which rule was applied. Next to each alternative in a grammar
rule, write \code{->} followed by a string that you want to appear in
the \code{data} field. The following is a second draft of a Lark
grammar for \LangInt{}, this time with more specific labels on the
\code{Tree} nodes.
\begin{center}
\begin{minipage}{0.95\textwidth}
\begin{lstlisting}[escapechar=$]
exp: INT -> int
   | "input_int" "(" ")" -> input_int
   | "-" exp -> usub
   | exp "+" exp -> add
   | exp "-" exp -> sub
   | "(" exp ")" -> paren
stmt: "print" "(" exp ")" -> print
   | exp -> expr
stmt_list: -> empty_stmt
   | stmt NEWLINE stmt_list -> add_stmt
lang_int: stmt_list -> module
\end{lstlisting}
\end{minipage}
\end{center}
Here is the resulting parse tree.
\begin{lstlisting}
Tree('module',
 [Tree('expr',
   [Tree('add',
     [Tree('int', [Token('INT', '1')]),
      Tree('int', [Token('INT', '3')])])]),
  Token('NEWLINE', '\n')])
\end{lstlisting}

\section{Ambiguous Grammars}

A grammar is \emph{ambiguous}\index{subject}{ambiguous} when a string
can be parsed in more than one way. For example, consider the string
\lstinline{'1-2+3'}. This string can be parsed in two different ways
using our draft grammar, resulting in the two parse trees shown in
figure~\ref{fig:ambig-parse-tree}. This example is problematic because
interpreting the second parse tree would yield \code{-4} even though
the correct answer is \code{2}.

\begin{figure}[tbp]
\begin{tcolorbox}[colback=white]
\centering
\includegraphics[width=0.95\textwidth]{figs/ambig-parse-tree}
\end{tcolorbox}
\caption{The two parse trees for \lstinline{'1-2+3'}.}
\label{fig:ambig-parse-tree}
\end{figure}

To deal with this problem we can change the grammar by categorizing
the syntax in a more fine-grained fashion. In this case we want to
disallow the application of the rule \code{exp: exp "-" exp} when the
child on the right is an addition. To do this we can replace the
\code{exp} after \code{"-"} with a nonterminal that categorizes all
the expressions except for addition, as in the following.
\begin{center}
\begin{minipage}{0.95\textwidth}
\begin{lstlisting}[escapechar=$]
exp: exp "-" exp_no_add -> sub
   | exp "+" exp -> add
   | exp_no_add
exp_no_add: INT -> int
   | "input_int" "(" ")" -> input_int
   | "-" exp -> usub
   | exp "-" exp_no_add -> sub
   | "(" exp ")" -> paren
\end{lstlisting}
\end{minipage}
\end{center}
However, there remains some ambiguity in the grammar. For example, the
string \lstinline{'1-2-3'} can still be parsed in two different ways,
as \lstinline{'(1-2)-3'} (correct) or \lstinline{'1-(2-3)'}
(incorrect), because subtraction is left associative. Likewise,
addition in Python is left associative. We also need to consider the
interaction of unary subtraction with both addition and
subtraction. How should we parse \lstinline{'-1+2'}?
Unary subtraction has higher \emph{precedence}\index{subject}{precedence} than addition and subtraction, so \lstinline{'-1+2'} should parse the same as \lstinline{'(-1)+2'} and not \lstinline{'-(1+2)'}. The grammar in figure~\ref{fig:Lint-lark-grammar} handles the associativity of addition and subtraction by using the nonterminal \code{exp\_hi} for all the other expressions, and it uses \code{exp\_hi} for the second child in the rules for addition and subtraction. Furthermore, unary subtraction uses \code{exp\_hi} for its child. For languages with more operators and more precedence levels, one must refine the \code{exp} nonterminal into several nonterminals, one for each precedence level. \begin{figure}[tbp] \begin{tcolorbox}[colback=white] \centering \begin{lstlisting}[escapechar=$] exp: exp "+" exp_hi -> add | exp "-" exp_hi -> sub | exp_hi exp_hi: INT -> int | "input_int" "(" ")" -> input_int | "-" exp_hi -> usub | "(" exp ")" -> paren stmt: "print" "(" exp ")" -> print | exp -> expr stmt_list: -> empty_stmt | stmt NEWLINE stmt_list -> add_stmt lang_int: stmt_list -> module \end{lstlisting} \end{tcolorbox} \caption{An unambiguous Lark grammar for \LangInt{}.} \label{fig:Lint-lark-grammar} \end{figure} \section{From Parse Trees to Abstract Syntax Trees} As we have seen, the output of a Lark parser is a parse tree, that is, a tree consisting of \code{Tree} and \code{Token} nodes. So, the next step is to convert the parse tree to an abstract syntax tree. This can be accomplished with a recursive function that inspects the \code{data} field of each node and then constructs the corresponding AST node, using recursion to handle its children. The following is an excerpt from the \code{parse\_tree\_to\_ast} function for \LangInt{}. \begin{center} \begin{minipage}{0.95\textwidth} \begin{lstlisting} def parse_tree_to_ast(e): if e.data == 'int': return Constant(int(e.children[0].value)) elif e.data == 'input_int': return Call(Name('input_int'), []) elif e.data == 'add': e1, e2 = e.children return BinOp(parse_tree_to_ast(e1), Add(), parse_tree_to_ast(e2)) ... else: raise Exception('unhandled parse tree', e) \end{lstlisting} \end{minipage} \end{center} \begin{exercise} \normalfont\normalsize % Use Lark to create a lexer and parser for \LangVar{}. Use Lark's default parsing algorithm (Earley) with the \code{ambiguity} option set to \lstinline{'explicit'} so that if your grammar is ambiguous, the output will include multiple parse trees that will indicate to you that there is a problem with your grammar. Your parser should ignore white space, so we recommend using Lark's \code{\%ignore} directive as follows. \begin{lstlisting} WS: /[ \t\f\r\n]/+ %ignore WS \end{lstlisting} Change your compiler from chapter~\ref{ch:Lvar} to use your Lark parser instead of using the \code{parse} function from the \code{ast} module. Test your compiler on all the \LangVar{} programs that you have created, and create four additional programs that test for ambiguities in your grammar. \end{exercise} \section{Earley's Algorithm} \label{sec:earley} In this section we discuss the parsing algorithm of \citet{Earley:1970ly}, the default algorithm used by Lark. The algorithm is powerful in that it can handle any context-free grammar, which makes it easy to use. However, it is not a particularly efficient parsing algorithm. Earley's algorithm is $O(n^3)$ for ambiguous grammars and $O(n^2)$ for unambiguous grammars, where $n$ is the number of tokens in the input string~\citep{Hopcroft06:_automata}. 
In section~\ref{sec:lalr} we learn about the LALR(1) algorithm, which is more efficient but cannot handle all context-free grammars. Earley's algorithm can be viewed as an interpreter; it treats the grammar as the program being interpreted, and it treats the concrete syntax of the program-to-be-parsed as its input. Earley's algorithm uses a data structure called a \emph{chart}\index{subject}{chart} to keep track of its progress and to store its results. The chart is an array with one slot for each position in the input string, where position $0$ is before the first character and position $n$ is immediately after the last character. So, the array has length $n+1$ for an input string of length $n$. Each slot in the chart contains a set of \emph{dotted rules}. A dotted rule is simply a grammar rule with a period indicating how much of its right-hand side has already been parsed. For example, the dotted rule \begin{lstlisting} exp: exp "+" . exp_hi \end{lstlisting} represents a partial parse that has matched an \code{exp} followed by \code{+} but has not yet parsed an \code{exp} to the right of \code{+}. % Earley's algorithm starts with an initialization phase and then repeats three actions---prediction, scanning, and completion---for as long as opportunities arise. We demonstrate Earley's algorithm on a running example, parsing the following program: \begin{lstlisting} print(1 + 3) \end{lstlisting} The algorithm's initialization phase creates dotted rules for all the grammar rules whose left-hand side is the start symbol and places them in slot $0$ of the chart. We also record the starting position of the dotted rule in parentheses on the right. For example, given the grammar in figure~\ref{fig:Lint-lark-grammar}, we place \begin{lstlisting} lang_int: . stmt_list (0) \end{lstlisting} in slot $0$ of the chart. The algorithm then proceeds with \emph{prediction} actions in which it adds more dotted rules to the chart based on the nonterminals that come immediately after a period. In the dotted rule above, the nonterminal \code{stmt\_list} appears after a period, so we add all the rules for \code{stmt\_list} to slot $0$, with a period at the beginning of their right-hand sides, as follows: \begin{lstlisting} stmt_list: . (0) stmt_list: . stmt NEWLINE stmt_list (0) \end{lstlisting} We continue to perform prediction actions as more opportunities arise. For example, the \code{stmt} nonterminal now appears after a period, so we add all the rules for \code{stmt}. \begin{lstlisting} stmt: . "print" "(" exp ")" (0) stmt: . exp (0) \end{lstlisting} This reveals yet more opportunities for prediction, so we add the grammar rules for \code{exp} and \code{exp\_hi} to slot $0$. \begin{lstlisting}[escapechar=$] exp: . exp "+" exp_hi (0) exp: . exp "-" exp_hi (0) exp: . exp_hi (0) exp_hi: . INT (0) exp_hi: . "input_int" "(" ")" (0) exp_hi: . "-" exp_hi (0) exp_hi: . "(" exp ")" (0) \end{lstlisting} We have exhausted the opportunities for prediction, so the algorithm proceeds to \emph{scanning}, in which we inspect the next input token and look for a dotted rule at the current position that has a matching terminal immediately following the period. In our running example, the first input token is \code{"print"}, so we identify the rule in slot $0$ of the chart where \code{"print"} follows the period: \begin{lstlisting} stmt: . "print" "(" exp ")" (0) \end{lstlisting} We advance the period past \code{"print"} and add the resulting rule to slot $1$ of the chart: \begin{lstlisting} stmt: "print" . 
"(" exp ")" (0) \end{lstlisting} If the new dotted rule had a nonterminal after the period, we would need to carry out a prediction action, adding more dotted rules to slot $1$. That is not the case, so we continue scanning. The next input token is \code{"("}, so we add the following to slot $2$ of the chart. \begin{lstlisting} stmt: "print" "(" . exp ")" (0) \end{lstlisting} Now we have a nonterminal after the period, so we carry out several prediction actions, adding dotted rules for \code{exp} and \code{exp\_hi} to slot $2$ with a period at the beginning and with starting position $2$. \begin{lstlisting}[escapechar=$] exp: . exp "+" exp_hi (2) exp: . exp "-" exp_hi (2) exp: . exp_hi (2) exp_hi: . INT (2) exp_hi: . "input_int" "(" ")" (2) exp_hi: . "-" exp_hi (2) exp_hi: . "(" exp ")" (2) \end{lstlisting} With this prediction complete, we return to scanning, noting that the next input token is \code{"1"}, which the lexer parses as an \code{INT}. There is a matching rule in slot $2$: \begin{lstlisting} exp_hi: . INT (2) \end{lstlisting} so we advance the period and put the following rule into slot $3$. \begin{lstlisting} exp_hi: INT . (2) \end{lstlisting} This brings us to \emph{completion} actions. When the period reaches the end of a dotted rule, we recognize that the substring has matched the nonterminal on the left-hand side of the rule, in this case \code{exp\_hi}. We therefore need to advance the periods in any dotted rules into slot $2$ (the starting position for the finished rule) if the period is immediately followed by \code{exp\_hi}. So we identify \begin{lstlisting} exp: . exp_hi (2) \end{lstlisting} and add the following dotted rule to slot $3$ \begin{lstlisting} exp: exp_hi . (2) \end{lstlisting} This triggers another completion step for the nonterminal \code{exp}, adding two more dotted rules to slot $3$. \begin{lstlisting}[escapechar=$] exp: exp . "+" exp_hi (2) exp: exp . "-" exp_hi (2) \end{lstlisting} Returning to scanning, the next input token is \code{"+"}, so we add the following to slot $4$. \begin{lstlisting}[escapechar=$] exp: exp "+" . exp_hi (2) \end{lstlisting} The period precedes the nonterminal \code{exp\_hi}, so prediction adds the following dotted rules to slot $4$ of the chart. \begin{lstlisting}[escapechar=$] exp_hi: . INT (4) exp_hi: . "input_int" "(" ")" (4) exp_hi: . "-" exp_hi (4) exp_hi: . "(" exp ")" (4) \end{lstlisting} The next input token is \code{"3"} which the lexer categorized as an \code{INT}, so we advance the period past \code{INT} for the rules in slot $4$, of which there is just one, and put the following into slot $5$. \begin{lstlisting}[escapechar=$] exp_hi: INT . (4) \end{lstlisting} The period at the end of the rule triggers a completion action for the rules in slot $4$, one of which has a period before \code{exp\_hi}. So we advance the period and put the following into slot $5$. \begin{lstlisting}[escapechar=$] exp: exp "+" exp_hi . (2) \end{lstlisting} This triggers another completion action for the rules in slot $2$ that have a period before \code{exp}. \begin{lstlisting}[escapechar=$] stmt: "print" "(" exp . ")" (0) exp: exp . "+" exp_hi (2) exp: exp . "-" exp_hi (2) \end{lstlisting} We scan the next input token \code{")"}, placing the following dotted rule into slot $6$. \begin{lstlisting}[escapechar=$] stmt: "print" "(" exp ")" . (0) \end{lstlisting} This triggers the completion of \code{stmt} in slot $0$ \begin{lstlisting} stmt_list: stmt . 
NEWLINE stmt_list (0) \end{lstlisting} The last input token is a \code{NEWLINE}, so we advance the period and place the new dotted rule into slot $7$. \begin{lstlisting} stmt_list: stmt NEWLINE . stmt_list (0) \end{lstlisting} We are close to the end of parsing the input! The period is before the \code{stmt\_list} nonterminal, so we apply prediction for \code{stmt\_list} and then \code{stmt}. \begin{lstlisting} stmt_list: . (7) stmt_list: . stmt NEWLINE stmt_list (7) stmt: . "print" "(" exp ")" (7) stmt: . exp (7) \end{lstlisting} There is immediately an opportunity for completion of \code{stmt\_list}, so we add the following to slot $7$. \begin{lstlisting} stmt_list: stmt NEWLINE stmt_list . (0) \end{lstlisting} This triggers another completion action for \code{stmt\_list} in slot $0$ \begin{lstlisting} lang_int: stmt_list . (0) \end{lstlisting} which in turn completes \code{lang\_int}, the start symbol of the grammar, so the parsing of the input is complete. For reference, we give a general description of Earley's algorithm. \begin{enumerate} \item The algorithm begins by initializing slot $0$ of the chart with the grammar rule for the start symbol, placing a period at the beginning of the right-hand side, and recording its starting position as $0$. \item The algorithm repeatedly applies the following three kinds of actions for as long as there are opportunities to do so. \begin{itemize} \item Prediction: If there is a rule in slot $k$ whose period comes before a nonterminal, add the rules for that nonterminal into slot $k$, placing a period at the beginning of their right-hand sides and recording their starting position as $k$. \item Scanning: If the token at position $k$ of the input string matches the symbol after the period in a dotted rule in slot $k$ of the chart, advance the period in the dotted rule, adding the result to slot $k+1$. \item Completion: If a dotted rule in slot $k$ has a period at the end, inspect the rules in the slot corresponding to the starting position of the completed rule. If any of those rules have a nonterminal following their period that matches the left-hand side of the completed rule, then advance their period, placing the new dotted rule in slot $k$. \end{itemize} While repeating these three actions, take care never to add duplicate dotted rules to the chart. \end{enumerate} We have described how Earley's algorithm recognizes that an input string matches a grammar, but we have not described how it builds a parse tree. The basic idea is simple, but building parse trees in an efficient way is more complex, requiring a data structure called a shared packed parse forest~\citep{Tomita:1985qr}. The simple idea is to attach a partial parse tree to every dotted rule in the chart. Initially, the tree node associated with a dotted rule has no children. As the period moves to the right, the nodes from the subparses are added as children to the tree node. As mentioned at the beginning of this section, Earley's algorithm is $O(n^2)$ for unambiguous grammars, which means that it can parse input files that contain thousands of tokens in a reasonable amount of time, but not millions. % In the next section we discuss the LALR(1) parsing algorithm, which is efficient enough to use with even the largest of input files. \section{The LALR(1) Algorithm} \label{sec:lalr} The LALR(1) algorithm~\citep{DeRemer69,Anderson73} can be viewed as a two-phase approach in which it first compiles the grammar into a state machine and then runs the state machine to parse an input string. 
The second phase has time complexity $O(n)$, where $n$ is the number of tokens in the input, so LALR(1) is the best one could hope for with respect to efficiency.
%
A particularly influential implementation of LALR(1) is the \texttt{yacc} parser generator by \citet{Johnson:1979qy}; \texttt{yacc} stands for ``yet another compiler compiler.''
%
The LALR(1) state machine uses a stack to record its progress in parsing the input string. Each element of the stack is a pair: a state number and a grammar symbol (a terminal or a nonterminal). The symbol characterizes the input that has been parsed so far, and the state number is used to remember how to proceed once the next symbol's worth of input has been parsed. Each state in the machine represents where the parser stands in the parsing process with respect to certain grammar rules. In particular, each state is associated with a set of dotted rules. Figure~\ref{fig:shift-reduce} shows an example LALR(1) state machine (also called a parse table) for the following simple but ambiguous grammar:
\begin{lstlisting}[escapechar=$]
exp: INT | exp "+" exp
stmt: "print" exp
start: stmt
\end{lstlisting}
Consider state 1 in figure~\ref{fig:shift-reduce}. The parser has just read in a \lstinline{"print"} token, so the top of the stack is \lstinline{(1,"print")}. The parser is part of the way through parsing the input according to grammar rule 1, which is signified by showing rule 1 with a period after the \code{"print"} token and before the \code{exp} nonterminal. There are two rules that could apply next, rules 2 and 3, so state 1 also shows those rules with a period at the beginning of their right-hand sides. The edges between states indicate which transitions the machine should make depending on the next input token. So, for example, if the next input token is \code{INT}, then the parser will push \code{INT} and the target state 4 on the stack and transition to state 4. Suppose that we are now at the end of the input. State 4 says that we should reduce by rule 3, so we pop from the stack the same number of items as the number of symbols in the right-hand side of the rule, in this case just one. We then momentarily jump to the state at the top of the stack (state 1) and then follow the goto edge that corresponds to the left-hand side of the rule we just reduced by, in this case \code{exp}, so we arrive at state 3. (A slightly longer example parse is shown in figure~\ref{fig:shift-reduce}.)
\begin{figure}[htbp]
\centering
\includegraphics[width=5.0in]{figs/shift-reduce-conflict}
\caption{An LALR(1) parse table and a trace of an example run.}
\label{fig:shift-reduce}
\end{figure}
In general, the algorithm works as follows. First, set the current state to state $0$. Then repeat the following, looking at the next input token.
\begin{itemize}
\item If there is a shift edge for the input token in the current state, push the edge's target state and the input token onto the stack and proceed to the edge's target state.
\item If there is a reduce action for the input token in the current state, pop $k$ elements from the stack, where $k$ is the number of symbols in the right-hand side of the rule being reduced. Jump to the state at the top of the stack and then follow the goto edge for the nonterminal that matches the left-hand side of the rule that we are reducing by. Push the edge's target state and the nonterminal on the stack.
\end{itemize} Notice that in state 6 of figure~\ref{fig:shift-reduce} there is both a shift and a reduce action for the token \lstinline{PLUS}, so the algorithm does not know which action to take in this case. When a state has both a shift and a reduce action for the same token, we say there is a \emph{shift/reduce conflict}. In this case, the conflict will arise, for example, in trying to parse the input \lstinline{print 1 + 2 + 3}. After having consumed \lstinline{print 1 + 2}, the parser will be in state 6 and will not know whether to reduce to form an \code{exp} of \lstinline{1 + 2} or to proceed by shifting the next \lstinline{+} from the input. A similar kind of problem, known as a \emph{reduce/reduce} conflict, arises when there are two reduce actions in a state for the same token. To understand which grammars give rise to shift/reduce and reduce/reduce conflicts, it helps to know how the parse table is generated from the grammar, which we discuss next. The parse table is generated one state at a time. State 0 represents the start of the parser. We add the grammar rule for the start symbol to this state with a period at the beginning of the right-hand side, similarly to the initialization phase of the Earley parser. If the period appears immediately before another nonterminal, we add all the rules with that nonterminal on the left-hand side. Again, we place a period at the beginning of the right-hand side of each new rule. This process, called \emph{state closure}, is continued until there are no more rules to add (similarly to the prediction actions of an Earley parser). We then examine each dotted rule in the current state $I$. Suppose that a dotted rule has the form $A ::= s_1.\,X \,s_2$, where $A$ and $X$ are symbols and $s_1$ and $s_2$ are sequences of symbols. We create a new state and call it $J$. If $X$ is a terminal, we create a shift edge from $I$ to $J$ (analogously to scanning in Earley), whereas if $X$ is a nonterminal, we create a goto edge from $I$ to $J$. We then need to add some dotted rules to state $J$. We start by adding all dotted rules from state $I$ that have the form $B ::= s_1.\,X\,s_2$ (where $B$ is any nonterminal and $s_1$ and $s_2$ are arbitrary sequences of symbols), with the period moved past the $X$. (This is analogous to completion in Earley's algorithm.) We then perform state closure on $J$. This process repeats until there are no more states or edges to add. We then mark states as accepting states if they have a dotted rule that is the start rule with a period at the end. Also, to add the reduce actions, we look for any state containing a dotted rule with a period at the end. Let $n$ be the rule number for this dotted rule. We then put a reduce $n$ action into that state for every token $Y$. For example, in figure~\ref{fig:shift-reduce} state 4 has a dotted rule with a period at the end. We therefore put a reduce by rule 3 action into state 4 for every token. When inserting reduce actions, take care to spot any shift/reduce or reduce/reduce conflicts. If there are any, abort the construction of the parse table. \begin{exercise} \normalfont\normalsize % Working on paper, walk through the parse table generation process for the grammar at the top of figure~\ref{fig:shift-reduce}, and check your results against the parse table shown in figure~\ref{fig:shift-reduce}. \end{exercise} \begin{exercise} \normalfont\normalsize % Change the parser in your compiler for \LangVar{} to set the \code{parser} option of Lark to \lstinline{'lalr'}. 
Test your compiler on all the \LangVar{} programs that you have created. In doing so, Lark may signal an error due to shift/reduce or reduce/reduce conflicts in your grammar. If so, change your Lark grammar for \LangVar{} to remove those conflicts.
\end{exercise}
\section{Further Reading}
In this chapter we have just scratched the surface of the field of parsing, with the study of a very general but less efficient algorithm (Earley) and a more limited but highly efficient algorithm (LALR). There are many more algorithms and classes of grammars that fall between these two ends of the spectrum. We refer the reader to \citet{Aho:2006wb} for a thorough treatment of parsing.
Regarding lexical analysis, we have described the specification language, regular expressions, but not the algorithms for recognizing the strings that they match. In short, regular expressions can be translated to nondeterministic finite automata, which in turn can be translated to deterministic finite automata. We refer the reader again to \citet{Aho:2006wb} for all the details on lexical analysis.
\fi}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\chapter{Register Allocation}
\label{ch:register-allocation-Lvar}
\setcounter{footnote}{0}
\index{subject}{register allocation}
In chapter~\ref{ch:Lvar} we learned how to compile \LangVar{} to x86, storing variables on the procedure call stack. The CPU may require tens to hundreds of cycles to access a location on the stack, whereas accessing a register takes only a single cycle. In this chapter we improve the efficiency of our generated code by storing some variables in registers. The goal of register allocation is to fit as many variables into registers as possible. Some programs have more variables than registers, so we cannot always map each variable to a different register. Fortunately, it is common for different variables to be in use during different periods of a program's execution, and in those cases we can map multiple variables to the same register. The program shown in figure~\ref{fig:reg-eg} serves as a running example. The source program is on the left and the output of instruction selection\index{subject}{instruction selection} is on the right. The program is almost completely in the x86 assembly language, but it still uses variables. Consider variables \code{x} and \code{z}. After the value of \code{x} has been copied to \code{z}, \code{x} is no longer in use. Variable \code{z}, on the other hand, is used only after this point, so \code{x} and \code{z} could share the same register.
\begin{figure} \begin{tcolorbox}[colback=white] \begin{minipage}{0.45\textwidth} Example \LangVar{} program: % var_test_28.rkt {\if\edition\racketEd \begin{lstlisting} (let ([v 1]) (let ([w 42]) (let ([x (+ v 7)]) (let ([y x]) (let ([z (+ x w)]) (+ z (- y))))))) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} v = 1 w = 42 x = v + 7 y = x z = x + w print(z + (- y)) \end{lstlisting} \fi} \end{minipage} \begin{minipage}{0.45\textwidth} After instruction selection: {\if\edition\racketEd \begin{lstlisting} locals-types: x : Integer, y : Integer, z : Integer, t : Integer, v : Integer, w : Integer start: movq $1, v movq $42, w movq v, x addq $7, x movq x, y movq x, z addq w, z movq y, t negq t movq z, %rax addq t, %rax jmp conclusion \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} movq $1, v movq $42, w movq v, x addq $7, x movq x, y movq x, z addq w, z movq y, tmp_0 negq tmp_0 movq z, tmp_1 addq tmp_0, tmp_1 movq tmp_1, %rdi callq print_int \end{lstlisting} \fi} \end{minipage} \end{tcolorbox} \caption{A running example for register allocation.} \label{fig:reg-eg} \end{figure} The topic of section~\ref{sec:liveness-analysis-Lvar} is how to compute where a variable is in use. Once we have that information, we compute which variables are in use at the same time, that is, which ones \emph{interfere}\index{subject}{interfere} with each other, and represent this relation as an undirected graph whose vertices are variables and edges indicate when two variables interfere (section~\ref{sec:build-interference}). We then model register allocation as a graph coloring problem (section~\ref{sec:graph-coloring}). If we run out of registers despite these efforts, we place the remaining variables on the stack, similarly to how we handled variables in chapter~\ref{ch:Lvar}. It is common to use the verb \emph{spill}\index{subject}{spill} for assigning a variable to a stack location. The decision to spill a variable is handled as part of the graph coloring process. We make the simplifying assumption that each variable is assigned to one location (a register or stack address). A more sophisticated approach is to assign a variable to one or more locations in different regions of the program. For example, if a variable is used many times in short sequence and then used again only after many other instructions, it could be more efficient to assign the variable to a register during the initial sequence and then move it to the stack for the rest of its lifetime. We refer the interested reader to \citet{Cooper:2011aa} (chapter 13) for more information about that approach. % discuss prioritizing variables based on how much they are used. \section{Registers and Calling Conventions} \label{sec:calling-conventions} \index{subject}{calling conventions} As we perform register allocation, we must be aware of the \emph{calling conventions} \index{subject}{calling conventions} that govern how function calls are performed in x86. % Even though \LangVar{} does not include programmer-defined functions, our generated code includes a \code{main} function that is called by the operating system and our generated code contains calls to the \code{read\_int} function. Function calls require coordination between two pieces of code that may be written by different programmers or generated by different compilers. Here we follow the System V calling conventions that are used by the GNU C compiler on Linux and MacOS~\citep{Bryant:2005aa,Matz:2013aa}. 
%
The calling conventions include rules about how functions share the use of registers. In particular, the caller is responsible for freeing some registers prior to the function call for use by the callee. These are called the \emph{caller-saved registers} \index{subject}{caller-saved registers} and they are
\begin{lstlisting}
rax rcx rdx rsi rdi r8 r9 r10 r11
\end{lstlisting}
On the other hand, the callee is responsible for preserving the values of the \emph{callee-saved registers}, \index{subject}{callee-saved registers} which are
\begin{lstlisting}
rsp rbp rbx r12 r13 r14 r15
\end{lstlisting}
We can think about this caller/callee convention from two points of view, the caller view and the callee view, as follows:
\begin{itemize}
\item The caller should assume that all the caller-saved registers get overwritten with arbitrary values by the callee. On the other hand, the caller can safely assume that all the callee-saved registers retain their original values.
\item The callee can freely use any of the caller-saved registers. However, if the callee wants to use a callee-saved register, the callee must arrange to put the original value back in the register prior to returning to the caller. This can be accomplished by saving the value to the stack in the prelude of the function and restoring the value in the conclusion of the function.
\end{itemize}
In x86, registers are also used for passing arguments to a function and for the return value. In particular, the first six arguments of a function are passed in the following six registers, in this order.
\begin{lstlisting}
rdi rsi rdx rcx r8 r9
\end{lstlisting}
We refer to these six registers as the argument-passing registers \index{subject}{argument-passing registers}. If there are more than six arguments, the convention is to use space on the frame of the caller for the rest of the arguments. In chapter~\ref{ch:Lfun}, we instead pass a tuple containing the sixth argument and the rest of the arguments, which simplifies the treatment of efficient tail calls.
%
\racket{For now, the only function we care about is \code{read\_int}, which takes zero arguments.}
%
\python{For now, the only functions we care about are \code{read\_int} and \code{print\_int}, which take zero and one argument, respectively.}
%
The register \code{rax} is used for the return value of a function. The next question is how these calling conventions impact register allocation. Consider the \LangVar{} program presented in figure~\ref{fig:example-calling-conventions}. We first analyze this example from the caller point of view and then from the callee point of view. We refer to a variable that is in use during a function call as a \emph{call-live variable}\index{subject}{call-live variable}. The program makes two calls to \READOP{}. The variable \code{x} is call-live because it is in use during the second call to \READOP{}; we must ensure that the value in \code{x} does not get overwritten during the call to \READOP{}. One obvious approach is to save all the values that reside in caller-saved registers to the stack prior to each function call and to restore them after each call. That way, if the register allocator chooses to assign \code{x} to a caller-saved register, its value will be preserved across the call to \READOP{}. However, saving and restoring values on the stack is relatively slow. If \code{x} is not used many times, it may be better to assign \code{x} to a stack location in the first place.
Or better yet, if we can arrange for \code{x} to be placed in a callee-saved register, then it won't need to be saved and restored during function calls. We recommend an approach that captures these issues in the interference graph, without complicating the graph coloring algorithm. During liveness analysis we know which variables are call-live because we compute which variables are in use at every instruction (section~\ref{sec:liveness-analysis-Lvar}). When we build the interference graph (section~\ref{sec:build-interference}), we can place an edge in the interference graph between each call-live variable and the caller-saved registers. This will prevent the graph coloring algorithm from assigning call-live variables to caller-saved registers. On the other hand, for variables that are not call-live, we prefer placing them in caller-saved registers to leave more room for call-live variables in the callee-saved registers. This can also be implemented without complicating the graph coloring algorithm. We recommend that the graph coloring algorithm assign variables to natural numbers, choosing the lowest number for which there is no interference. After the coloring is complete, we map the numbers to registers and stack locations: mapping the lowest numbers to caller-saved registers, the next lowest to callee-saved registers, and the largest numbers to stack locations. This ordering gives preference to registers over stack locations and to caller-saved registers over callee-saved registers. Returning to the example in figure~\ref{fig:example-calling-conventions}, let us analyze the generated x86 code on the right-hand side. Variable \code{x} is assigned to \code{rbx}, a callee-saved register. Thus, it is already in a safe place during the second call to \code{read\_int}. Next, variable \code{y} is assigned to \code{rcx}, a caller-saved register, because \code{y} is not a call-live variable. We have completed the analysis from the caller point of view, so now we switch to the callee point of view, focusing on the prelude and conclusion of the \code{main} function. As usual, the prelude begins with saving the \code{rbp} register to the stack and setting the \code{rbp} to the current stack pointer. We now know why it is necessary to save the \code{rbp}: it is a callee-saved register. The prelude then pushes \code{rbx} to the stack because (1) \code{rbx} is a callee-saved register and (2) \code{rbx} is assigned to a variable (\code{x}). The other callee-saved registers are not saved in the prelude because they are not used. The prelude subtracts 8 bytes from the \code{rsp} to make it 16-byte aligned. Shifting attention to the conclusion, we see that \code{rbx} is restored from the stack with a \code{popq} instruction. 
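Returning to the numbering scheme described above, the following is a minimal sketch of one way to map color numbers to locations. It is only an illustration under stated assumptions, not the required implementation: the register ordering shown here anticipates the correspondence recommended in section~\ref{sec:graph-coloring}, and the stack offsets assume one 8-byte slot per spilled color relative to \code{rbp}, as in chapter~\ref{ch:Lvar}.
{\if\edition\pythonEd\pythonColor
\begin{lstlisting}
# A sketch: lower colors prefer caller-saved registers, then
# callee-saved registers; the remaining colors spill to the stack.
caller_saved_alloc = ['rcx', 'rdx', 'rsi', 'rdi', 'r8', 'r9', 'r10']
callee_saved_alloc = ['rbx', 'r12', 'r13', 'r14']
allocatable = caller_saved_alloc + callee_saved_alloc

def location_of(color):
    if color < len(allocatable):
        return '%' + allocatable[color]
    # spill: one 8-byte stack slot per color beyond the registers
    offset = 8 * (color - len(allocatable) + 1)
    return '-' + str(offset) + '(%rbp)'
\end{lstlisting}
With these eleven allocatable registers, colors $0$ through $10$ map to registers and color $11$ maps to \code{-8(\%rbp)}.
\fi}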
\index{subject}{prelude}\index{subject}{conclusion} \begin{figure}[tp] \begin{tcolorbox}[colback=white] \begin{minipage}{0.45\textwidth} Example \LangVar{} program: %var_test_14.rkt {\if\edition\racketEd \begin{lstlisting} (let ([x (read)]) (let ([y (read)]) (+ (+ x y) 42))) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} x = input_int() y = input_int() print((x + y) + 42) \end{lstlisting} \fi} \end{minipage} \begin{minipage}{0.45\textwidth} Generated x86 assembly: {\if\edition\racketEd \begin{lstlisting} start: callq read_int movq %rax, %rbx callq read_int movq %rax, %rcx addq %rcx, %rbx movq %rbx, %rax addq $42, %rax jmp _conclusion .globl main main: pushq %rbp movq %rsp, %rbp pushq %rbx subq $8, %rsp jmp start conclusion: addq $8, %rsp popq %rbx popq %rbp retq \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} .globl main main: pushq %rbp movq %rsp, %rbp pushq %rbx subq $8, %rsp callq read_int movq %rax, %rbx callq read_int movq %rax, %rcx movq %rbx, %rdx addq %rcx, %rdx movq %rdx, %rcx addq $42, %rcx movq %rcx, %rdi callq print_int addq $8, %rsp popq %rbx popq %rbp retq \end{lstlisting} \fi} \end{minipage} \end{tcolorbox} \caption{An example with function calls.} \label{fig:example-calling-conventions} \end{figure} %\clearpage \section{Liveness Analysis} \label{sec:liveness-analysis-Lvar} \index{subject}{liveness analysis} The \code{uncover\_live} \racket{pass}\python{function} performs \emph{liveness analysis}; that is, it discovers which variables are in use in different regions of a program. % A variable or register is \emph{live} at a program point if its current value is used at some later point in the program. We refer to variables, stack locations, and registers collectively as \emph{locations}. % Consider the following code fragment in which there are two writes to \code{b}. Are variables \code{a} and \code{b} both live at the same time? \begin{center} \begin{minipage}{0.85\textwidth} \begin{lstlisting}[numbers=left,numberstyle=\tiny] movq $5, a movq $30, b movq a, c movq $10, b addq b, c \end{lstlisting} \end{minipage} \end{center} The answer is no, because \code{a} is live from line 1 to 3 and \code{b} is live from line 4 to 5. The integer written to \code{b} on line 2 is never used because it is overwritten (line 4) before the next read (line 5). The live locations for each instruction can be computed by traversing the instruction sequence back to front (i.e., backward in execution order). Let $I_1,\ldots, I_n$ be the instruction sequence. We write $L_{\mathsf{after}}(k)$ for the set of live locations after instruction $I_k$ and write $L_{\mathsf{before}}(k)$ for the set of live locations before instruction $I_k$. \racket{We recommend representing these sets with the Racket \code{set} data structure described in figure~\ref{fig:set}.} \python{We recommend representing these sets with the Python \href{https://docs.python.org/3.10/library/stdtypes.html\#set-types-set-frozenset}{\code{set}} data structure.} {\if\edition\racketEd \begin{figure}[tp] %\begin{wrapfigure}[19]{l}[0.75in]{0.55\textwidth} \small \begin{tcolorbox}[title=\href{https://docs.racket-lang.org/reference/sets.html}{The Racket Set Package}] A \emph{set} is an unordered collection of elements without duplicates. Here are some of the operations defined on sets. \index{subject}{set} \begin{description} \item[$\LP\code{set}~v~\ldots\RP$] constructs a set containing the specified elements. 
\item[$\LP\code{set-union}~set_1~set_2\RP$] returns the union of the two sets.
\item[$\LP\code{set-subtract}~set_1~set_2\RP$] returns the set difference of the two sets.
\item[$\LP\code{set-member?}~set~v\RP$] answers whether element $v$ is in $set$.
\item[$\LP\code{set-count}~set\RP$] returns the number of unique elements in $set$.
\item[$\LP\code{set->list}~set\RP$] converts $set$ to a list.
\end{description}
\end{tcolorbox}
%\end{wrapfigure}
\caption{The \code{set} data structure.}
\label{fig:set}
\end{figure}
\fi}
The locations that are live after an instruction are its \emph{live-after}\index{subject}{live-after} set, and the locations that are live before an instruction are its \emph{live-before}\index{subject}{live-before} set. The live-after set of an instruction is always the same as the live-before set of the next instruction.
\begin{equation} \label{eq:live-after-before-next}
L_{\mathsf{after}}(k) = L_{\mathsf{before}}(k+1)
\end{equation}
To start things off, there are no live locations after the last instruction, so
\begin{equation}\label{eq:live-last-empty}
L_{\mathsf{after}}(n) = \emptyset
\end{equation}
We then apply the following rule repeatedly, traversing the instruction sequence back to front.
\begin{equation}\label{eq:live-before-after-minus-writes-plus-reads}
L_{\mathsf{before}}(k) = (L_{\mathsf{after}}(k) - W(k)) \cup R(k),
\end{equation}
where $W(k)$ are the locations written to by instruction $I_k$, and $R(k)$ are the locations read by instruction $I_k$.
{\if\edition\racketEd
%
There is a special case for \code{jmp} instructions. The locations that are live before a \code{jmp} should be the locations in $L_{\mathsf{before}}$ at the target of the jump. So, we recommend maintaining an alist named \code{label->live} that maps each label to the $L_{\mathsf{before}}$ for the first instruction in its block. For now the only \code{jmp} in a \LangXVar{} program is the jump to the conclusion. (For example, see figure~\ref{fig:reg-eg}.) The conclusion reads from \ttm{rax} and \ttm{rsp}, so the alist should map \code{conclusion} to the set $\{\ttm{rax},\ttm{rsp}\}$.
%
\fi}
Let us walk through the previous example, applying these formulas starting with the instruction on line 5 of the code fragment. We collect the answers in figure~\ref{fig:liveness-example-0}. The $L_{\mathsf{after}}$ for the \code{addq b, c} instruction is $\emptyset$ because it is the last instruction (formula~\eqref{eq:live-last-empty}). The $L_{\mathsf{before}}$ for this instruction is $\{\ttm{b},\ttm{c}\}$ because it reads from variables \code{b} and \code{c} (formula~\eqref{eq:live-before-after-minus-writes-plus-reads}):
\[ L_{\mathsf{before}}(5) = (\emptyset - \{\ttm{c}\}) \cup \{ \ttm{b}, \ttm{c} \} = \{ \ttm{b}, \ttm{c} \} \]
Moving on to the instruction \code{movq \$10, b} at line 4, we copy the live-before set from line 5 to be the live-after set for this instruction (formula~\eqref{eq:live-after-before-next}).
\[ L_{\mathsf{after}}(4) = \{ \ttm{b}, \ttm{c} \} \]
This move instruction writes to \code{b} and does not read from any variables, so we have the following live-before set (formula~\eqref{eq:live-before-after-minus-writes-plus-reads}).
\[ L_{\mathsf{before}}(4) = (\{\ttm{b},\ttm{c}\} - \{\ttm{b}\}) \cup \emptyset = \{ \ttm{c} \} \]
The live-before for instruction \code{movq a, c} is $\{\ttm{a}\}$ because it writes to $\{\ttm{c}\}$ and reads from $\{\ttm{a}\}$ (formula~\eqref{eq:live-before-after-minus-writes-plus-reads}).
The live-before for \code{movq \$30, b} is $\{\ttm{a}\}$ because it writes to a variable that is not live and does not read from a variable. Finally, the live-before for \code{movq \$5, a} is $\emptyset$ because it writes to variable \code{a}.
\begin{figure}[tbp]
\centering
\begin{tcolorbox}[colback=white]
\hspace{10pt}
\begin{minipage}{0.4\textwidth}
\begin{lstlisting}[numbers=left,numberstyle=\tiny]
movq $5, a
movq $30, b
movq a, c
movq $10, b
addq b, c
\end{lstlisting}
\end{minipage}
\vrule\hspace{10pt}
\begin{minipage}{0.45\textwidth}
\begin{align*}
L_{\mathsf{before}}(1)= \emptyset, L_{\mathsf{after}}(1)= \{\ttm{a}\}\\
L_{\mathsf{before}}(2)= \{\ttm{a}\}, L_{\mathsf{after}}(2)= \{\ttm{a}\}\\
L_{\mathsf{before}}(3)= \{\ttm{a}\}, L_{\mathsf{after}}(3)= \{\ttm{c}\}\\
L_{\mathsf{before}}(4)= \{\ttm{c}\}, L_{\mathsf{after}}(4)= \{\ttm{b},\ttm{c}\}\\
L_{\mathsf{before}}(5)= \{\ttm{b},\ttm{c}\}, L_{\mathsf{after}}(5)= \emptyset
\end{align*}
\end{minipage}
\end{tcolorbox}
\caption{Example output of liveness analysis on a short example.}
\label{fig:liveness-example-0}
\end{figure}
\begin{exercise}\normalfont\normalsize
Perform liveness analysis by hand on the running example in figure~\ref{fig:reg-eg}, computing the live-before and live-after sets for each instruction. Compare your answers to the solution shown in figure~\ref{fig:live-eg}.
\end{exercise}
\begin{figure}[tp]
\hspace{20pt}
\begin{minipage}{0.55\textwidth}
\begin{tcolorbox}[colback=white]
{\if\edition\racketEd
\begin{lstlisting}
|$\{\ttm{rsp}\}$|
movq $1, v
|$\{\ttm{v},\ttm{rsp}\}$|
movq $42, w
|$\{\ttm{v},\ttm{w},\ttm{rsp}\}$|
movq v, x
|$\{\ttm{w},\ttm{x},\ttm{rsp}\}$|
addq $7, x
|$\{\ttm{w},\ttm{x},\ttm{rsp}\}$|
movq x, y
|$\{\ttm{w},\ttm{x},\ttm{y},\ttm{rsp}\}$|
movq x, z
|$\{\ttm{w},\ttm{y},\ttm{z},\ttm{rsp}\}$|
addq w, z
|$\{\ttm{y},\ttm{z},\ttm{rsp}\}$|
movq y, t
|$\{\ttm{t},\ttm{z},\ttm{rsp}\}$|
negq t
|$\{\ttm{t},\ttm{z},\ttm{rsp}\}$|
movq z, %rax
|$\{\ttm{rax},\ttm{t},\ttm{rsp}\}$|
addq t, %rax
|$\{\ttm{rax},\ttm{rsp}\}$|
jmp conclusion
\end{lstlisting}
\fi}
{\if\edition\pythonEd\pythonColor
\begin{lstlisting}
movq $1, v        |$\{\ttm{v}\}$|
movq $42, w       |$\{\ttm{w}, \ttm{v}\}$|
movq v, x         |$\{\ttm{w}, \ttm{x}\}$|
addq $7, x        |$\{\ttm{w}, \ttm{x}\}$|
movq x, y         |$\{\ttm{w}, \ttm{x}, \ttm{y}\}$|
movq x, z         |$\{\ttm{w}, \ttm{y}, \ttm{z}\}$|
addq w, z         |$\{\ttm{y}, \ttm{z}\}$|
movq y, tmp_0     |$\{\ttm{tmp\_0}, \ttm{z}\}$|
negq tmp_0        |$\{\ttm{tmp\_0}, \ttm{z}\}$|
movq z, tmp_1     |$\{\ttm{tmp\_0}, \ttm{tmp\_1}\}$|
addq tmp_0, tmp_1 |$\{\ttm{tmp\_1}\}$|
movq tmp_1, %rdi  |$\{\ttm{rdi}\}$|
callq print_int   |$\{\}$|
\end{lstlisting}
\fi}
\end{tcolorbox}
\end{minipage}
\caption{The running example annotated with live-after sets.}
\label{fig:live-eg}
\end{figure}
\begin{exercise}\normalfont\normalsize
Implement the \code{uncover\_live} \racket{pass}\python{function}.
%
\racket{Store the sequence of live-after sets in the $\itm{info}$ field of the \code{Block} structure.}
%
\python{Return a dictionary that maps each instruction to its live-after set.}
%
\racket{We recommend creating an auxiliary function that takes a list of instructions and an initial live-after set (typically empty) and returns the list of live-after sets.}
%
We recommend creating auxiliary functions to (1) compute the set of locations that appear in an \Arg{}, (2) compute the locations read by an instruction (the $R$ function), and (3) compute the locations written by an instruction (the $W$ function).
The \code{callq} instruction should include all the caller-saved registers in its write set $W$ because the calling convention says that those registers may be written to during the function call. Likewise, the \code{callq} instruction should include the appropriate argument-passing registers in its read set $R$, depending on the arity of the function being called. (This is why the abstract syntax for \code{callq} includes the arity.)
\end{exercise}
%\clearpage
\section{Build the Interference Graph}
\label{sec:build-interference}
{\if\edition\racketEd
\begin{figure}[tp]
%\begin{wrapfigure}[23]{r}[0.75in]{0.55\textwidth}
\small
\begin{tcolorbox}[title=\href{https://docs.racket-lang.org/graph/index.html}{The Racket Graph Library}]
A \emph{graph} is a collection of vertices and edges where each edge connects two vertices. A graph is \emph{directed} if each edge points from a source to a target. Otherwise the graph is \emph{undirected}.
\index{subject}{graph}\index{subject}{directed graph}\index{subject}{undirected graph}
\begin{description}
%% We currently don't use directed graphs. We instead use
%% directed multi-graphs. -Jeremy
\item[$\LP\code{directed-graph}\,\itm{edges}\RP$] constructs a directed graph from a list of edges. Each edge is a list containing the source and target vertex.
\item[$\LP\code{undirected-graph}\,\itm{edges}\RP$] constructs an undirected graph from a list of edges. Each edge is represented by a list containing two vertices.
\item[$\LP\code{add-vertex!}\,\itm{graph}\,\itm{vertex}\RP$] inserts a vertex into the graph.
\item[$\LP\code{add-edge!}\,\itm{graph}\,\itm{source}\,\itm{target}\RP$] inserts an edge between the two vertices.
\item[$\LP\code{in-neighbors}\,\itm{graph}\,\itm{vertex}\RP$] returns a sequence of vertices adjacent to the vertex.
\item[$\LP\code{in-vertices}\,\itm{graph}\RP$] returns a sequence of all vertices in the graph.
\end{description}
\end{tcolorbox}
%\end{wrapfigure}
\caption{The Racket \code{graph} package.}
\label{fig:graph}
\end{figure}
\fi}
On the basis of the liveness analysis, we know where each location is live. However, during register allocation, we need to answer questions of the specific form: are locations $u$ and $v$ live at the same time? (If so, they cannot be assigned to the same register.) To make this question more efficient to answer, we create an explicit data structure, an \emph{interference graph}\index{subject}{interference graph}. An interference graph is an undirected graph that has a node for every variable and register and has an edge between two nodes if they are live at the same time, that is, if they interfere with each other.
%
\racket{We recommend using the Racket \code{graph} package (figure~\ref{fig:graph}) to represent the interference graph.}
%
\python{We provide implementations of directed and undirected graph data structures in the file \code{graph.py} of the support code.}
A straightforward way to compute the interference graph is to look at the set of live locations between each instruction and add an edge to the graph for every pair of locations in the same set. This approach is less than ideal for two reasons. First, it can be expensive because it takes $O(n^2)$ time to consider every pair in a set of $n$ live locations. Second, in the special case in which two locations hold the same value (because one was assigned to the other), they can be live at the same time without interfering with each other. A better way to compute the interference graph is to focus on writes~\citep{Appel:2003fk}.
The writes performed by an instruction must not overwrite something in a live location. So for each instruction, we create an edge between the locations being written to and the live locations. (However, a location never interferes with itself.) For the \key{callq} instruction, we consider all the caller-saved registers to have been written to, so an edge is added between every live variable and every caller-saved register. Also, for \key{movq} there is the special case of two variables holding the same value. If a live variable $v$ is the same as the source of the \key{movq}, then there is no need to add an edge between $v$ and the destination, because they both hold the same value. % Hence we have the following two rules: \begin{enumerate} \item If instruction $I_k$ is a move instruction of the form \key{movq} $s$\key{,} $d$, then for every $v \in L_{\mathsf{after}}(k)$, if $v \neq d$ and $v \neq s$, add the edge $(d,v)$. \item For any other instruction $I_k$, for every $d \in W(k)$ and every $v \in L_{\mathsf{after}}(k)$, if $v \neq d$, add the edge $(d,v)$. \end{enumerate} Working from the top to bottom of figure~\ref{fig:live-eg}, we apply these rules to each instruction. We highlight a few of the instructions. \racket{The first instruction is \lstinline{movq $1, v}, and the live-after set is $\{\ttm{v},\ttm{rsp}\}$. Rule 1 applies, so \code{v} interferes with \code{rsp}.} % \python{The first instruction is \lstinline{movq $1, v}, and the live-after set is $\{\ttm{v}\}$. Rule 1 applies, but there is no interference because $\ttm{v}$ is the destination of the move.} % \racket{The fourth instruction is \lstinline{addq $7, x}, and the live-after set is $\{\ttm{w},\ttm{x},\ttm{rsp}\}$. Rule 2 applies, so $\ttm{x}$ interferes with \ttm{w} and \ttm{rsp}.} % \python{The fourth instruction is \lstinline{addq $7, x}, and the live-after set is $\{\ttm{w},\ttm{x}\}$. Rule 2 applies, so $\ttm{x}$ interferes with \ttm{w}.} % \racket{The next instruction is \lstinline{movq x, y}, and the live-after set is $\{\ttm{w},\ttm{x},\ttm{y},\ttm{rsp}\}$. Rule 1 applies, so \ttm{y} interferes with \ttm{w} and \ttm{rsp} but not \ttm{x}, because \ttm{x} is the source of the move and therefore \ttm{x} and \ttm{y} hold the same value.} % \python{The next instruction is \lstinline{movq x, y}, and the live-after set is $\{\ttm{w},\ttm{x},\ttm{y}\}$. Rule 1 applies, so \ttm{y} interferes with \ttm{w} but not \ttm{x}, because \ttm{x} is the source of the move and therefore \ttm{x} and \ttm{y} hold the same value.} % Figure~\ref{fig:interference-results} lists the interference results for all the instructions, and the resulting interference graph is shown in figure~\ref{fig:interfere}. We elide the register nodes from the interference graph in figure~\ref{fig:interfere} because there were no interference edges involving registers and we did not wish to clutter the graph, but in general one needs to include all the registers in the interference graph. 
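To make the two rules above concrete, the following is a minimal sketch of one way to apply them, instruction by instruction. It is only a sketch, not the required implementation: instructions are represented here as simple tuples such as \lstinline{('movq', 'x', 'y')}, the graph is a plain dictionary of adjacency sets rather than the graph data structure in the support code, and the live-after sets and a \code{writes\_of} helper are assumed to come from the liveness analysis of section~\ref{sec:liveness-analysis-Lvar}.
{\if\edition\pythonEd\pythonColor
\begin{lstlisting}
# A sketch of the two interference rules, using tuples for instructions
# and a dictionary mapping each location to its set of neighbors.
def add_edge(graph, u, v):
    if u != v:
        graph.setdefault(u, set()).add(v)
        graph.setdefault(v, set()).add(u)

def build_interference_sketch(instrs, live_after, writes_of):
    # live_after[k] is the live-after set of instruction k;
    # writes_of(instr) returns the locations written by instr.
    graph = {}
    for k, instr in enumerate(instrs):
        if instr[0] == 'movq':                  # rule 1: movq s, d
            _, s, d = instr
            for v in live_after[k]:
                if v != d and v != s:
                    add_edge(graph, d, v)
        else:                                   # rule 2: everything else
            for d in writes_of(instr):
                for v in live_after[k]:
                    if v != d:
                        add_edge(graph, d, v)
    return graph
\end{lstlisting}
Applied to the instruction sequence and live-after sets of figure~\ref{fig:live-eg}, a sketch like this should produce the edges listed in figure~\ref{fig:interference-results}.
\fi}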
\begin{figure}[tbp] \begin{tcolorbox}[colback=white] \begin{quote} {\if\edition\racketEd \begin{tabular}{ll} \lstinline!movq $1, v!& \ttm{v} interferes with \ttm{rsp},\\ \lstinline!movq $42, w!& \ttm{w} interferes with \ttm{v} and \ttm{rsp},\\ \lstinline!movq v, x!& \ttm{x} interferes with \ttm{w} and \ttm{rsp},\\ \lstinline!addq $7, x!& \ttm{x} interferes with \ttm{w} and \ttm{rsp},\\ \lstinline!movq x, y!& \ttm{y} interferes with \ttm{w} and \ttm{rsp} but not \ttm{x},\\ \lstinline!movq x, z!& \ttm{z} interferes with \ttm{w}, \ttm{y}, and \ttm{rsp},\\ \lstinline!addq w, z!& \ttm{z} interferes with \ttm{y} and \ttm{rsp}, \\ \lstinline!movq y, t!& \ttm{t} interferes with \ttm{z} and \ttm{rsp}, \\ \lstinline!negq t!& \ttm{t} interferes with \ttm{z} and \ttm{rsp}, \\ \lstinline!movq z, %rax! & \ttm{rax} interferes with \ttm{t} and \ttm{rsp}, \\ \lstinline!addq t, %rax! & \ttm{rax} interferes with \ttm{rsp}. \\ \lstinline!jmp conclusion!& no interference. \end{tabular} \fi} {\if\edition\pythonEd\pythonColor \begin{tabular}{ll} \lstinline!movq $1, v!& no interference\\ \lstinline!movq $42, w!& \ttm{w} interferes with \ttm{v}\\ \lstinline!movq v, x!& \ttm{x} interferes with \ttm{w}\\ \lstinline!addq $7, x!& \ttm{x} interferes with \ttm{w}\\ \lstinline!movq x, y!& \ttm{y} interferes with \ttm{w} but not \ttm{x}\\ \lstinline!movq x, z!& \ttm{z} interferes with \ttm{w} and \ttm{y}\\ \lstinline!addq w, z!& \ttm{z} interferes with \ttm{y} \\ \lstinline!movq y, tmp_0!& \ttm{tmp\_0} interferes with \ttm{z} \\ \lstinline!negq tmp_0!& \ttm{tmp\_0} interferes with \ttm{z} \\ \lstinline!movq z, tmp_1! & \ttm{tmp\_0} interferes with \ttm{tmp\_1} \\ \lstinline!addq tmp_0, tmp_1! & no interference\\ \lstinline!movq tmp_1, %rdi! & no interference \\ \lstinline!callq print_int!& no interference. \end{tabular} \fi} \end{quote} \end{tcolorbox} \caption{Interference results for the running example.} \label{fig:interference-results} \end{figure} \begin{figure}[tbp] \begin{tcolorbox}[colback=white] \large {\if\edition\racketEd \[ \begin{tikzpicture}[baseline=(current bounding box.center)] \node (rax) at (0,0) {$\ttm{rax}$}; \node (rsp) at (9,2) {$\ttm{rsp}$}; \node (t1) at (0,2) {$\ttm{t}$}; \node (z) at (3,2) {$\ttm{z}$}; \node (x) at (6,2) {$\ttm{x}$}; \node (y) at (3,0) {$\ttm{y}$}; \node (w) at (6,0) {$\ttm{w}$}; \node (v) at (9,0) {$\ttm{v}$}; \draw (t1) to (rax); \draw (t1) to (z); \draw (z) to (y); \draw (z) to (w); \draw (x) to (w); \draw (y) to (w); \draw (v) to (w); \draw (v) to (rsp); \draw (w) to (rsp); \draw (x) to (rsp); \draw (y) to (rsp); \path[-.,bend left=15] (z) edge node {} (rsp); \path[-.,bend left=10] (t1) edge node {} (rsp); \draw (rax) to (rsp); \end{tikzpicture} \] \fi} {\if\edition\pythonEd\pythonColor \[ \begin{tikzpicture}[baseline=(current bounding box.center)] \node (t0) at (0,2) {$\ttm{tmp\_0}$}; \node (t1) at (0,0) {$\ttm{tmp\_1}$}; \node (z) at (3,2) {$\ttm{z}$}; \node (x) at (6,2) {$\ttm{x}$}; \node (y) at (3,0) {$\ttm{y}$}; \node (w) at (6,0) {$\ttm{w}$}; \node (v) at (9,0) {$\ttm{v}$}; \draw (t0) to (t1); \draw (t0) to (z); \draw (z) to (y); \draw (z) to (w); \draw (x) to (w); \draw (y) to (w); \draw (v) to (w); \end{tikzpicture} \] \fi} \end{tcolorbox} \caption{The interference graph of the example program.} \label{fig:interfere} \end{figure} \begin{exercise}\normalfont\normalsize \racket{Implement the compiler pass named \code{build\_interference} according to the algorithm suggested here. 
We recommend using the Racket \code{graph} package to create and inspect the interference graph. The output graph of this pass should be stored in the $\itm{info}$ field of the program, under the key \code{conflicts}.} % \python{Implement a function named \code{build\_interference} according to the algorithm suggested above that returns the interference graph.} \end{exercise} \section{Graph Coloring via Sudoku} \label{sec:graph-coloring} \index{subject}{graph coloring} \index{subject}{sudoku} \index{subject}{color} We come to the main event discussed in this chapter, mapping variables to registers and stack locations. Variables that interfere with each other must be mapped to different locations. In terms of the interference graph, this means that adjacent vertices must be mapped to different locations. If we think of locations as colors, the register allocation problem becomes the graph coloring problem~\citep{Balakrishnan:1996ve,Rosen:2002bh}. The reader may be more familiar with the graph coloring problem than he or she realizes; the popular game of sudoku is an instance of the graph coloring problem. The following describes how to build a graph out of an initial sudoku board. \begin{itemize} \item There is one vertex in the graph for each sudoku square. \item There is an edge between two vertices if the corresponding squares are in the same row, in the same column, or in the same $3\times 3$ region. \item Choose nine colors to correspond to the numbers $1$ to $9$. \item On the basis of the initial assignment of numbers to squares on the sudoku board, assign the corresponding colors to the corresponding vertices in the graph. \end{itemize} If you can color the remaining vertices in the graph with the nine colors, then you have also solved the corresponding game of sudoku. Figure~\ref{fig:sudoku-graph} shows an initial sudoku game board and the corresponding graph with colored vertices. Here we use a monochrome representation of colors, mapping the sudoku number 1 to black, 2 to white, and 3 to gray. We show edges for only a sampling of the vertices (the colored ones) because showing edges for all the vertices would make the graph unreadable. \begin{figure}[tbp] \begin{tcolorbox}[colback=white] \includegraphics[width=0.5\textwidth]{figs/sudoku} \includegraphics[width=0.5\textwidth]{figs/sudoku-graph-bw} \end{tcolorbox} \caption{A sudoku game board and the corresponding colored graph.} \label{fig:sudoku-graph} \end{figure} Some techniques for playing sudoku correspond to heuristics used in graph coloring algorithms. For example, one of the basic techniques for sudoku is called Pencil Marks. The idea is to use a process of elimination to determine what numbers are no longer available for a square and to write those numbers in the square (writing very small). For example, if the number $1$ is assigned to a square, then write the pencil mark $1$ in all the squares in the same row, column, and region to indicate that $1$ is no longer an option for those other squares. % The Pencil Marks technique corresponds to the notion of \emph{saturation}\index{subject}{saturation} due to \citet{Brelaz:1979eu}. The saturation of a vertex, in sudoku terms, is the set of numbers that are no longer available. In graph terminology, we have the following definition: \begin{equation*} \mathrm{saturation}(u) = \{ c \;|\; \exists v. v \in \mathrm{adjacent}(u) \text{ and } \mathrm{color}(v) = c \} \end{equation*} where $\mathrm{adjacent}(u)$ is the set of vertices that share an edge with $u$. 
The Pencil Marks technique leads to a simple strategy for filling in numbers: if there is a square with only one possible number left, then choose that number! But what if there are no squares with only one possibility left? One brute-force approach is to try them all: choose the first one, and if that ultimately leads to a solution, great. If not, backtrack and choose the next possibility. One good thing about Pencil Marks is that it reduces the degree of branching in the search tree. Nevertheless, backtracking can be terribly time consuming. One way to reduce the amount of backtracking is to use the most-constrained-first heuristic (aka minimum remaining values)~\citep{Russell2003}. That is, in choosing a square, always choose one with the fewest possibilities left (the vertex with the highest saturation). The idea is that choosing highly constrained squares earlier rather than later is better, because later on there may not be any possibilities left in the highly saturated squares. However, register allocation is easier than sudoku, because the register allocator can fall back to assigning variables to stack locations when the registers run out. Thus, it makes sense to replace backtracking with greedy search: make the best choice at the time and keep going. We still wish to minimize the number of colors needed, so we use the most-constrained-first heuristic in the greedy search. Figure~\ref{fig:satur-algo} gives the pseudocode for a simple greedy algorithm for register allocation based on saturation and the most-constrained-first heuristic. It is roughly equivalent to the DSATUR graph coloring algorithm~\citep{Brelaz:1979eu}. Just as in sudoku, the algorithm represents colors with integers. The integers $0$ through $k-1$ correspond to the $k$ registers that we use for register allocation. In particular, we recommend the following correspondence, with $k=11$. \begin{lstlisting} 0: rcx, 1: rdx, 2: rsi, 3: rdi, 4: r8, 5: r9, 6: r10, 7: rbx, 8: r12, 9: r13, 10: r14 \end{lstlisting} The integers $k$ and larger correspond to stack locations. The registers that are not used for register allocation, such as \code{rax}, are assigned to negative integers. In particular, we recommend the following correspondence. \begin{lstlisting} -1: rax, -2: rsp, -3: rbp, -4: r11, -5: r15 \end{lstlisting} %% One might wonder why we include registers at all in the liveness %% analysis and interference graph. For example, we never allocate a %% variable to \code{rax} and \code{rsp}, so it would be harmless to %% leave them out. As we see in chapter~\ref{ch:Lvec}, when we begin %% to use register for passing arguments to functions, it will be %% necessary for those registers to appear in the interference graph %% because those registers will also be assigned to variables, and we %% don't want those two uses to encroach on each other. Regarding %% registers such as \code{rax} and \code{rsp} that are not used for %% variables, we could omit them from the interference graph but that %% would require adding special cases to our algorithm, which would %% complicate the logic for little gain. 
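As a concrete counterpart to the pseudocode in figure~\ref{fig:satur-algo}, the following sketch colors a graph given as a dictionary of adjacency sets. It is only an illustration of the algorithm, not the recommended implementation: it recomputes saturation with a linear scan instead of using the priority queue suggested later in this section, and the hypothetical \code{precolored} parameter stands in for register nodes that already carry (possibly negative) colors.
{\if\edition\pythonEd\pythonColor
\begin{lstlisting}
# A sketch of saturation-based greedy (DSATUR) coloring.
# graph maps each vertex to its set of adjacent vertices;
# precolored maps register nodes to their fixed colors.
def color_graph_sketch(graph, variables, precolored=None):
    color = dict(precolored or {})
    def saturation(u):
        return {color[v] for v in graph.get(u, set()) if v in color}
    worklist = set(variables)
    while worklist:
        # pick a most saturated vertex, breaking ties arbitrarily
        u = max(worklist, key=lambda v: len(saturation(v)))
        # choose the lowest nonnegative color not used by a neighbor
        c = 0
        while c in saturation(u):
            c += 1
        color[u] = c
        worklist.remove(u)
    return color
\end{lstlisting}
On the interference graph of figure~\ref{fig:interfere}, this procedure yields a coloring like the one computed by hand in the walkthrough that follows, although tie-breaking may produce different (equally valid) colors.
\fi}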
\begin{figure}[btp]
\begin{tcolorbox}[colback=white]
\centering
\begin{lstlisting}[basicstyle=\rmfamily,deletekeywords={for,from,with,is,not,in,find},morekeywords={while},columns=fullflexible]
Algorithm: DSATUR
Input: A graph |$G$|
Output: An assignment |$\mathrm{color}[v]$| for each vertex |$v \in G$|
|$W \gets \mathrm{vertices}(G)$|
while |$W \neq \emptyset$| do
    pick a vertex |$u$| from |$W$| with the highest saturation, breaking ties randomly
    find the lowest color |$c$| that is not in |$\{ \mathrm{color}[v] \;:\; v \in \mathrm{adjacent}(u)\}$|
    |$\mathrm{color}[u] \gets c$|
    |$W \gets W - \{u\}$|
\end{lstlisting}
\end{tcolorbox}
\caption{The saturation-based greedy graph coloring algorithm.}
\label{fig:satur-algo}
\end{figure}
{\if\edition\racketEd
With the DSATUR algorithm in hand, let us return to the running example and consider how to color the interference graph shown in figure~\ref{fig:interfere}.
%
We start by assigning each register node to its own color. For example, \code{rax} is assigned the color $-1$, \code{rsp} is assigned $-2$, \code{rcx} is assigned $0$, and \code{rdx} is assigned $1$. (To reduce clutter in the interference graph, we elide nodes that do not have interference edges, such as \code{rcx}.) The variables are not yet colored, so they are annotated with a dash. We then update the saturation for vertices that are adjacent to a register, obtaining the following annotated graph. For example, the saturation for \code{t} is $\{-1,-2\}$ because it interferes with both \code{rax} and \code{rsp}.
\[
\begin{tikzpicture}[baseline=(current bounding box.center)]
\node (rax) at (0,0) {$\ttm{rax}:-1,\{-2\}$};
\node (rsp) at (10,2) {$\ttm{rsp}:-2,\{-1\}$};
\node (t1) at (0,2) {$\ttm{t}:-,\{-1,-2\}$};
\node (z) at (3,2) {$\ttm{z}:-,\{-2\}$};
\node (x) at (6,2) {$\ttm{x}:-,\{-2\}$};
\node (y) at (3,0) {$\ttm{y}:-,\{-2\}$};
\node (w) at (6,0) {$\ttm{w}:-,\{-2\}$};
\node (v) at (10,0) {$\ttm{v}:-,\{-2\}$};
\draw (t1) to (rax);
\draw (t1) to (z);
\draw (z) to (y);
\draw (z) to (w);
\draw (x) to (w);
\draw (y) to (w);
\draw (v) to (w);
\draw (v) to (rsp);
\draw (w) to (rsp);
\draw (x) to (rsp);
\draw (y) to (rsp);
\path[-.,bend left=15] (z) edge node {} (rsp);
\path[-.,bend left=10] (t1) edge node {} (rsp);
\draw (rax) to (rsp);
\end{tikzpicture}
\]
The algorithm says to select a maximally saturated vertex. So, we pick $\ttm{t}$ and color it with the first available integer, which is $0$. We mark $0$ as no longer available for $\ttm{z}$, $\ttm{rax}$, and \ttm{rsp} because they interfere with $\ttm{t}$.
\[
\begin{tikzpicture}[baseline=(current bounding box.center)]
\node (rax) at (0,0) {$\ttm{rax}:-1,\{0,-2\}$};
\node (rsp) at (10,2) {$\ttm{rsp}:-2,\{-1,0\}$};
\node (t1) at (0,2) {$\ttm{t}:0,\{-1,-2\}$};
\node (z) at (3,2) {$\ttm{z}:-,\{0,-2\}$};
\node (x) at (6,2) {$\ttm{x}:-,\{-2\}$};
\node (y) at (3,0) {$\ttm{y}:-,\{-2\}$};
\node (w) at (6,0) {$\ttm{w}:-,\{-2\}$};
\node (v) at (10,0) {$\ttm{v}:-,\{-2\}$};
\draw (t1) to (rax);
\draw (t1) to (z);
\draw (z) to (y);
\draw (z) to (w);
\draw (x) to (w);
\draw (y) to (w);
\draw (v) to (w);
\draw (v) to (rsp);
\draw (w) to (rsp);
\draw (x) to (rsp);
\draw (y) to (rsp);
\path[-.,bend left=15] (z) edge node {} (rsp);
\path[-.,bend left=10] (t1) edge node {} (rsp);
\draw (rax) to (rsp);
\end{tikzpicture}
\]
We repeat the process, selecting a maximally saturated vertex, choosing \code{z}, and coloring it with the first available number, which is $1$.
We add $1$ to the saturation for the neighboring vertices \code{t}, \code{y}, \code{w}, and \code{rsp}. \[ \begin{tikzpicture}[baseline=(current bounding box.center)] \node (rax) at (0,0) {$\ttm{rax}:-1,\{0,-2\}$}; \node (rsp) at (10,2) {$\ttm{rsp}:-2,\{-1,0,1\}$}; \node (t1) at (0,2) {$\ttm{t}:0,\{-1,1,-2\}$}; \node (z) at (3,2) {$\ttm{z}:1,\{0,-2\}$}; \node (x) at (6,2) {$\ttm{x}:-,\{-2\}$}; \node (y) at (3,0) {$\ttm{y}:-,\{1,-2\}$}; \node (w) at (6,0) {$\ttm{w}:-,\{1,-2\}$}; \node (v) at (10,0) {$\ttm{v}:-,\{-2\}$}; \draw (t1) to (rax); \draw (t1) to (z); \draw (z) to (y); \draw (z) to (w); \draw (x) to (w); \draw (y) to (w); \draw (v) to (w); \draw (v) to (rsp); \draw (w) to (rsp); \draw (x) to (rsp); \draw (y) to (rsp); \path[-.,bend left=15] (z) edge node {} (rsp); \path[-.,bend left=10] (t1) edge node {} (rsp); \draw (rax) to (rsp); \end{tikzpicture} \] The most saturated vertices are now \code{w} and \code{y}. We color \code{w} with the first available color, which is $0$. \[ \begin{tikzpicture}[baseline=(current bounding box.center)] \node (rax) at (0,0) {$\ttm{rax}:-1,\{0,-2\}$}; \node (rsp) at (10,2) {$\ttm{rsp}:-2,\{-1,0,1\}$}; \node (t1) at (0,2) {$\ttm{t}:0,\{-1,1,-2\}$}; \node (z) at (3,2) {$\ttm{z}:1,\{0,-2\}$}; \node (x) at (6,2) {$\ttm{x}:-,\{0,-2\}$}; \node (y) at (3,0) {$\ttm{y}:-,\{0,1,-2\}$}; \node (w) at (6,0) {$\ttm{w}:0,\{1,-2\}$}; \node (v) at (10,0) {$\ttm{v}:-,\{0,-2\}$}; \draw (t1) to (rax); \draw (t1) to (z); \draw (z) to (y); \draw (z) to (w); \draw (x) to (w); \draw (y) to (w); \draw (v) to (w); \draw (v) to (rsp); \draw (w) to (rsp); \draw (x) to (rsp); \draw (y) to (rsp); \path[-.,bend left=15] (z) edge node {} (rsp); \path[-.,bend left=10] (t1) edge node {} (rsp); \draw (rax) to (rsp); \end{tikzpicture} \] Vertex \code{y} is now the most highly saturated, so we color \code{y} with $2$. We cannot choose $0$ or $1$ because those numbers are in \code{y}'s saturation set. Indeed, \code{y} interferes with \code{w} and \code{z}, whose colors are $0$ and $1$ respectively. \[ \begin{tikzpicture}[baseline=(current bounding box.center)] \node (rax) at (0,0) {$\ttm{rax}:-1,\{0,-2\}$}; \node (rsp) at (10,2) {$\ttm{rsp}:-2,\{-1,0,1,2\}$}; \node (t1) at (0,2) {$\ttm{t}:0,\{-1,1,-2\}$}; \node (z) at (3,2) {$\ttm{z}:1,\{0,2,-2\}$}; \node (x) at (6,2) {$\ttm{x}:-,\{0,-2\}$}; \node (y) at (3,0) {$\ttm{y}:2,\{0,1,-2\}$}; \node (w) at (6,0) {$\ttm{w}:0,\{1,2,-2\}$}; \node (v) at (10,0) {$\ttm{v}:-,\{0,-2\}$}; \draw (t1) to (rax); \draw (t1) to (z); \draw (z) to (y); \draw (z) to (w); \draw (x) to (w); \draw (y) to (w); \draw (v) to (w); \draw (v) to (rsp); \draw (w) to (rsp); \draw (x) to (rsp); \draw (y) to (rsp); \path[-.,bend left=15] (z) edge node {} (rsp); \path[-.,bend left=10] (t1) edge node {} (rsp); \draw (rax) to (rsp); \end{tikzpicture} \] Now \code{x} and \code{v} are the most saturated, so we color \code{v} with $1$. 
\[
\begin{tikzpicture}[baseline=(current bounding box.center)]
\node (rax) at (0,0) {$\ttm{rax}:-1,\{0,-2\}$};
\node (rsp) at (10,2) {$\ttm{rsp}:-2,\{-1,0,1,2\}$};
\node (t1) at (0,2) {$\ttm{t}:0,\{-1,1,-2\}$};
\node (z) at (3,2) {$\ttm{z}:1,\{0,2,-2\}$};
\node (x) at (6,2) {$\ttm{x}:-,\{0,-2\}$};
\node (y) at (3,0) {$\ttm{y}:2,\{0,1,-2\}$};
\node (w) at (6,0) {$\ttm{w}:0,\{1,2,-2\}$};
\node (v) at (10,0) {$\ttm{v}:1,\{0,-2\}$};
\draw (t1) to (rax);
\draw (t1) to (z);
\draw (z) to (y);
\draw (z) to (w);
\draw (x) to (w);
\draw (y) to (w);
\draw (v) to (w);
\draw (v) to (rsp);
\draw (w) to (rsp);
\draw (x) to (rsp);
\draw (y) to (rsp);
\path[-.,bend left=15] (z) edge node {} (rsp);
\path[-.,bend left=10] (t1) edge node {} (rsp);
\draw (rax) to (rsp);
\end{tikzpicture}
\]
In the last step of the algorithm, we color \code{x} with $1$.
\[
\begin{tikzpicture}[baseline=(current bounding box.center)]
\node (rax) at (0,0) {$\ttm{rax}:-1,\{0,-2\}$};
\node (rsp) at (10,2) {$\ttm{rsp}:-2,\{-1,0,1,2\}$};
\node (t1) at (0,2) {$\ttm{t}:0,\{-1,1,-2\}$};
\node (z) at (3,2) {$\ttm{z}:1,\{0,2,-2\}$};
\node (x) at (6,2) {$\ttm{x}:1,\{0,-2\}$};
\node (y) at (3,0) {$\ttm{y}:2,\{0,1,-2\}$};
\node (w) at (6,0) {$\ttm{w}:0,\{1,2,-2\}$};
\node (v) at (10,0) {$\ttm{v}:1,\{0,-2\}$};
\draw (t1) to (rax);
\draw (t1) to (z);
\draw (z) to (y);
\draw (z) to (w);
\draw (x) to (w);
\draw (y) to (w);
\draw (v) to (w);
\draw (v) to (rsp);
\draw (w) to (rsp);
\draw (x) to (rsp);
\draw (y) to (rsp);
\path[-.,bend left=15] (z) edge node {} (rsp);
\path[-.,bend left=10] (t1) edge node {} (rsp);
\draw (rax) to (rsp);
\end{tikzpicture}
\]
So, we obtain the following coloring:
\[ \{ \ttm{rax} \mapsto -1, \ttm{rsp} \mapsto -2, \ttm{t} \mapsto 0, \ttm{z} \mapsto 1, \ttm{x} \mapsto 1, \ttm{y} \mapsto 2, \ttm{w} \mapsto 0, \ttm{v} \mapsto 1 \} \]
\fi}
%
{\if\edition\pythonEd\pythonColor
%
With the DSATUR algorithm in hand, let us return to the running example and consider how to color the interference graph shown in figure~\ref{fig:interfere}, again mapping 1 to black, 2 to white, and 3 to gray. We annotate each variable node with a dash to indicate that it has not yet been assigned a color. Each register node (not shown) should be assigned the number that the register corresponds to, for example, color \code{rcx} with the number \code{0} and \code{rdx} with \code{1}. The saturation sets are also shown for each node; all of them start as the empty set. We do not show the register nodes in the following graph because there were no interference edges involving registers in this program; however, in general there can be interference edges that involve registers.
%
\[
\begin{tikzpicture}[baseline=(current bounding box.center)]
\node (t0) at (0,2) {$\ttm{tmp\_0}: -, \{\}$};
\node (t1) at (0,0) {$\ttm{tmp\_1}: -, \{\}$};
\node (z) at (3,2) {$\ttm{z}: -, \{\}$};
\node (x) at (6,2) {$\ttm{x}: -, \{\}$};
\node (y) at (3,0) {$\ttm{y}: -, \{\}$};
\node (w) at (6,0) {$\ttm{w}: -, \{\}$};
\node (v) at (9,0) {$\ttm{v}: -, \{\}$};
\draw (t0) to (t1);
\draw (t0) to (z);
\draw (z) to (y);
\draw (z) to (w);
\draw (x) to (w);
\draw (y) to (w);
\draw (v) to (w);
\end{tikzpicture}
\]
The algorithm says to select a maximally saturated vertex, but they are all equally saturated. So we flip a coin and pick $\ttm{tmp\_0}$, coloring it with the first available integer, which is $0$. We mark $0$ as no longer available for $\ttm{tmp\_1}$ and $\ttm{z}$ because they interfere with $\ttm{tmp\_0}$.
\[ \begin{tikzpicture}[baseline=(current bounding box.center)] \node (t0) at (0,2) {$\ttm{tmp\_0}: 0, \{\}$}; \node (t1) at (0,0) {$\ttm{tmp\_1}: -, \{0\}$}; \node (z) at (3,2) {$\ttm{z}: -, \{0\}$}; \node (x) at (6,2) {$\ttm{x}: -, \{\}$}; \node (y) at (3,0) {$\ttm{y}: -, \{\}$}; \node (w) at (6,0) {$\ttm{w}: -, \{\}$}; \node (v) at (9,0) {$\ttm{v}: -, \{\}$}; \draw (t0) to (t1); \draw (t0) to (z); \draw (z) to (y); \draw (z) to (w); \draw (x) to (w); \draw (y) to (w); \draw (v) to (w); \end{tikzpicture} \] We repeat the process. The most saturated vertices are \code{z} and \code{tmp\_1}, so we choose \code{z} and color it with the first available number, which is $1$. We add $1$ to the saturation for the neighboring vertices \code{tmp\_0}, \code{y}, and \code{w}. \[ \begin{tikzpicture}[baseline=(current bounding box.center)] \node (t0) at (0,2) {$\ttm{tmp\_0}: 0, \{1\}$}; \node (t1) at (0,0) {$\ttm{tmp\_1}: -, \{0\}$}; \node (z) at (3,2) {$\ttm{z}: 1, \{0\}$}; \node (x) at (6,2) {$\ttm{x}: -, \{\}$}; \node (y) at (3,0) {$\ttm{y}: -, \{1\}$}; \node (w) at (6,0) {$\ttm{w}: -, \{1\}$}; \node (v) at (9,0) {$\ttm{v}: -, \{\}$}; \draw (t0) to (t1); \draw (t0) to (z); \draw (z) to (y); \draw (z) to (w); \draw (x) to (w); \draw (y) to (w); \draw (v) to (w); \end{tikzpicture} \] The most saturated vertices are now \code{tmp\_1}, \code{w}, and \code{y}. We color \code{w} with the first available color, which is $0$. \[ \begin{tikzpicture}[baseline=(current bounding box.center)] \node (t0) at (0,2) {$\ttm{tmp\_0}: 0, \{1\}$}; \node (t1) at (0,0) {$\ttm{tmp\_1}: -, \{0\}$}; \node (z) at (3,2) {$\ttm{z}: 1, \{0\}$}; \node (x) at (6,2) {$\ttm{x}: -, \{0\}$}; \node (y) at (3,0) {$\ttm{y}: -, \{0,1\}$}; \node (w) at (6,0) {$\ttm{w}: 0, \{1\}$}; \node (v) at (9,0) {$\ttm{v}: -, \{0\}$}; \draw (t0) to (t1); \draw (t0) to (z); \draw (z) to (y); \draw (z) to (w); \draw (x) to (w); \draw (y) to (w); \draw (v) to (w); \end{tikzpicture} \] Now \code{y} is the most saturated, so we color it with $2$. \[ \begin{tikzpicture}[baseline=(current bounding box.center)] \node (t0) at (0,2) {$\ttm{tmp\_0}: 0, \{1\}$}; \node (t1) at (0,0) {$\ttm{tmp\_1}: -, \{0\}$}; \node (z) at (3,2) {$\ttm{z}: 1, \{0,2\}$}; \node (x) at (6,2) {$\ttm{x}: -, \{0\}$}; \node (y) at (3,0) {$\ttm{y}: 2, \{0,1\}$}; \node (w) at (6,0) {$\ttm{w}: 0, \{1,2\}$}; \node (v) at (9,0) {$\ttm{v}: -, \{0\}$}; \draw (t0) to (t1); \draw (t0) to (z); \draw (z) to (y); \draw (z) to (w); \draw (x) to (w); \draw (y) to (w); \draw (v) to (w); \end{tikzpicture} \] The most saturated vertices are \code{tmp\_1}, \code{x}, and \code{v}. We choose to color \code{v} with $1$. \[ \begin{tikzpicture}[baseline=(current bounding box.center)] \node (t0) at (0,2) {$\ttm{tmp\_0}: 0, \{1\}$}; \node (t1) at (0,0) {$\ttm{tmp\_1}: -, \{0\}$}; \node (z) at (3,2) {$\ttm{z}: 1, \{0,2\}$}; \node (x) at (6,2) {$\ttm{x}: -, \{0\}$}; \node (y) at (3,0) {$\ttm{y}: 2, \{0,1\}$}; \node (w) at (6,0) {$\ttm{w}: 0, \{1,2\}$}; \node (v) at (9,0) {$\ttm{v}: 1, \{0\}$}; \draw (t0) to (t1); \draw (t0) to (z); \draw (z) to (y); \draw (z) to (w); \draw (x) to (w); \draw (y) to (w); \draw (v) to (w); \end{tikzpicture} \] We color the remaining two variables, \code{tmp\_1} and \code{x}, with $1$. 
\[ \begin{tikzpicture}[baseline=(current bounding box.center)] \node (t0) at (0,2) {$\ttm{tmp\_0}: 0, \{1\}$}; \node (t1) at (0,0) {$\ttm{tmp\_1}: 1, \{0\}$}; \node (z) at (3,2) {$\ttm{z}: 1, \{0,2\}$}; \node (x) at (6,2) {$\ttm{x}: 1, \{0\}$}; \node (y) at (3,0) {$\ttm{y}: 2, \{0,1\}$}; \node (w) at (6,0) {$\ttm{w}: 0, \{1,2\}$}; \node (v) at (9,0) {$\ttm{v}: 1, \{0\}$}; \draw (t0) to (t1); \draw (t0) to (z); \draw (z) to (y); \draw (z) to (w); \draw (x) to (w); \draw (y) to (w); \draw (v) to (w); \end{tikzpicture} \] So, we obtain the following coloring: \[ \{ \ttm{tmp\_0} \mapsto 0, \ttm{tmp\_1} \mapsto 1, \ttm{z} \mapsto 1, \ttm{x} \mapsto 1, \ttm{y} \mapsto 2, \ttm{w} \mapsto 0, \ttm{v} \mapsto 1 \} \] \fi} We recommend creating an auxiliary function named \code{color\_graph} that takes an interference graph and a list of all the variables in the program. This function should return a mapping of variables to their colors (represented as natural numbers). By creating this helper function, you will be able to reuse it in chapter~\ref{ch:Lfun} when we add support for functions. To prioritize the processing of highly saturated nodes inside the \code{color\_graph} function, we recommend using the priority queue data structure \racket{described in figure~\ref{fig:priority-queue}}\python{in the file \code{priority\_queue.py} of the support code}. \racket{In addition, you will need to maintain a mapping from variables to their handles in the priority queue so that you can notify the priority queue when their saturation changes.} {\if\edition\racketEd \begin{figure}[tp] %\begin{wrapfigure}[25]{r}[0.75in]{0.55\textwidth} \small \begin{tcolorbox}[title=Priority Queue] A \emph{priority queue}\index{subject}{priority queue} is a collection of items in which the removal of items is governed by priority. In a \emph{min} queue, lower priority items are removed first. An implementation is in \code{priority\_queue.rkt} of the support code.\index{subject}{min queue} \begin{description} \item[$\LP\code{make-pqueue}\,\itm{cmp}\RP$] constructs an empty priority queue that uses the $\itm{cmp}$ predicate to determine whether its first argument has lower or equal priority to its second argument. \item[$\LP\code{pqueue-count}\,\itm{queue}\RP$] returns the number of items in the queue. \item[$\LP\code{pqueue-push!}\,\itm{queue}\,\itm{item}\RP$] inserts the item into the queue and returns a handle for the item in the queue. \item[$\LP\code{pqueue-pop!}\,\itm{queue}\RP$] returns the item with the lowest priority. \item[$\LP\code{pqueue-decrease-key!}\,\itm{queue}\,\itm{handle}\RP$] notifies the queue that the priority has decreased for the item associated with the given handle. \end{description} \end{tcolorbox} %\end{wrapfigure} \caption{The priority queue data structure.} \label{fig:priority-queue} \end{figure} \fi} With the coloring complete, we finalize the assignment of variables to registers and stack locations. We map the first $k$ colors to the $k$ registers and the rest of the colors to stack locations. Suppose for the moment that we have just one register to use for register allocation, \key{rcx}. Then we have the following map from colors to locations. \[ \{ 0 \mapsto \key{\%rcx}, \; 1 \mapsto \key{-8(\%rbp)}, \; 2 \mapsto \key{-16(\%rbp)} \} \] Composing this mapping with the coloring, we arrive at the following assignment of variables to locations. 
{\if\edition\racketEd \begin{gather*} \{ \ttm{v} \mapsto \key{-8(\%rbp)}, \, \ttm{w} \mapsto \key{\%rcx}, \, \ttm{x} \mapsto \key{-8(\%rbp)}, \, \ttm{y} \mapsto \key{-16(\%rbp)}, \\ \ttm{z} \mapsto \key{-8(\%rbp)}, \, \ttm{t} \mapsto \key{\%rcx} \} \end{gather*} \fi} {\if\edition\pythonEd\pythonColor \begin{gather*} \{ \ttm{v} \mapsto \key{-8(\%rbp)}, \, \ttm{w} \mapsto \key{\%rcx}, \, \ttm{x} \mapsto \key{-8(\%rbp)}, \, \ttm{y} \mapsto \key{-16(\%rbp)}, \\ \ttm{z} \mapsto \key{-8(\%rbp)}, \, \ttm{tmp\_0} \mapsto \key{\%rcx}, \, \ttm{tmp\_1} \mapsto \key{-8(\%rbp)} \} \end{gather*} \fi} Adapt the code from the \code{assign\_homes} pass (section~\ref{sec:assign-Lvar}) to replace the variables with their assigned location. Applying this assignment to our running example shown next, on the left, yields the program on the right. % why frame size of 32? -JGS \begin{center} {\if\edition\racketEd \begin{minipage}{0.35\textwidth} \begin{lstlisting} movq $1, v movq $42, w movq v, x addq $7, x movq x, y movq x, z addq w, z movq y, t negq t movq z, %rax addq t, %rax jmp conclusion \end{lstlisting} \end{minipage} $\Rightarrow\qquad$ \begin{minipage}{0.45\textwidth} \begin{lstlisting} movq $1, -8(%rbp) movq $42, %rcx movq -8(%rbp), -8(%rbp) addq $7, -8(%rbp) movq -8(%rbp), -16(%rbp) movq -8(%rbp), -8(%rbp) addq %rcx, -8(%rbp) movq -16(%rbp), %rcx negq %rcx movq -8(%rbp), %rax addq %rcx, %rax jmp conclusion \end{lstlisting} \end{minipage} \fi} {\if\edition\pythonEd\pythonColor \begin{minipage}{0.35\textwidth} \begin{lstlisting} movq $1, v movq $42, w movq v, x addq $7, x movq x, y movq x, z addq w, z movq y, tmp_0 negq tmp_0 movq z, tmp_1 addq tmp_0, tmp_1 movq tmp_1, %rdi callq print_int \end{lstlisting} \end{minipage} $\Rightarrow\qquad$ \begin{minipage}{0.45\textwidth} \begin{lstlisting} movq $1, -8(%rbp) movq $42, %rcx movq -8(%rbp), -8(%rbp) addq $7, -8(%rbp) movq -8(%rbp), -16(%rbp) movq -8(%rbp), -8(%rbp) addq %rcx, -8(%rbp) movq -16(%rbp), %rcx negq %rcx movq -8(%rbp), -8(%rbp) addq %rcx, -8(%rbp) movq -8(%rbp), %rdi callq print_int \end{lstlisting} \end{minipage} \fi} \end{center} \begin{exercise}\normalfont\normalsize Implement the \code{allocate\_registers} pass. Create five programs that exercise all aspects of the register allocation algorithm, including spilling variables to the stack. % {\if\edition\racketEd Replace \code{assign\_homes} in the list of \code{passes} in the \code{run-tests.rkt} script with the three new passes: \code{uncover\_live}, \code{build\_interference}, and \code{allocate\_registers}. Temporarily remove the call to \code{compiler-tests}. Run the script to test the register allocator. \fi} % {\if\edition\pythonEd\pythonColor Run the \code{run-tests.py} script to check whether the output programs produce the same result as the input programs. \fi} \end{exercise} \section{Patch Instructions} \label{sec:patch-instructions} The remaining step in the compilation to x86 is to ensure that the instructions have at most one argument that is a memory access. % In the running example, the instruction \code{movq -8(\%rbp), -16(\%rbp)} is problematic. Recall from section~\ref{sec:patch-s0} that the fix is to first move \code{-8(\%rbp)} into \code{rax} and then move \code{rax} into \code{-16(\%rbp)}. % The moves from \code{-8(\%rbp)} to \code{-8(\%rbp)} are also problematic, but they can simply be deleted. In general, we recommend deleting all the trivial moves whose source and destination are the same location. 
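{\if\edition\pythonEd\pythonColor
Here is a sketch of one way to organize the per-instruction logic of this pass.
It is only a sketch, and it assumes the x86 AST classes \code{Instr},
\code{Reg}, and \code{Deref} (a register name paired with an offset) from the
support code; adjust the names and patterns if your definitions differ.
\begin{lstlisting}
def patch_instr(i):
    # A sketch, not the official solution: assumes Instr, Reg, and Deref
    # support structural pattern matching on their fields.
    match i:
        # Delete trivial moves whose source and destination coincide.
        case Instr('movq', [Reg(r1), Reg(r2)]) if r1 == r2:
            return []
        case Instr('movq', [Deref(r1, o1), Deref(r2, o2)]) \
          if r1 == r2 and o1 == o2:
            return []
        # An instruction may have at most one memory operand, so route
        # memory-to-memory operations through rax.
        case Instr(op, [Deref(r1, o1), Deref(r2, o2)]):
            return [Instr('movq', [Deref(r1, o1), Reg('rax')]),
                    Instr(op, [Reg('rax'), Deref(r2, o2)])]
        case _:
            return [i]

def patch_instrs(ss):
    # Patch each instruction and splice the resulting lists together.
    return [new_i for i in ss for new_i in patch_instr(i)]
\end{lstlisting}
Handling the trivial-move cases first means that a self-move between two
identical stack locations, such as \code{movq -8(\%rbp), -8(\%rbp)}, is
deleted rather than rewritten to go through \code{rax}.
\fi}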
% The following is the output of \code{patch\_instructions} on the running example. \begin{center} {\if\edition\racketEd \begin{minipage}{0.35\textwidth} \begin{lstlisting} movq $1, -8(%rbp) movq $42, %rcx movq -8(%rbp), -8(%rbp) addq $7, -8(%rbp) movq -8(%rbp), -16(%rbp) movq -8(%rbp), -8(%rbp) addq %rcx, -8(%rbp) movq -16(%rbp), %rcx negq %rcx movq -8(%rbp), %rax addq %rcx, %rax jmp conclusion \end{lstlisting} \end{minipage} $\Rightarrow\qquad$ \begin{minipage}{0.45\textwidth} \begin{lstlisting} movq $1, -8(%rbp) movq $42, %rcx addq $7, -8(%rbp) movq -8(%rbp), %rax movq %rax, -16(%rbp) addq %rcx, -8(%rbp) movq -16(%rbp), %rcx negq %rcx movq -8(%rbp), %rax addq %rcx, %rax jmp conclusion \end{lstlisting} \end{minipage} \fi} {\if\edition\pythonEd\pythonColor \begin{minipage}{0.35\textwidth} \begin{lstlisting} movq $1, -8(%rbp) movq $42, %rcx movq -8(%rbp), -8(%rbp) addq $7, -8(%rbp) movq -8(%rbp), -16(%rbp) movq -8(%rbp), -8(%rbp) addq %rcx, -8(%rbp) movq -16(%rbp), %rcx negq %rcx movq -8(%rbp), -8(%rbp) addq %rcx, -8(%rbp) movq -8(%rbp), %rdi callq print_int \end{lstlisting} \end{minipage} $\Rightarrow\qquad$ \begin{minipage}{0.45\textwidth} \begin{lstlisting} movq $1, -8(%rbp) movq $42, %rcx addq $7, -8(%rbp) movq -8(%rbp), %rax movq %rax, -16(%rbp) addq %rcx, -8(%rbp) movq -16(%rbp), %rcx negq %rcx addq %rcx, -8(%rbp) movq -8(%rbp), %rdi callq print_int \end{lstlisting} \end{minipage} \fi} \end{center} \begin{exercise}\normalfont\normalsize % Update the \code{patch\_instructions} compiler pass to delete trivial moves. % %Insert it after \code{allocate\_registers} in the list of \code{passes} %in the \code{run-tests.rkt} script. % Run the script to test the \code{patch\_instructions} pass. \end{exercise} \section{Prelude and Conclusion} \label{sec:print-x86-reg-alloc} \index{subject}{calling conventions} \index{subject}{prelude}\index{subject}{conclusion} Recall that this pass generates the prelude and conclusion instructions to satisfy the x86 calling conventions (section~\ref{sec:calling-conventions}). With the addition of the register allocator, the callee-saved registers used by the register allocator must be saved in the prelude and restored in the conclusion. In the \code{allocate\_registers} pass, % \racket{add an entry to the \itm{info} of \code{X86Program} named \code{used\_callee}} % \python{add a field named \code{used\_callee} to the \code{X86Program} AST node} % that stores the set of callee-saved registers that were assigned to variables. The \code{prelude\_and\_conclusion} pass can then access this information to decide which callee-saved registers need to be saved and restored. % When calculating the amount to adjust the \code{rsp} in the prelude, make sure to take into account the space used for saving the callee-saved registers. Also, remember that the frame needs to be a multiple of 16 bytes! We recommend using the following equation for the amount $A$ to subtract from the \code{rsp}. Let $S$ be the number of stack locations used by spilled variables\footnote{Sometimes two or more spilled variables are assigned to the same stack location, so $S$ can be less than the number of spilled variables.} and $C$ be the number of callee-saved registers that were allocated\index{subject}{allocate} to variables. The $\itm{align}$ function rounds a number up to the nearest 16 bytes. 
\[
\itm{A} = \itm{align}(8\itm{S} + 8\itm{C}) - 8\itm{C}
\]
The reason we subtract $8\itm{C}$ in this equation is that the prelude uses \code{pushq} to save each of the callee-saved registers, and \code{pushq} subtracts $8$ from the \code{rsp}.
\racket{An overview of all the passes involved in register allocation is shown in figure~\ref{fig:reg-alloc-passes}.}
{\if\edition\racketEd
\begin{figure}[tbp]
\begin{tcolorbox}[colback=white]
\begin{tikzpicture}[baseline=(current bounding box.center)]
\node (Lvar) at (0,2) {\large \LangVar{}};
\node (Lvar-2) at (3,2) {\large \LangVar{}};
\node (Lvar-3) at (7,2) {\large \LangVarANF{}};
\node (Cvar-1) at (0,0) {\large \LangCVar{}};
\node (x86-2) at (0,-2) {\large \LangXVar{}};
\node (x86-3) at (3,-2) {\large \LangXVar{}};
\node (x86-4) at (7,-2) {\large \LangXInt{}};
\node (x86-5) at (7,-4) {\large \LangXInt{}};
\node (x86-2-1) at (0,-4) {\large \LangXVar{}};
\node (x86-2-2) at (3,-4) {\large \LangXVar{}};
\path[->,bend left=15] (Lvar) edge [above] node {\ttfamily\footnotesize uniquify} (Lvar-2);
\path[->,bend left=15] (Lvar-2) edge [above] node {\ttfamily\footnotesize remove\_complex\_operands} (Lvar-3);
\path[->,bend left=15] (Lvar-3) edge [right] node {\ttfamily\footnotesize \ \ explicate\_control} (Cvar-1);
\path[->,bend right=15] (Cvar-1) edge [right] node {\ttfamily\footnotesize select\_instructions} (x86-2);
\path[->,bend left=15] (x86-2) edge [right] node {\ttfamily\footnotesize uncover\_live} (x86-2-1);
\path[->,bend right=15] (x86-2-1) edge [below] node {\ttfamily\footnotesize build\_interference} (x86-2-2);
\path[->,bend right=15] (x86-2-2) edge [right] node {\ttfamily\footnotesize allocate\_registers} (x86-3);
\path[->,bend left=15] (x86-3) edge [above] node {\ttfamily\footnotesize patch\_instructions} (x86-4);
\path[->,bend left=15] (x86-4) edge [right] node {\ttfamily\footnotesize prelude\_and\_conclusion} (x86-5);
\end{tikzpicture}
\end{tcolorbox}
\caption{Diagram of the passes for \LangVar{} with register allocation.}
\label{fig:reg-alloc-passes}
\end{figure}
\fi}
Figure~\ref{fig:running-example-x86} shows the x86 code generated for the running example (figure~\ref{fig:reg-eg}). To demonstrate both the use of registers and the stack, we limit the register allocator for this example to use just two registers: \code{rcx} (color $0$) and \code{rbx} (color $1$). In the prelude\index{subject}{prelude} of the \code{main} function, we push \code{rbx} onto the stack because it is a callee-saved register and it was assigned to a variable by the register allocator. We subtract \code{8} from the \code{rsp} at the end of the prelude to reserve space for the one spilled variable. After that subtraction, the \code{rsp} is aligned to 16 bytes. Moving on to the program proper, we see how the registers were allocated.
%
\racket{Variables \code{v}, \code{x}, and \code{z} were assigned to \code{rbx}, and variables \code{w} and \code{t} were assigned to \code{rcx}.}
%
\python{Variables \code{v}, \code{x}, \code{y}, and \code{tmp\_0} were assigned to \code{rcx}, and variables \code{w} and \code{tmp\_1} were assigned to \code{rbx}.}
%
Variable \racket{\code{y}}\python{\code{z}} was spilled to the stack location \code{-16(\%rbp)}. Recall that the prelude saved the callee-saved register \code{rbx} onto the stack. The spilled variables must be placed lower on the stack than the callee-saved registers that were saved in the prelude, so in this case \racket{\code{y}}\python{\code{z}} is placed at \code{-16(\%rbp)}.
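As a concrete check of the equation for $\itm{A}$, consider the code in figure~\ref{fig:running-example-x86}: one stack location is used for the spilled variable, so $S = 1$, and one callee-saved register (\code{rbx}) is assigned to variables, so $C = 1$. Thus
\[
\itm{A} = \itm{align}(8 \cdot 1 + 8 \cdot 1) - 8 \cdot 1 = 16 - 8 = 8 ,
\]
which matches the \code{subq \$8, \%rsp} in the prelude. The \code{pushq \%rbx} accounts for the remaining $8\itm{C} = 8$ bytes, so together they move the \code{rsp} down by $16$ bytes and the stack stays aligned.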
In the conclusion\index{subject}{conclusion}, we undo the work that was done in the prelude. We move the stack pointer up by \code{8} bytes (the room for spilled variables), then pop the old values of \code{rbx} and \code{rbp} (callee-saved registers), and finish with \code{retq} to return control to the operating system. \begin{figure}[tbp] \begin{minipage}{0.55\textwidth} \begin{tcolorbox}[colback=white] % var_test_28.rkt % (use-minimal-set-of-registers! #t) % 0 -> rcx % 1 -> rbx % % t 0 rcx % z 1 rbx % w 0 rcx % y 2 rbp -16 % v 1 rbx % x 1 rbx {\if\edition\racketEd \begin{lstlisting} start: movq $1, %rbx movq $42, %rcx addq $7, %rbx movq %rbx, -16(%rbp) addq %rcx, %rbx movq -16(%rbp), %rcx negq %rcx movq %rbx, %rax addq %rcx, %rax jmp conclusion .globl main main: pushq %rbp movq %rsp, %rbp pushq %rbx subq $8, %rsp jmp start conclusion: addq $8, %rsp popq %rbx popq %rbp retq \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor %{v: %rcx, x: %rcx, z: -16(%rbp), w: %rbx, tmp_1: %rbx, y: %rcx, tmp_0: %rcx} \begin{lstlisting} .globl main main: pushq %rbp movq %rsp, %rbp pushq %rbx subq $8, %rsp movq $1, %rcx movq $42, %rbx addq $7, %rcx movq %rcx, -16(%rbp) addq %rbx, -16(%rbp) negq %rcx movq -16(%rbp), %rbx addq %rcx, %rbx movq %rbx, %rdi callq print_int addq $8, %rsp popq %rbx popq %rbp retq \end{lstlisting} \fi} \end{tcolorbox} \end{minipage} \caption{The x86 output from the running example (figure~\ref{fig:reg-eg}), limiting allocation to just \code{rbx} and \code{rcx}.} \label{fig:running-example-x86} \end{figure} \begin{exercise}\normalfont\normalsize Update the \code{prelude\_and\_conclusion} pass as described in this section. % \racket{ In the \code{run-tests.rkt} script, add \code{prelude\_and\_conclusion} to the list of passes and the call to \code{compiler-tests}.} % Run the script to test the complete compiler for \LangVar{} that performs register allocation. \end{exercise} \section{Challenge: Move Biasing} \label{sec:move-biasing} \index{subject}{move biasing} This section describes an enhancement to the register allocator, called move biasing, for students who are looking for an extra challenge. {\if\edition\racketEd To motivate the need for move biasing we return to the running example, but this time we use all the general purpose registers. So, we have the following mapping of color numbers to registers. \[ \{ 0 \mapsto \key{\%rcx}, \; 1 \mapsto \key{\%rdx}, \; 2 \mapsto \key{\%rsi}, \ldots \} \] Using the same assignment of variables to color numbers that was produced by the register allocator described in the last section, we get the following program. \begin{center} \begin{minipage}{0.35\textwidth} \begin{lstlisting} movq $1, v movq $42, w movq v, x addq $7, x movq x, y movq x, z addq w, z movq y, t negq t movq z, %rax addq t, %rax jmp conclusion \end{lstlisting} \end{minipage} $\Rightarrow\qquad$ \begin{minipage}{0.45\textwidth} \begin{lstlisting} movq $1, %rdx movq $42, %rcx movq %rdx, %rdx addq $7, %rdx movq %rdx, %rsi movq %rdx, %rdx addq %rcx, %rdx movq %rsi, %rcx negq %rcx movq %rdx, %rax addq %rcx, %rax jmp conclusion \end{lstlisting} \end{minipage} \end{center} In this output code there are two \key{movq} instructions that can be removed because their source and target are the same. However, if we had put \key{t}, \key{v}, \key{x}, and \key{y} into the same register, we could instead remove three \key{movq} instructions. We can accomplish this by taking into account which variables appear in \key{movq} instructions with which other variables. 
\fi} {\if\edition\pythonEd\pythonColor % To motivate the need for move biasing we return to the running example and recall that in section~\ref{sec:patch-instructions} we were able to remove three trivial move instructions from the running example. However, we could remove another trivial move if we were able to allocate \code{y} and \code{tmp\_0} to the same register. \fi} We say that two variables $p$ and $q$ are \emph{move related}\index{subject}{move related} if they participate together in a \key{movq} instruction, that is, \key{movq} $p$\key{,} $q$ or \key{movq} $q$\key{,} $p$. % Recall that we color variables that are more saturated before coloring variables that are less saturated, and in the case of equally saturated variables, we choose randomly. Now we break such ties by giving preference to variables that have an available color that is the same as the color of a move-related variable. % Furthermore, when the register allocator chooses a color for a variable, it should prefer a color that has already been used for a move-related variable if one exists (and assuming that they do not interfere). This preference should not override the preference for registers over stack locations. So, this preference should be used as a tie breaker in choosing between two registers or in choosing between two stack locations. We recommend representing the move relationships in a graph, similarly to how we represented interference. The following is the \emph{move graph} for our running example. {\if\edition\racketEd \[ \begin{tikzpicture}[baseline=(current bounding box.center)] \node (rax) at (0,0) {$\ttm{rax}$}; \node (rsp) at (9,2) {$\ttm{rsp}$}; \node (t) at (0,2) {$\ttm{t}$}; \node (z) at (3,2) {$\ttm{z}$}; \node (x) at (6,2) {$\ttm{x}$}; \node (y) at (3,0) {$\ttm{y}$}; \node (w) at (6,0) {$\ttm{w}$}; \node (v) at (9,0) {$\ttm{v}$}; \draw (v) to (x); \draw (x) to (y); \draw (x) to (z); \draw (y) to (t); \end{tikzpicture} \] \fi} % {\if\edition\pythonEd\pythonColor \[ \begin{tikzpicture}[baseline=(current bounding box.center)] \node (t0) at (0,2) {$\ttm{tmp\_0}$}; \node (t1) at (0,0) {$\ttm{tmp\_1}$}; \node (z) at (3,2) {$\ttm{z}$}; \node (x) at (6,2) {$\ttm{x}$}; \node (y) at (3,0) {$\ttm{y}$}; \node (w) at (6,0) {$\ttm{w}$}; \node (v) at (9,0) {$\ttm{v}$}; \draw (y) to (t0); \draw (z) to (x); \draw (z) to (t1); \draw (x) to (y); \draw (x) to (v); \end{tikzpicture} \] \fi} {\if\edition\racketEd Now we replay the graph coloring, pausing to see the coloring of \code{y}. Recall the following configuration. The most saturated vertices were \code{w} and \code{y}. \[ \begin{tikzpicture}[baseline=(current bounding box.center)] \node (rax) at (0,0) {$\ttm{rax}:-1,\{0,-2\}$}; \node (rsp) at (9,2) {$\ttm{rsp}:-2,\{-1,0,1,2\}$}; \node (t1) at (0,2) {$\ttm{t}:0,\{1,-2\}$}; \node (z) at (3,2) {$\ttm{z}:1,\{0,-2\}$}; \node (x) at (6,2) {$\ttm{x}:-,\{-2\}$}; \node (y) at (3,0) {$\ttm{y}:-,\{1,-2\}$}; \node (w) at (6,0) {$\ttm{w}:-,\{1,-2\}$}; \node (v) at (9,0) {$\ttm{v}:-,\{-2\}$}; \draw (t1) to (rax); \draw (t1) to (z); \draw (z) to (y); \draw (z) to (w); \draw (x) to (w); \draw (y) to (w); \draw (v) to (w); \draw (v) to (rsp); \draw (w) to (rsp); \draw (x) to (rsp); \draw (y) to (rsp); \path[-.,bend left=15] (z) edge node {} (rsp); \path[-.,bend left=10] (t1) edge node {} (rsp); \draw (rax) to (rsp); \end{tikzpicture} \] % The last time, we chose to color \code{w} with $0$. This time, we see that \code{w} is not move-related to any vertex, but \code{y} is move-related to \code{t}. 
So we choose to color \code{y} with $0$, the same color as \code{t}.
\[
\begin{tikzpicture}[baseline=(current bounding box.center)]
\node (rax) at (0,0) {$\ttm{rax}:-1,\{0,-2\}$};
\node (rsp) at (9,2) {$\ttm{rsp}:-2,\{-1,0,1,2\}$};
\node (t1) at (0,2) {$\ttm{t}:0,\{1,-2\}$};
\node (z) at (3,2) {$\ttm{z}:1,\{0,-2\}$};
\node (x) at (6,2) {$\ttm{x}:-,\{-2\}$};
\node (y) at (3,0) {$\ttm{y}:0,\{1,-2\}$};
\node (w) at (6,0) {$\ttm{w}:-,\{0,1,-2\}$};
\node (v) at (9,0) {$\ttm{v}:-,\{-2\}$};
\draw (t1) to (rax); \draw (t1) to (z); \draw (z) to (y); \draw (z) to (w); \draw (x) to (w); \draw (y) to (w); \draw (v) to (w); \draw (v) to (rsp); \draw (w) to (rsp); \draw (x) to (rsp); \draw (y) to (rsp);
\path[-.,bend left=15] (z) edge node {} (rsp);
\path[-.,bend left=10] (t1) edge node {} (rsp);
\draw (rax) to (rsp);
\end{tikzpicture}
\]
Now \code{w} is the most saturated, so we color it with $2$.
\[
\begin{tikzpicture}[baseline=(current bounding box.center)]
\node (rax) at (0,0) {$\ttm{rax}:-1,\{0,-2\}$};
\node (rsp) at (9,2) {$\ttm{rsp}:-2,\{-1,0,1,2\}$};
\node (t1) at (0,2) {$\ttm{t}:0,\{1,-2\}$};
\node (z) at (3,2) {$\ttm{z}:1,\{0,2,-2\}$};
\node (x) at (6,2) {$\ttm{x}:-,\{2,-2\}$};
\node (y) at (3,0) {$\ttm{y}:0,\{1,2,-2\}$};
\node (w) at (6,0) {$\ttm{w}:2,\{0,1,-2\}$};
\node (v) at (9,0) {$\ttm{v}:-,\{2,-2\}$};
\draw (t1) to (rax); \draw (t1) to (z); \draw (z) to (y); \draw (z) to (w); \draw (x) to (w); \draw (y) to (w); \draw (v) to (w); \draw (v) to (rsp); \draw (w) to (rsp); \draw (x) to (rsp); \draw (y) to (rsp);
\path[-.,bend left=15] (z) edge node {} (rsp);
\path[-.,bend left=10] (t1) edge node {} (rsp);
\draw (rax) to (rsp);
\end{tikzpicture}
\]
At this point, vertices \code{x} and \code{v} are most saturated, but \code{x} is move related to \code{y} and \code{z}, so we color \code{x} with $0$ to match \code{y}. Finally, we color \code{v} with $0$.
\[
\begin{tikzpicture}[baseline=(current bounding box.center)]
\node (rax) at (0,0) {$\ttm{rax}:-1,\{0,-2\}$};
\node (rsp) at (9,2) {$\ttm{rsp}:-2,\{-1,0,1,2\}$};
\node (t) at (0,2) {$\ttm{t}:0,\{1,-2\}$};
\node (z) at (3,2) {$\ttm{z}:1,\{0,2,-2\}$};
\node (x) at (6,2) {$\ttm{x}:0,\{2,-2\}$};
\node (y) at (3,0) {$\ttm{y}:0,\{1,2,-2\}$};
\node (w) at (6,0) {$\ttm{w}:2,\{0,1,-2\}$};
\node (v) at (9,0) {$\ttm{v}:0,\{2,-2\}$};
\draw (t) to (rax); \draw (t) to (z); \draw (z) to (y); \draw (z) to (w); \draw (x) to (w); \draw (y) to (w); \draw (v) to (w); \draw (v) to (rsp); \draw (w) to (rsp); \draw (x) to (rsp); \draw (y) to (rsp);
\path[-.,bend left=15] (z) edge node {} (rsp);
\path[-.,bend left=10] (t) edge node {} (rsp);
\draw (rax) to (rsp);
\end{tikzpicture}
\]
\fi}
%
{\if\edition\pythonEd\pythonColor
Now we replay the graph coloring, pausing before the coloring of \code{w}. Recall the following configuration. The most saturated vertices were \code{tmp\_1}, \code{w}, and \code{y}.
\[
\begin{tikzpicture}[baseline=(current bounding box.center)]
\node (t0) at (0,2) {$\ttm{tmp\_0}: 0, \{1\}$};
\node (t1) at (0,0) {$\ttm{tmp\_1}: -, \{0\}$};
\node (z) at (3,2) {$\ttm{z}: 1, \{0\}$};
\node (x) at (6,2) {$\ttm{x}: -, \{\}$};
\node (y) at (3,0) {$\ttm{y}: -, \{1\}$};
\node (w) at (6,0) {$\ttm{w}: -, \{1\}$};
\node (v) at (9,0) {$\ttm{v}: -, \{\}$};
\draw (t0) to (t1); \draw (t0) to (z); \draw (z) to (y); \draw (z) to (w); \draw (x) to (w); \draw (y) to (w); \draw (v) to (w);
\end{tikzpicture}
\]
Previously, we arbitrarily chose to color \code{w} instead of \code{tmp\_1} or \code{y}.
Note, however, that \code{w} is not move related to any variables, whereas \code{y} and \code{tmp\_1} are move related to \code{tmp\_0} and \code{z}, respectively. If we instead choose \code{y} and color it with $0$, we can delete another move instruction.
\[
\begin{tikzpicture}[baseline=(current bounding box.center)]
\node (t0) at (0,2) {$\ttm{tmp\_0}: 0, \{1\}$};
\node (t1) at (0,0) {$\ttm{tmp\_1}: -, \{0\}$};
\node (z) at (3,2) {$\ttm{z}: 1, \{0\}$};
\node (x) at (6,2) {$\ttm{x}: -, \{\}$};
\node (y) at (3,0) {$\ttm{y}: 0, \{1\}$};
\node (w) at (6,0) {$\ttm{w}: -, \{0,1\}$};
\node (v) at (9,0) {$\ttm{v}: -, \{\}$};
\draw (t0) to (t1); \draw (t0) to (z); \draw (z) to (y); \draw (z) to (w); \draw (x) to (w); \draw (y) to (w); \draw (v) to (w);
\end{tikzpicture}
\]
Now \code{w} is the most saturated, so we color it with $2$.
\[
\begin{tikzpicture}[baseline=(current bounding box.center)]
\node (t0) at (0,2) {$\ttm{tmp\_0}: 0, \{1\}$};
\node (t1) at (0,0) {$\ttm{tmp\_1}: -, \{0\}$};
\node (z) at (3,2) {$\ttm{z}: 1, \{0,2\}$};
\node (x) at (6,2) {$\ttm{x}: -, \{2\}$};
\node (y) at (3,0) {$\ttm{y}: 0, \{1,2\}$};
\node (w) at (6,0) {$\ttm{w}: 2, \{0,1\}$};
\node (v) at (9,0) {$\ttm{v}: -, \{2\}$};
\draw (t0) to (t1); \draw (t0) to (z); \draw (z) to (y); \draw (z) to (w); \draw (x) to (w); \draw (y) to (w); \draw (v) to (w);
\end{tikzpicture}
\]
To finish the coloring, \code{x} and \code{v} get $0$ and \code{tmp\_1} gets $1$.
\[
\begin{tikzpicture}[baseline=(current bounding box.center)]
\node (t0) at (0,2) {$\ttm{tmp\_0}: 0, \{1\}$};
\node (t1) at (0,0) {$\ttm{tmp\_1}: 1, \{0\}$};
\node (z) at (3,2) {$\ttm{z}: 1, \{0,2\}$};
\node (x) at (6,2) {$\ttm{x}: 0, \{2\}$};
\node (y) at (3,0) {$\ttm{y}: 0, \{1,2\}$};
\node (w) at (6,0) {$\ttm{w}: 2, \{0,1\}$};
\node (v) at (9,0) {$\ttm{v}: 0, \{2\}$};
\draw (t0) to (t1); \draw (t0) to (z); \draw (z) to (y); \draw (z) to (w); \draw (x) to (w); \draw (y) to (w); \draw (v) to (w);
\end{tikzpicture}
\]
\fi}
So, we have the following assignment of variables to locations.
{\if\edition\racketEd
\begin{gather*}
\{ \ttm{v} \mapsto \key{\%rcx}, \,
\ttm{w} \mapsto \key{\%rsi}, \,
\ttm{x} \mapsto \key{\%rcx}, \,
\ttm{y} \mapsto \key{\%rcx}, \,
\ttm{z} \mapsto \key{\%rdx}, \,
\ttm{t} \mapsto \key{\%rcx} \}
\end{gather*}
\fi}
{\if\edition\pythonEd\pythonColor
\begin{gather*}
\{ \ttm{v} \mapsto \key{\%rcx}, \,
\ttm{w} \mapsto \key{-16(\%rbp)}, \,
\ttm{x} \mapsto \key{\%rcx}, \,
\ttm{y} \mapsto \key{\%rcx}, \\
\ttm{z} \mapsto \key{-8(\%rbp)}, \,
\ttm{tmp\_0} \mapsto \key{\%rcx}, \,
\ttm{tmp\_1} \mapsto \key{-8(\%rbp)} \}
\end{gather*}
\fi}
%
We apply this assignment to the running example shown next, on the left, to obtain the code in the middle. The \code{patch\_instructions} pass then deletes the trivial moves to obtain the code on the right.
{\if\edition\racketEd \begin{center} \begin{minipage}{0.2\textwidth} \begin{lstlisting} movq $1, v movq $42, w movq v, x addq $7, x movq x, y movq x, z addq w, z movq y, t negq t movq z, %rax addq t, %rax jmp conclusion \end{lstlisting} \end{minipage} $\Rightarrow\qquad$ \begin{minipage}{0.25\textwidth} \begin{lstlisting} movq $1, %rcx movq $42, %rsi movq %rcx, %rcx addq $7, %rcx movq %rcx, %rcx movq %rcx, %rdx addq %rsi, %rdx movq %rcx, %rcx negq %rcx movq %rdx, %rax addq %rcx, %rax jmp conclusion \end{lstlisting} \end{minipage} $\Rightarrow\qquad$ \begin{minipage}{0.23\textwidth} \begin{lstlisting} movq $1, %rcx movq $42, %rsi addq $7, %rcx movq %rcx, %rdx addq %rsi, %rdx negq %rcx movq %rdx, %rax addq %rcx, %rax jmp conclusion \end{lstlisting} \end{minipage} \end{center} \fi} {\if\edition\pythonEd\pythonColor \begin{center} \begin{minipage}{0.20\textwidth} \begin{lstlisting}[basicstyle=\ttfamily\footnotesize] movq $1, v movq $42, w movq v, x addq $7, x movq x, y movq x, z addq w, z movq y, tmp_0 negq tmp_0 movq z, tmp_1 addq tmp_0, tmp_1 movq tmp_1, %rdi callq _print_int \end{lstlisting} \end{minipage} ${\Rightarrow\qquad}$ \begin{minipage}{0.35\textwidth} \begin{lstlisting}[basicstyle=\ttfamily\footnotesize] movq $1, %rcx movq $42, -16(%rbp) movq %rcx, %rcx addq $7, %rcx movq %rcx, %rcx movq %rcx, -8(%rbp) addq -16(%rbp), -8(%rbp) movq %rcx, %rcx negq %rcx movq -8(%rbp), -8(%rbp) addq %rcx, -8(%rbp) movq -8(%rbp), %rdi callq _print_int \end{lstlisting} \end{minipage} ${\Rightarrow\qquad}$ \begin{minipage}{0.20\textwidth} \begin{lstlisting}[basicstyle=\ttfamily\footnotesize] movq $1, %rcx movq $42, -16(%rbp) addq $7, %rcx movq %rcx, -8(%rbp) movq -16(%rbp), %rax addq %rax, -8(%rbp) negq %rcx addq %rcx, -8(%rbp) movq -8(%rbp), %rdi callq print_int \end{lstlisting} \end{minipage} \end{center} \fi} \begin{exercise}\normalfont\normalsize Change your implementation of \code{allocate\_registers} to take move biasing into account. Create two new tests that include at least one opportunity for move biasing, and visually inspect the output x86 programs to make sure that your move biasing is working properly. Make sure that your compiler still passes all the tests. \end{exercise} %To do: another neat challenge would be to do % live range splitting~\citep{Cooper:1998ly}. \\ --Jeremy %% \subsection{Output of the Running Example} %% \label{sec:reg-alloc-output} % challenge: prioritize variables based on execution frequencies % and the number of uses of a variable % challenge: enhance the coloring algorithm using Chaitin's % approach of prioritizing high-degree variables % by removing low-degree variables (coloring them later) % from the interference graph \section{Further Reading} \label{sec:register-allocation-further-reading} Early register allocation algorithms were developed for Fortran compilers in the 1950s~\citep{Horwitz:1966aa,Backus:1978aa}. The use of graph coloring began in the late 1970s and early 1980s with the work of \citet{Chaitin:1981vl} on an optimizing compiler for PL/I. The algorithm is based on the following observation of \citet{Kempe:1879aa}. If a graph $G$ has a vertex $v$ with degree lower than $k$, then $G$ is $k$ colorable if the subgraph of $G$ with $v$ removed is also $k$ colorable. To see why, suppose that the subgraph is $k$ colorable. At worst, the neighbors of $v$ are assigned different colors, but because there are fewer than $k$ neighbors, there will be one or more colors left over to use for coloring $v$ in $G$. 
The algorithm of \citet{Chaitin:1981vl} removes a vertex $v$ of degree less than $k$ from the graph and recursively colors the rest of the graph. Upon returning from the recursion, it colors $v$ with one of the available colors and returns. \citet{Chaitin:1982vn} augments this algorithm to handle spilling as follows. If there are no vertices of degree lower than $k$, then pick a vertex at random, spill it, remove it from the graph, and proceed recursively to color the rest of the graph.
Prior to coloring, \citet{Chaitin:1981vl} merged variables that are move-related and that don't interfere with each other, in a process called \emph{coalescing}. Although coalescing decreases the number of moves, it can make the graph more difficult to color. \citet{Briggs:1994kx} proposed \emph{conservative coalescing}, in which two variables are merged only if they have fewer than $k$ neighbors of high degree. \citet{George:1996aa} observes that conservative coalescing is sometimes too conservative and makes it more aggressive by iterating the coalescing with the removal of low-degree vertices.
%
Attacking the problem from a different angle, \citet{Briggs:1994kx} also proposed \emph{biased coloring}, in which a variable is assigned the same color as another move-related variable if possible, as discussed in section~\ref{sec:move-biasing}.
%
The algorithm of \citet{Chaitin:1981vl} and its successors iteratively perform coalescing, graph coloring, and spill code insertion until all variables have been assigned a location. \citet{Briggs:1994kx} observes that the algorithm of \citet{Chaitin:1982vn} sometimes spills variables that do not need to be spilled: a high-degree variable can still be colorable if many of its neighbors are assigned the same color. \citet{Briggs:1994kx} proposed \emph{optimistic coloring}, in which a high-degree vertex is not immediately spilled. Instead, the decision is deferred until after the recursive call, when it is apparent whether there is an available color or not. We observe that this algorithm is equivalent to the smallest-last ordering algorithm~\citep{Matula:1972aa} if one takes the first $k$ colors to be registers and the rest to be stack locations.
%% biased coloring
Earlier editions of the compiler course at Indiana University \citep{Dybvig:2010aa} were based on the algorithm of \citet{Briggs:1994kx}. The smallest-last ordering algorithm is one of many \emph{greedy} coloring algorithms. A greedy coloring algorithm visits all the vertices in a particular order and assigns each one the first available color. An \emph{offline} greedy algorithm chooses the ordering up front, prior to assigning colors. The algorithm of \citet{Chaitin:1981vl} should be considered offline because the vertex ordering does not depend on the colors assigned. Other orderings are possible. For example, \citet{Chow:1984ys} ordered variables according to an estimate of runtime cost. An \emph{online} greedy coloring algorithm uses information about the current assignment of colors to influence the order in which the remaining vertices are colored. The saturation-based algorithm described in this chapter is one such algorithm. We choose to use saturation-based coloring because it is fun to introduce graph coloring via sudoku! A register allocator may choose to map each variable to just one location, as in \citet{Chaitin:1981vl}, or it may choose to map a variable to one or more locations.
The latter can be achieved by \emph{live range splitting}, where a variable is replaced by several variables that each handle part of its live range~\citep{Chow:1984ys,Briggs:1994kx,Cooper:1998ly}. %% 1950s, Sheldon Best, Fortran \cite{Backus:1978aa}, Belady's page %% replacement algorithm, bottom-up local %% \citep{Horwitz:1966aa} straight-line programs, single basic block, %% Cooper: top-down (priority bassed), bottom-up %% top-down %% order variables by priority (estimated cost) %% caveat: split variables into two groups: %% constrained (>k neighbors) and unconstrained (}\index{subject}{greaterthan@\texttt{>}}, and \key{>=}\index{subject}{greaterthaneq@\texttt{>=}} operations for comparing integers. \end{enumerate} \racket{We reorganize the abstract syntax for the primitive operations given in figure~\ref{fig:Lif-syntax}, using only one grammar rule for all of them. This means that the grammar no longer checks whether the arity of an operator matches the number of arguments. That responsibility is moved to the type checker for \LangIf{} (section~\ref{sec:type-check-Lif}).} \newcommand{\LifGrammarRacket}{ \begin{array}{lcl} \Type &::=& \key{Boolean} \\ \itm{bool} &::=& \TRUE \MID \FALSE \\ \itm{cmp} &::= & \key{eq?} \MID \key{<} \MID \key{<=} \MID \key{>} \MID \key{>=} \\ \Exp &::=& \itm{bool} \MID (\key{and}\;\Exp\;\Exp) \MID (\key{or}\;\Exp\;\Exp) \MID (\key{not}\;\Exp) \\ &\MID& (\itm{cmp}\;\Exp\;\Exp) \MID \CIF{\Exp}{\Exp}{\Exp} \end{array} } \newcommand{\LifASTRacket}{ \begin{array}{lcl} \Type &::=& \key{Boolean} \\ \itm{bool} &::=& \code{\#t} \MID \code{\#f} \\ \itm{cmp} &::= & \code{eq?} \MID \code{<} \MID \code{<=} \MID \code{>} \MID \code{>=} \\ \itm{op} &::= & \itm{cmp} \MID \code{and} \MID \code{or} \MID \code{not} \\ \Exp &::=& \BOOL{\itm{bool}} \MID \IF{\Exp}{\Exp}{\Exp} \end{array} } \newcommand{\LintOpAST}{ \begin{array}{rcl} \Type &::=& \key{Integer} \\ \itm{op} &::= & \code{read} \MID \code{+} \MID \code{-}\\ \Exp{} &::=& \INT{\Int} \MID \PRIM{\itm{op}}{\Exp\ldots} \end{array} } \newcommand{\LifGrammarPython}{ \begin{array}{rcl} \itm{cmp} &::= & \key{==} \MID \key{!=} \MID \key{<} \MID \key{<=} \MID \key{>} \MID \key{>=} \\ \Exp &::=& \TRUE \MID \FALSE \MID \CAND{\Exp}{\Exp} \MID \COR{\Exp}{\Exp} \MID \key{not}~\Exp \\ &\MID& \CCMP{\itm{cmp}}{\Exp}{\Exp} \MID \CIF{\Exp}{\Exp}{\Exp} \\ \Stmt &::=& \key{if}~ \Exp \key{:}~ \Stmt^{+} ~\key{else:}~ \Stmt^{+} \end{array} } \newcommand{\LifASTPython}{ \begin{array}{lcl} \itm{boolop} &::=& \code{And()} \MID \code{Or()} \\ \itm{unaryop} &::=& \code{Not()} \\ \itm{cmp} &::= & \code{Eq()} \MID \code{NotEq()} \MID \code{Lt()} \MID \code{LtE()} \MID \code{Gt()} \MID \code{GtE()} \\ \itm{bool} &::=& \code{True} \MID \code{False} \\ \Exp &::=& \BOOL{\itm{bool}} \MID \BOOLOP{\itm{boolop}}{\Exp}{\Exp}\\ &\MID& \CMP{\Exp}{\itm{cmp}}{\Exp} \MID \IF{\Exp}{\Exp}{\Exp} \\ \Stmt{} &::=& \IFSTMT{\Exp}{\Stmt^{+}}{\Stmt^{+}} \end{array} } \begin{figure}[tp] \centering \begin{tcolorbox}[colback=white] {\if\edition\racketEd \[ \begin{array}{l} \gray{\LintGrammarRacket{}} \\ \hline \gray{\LvarGrammarRacket{}} \\ \hline \LifGrammarRacket{} \\ \begin{array}{lcl} \LangIfM{} &::=& \Exp \end{array} \end{array} \] \fi} {\if\edition\pythonEd\pythonColor \[ \begin{array}{l} \gray{\LintGrammarPython} \\ \hline \gray{\LvarGrammarPython} \\ \hline \LifGrammarPython \\ \begin{array}{rcl} \LangIfM{} &::=& \Stmt^{*} \end{array} \end{array} \] \fi} \end{tcolorbox} \caption{The concrete syntax of \LangIf{}, extending \LangVar{} 
(figure~\ref{fig:Lvar-concrete-syntax}) with Booleans and conditionals.} \label{fig:Lif-concrete-syntax} \end{figure} \begin{figure}[tp] %\begin{minipage}{0.66\textwidth} \begin{tcolorbox}[colback=white] \centering {\if\edition\racketEd \[ \begin{array}{l} \gray{\LintOpAST} \\ \hline \gray{\LvarASTRacket{}} \\ \hline \LifASTRacket{} \\ \begin{array}{lcl} \LangIfM{} &::=& \PROGRAM{\code{'()}}{\Exp} \end{array} \end{array} \] \fi} {\if\edition\pythonEd\pythonColor \[ \begin{array}{l} \gray{\LintASTPython} \\ \hline \gray{\LvarASTPython} \\ \hline \LifASTPython \\ \begin{array}{lcl} \LangIfM{} &::=& \PROGRAM{\code{'()}}{\Stmt^{*}} \end{array} \end{array} \] \fi} \end{tcolorbox} %\end{minipage} \python{\index{subject}{not equal@\NOTEQNAME{}}} \python{ \index{subject}{BoolOp@\texttt{BoolOp}} \index{subject}{Compare@\texttt{Compare}} \index{subject}{Lt@\texttt{Lt}} \index{subject}{LtE@\texttt{LtE}} \index{subject}{Gt@\texttt{Gt}} \index{subject}{GtE@\texttt{GtE}} } \caption{The abstract syntax of \LangIf{}.} \label{fig:Lif-syntax} \end{figure} Figure~\ref{fig:interp-Lif} shows the definition of the interpreter for \LangIf{}, which inherits from the interpreter for \LangVar{} (figure~\ref{fig:interp-Lvar}). The literals \TRUE{} and \FALSE{} evaluate to the corresponding Boolean values. The conditional expression $\CIF{e_1}{e_2}{\itm{e_3}}$ evaluates expression $e_1$ and then either evaluates $e_2$ or $e_3$, depending on whether $e_1$ produced \TRUE{} or \FALSE{}. The logical operations \code{and}, \code{or}, and \code{not} behave according to propositional logic. In addition, the \code{and} and \code{or} operations perform \emph{short-circuit evaluation}. % That is, given the expression $\CAND{e_1}{e_2}$, the expression $e_2$ is not evaluated if $e_1$ evaluates to \FALSE{}. % Similarly, given $\COR{e_1}{e_2}$, the expression $e_2$ is not evaluated if $e_1$ evaluates to \TRUE{}. \racket{With the increase in the number of primitive operations, the interpreter would become repetitive without some care. We refactor the case for \code{Prim}, moving the code that differs with each operation into the \code{interp\_op} method shown in figure~\ref{fig:interp-op-Lif}. We handle the \code{and} and \code{or} operations separately because of their short-circuiting behavior.} \begin{figure}[tbp] \begin{tcolorbox}[colback=white] {\if\edition\racketEd \begin{lstlisting} (define interp-Lif-class (class interp-Lvar-class (super-new) (define/public (interp_op op) ...) 
(define/override ((interp_exp env) e) (define recur (interp_exp env)) (match e [(Bool b) b] [(If cnd thn els) (match (recur cnd) [#t (recur thn)] [#f (recur els)])] [(Prim 'and (list e1 e2)) (match (recur e1) [#t (match (recur e2) [#t #t] [#f #f])] [#f #f])] [(Prim 'or (list e1 e2)) (define v1 (recur e1)) (match v1 [#t #t] [#f (match (recur e2) [#t #t] [#f #f])])] [(Prim op args) (apply (interp_op op) (for/list ([e args]) (recur e)))] [else ((super interp_exp env) e)])) )) (define (interp_Lif p) (send (new interp-Lif-class) interp_program p)) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} class InterpLif(InterpLvar): def interp_exp(self, e, env): match e: case IfExp(test, body, orelse): if self.interp_exp(test, env): return self.interp_exp(body, env) else: return self.interp_exp(orelse, env) case UnaryOp(Not(), v): return not self.interp_exp(v, env) case BoolOp(And(), values): if self.interp_exp(values[0], env): return self.interp_exp(values[1], env) else: return False case BoolOp(Or(), values): if self.interp_exp(values[0], env): return True else: return self.interp_exp(values[1], env) case Compare(left, [cmp], [right]): l = self.interp_exp(left, env) r = self.interp_exp(right, env) return self.interp_cmp(cmp)(l, r) case _: return super().interp_exp(e, env) def interp_stmt(self, s, env, cont): match s: case If(test, body, orelse): match self.interp_exp(test, env): case True: return self.interp_stmts(body + cont, env) case False: return self.interp_stmts(orelse + cont, env) case _: return super().interp_stmt(s, env, cont) ... \end{lstlisting} \fi} \end{tcolorbox} \caption{Interpreter for the \LangIf{} language. \racket{(See figure~\ref{fig:interp-op-Lif} for \code{interp-op}.)} \python{(See figure~\ref{fig:interp-cmp-Lif} for \code{interp\_cmp}.)}} \label{fig:interp-Lif} \end{figure} {\if\edition\racketEd \begin{figure}[tbp] \begin{tcolorbox}[colback=white] \begin{lstlisting} (define/public (interp_op op) (match op ['+ fx+] ['- fx-] ['read read-fixnum] ['not (lambda (v) (match v [#t #f] [#f #t]))] ['eq? (lambda (v1 v2) (cond [(or (and (fixnum? v1) (fixnum? v2)) (and (boolean? v1) (boolean? v2)) (and (vector? v1) (vector? v2))) (eq? v1 v2)]))] ['< (lambda (v1 v2) (cond [(and (fixnum? v1) (fixnum? v2)) (< v1 v2)]))] ['<= (lambda (v1 v2) (cond [(and (fixnum? v1) (fixnum? v2)) (<= v1 v2)]))] ['> (lambda (v1 v2) (cond [(and (fixnum? v1) (fixnum? v2)) (> v1 v2)]))] ['>= (lambda (v1 v2) (cond [(and (fixnum? v1) (fixnum? v2)) (>= v1 v2)]))] [else (error 'interp_op "unknown operator")])) \end{lstlisting} \end{tcolorbox} \caption{Interpreter for the primitive operators in the \LangIf{} language.} \label{fig:interp-op-Lif} \end{figure} \fi} {\if\edition\pythonEd\pythonColor \begin{figure} \begin{tcolorbox}[colback=white] \begin{lstlisting} class InterpLif(InterpLvar): ... def interp_cmp(self, cmp): match cmp: case Lt(): return lambda x, y: x < y case LtE(): return lambda x, y: x <= y case Gt(): return lambda x, y: x > y case GtE(): return lambda x, y: x >= y case Eq(): return lambda x, y: x == y case NotEq(): return lambda x, y: x != y \end{lstlisting} \end{tcolorbox} \caption{Interpreter for the comparison operators in the \LangIf{} language.} \label{fig:interp-cmp-Lif} \end{figure} \fi} \section{Type Checking \LangIf{} Programs} \label{sec:type-check-Lif} It is helpful to think about type checking\index{subject}{type checking} in two complementary ways. A type checker predicts the type of value that will be produced by each expression in the program. 
For \LangIf{}, we have just two types, \INTTY{} and \BOOLTY{}. So, a type checker should predict that {\if\edition\racketEd \begin{lstlisting} (+ 10 (- (+ 12 20))) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} 10 + -(12 + 20) \end{lstlisting} \fi} \noindent produces a value of type \INTTY{}, whereas {\if\edition\racketEd \begin{lstlisting} (and (not #f) #t) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} (not False) and True \end{lstlisting} \fi} \noindent produces a value of type \BOOLTY{}. A second way to think about type checking is that it enforces a set of rules about which operators can be applied to which kinds of values. For example, our type checker for \LangIf{} signals an error for the following expression: % {\if\edition\racketEd \begin{lstlisting} (not (+ 10 (- (+ 12 20)))) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} not (10 + -(12 + 20)) \end{lstlisting} \fi} \noindent The subexpression \racket{\code{(+ 10 (- (+ 12 20)))}} \python{\code{(10 + -(12 + 20))}} has type \INTTY{}, but the type checker enforces the rule that the argument of \code{not} must be an expression of type \BOOLTY{}. We implement type checking using classes and methods because they provide the open recursion needed to reuse code as we extend the type checker in subsequent chapters, analogous to the use of classes and methods for the interpreters (section~\ref{sec:extensible-interp}). We separate the type checker for the \LangVar{} subset into its own class, shown in figure~\ref{fig:type-check-Lvar}. The type checker for \LangIf{} is shown in figure~\ref{fig:type-check-Lif}, and it inherits from the type checker for \LangVar{}. These type checkers are in the files \racket{\code{type-check-Lvar.rkt}}\python{\code{type\_check\_Lvar.py}} and \racket{\code{type-check-Lif.rkt}}\python{\code{type\_check\_Lif.py}} of the support code. % Each type checker is a structurally recursive function over the AST. Given an input expression \code{e}, the type checker either signals an error or returns \racket{an expression and} its type. % \racket{It returns an expression because there are situations in which we want to change or update the expression.} Next we discuss the \code{type\_check\_exp} function of \LangVar{} shown in figure~\ref{fig:type-check-Lvar}. The type of an integer constant is \INTTY{}. To handle variables, the type checker uses the environment \code{env} to map variables to types. % \racket{Consider the case for \key{let}. We type check the initializing expression to obtain its type \key{T} and then associate type \code{T} with the variable \code{x} in the environment used to type check the body of the \key{let}. Thus, when the type checker encounters a use of variable \code{x}, it can find its type in the environment.} % \python{Consider the case for assignment. We type check the initializing expression to obtain its type \key{t}. If the variable \code{lhs.id} is already in the environment because there was a prior assignment, we check that this initializer has the same type as the prior one. If this is the first assignment to the variable, we associate type \code{t} with the variable \code{lhs.id} in the environment. 
Thus, when the type checker encounters a use of variable \code{x}, it can find its type in the environment.} % \racket{Regarding primitive operators, we recursively analyze the arguments and then invoke \code{type\_check\_op} to check whether the argument types are allowed.} % \python{Regarding addition, subtraction, and negation, we recursively analyze the arguments, check that they have type \INTTY{}, and return \INTTY{}.} \racket{Several auxiliary methods are used in the type checker. The method \code{operator-types} defines a dictionary that maps the operator names to their parameter and return types. The \code{type-equal?} method determines whether two types are equal, which for now simply dispatches to \code{equal?} (deep equality). The \code{check-type-equal?} method triggers an error if the two types are not equal. The \code{type-check-op} method looks up the operator in the \code{operator-types} dictionary and then checks whether the argument types are equal to the parameter types. The result is the return type of the operator.} % \python{The auxiliary method \code{check\_type\_equal} triggers an error if the two types are not equal.} \begin{figure}[tbp] \begin{tcolorbox}[colback=white] {\if\edition\racketEd \begin{lstlisting}[basicstyle=\ttfamily\footnotesize] (define type-check-Lvar-class (class object% (super-new) (define/public (operator-types) '((+ . ((Integer Integer) . Integer)) (- . ((Integer Integer) . Integer)) (read . (() . Integer)))) (define/public (type-equal? t1 t2) (equal? t1 t2)) (define/public (check-type-equal? t1 t2 e) (unless (type-equal? t1 t2) (error 'type-check "~a != ~a\nin ~v" t1 t2 e))) (define/public (type-check-op op arg-types e) (match (dict-ref (operator-types) op) [`(,param-types . ,return-type) (for ([at arg-types] [pt param-types]) (check-type-equal? at pt e)) return-type] [else (error 'type-check-op "unrecognized ~a" op)])) (define/public (type-check-exp env) (lambda (e) (match e [(Int n) (values (Int n) 'Integer)] [(Var x) (values (Var x) (dict-ref env x))] [(Let x e body) (define-values (e^ Te) ((type-check-exp env) e)) (define-values (b Tb) ((type-check-exp (dict-set env x Te)) body)) (values (Let x e^ b) Tb)] [(Prim op es) (define-values (new-es ts) (for/lists (exprs types) ([e es]) ((type-check-exp env) e))) (values (Prim op new-es) (type-check-op op ts e))] [else (error 'type-check-exp "couldn't match" e)]))) (define/public (type-check-program e) (match e [(Program info body) (define-values (body^ Tb) ((type-check-exp '()) body)) (check-type-equal? 
Tb 'Integer body) (Program info body^)] [else (error 'type-check-Lvar "couldn't match ~a" e)])) )) (define (type-check-Lvar p) (send (new type-check-Lvar-class) type-check-program p)) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting}[escapechar=`] class TypeCheckLvar: def check_type_equal(self, t1, t2, e): if t1 != t2: msg = 'error: ' + repr(t1) + ' != ' + repr(t2) + ' in ' + repr(e) raise Exception(msg) def type_check_exp(self, e, env): match e: case BinOp(left, (Add() | Sub()), right): l = self.type_check_exp(left, env) check_type_equal(l, int, left) r = self.type_check_exp(right, env) check_type_equal(r, int, right) return int case UnaryOp(USub(), v): t = self.type_check_exp(v, env) check_type_equal(t, int, v) return int case Name(id): return env[id] case Constant(value) if isinstance(value, int): return int case Call(Name('input_int'), []): return int def type_check_stmts(self, ss, env): if len(ss) == 0: return match ss[0]: case Assign([lhs], value): t = self.type_check_exp(value, env) if lhs.id in env: check_type_equal(env[lhs.id], t, value) else: env[lhs.id] = t return self.type_check_stmts(ss[1:], env) case Expr(Call(Name('print'), [arg])): t = self.type_check_exp(arg, env) check_type_equal(t, int, arg) return self.type_check_stmts(ss[1:], env) case Expr(value): self.type_check_exp(value, env) return self.type_check_stmts(ss[1:], env) def type_check_P(self, p): match p: case Module(body): self.type_check_stmts(body, {}) \end{lstlisting} \fi} \end{tcolorbox} \caption{Type checker for the \LangVar{} language.} \label{fig:type-check-Lvar} \end{figure} \begin{figure}[tbp] \begin{tcolorbox}[colback=white] {\if\edition\racketEd \begin{lstlisting}[basicstyle=\ttfamily\footnotesize] (define type-check-Lif-class (class type-check-Lvar-class (super-new) (inherit check-type-equal?) (define/override (operator-types) (append '((and . ((Boolean Boolean) . Boolean)) (or . ((Boolean Boolean) . Boolean)) (< . ((Integer Integer) . Boolean)) (<= . ((Integer Integer) . Boolean)) (> . ((Integer Integer) . Boolean)) (>= . ((Integer Integer) . Boolean)) (not . ((Boolean) . Boolean))) (super operator-types))) (define/override (type-check-exp env) (lambda (e) (match e [(Bool b) (values (Bool b) 'Boolean)] [(Prim 'eq? (list e1 e2)) (define-values (e1^ T1) ((type-check-exp env) e1)) (define-values (e2^ T2) ((type-check-exp env) e2)) (check-type-equal? T1 T2 e) (values (Prim 'eq? (list e1^ e2^)) 'Boolean)] [(If cnd thn els) (define-values (cnd^ Tc) ((type-check-exp env) cnd)) (define-values (thn^ Tt) ((type-check-exp env) thn)) (define-values (els^ Te) ((type-check-exp env) els)) (check-type-equal? Tc 'Boolean e) (check-type-equal? 
Tt Te e) (values (If cnd^ thn^ els^) Te)] [else ((super type-check-exp env) e)]))) )) (define (type-check-Lif p) (send (new type-check-Lif-class) type-check-program p)) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting}[basicstyle=\ttfamily\footnotesize] class TypeCheckLif(TypeCheckLvar): def type_check_exp(self, e, env): match e: case Constant(value) if isinstance(value, bool): return bool case BinOp(left, Sub(), right): l = self.type_check_exp(left, env); check_type_equal(l, int, left) r = self.type_check_exp(right, env); check_type_equal(r, int, right) return int case UnaryOp(Not(), v): t = self.type_check_exp(v, env); check_type_equal(t, bool, v) return bool case BoolOp(op, values): left = values[0] ; right = values[1] l = self.type_check_exp(left, env); check_type_equal(l, bool, left) r = self.type_check_exp(right, env); check_type_equal(r, bool, right) return bool case Compare(left, [cmp], [right]) if isinstance(cmp, Eq) \ or isinstance(cmp, NotEq): l = self.type_check_exp(left, env) r = self.type_check_exp(right, env) check_type_equal(l, r, e) return bool case Compare(left, [cmp], [right]): l = self.type_check_exp(left, env); check_type_equal(l, int, left) r = self.type_check_exp(right, env); check_type_equal(r, int, right) return bool case IfExp(test, body, orelse): t = self.type_check_exp(test, env); check_type_equal(bool, t, test) b = self.type_check_exp(body, env) o = self.type_check_exp(orelse, env) check_type_equal(b, o, e) return b case _: return super().type_check_exp(e, env) def type_check_stmts(self, ss, env): if len(ss) == 0: return match ss[0]: case If(test, body, orelse): t = self.type_check_exp(test, env); check_type_equal(bool, t, test) b = self.type_check_stmts(body, env) o = self.type_check_stmts(orelse, env) check_type_equal(b, o, ss[0]) return self.type_check_stmts(ss[1:], env) case _: return super().type_check_stmts(ss, env) \end{lstlisting} \fi} \end{tcolorbox} \caption{Type checker for the \LangIf{} language.} \label{fig:type-check-Lif} \end{figure} The definition of the type checker for \LangIf{} is shown in figure~\ref{fig:type-check-Lif}. % The type of a Boolean constant is \BOOLTY{}. % \racket{The \code{operator-types} function adds dictionary entries for the new operators.} % \python{The logical \code{not} operator requires its argument to be a \BOOLTY{} and produces a \BOOLTY{}. Similarly for the logical \code{and} and logical \code{or} operators.} % The equality operator requires the two arguments to have the same type, and therefore we handle it separately from the other operators. % \python{The other comparisons (less-than, etc.) require their arguments to be of type \INTTY{}, and they produce a \BOOLTY{}.} % The condition of an \code{if} must be of \BOOLTY{} type, and the two branches must have the same type. \begin{exercise}\normalfont\normalsize Create ten new test programs in \LangIf{}. Half the programs should have a type error. For those programs, create an empty file with the same base name and with file extension \code{.tyerr}. For example, if the test \racket{\code{cond\_test\_14.rkt}}\python{\code{cond\_test\_14.py}} is expected to error, then create an empty file named \code{cond\_test\_14.tyerr}. % \racket{This indicates to \code{interp-tests} and \code{compiler-tests} that a type error is expected. } % The other half of the test programs should not have type errors. 
% \racket{In the \code{run-tests.rkt} script, change the second argument of \code{interp-tests} and \code{compiler-tests} to \code{type-check-Lif}, which causes the type checker to run prior to the compiler passes. Temporarily change the \code{passes} to an empty list and run the script, thereby checking that the new test programs either type check or do not, as intended.} % Run the test script to check that these test programs type check as expected. \end{exercise} \clearpage \section{The \LangCIf{} Intermediate Language} \label{sec:Cif} {\if\edition\racketEd % The \LangCIf{} language builds on \LangCVar{} by adding logical and comparison operators to the \Exp{} nonterminal and the literals \TRUE{} and \FALSE{} to the \Arg{} nonterminal. Regarding control flow, \LangCIf{} adds \key{goto} and \code{if} statements to the \Tail{} nonterminal. The condition of an \code{if} statement is a comparison operation and the branches are \code{goto} statements, making it straightforward to compile \code{if} statements to x86. The \key{CProgram} construct contains an alist mapping labels to $\Tail$ expressions. A \code{goto} statement transfers control to the $\Tail$ expression corresponding to its label. % Figure~\ref{fig:c1-concrete-syntax} defines the concrete syntax of the \LangCIf{} intermediate language, and figure~\ref{fig:c1-syntax} defines its abstract syntax. % \fi} % {\if\edition\pythonEd\pythonColor % The output of \key{explicate\_control} is a language similar to the $C$ language~\citep{Kernighan:1988nx} in that it has labels and \code{goto} statements, so we name it \LangCIf{}. % The \LangCIf{} language supports the same operators as \LangIf{}, but the arguments of operators are restricted to atomic expressions. The \LangCIf{} language does not include \code{if} expressions, but it does include a restricted form of \code{if} statement. The condition must be a comparison, and the two branches may contain only \code{goto} statements. These restrictions make it easier to translate \code{if} statements to x86. The \LangCIf{} language also adds a \code{return} statement to finish the program with a specified value. % The \key{CProgram} construct contains a dictionary mapping labels to lists of statements that end with a \emph{tail} statement, which is either a \code{return} statement, a \code{goto}, or an \code{if} statement. % A \code{goto} transfers control to the sequence of statements associated with its label. % Figure~\ref{fig:c1-concrete-syntax} shows the concrete syntax for \LangCIf{}, and figure~\ref{fig:c1-syntax} shows its abstract syntax. 
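To make the shape of this language concrete, here is a small illustrative example (a sketch, not the output of our compiler on this chapter's running example). It reads an integer and branches on a comparison; each labeled block is a sequence of statements that ends in a \code{return}, \code{goto}, or \code{if} statement.
\begin{lstlisting}
start:
    x = input_int()
    if x < 5:
        goto block_1
    else:
        goto block_2
block_1:
    print(42)
    return 0
block_2:
    print(777)
    return 0
\end{lstlisting}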
% \fi} % \newcommand{\CifGrammarRacket}{ \begin{array}{lcl} \Atm &::=& \itm{bool} \\ \itm{cmp} &::= & \code{eq?} \MID \code{<} \MID \code{<=} \MID \code{>} \MID \code{>=} \\ \Exp &::=& \CNOT{\Atm} \MID \LP \itm{cmp}~\Atm~\Atm\RP \\ \Tail &::= & \key{goto}~\itm{label}\key{;}\\ &\MID& \key{if}~\LP \itm{cmp}~\Atm~\Atm \RP~ \key{goto}~\itm{label}\key{;} ~\key{else}~\key{goto}~\itm{label}\key{;} \end{array} } \newcommand{\CifASTRacket}{ \begin{array}{lcl} \Atm &::=& \BOOL{\itm{bool}} \\ \itm{cmp} &::= & \code{eq?} \MID \code{<} \MID \code{<=} \MID \code{>} \MID \code{>=} \\ \Exp &::= & \UNIOP{\key{'not}}{\Atm} \MID \BINOP{\key{'}\itm{cmp}}{\Atm}{\Atm} \\ \Tail &::= & \GOTO{\itm{label}} \\ &\MID& \IFSTMT{\BINOP{\itm{cmp}}{\Atm}{\Atm}}{\GOTO{\itm{label}}}{\GOTO{\itm{label}}} \end{array} } \newcommand{\CifGrammarPython}{ \begin{array}{lcl} \Atm &::=& \Int \MID \Var \MID \itm{bool} \\ \Exp &::= & \Atm \MID \CREAD{} \MID \CUNIOP{\key{-}}{\Atm} \MID \CBINOP{\key{+}}{\Atm}{\Atm} \MID \CBINOP{\key{-}}{\Atm}{\Atm} \MID \CCMP{\itm{cmp}}{\Atm}{\Atm} \\ \Stmt &::=& \CPRINT{\Atm} \MID \Exp \MID \CASSIGN{\Var}{\Exp} \\ \Tail &::=& \CRETURN{\Exp} \MID \CGOTO{\itm{label}} \\ &\MID& \CIFSTMT{\CCMP{\itm{cmp}}{\Atm}{\Atm}}{\CGOTO{\itm{label}}}{\CGOTO{\itm{label}}} \end{array} } \newcommand{\CifASTPython}{ \begin{array}{lcl} \Atm &::=& \INT{\Int} \MID \VAR{\Var} \MID \BOOL{\itm{bool}} \\ \Exp &::= & \Atm \MID \READ{} \MID \UNIOP{\key{USub()}}{\Atm} \\ &\MID& \BINOP{\Atm}{\key{Sub()}}{\Atm} \MID \BINOP{\Atm}{\key{Add()}}{\Atm} \\ &\MID& \CMP{\Atm}{\itm{cmp}}{\Atm} \\ \Stmt &::=& \PRINT{\Atm} \MID \EXPR{\Exp} \\ &\MID& \ASSIGN{\VAR{\Var}}{\Exp} \\ \Tail &::= & \RETURN{\Exp} \MID \GOTO{\itm{label}} \\ &\MID& \IFSTMT{\CMP{\Atm}{\itm{cmp}}{\Atm}}{\LS\GOTO{\itm{label}}\RS}{\LS\GOTO{\itm{label}}\RS} \end{array} } \begin{figure}[tbp] \begin{tcolorbox}[colback=white] \small {\if\edition\racketEd \[ \begin{array}{l} \gray{\CvarGrammarRacket} \\ \hline \CifGrammarRacket \\ \begin{array}{lcl} \LangCIfM{} & ::= & (\itm{label}\key{:}~ \Tail)\ldots \end{array} \end{array} \] \fi} {\if\edition\pythonEd\pythonColor \[ \begin{array}{l} \CifGrammarPython \\ \begin{array}{lcl} \LangCIfM{} & ::= & (\itm{label}\code{:}~\Stmt^{*}\;\Tail) \ldots \end{array} \end{array} \] \fi} \end{tcolorbox} \caption{The concrete syntax of the \LangCIf{} intermediate language% \racket{, an extension of \LangCVar{} (figure~\ref{fig:c0-concrete-syntax})}.} \label{fig:c1-concrete-syntax} \end{figure} \begin{figure}[tp] \begin{tcolorbox}[colback=white] \small {\if\edition\racketEd \[ \begin{array}{l} \gray{\CvarASTRacket} \\ \hline \CifASTRacket \\ \begin{array}{lcl} \LangCIfM{} & ::= & \CPROGRAM{\itm{info}}{\LP\LP\itm{label}\,\key{.}\,\Tail\RP\ldots\RP} \end{array} \end{array} \] \fi} {\if\edition\pythonEd\pythonColor \[ \begin{array}{l} \CifASTPython \\ \begin{array}{lcl} \LangCIfM{} & ::= & \CPROGRAM{\itm{info}}{\LC\itm{label}\key{:}\,\LS\Stmt,\ldots,\Tail\RS, \ldots \RC} \end{array} \end{array} \] \fi} \end{tcolorbox} \racket{ \index{subject}{IfStmt@\IFSTMTNAME{}} } \index{subject}{Goto@\texttt{Goto}} \index{subject}{Return@\texttt{Return}} \caption{The abstract syntax of \LangCIf{}\racket{, an extension of \LangCVar{} (figure~\ref{fig:c0-syntax})}.} \label{fig:c1-syntax} \end{figure} \section{The \LangXIf{} Language} \label{sec:x86-if} \index{subject}{x86} To implement Booleans, the new logical operations, the comparison operations, and the \key{if} expression\python{ and statement}, we delve further into the x86 language. 
Figures~\ref{fig:x86-1-concrete} and \ref{fig:x86-1} present the definitions of the concrete and abstract syntax for the \LangXIf{} subset of x86, which includes instructions for logical operations, comparisons, and \racket{conditional} jumps. % \python{The abstract syntax for an \LangXIf{} program contains a dictionary mapping labels to sequences of instructions, each of which we refer to as a \emph{basic block}\index{subject}{basic block}.} As x86 does not provide direct support for Booleans, we take the usual approach of encoding Booleans as integers, with \code{True} as $1$ and \code{False} as $0$. Furthermore, x86 does not provide an instruction that directly implements logical negation (\code{not} in \LangIf{} and \LangCIf{}). However, the \code{xorq} instruction can be used to encode \code{not}. The \key{xorq} instruction takes two arguments, performs a pairwise exclusive-or ($\mathrm{XOR}$) operation on each bit of its arguments, and writes the results into its second argument. Recall the following truth table for exclusive-or: \begin{center} \begin{tabular}{l|cc} & 0 & 1 \\ \hline 0 & 0 & 1 \\ 1 & 1 & 0 \end{tabular} \end{center} For example, applying $\mathrm{XOR}$ to each bit of the binary numbers $0011$ and $0101$ yields $0110$. Notice that in the row of the table for the bit $1$, the result is the opposite of the second bit. Thus, the \code{not} operation can be implemented by \code{xorq} with $1$ as the first argument, as follows, where $\Arg$ is the translation of $\Atm$ to x86: \[ \CASSIGN{\Var}{\CUNIOP{\key{not}}{\Atm}} \qquad\Rightarrow\qquad \begin{array}{l} \key{movq}~ \Arg\key{,} \Var\\ \key{xorq}~ \key{\$1,} \Var \end{array} \] \newcommand{\GrammarXIf}{ \begin{array}{lcl} \itm{bytereg} &::=& \key{ah} \MID \key{al} \MID \key{bh} \MID \key{bl} \MID \key{ch} \MID \key{cl} \MID \key{dh} \MID \key{dl} \\ \Arg &::=& \key{\%}\itm{bytereg}\\ \itm{cc} & ::= & \key{e} \MID \key{ne} \MID \key{l} \MID \key{le} \MID \key{g} \MID \key{ge} \\ \Instr &::=& \key{xorq}~\Arg\key{,}~\Arg \MID \key{cmpq}~\Arg\key{,}~\Arg \MID \key{set}cc~\Arg \MID \key{movzbq}~\Arg\key{,}~\Arg \\ &\MID& \key{j}cc~\itm{label} \\ \end{array} } \begin{figure}[tp] \begin{tcolorbox}[colback=white] \[ \begin{array}{l} \gray{\GrammarXInt} \\ \hline \GrammarXIf \\ \begin{array}{lcl} \LangXIfM{} &::= & \key{.globl main} \\ & & \key{main:} \; \Instr\ldots \end{array} \end{array} \] \end{tcolorbox} \caption{The concrete syntax of \LangXIf{} (extends \LangXInt{} of figure~\ref{fig:x86-int-concrete}).} \label{fig:x86-1-concrete} \end{figure} \newcommand{\ASTXIfRacket}{ \begin{array}{lcl} \itm{bytereg} &::=& \key{ah} \MID \key{al} \MID \key{bh} \MID \key{bl} \MID \key{ch} \MID \key{cl} \MID \key{dh} \MID \key{dl} \\ \Arg &::=& \BYTEREG{\itm{bytereg}} \\ \itm{cc} & ::= & \key{e} \MID \key{l} \MID \key{le} \MID \key{g} \MID \key{ge} \\ \Instr &::=& \BININSTR{\code{xorq}}{\Arg}{\Arg} \MID \BININSTR{\code{cmpq}}{\Arg}{\Arg}\\ &\MID& \BININSTR{\code{set}}{\itm{cc}}{\Arg} \MID \BININSTR{\code{movzbq}}{\Arg}{\Arg}\\ &\MID& \JMPIF{\itm{cc}}{\itm{label}} \end{array} } \newcommand{\ASTXIfPython}{ \begin{array}{lcl} \itm{bytereg} &::=& \skey{ah} \MID \skey{al} \MID \skey{bh} \MID \skey{bl} \MID \skey{ch} \MID \skey{cl} \MID \skey{dh} \MID \skey{dl} \\ \Arg &::=& \gray{\IMM{\Int} \MID \REG{\Reg} \MID \DEREF{\Reg}{\Int}} \MID \BYTEREG{\itm{bytereg}} \\ \itm{cc} & ::= & \skey{e} \MID \skey{ne} \MID \skey{l} \MID \skey{le} \MID \skey{g} \MID \skey{ge} \\ \Instr &::=& \python{\JMP{\itm{label}}}\\ &\MID& 
\BININSTR{\scode{xorq}}{\Arg}{\Arg} \MID \BININSTR{\scode{cmpq}}{\Arg}{\Arg}\\ &\MID& \UNIINSTR{\scode{set}\code{+}\itm{cc}}{\Arg} \MID \BININSTR{\scode{movzbq}}{\Arg}{\Arg}\\ &\MID& \JMPIF{\itm{cc}}{\itm{label}} \end{array} } \begin{figure}[tp] \begin{tcolorbox}[colback=white] \small {\if\edition\racketEd \[\arraycolsep=3pt \begin{array}{l} \gray{\ASTXIntRacket} \\ \hline \ASTXIfRacket \\ \begin{array}{lcl} \LangXIfM{} &::= & \XPROGRAM{\itm{info}}{\LP\LP\itm{label} \,\key{.}\, \Block \RP\ldots\RP} \end{array} \end{array} \] \fi} % {\if\edition\pythonEd\pythonColor \[ \begin{array}{l} \gray{\ASTXIntPython} \\ \hline \ASTXIfPython \\ \begin{array}{lcl} \LangXIfM{} &::= & \XPROGRAM{\itm{info}}{\LC\itm{label} \,\key{:}\, \Block \key{,} \ldots \RC } \end{array} \end{array} \] \fi} \end{tcolorbox} \caption{The abstract syntax of \LangXIf{} (extends \LangXInt{} shown in figure~\ref{fig:x86-int-ast}).} \label{fig:x86-1} \end{figure} Next we consider the x86 instructions that are relevant for compiling the comparison operations. The \key{cmpq} instruction compares its two arguments to determine whether one argument is less than, equal to, or greater than the other argument. The \key{cmpq} instruction is unusual regarding the order of its arguments and where the result is placed. The argument order is backward: if you want to test whether $x < y$, then write \code{cmpq} $y$\code{,} $x$. The result of \key{cmpq} is placed in the special EFLAGS register. This register cannot be accessed directly, but it can be queried by a number of instructions, including the \key{set} instruction. The instruction $\key{set}cc~d$ puts a \key{1} or \key{0} into the destination $d$, depending on whether the contents of the EFLAGS register matches the condition code \itm{cc}: \key{e} for equal, \key{l} for less, \key{le} for less-or-equal, \key{g} for greater, \key{ge} for greater-or-equal. The \key{set} instruction has a quirk in that its destination argument must be a single-byte register, such as \code{al} (\code{l} for lower bits) or \code{ah} (\code{h} for higher bits), which are part of the \code{rax} register. Thankfully, the \key{movzbq} instruction can be used to move from a single-byte register to a normal 64-bit register. The abstract syntax for the \code{set} instruction differs from the concrete syntax in that it separates the instruction name from the condition code. \python{The x86 instructions for jumping are relevant to the compilation of \key{if} expressions.} % \python{The instruction $\key{jmp}\,\itm{label}$ updates the program counter to the address of the instruction after the specified label.} % \racket{The x86 instruction for conditional jump is relevant to the compilation of \key{if} expressions.} % The instruction $\key{j}\itm{cc}~\itm{label}$ updates the program counter to point to the instruction after \itm{label}, depending on whether the result in the EFLAGS register matches the condition code \itm{cc}; otherwise, the jump instruction falls through to the next instruction. Like the abstract syntax for \code{set}, the abstract syntax for conditional jump separates the instruction name from the condition code. For example, \JMPIF{\QUOTE{\code{le}}}{\QUOTE{\code{foo}}} corresponds to \code{jle foo}. Because the conditional jump instruction relies on the EFLAGS register, it is common for it to be immediately preceded by a \key{cmpq} instruction to set the EFLAGS register. 
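{\if\edition\pythonEd\pythonColor
For example, using the abstract-syntax constructors of figure~\ref{fig:x86-1}, the instruction sequence that stores into \code{rcx} whether \code{rcx} equals $1$ can be written as follows (a sketch intended only to show the correspondence between the two syntaxes; the concrete syntax appears in the comment):
\begin{lstlisting}
# cmpq $1, %rcx ; sete %al ; movzbq %al, %rcx
[Instr('cmpq', [Immediate(1), Reg('rcx')]),
 Instr('sete', [ByteReg('al')]),
 Instr('movzbq', [ByteReg('al'), Reg('rcx')])]
\end{lstlisting}
\fi}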
\section{Shrink the \LangIf{} Language} \label{sec:shrink-Lif} The \code{shrink} pass translates some of the language features into other features, thereby reducing the kinds of expressions in the language. For example, the short-circuiting nature of the \code{and} and \code{or} logical operators can be expressed using \code{if} as follows. \begin{align*} \CAND{e_1}{e_2} & \quad \Rightarrow \quad \CIF{e_1}{e_2}{\FALSE{}}\\ \COR{e_1}{e_2} & \quad \Rightarrow \quad \CIF{e_1}{\TRUE{}}{e_2} \end{align*} By performing these translations in the front end of the compiler, subsequent passes of the compiler can be shorter. On the other hand, translations sometimes reduce the efficiency of the generated code by increasing the number of instructions. For example, expressing subtraction in terms of addition and negation \[ \CBINOP{\key{-}}{e_1}{e_2} \quad \Rightarrow \quad \CBINOP{\key{+}}{e_1}{ \CUNIOP{\key{-}}{e_2} } \] produces code with two x86 instructions (\code{negq} and \code{addq}) instead of just one (\code{subq}). Thus, we do not recommend translating subtraction into addition and negation. \begin{exercise}\normalfont\normalsize % Implement the pass \code{shrink} to remove \key{and} and \key{or} from the language by translating them to \code{if} expressions in \LangIf{}. % Create four test programs that involve these operators. % {\if\edition\racketEd In the \code{run-tests.rkt} script, add the following entry for \code{shrink} to the list of passes (it should be the only pass at this point). \begin{lstlisting} (list "shrink" shrink interp_Lif type-check-Lif) \end{lstlisting} This instructs \code{interp-tests} to run the interpreter \code{interp\_Lif} and the type checker \code{type-check-Lif} on the output of \code{shrink}. \fi} % Run the script to test your compiler on all the test programs. \end{exercise} {\if\edition\racketEd \section{Uniquify Variables} \label{sec:uniquify-Lif} Add cases to \code{uniquify\_exp} to handle Boolean constants and \code{if} expressions. \begin{exercise}\normalfont\normalsize Update the \code{uniquify\_exp} for \LangIf{} and add the following entry to the list of \code{passes} in the \code{run-tests.rkt} script: \begin{lstlisting} (list "uniquify" uniquify interp_Lif type_check_Lif) \end{lstlisting} Run the script to test your compiler. \end{exercise} \fi} \section{Remove Complex Operands} \label{sec:remove-complex-opera-Lif} The output language of \code{remove\_complex\_operands} is \LangIfANF{} (figure~\ref{fig:Lif-anf-syntax}), the monadic normal form of \LangIf{}. A Boolean constant is an atomic expression, but the \code{if} expression is not. All three subexpressions of an \code{if} are allowed to be complex expressions, but the operands of the \code{not} operator and comparison operators must be atomic. % \python{We add a new language form, the \code{Begin} expression, to aid in the translation of \code{if} expressions. When we recursively process the two branches of the \code{if}, we generate temporary variables and their initializing expressions. However, these expressions may contain side effects and should be executed only when the condition of the \code{if} is true (for the ``then'' branch) or false (for the ``else'' branch). The \code{Begin} provides a way to initialize the temporary variables within the two branches of the \code{if} expression. 
In general, the $\BEGIN{ss}{e}$ form executes the statements $ss$ and then returns the result of expression $e$.} Add cases to the \code{rco\_exp} and \code{rco\_atom} functions for the new features in \LangIf{}. In recursively processing subexpressions, recall that you should invoke \code{rco\_atom} when the output needs to be an \Atm{} (as specified in the grammar for \LangIfANF{}) and invoke \code{rco\_exp} when the output should be \Exp{}. Regarding \code{if}, it is particularly important \emph{not} to replace its condition with a temporary variable, because that would interfere with the generation of high-quality output in the upcoming \code{explicate\_control} pass. \newcommand{\LifMonadASTRacket}{ \begin{array}{rcl} \Atm &::=& \BOOL{\itm{bool}}\\ \Exp &::=& \UNIOP{\key{not}}{\Atm} \MID \BINOP{\itm{cmp}}{\Atm}{\Atm} \MID \IF{\Exp}{\Exp}{\Exp} \end{array} } \newcommand{\LifMonadASTPython}{ \begin{array}{rcl} \Atm &::=& \BOOL{\itm{bool}}\\ \Exp &::=& \CMP{\Atm}{\itm{cmp}}{\Atm} \MID \IF{\Exp}{\Exp}{\Exp} \\ &\MID& \BEGIN{\Stmt^{*}}{\Exp}\\ \Stmt{} &::=& \IFSTMT{\Exp}{\Stmt^{*}}{\Stmt^{*}} \end{array} } \begin{figure}[tp] \centering \begin{tcolorbox}[colback=white] {\if\edition\racketEd \[ \begin{array}{l} \gray{\LvarMonadASTRacket} \\ \hline \LifMonadASTRacket \\ \begin{array}{rcl} \LangIfANF &::=& \PROGRAM{\code{()}}{\Exp} \end{array} \end{array} \] \fi} {\if\edition\pythonEd\pythonColor \[ \begin{array}{l} \gray{\LvarMonadASTPython} \\ \hline \LifMonadASTPython \\ \begin{array}{rcl} \LangIfANF &::=& \PROGRAM{\code{()}}{\Stmt^{*}} \end{array} \end{array} \] \fi} \end{tcolorbox} \python{\index{subject}{Begin@\texttt{Begin}}} \caption{\LangIfANF{} is \LangIf{} in monadic normal form (extends \LangVarANF in figure~\ref{fig:Lvar-anf-syntax}).} \label{fig:Lif-anf-syntax} \end{figure} \begin{exercise}\normalfont\normalsize % Add cases for Boolean constants and \code{if} to the \code{rco\_atom} and \code{rco\_exp} functions. % Create three new \LangIf{} programs that exercise the interesting code in this pass. % {\if\edition\racketEd In the \code{run-tests.rkt} script, add the following entry to the list of \code{passes} and then run the script to test your compiler. \begin{lstlisting} (list "remove-complex" remove_complex_operands interp-Lif type-check-Lif) \end{lstlisting} \fi} \end{exercise} \section{Explicate Control} \label{sec:explicate-control-Lif} \racket{Recall that the purpose of \code{explicate\_control} is to make the order of evaluation explicit in the syntax of the program. With the addition of \key{if}, this becomes more interesting.} % The \code{explicate\_control} pass translates from \LangIf{} to \LangCIf{}. % The main challenge to overcome is that the condition of an \key{if} can be an arbitrary expression in \LangIf{}, whereas in \LangCIf{} the condition must be a comparison. As a motivating example, consider the following program that has an \key{if} expression nested in the condition of another \key{if}:% \python{\footnote{Programmers rarely write nested \code{if} expressions, but they do write nested expressions involving logical \code{and}, which, as we have seen, translates to \code{if}.}} % cond_test_41.rkt, if_lt_eq.py \begin{center} \begin{minipage}{0.96\textwidth} {\if\edition\racketEd \begin{lstlisting} (let ([x (read)]) (let ([y (read)]) (if (if (< x 1) (eq? x 0) (eq? 
x 2)) (+ y 2) (+ y 10)))) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} x = input_int() y = input_int() print(y + 2 if (x == 0 if x < 1 else x == 2) else y + 10) \end{lstlisting} \fi} \end{minipage} \end{center} % The naive way to compile \key{if} and the comparison operations would be to handle each of them in isolation, regardless of their context. Each comparison would be translated into a \key{cmpq} instruction followed by several instructions to move the result from the EFLAGS register into a general purpose register or stack location. Each \key{if} would be translated into a \key{cmpq} instruction followed by a conditional jump. The generated code for the inner \key{if} in this example would be as follows: \begin{center} \begin{minipage}{0.96\textwidth} \begin{lstlisting} cmpq $1, x setl %al movzbq %al, tmp cmpq $1, tmp je then_branch_1 jmp else_branch_1 \end{lstlisting} \end{minipage} \end{center} Notice that the three instructions starting with \code{setl} are redundant; the conditional jump could come immediately after the first \code{cmpq}. Our goal is to compile \key{if} expressions so that the relevant comparison instruction appears directly before the conditional jump. For example, we want to generate the following code for the inner \code{if}: \begin{center} \begin{minipage}{0.96\textwidth} \begin{lstlisting} cmpq $1, x jl then_branch_1 jmp else_branch_1 \end{lstlisting} \end{minipage} \end{center} One way to achieve this goal is to reorganize the code at the level of \LangIf{}, pushing the outer \key{if} inside the inner one, yielding the following code: \begin{center} \begin{minipage}{0.96\textwidth} {\if\edition\racketEd \begin{lstlisting} (let ([x (read)]) (let ([y (read)]) (if (< x 1) (if (eq? x 0) (+ y 2) (+ y 10)) (if (eq? x 2) (+ y 2) (+ y 10))))) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} x = input_int() y = input_int() print(((y + 2) if x == 0 else (y + 10)) \ if (x < 1) \ else ((y + 2) if (x == 2) else (y + 10))) \end{lstlisting} \fi} \end{minipage} \end{center} Unfortunately, this approach duplicates the two branches from the outer \code{if}, and a compiler must never duplicate code! After all, the two branches could be very large expressions. How can we apply this transformation without duplicating code? In other words, how can two different parts of a program refer to one piece of code? % The answer is that we must move away from abstract syntax \emph{trees} and instead use \emph{graphs}. % At the level of x86 assembly, this is straightforward because we can label the code for each branch and insert jumps in all the places that need to execute the branch. In this way, jump instructions are edges in the graph and the basic blocks are the nodes. % Likewise, our language \LangCIf{} provides the ability to label a sequence of statements and to jump to a label via \code{goto}. As a preview of what \code{explicate\_control} will do, figure~\ref{fig:explicate-control-s1-38} shows the output of \code{explicate\_control} on this example. Note how the condition of every \code{if} is a comparison operation and that we have not duplicated any code but instead have used labels and \code{goto} to enable sharing of code. \begin{figure}[tbp] \begin{tcolorbox}[colback=white] {\if\edition\racketEd \begin{tabular}{lll} \begin{minipage}{0.4\textwidth} % cond_test_41.rkt \begin{lstlisting} (let ([x (read)]) (let ([y (read)]) (if (if (< x 1) (eq? x 0) (eq? 
x 2)) (+ y 2) (+ y 10)))) \end{lstlisting} \end{minipage} & $\Rightarrow$ & \begin{minipage}{0.55\textwidth} \begin{lstlisting} start: x = (read); y = (read); if (< x 1) goto block_4; else goto block_5; block_4: if (eq? x 0) goto block_2; else goto block_3; block_5: if (eq? x 2) goto block_2; else goto block_3; block_2: return (+ y 2); block_3: return (+ y 10); \end{lstlisting} \end{minipage} \end{tabular} \fi} {\if\edition\pythonEd\pythonColor \begin{tabular}{lll} \begin{minipage}{0.4\textwidth} % cond_test_41.rkt \begin{lstlisting} x = input_int() y = input_int() print(y + 2 \ if (x == 0 \ if x < 1 \ else x == 2) \ else y + 10) \end{lstlisting} \end{minipage} & $\Rightarrow$ & \begin{minipage}{0.55\textwidth} \begin{lstlisting} start: x = input_int() y = input_int() if x < 1: goto block_8 else: goto block_9 block_8: if x == 0: goto block_4 else: goto block_5 block_9: if x == 2: goto block_6 else: goto block_7 block_4: goto block_2 block_5: goto block_3 block_6: goto block_2 block_7: goto block_3 block_2: tmp_0 = y + 2 goto block_1 block_3: tmp_0 = y + 10 goto block_1 block_1: print(tmp_0) return 0 \end{lstlisting} \end{minipage} \end{tabular} \fi} \end{tcolorbox} \caption{Translation from \LangIf{} to \LangCIf{} via the \code{explicate\_control}.} \label{fig:explicate-control-s1-38} \end{figure} {\if\edition\racketEd % Recall that in section~\ref{sec:explicate-control-Lvar} we implement \code{explicate\_control} for \LangVar{} using two recursive functions, \code{explicate\_tail} and \code{explicate\_assign}. The former function translates expressions in tail position, whereas the latter function translates expressions on the right-hand side of a \key{let}. With the addition of \key{if} expression to \LangIf{} we have a new kind of position to deal with: the predicate position of the \key{if}. We need another function, \code{explicate\_pred}, that decides how to compile an \key{if} by analyzing its condition. So, \code{explicate\_pred} takes an \LangIf{} expression and two \LangCIf{} tails for the \emph{then} branch and \emph{else} branch and outputs a tail. In the following paragraphs we discuss specific cases in the \code{explicate\_tail}, \code{explicate\_assign}, and \code{explicate\_pred} functions. % \fi} % {\if\edition\pythonEd\pythonColor % We recommend implementing \code{explicate\_control} using the following four auxiliary functions. \begin{description} \item[\code{explicate\_effect}] generates code for expressions as statements, so their result is ignored and only their side effects matter. \item[\code{explicate\_assign}] generates code for expressions on the right-hand side of an assignment. \item[\code{explicate\_pred}] generates code for an \code{if} expression or statement by analyzing the condition expression. \item[\code{explicate\_stmt}] generates code for statements. \end{description} These four functions should build the dictionary of basic blocks. The following auxiliary function can be used to create a new basic block from a list of statements. It returns a \code{goto} statement that jumps to the new basic block. \begin{center} \begin{minipage}{\textwidth} \begin{lstlisting} def create_block(stmts, basic_blocks): label = label_name(generate_name('block')) basic_blocks[label] = stmts return [Goto(label)] \end{lstlisting} \end{minipage} \end{center} Figure~\ref{fig:explicate-control-Lif} provides a skeleton for the \code{explicate\_control} pass. 
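%
As an example of how \code{create\_block} is used, suppose the continuation is the single statement \code{return 0}. The following call adds a new block to \code{basic\_blocks} and returns a \code{goto} that can appear in several places without duplicating the \code{return} statement (the label shown in the comment is illustrative, because labels are generated):
\begin{lstlisting}
basic_blocks = {}
goto_cont = create_block([Return(Constant(0))], basic_blocks)
# goto_cont is now [Goto('block_5')] for some fresh label, and
# basic_blocks maps that label to [Return(Constant(0))]
\end{lstlisting}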
The \code{explicate\_effect} function has three parameters: (1) the expression to be compiled; (2) the already-compiled code for this expression's \emph{continuation}, that is, the list of statements that should execute after this expression; and (3) the dictionary of generated basic blocks. The \code{explicate\_effect} function returns a list of \LangCIf{} statements and it may add to the dictionary of basic blocks. % Let's consider a few of the cases for the expression to be compiled. If the expression to be compiled is a constant, then it can be discarded because it has no side effects. If it's a \CREAD{}, then it has a side effect and should be preserved. So the expression should be translated into a statement using the \code{Expr} AST class. If the expression to be compiled is an \code{if} expression, we translate the two branches using \code{explicate\_effect} and then translate the condition expression using \code{explicate\_pred}, which generates code for the entire \code{if}. The \code{explicate\_assign} function has four parameters: (1) the right-hand side of the assignment, (2) the left-hand side of the assignment (the variable), (3) the continuation, and (4) the dictionary of basic blocks. The \code{explicate\_assign} function returns a list of \LangCIf{} statements, and it may add to the dictionary of basic blocks. When the right-hand side is an \code{if} expression, there is some work to do. In particular, the two branches should be translated using \code{explicate\_assign}, and the condition expression should be translated using \code{explicate\_pred}. Otherwise we can simply generate an assignment statement, with the given left- and right-hand sides, concatenated with its continuation. \begin{figure}[tbp] \begin{tcolorbox}[colback=white] \begin{lstlisting}[basicstyle=\ttfamily\footnotesize] def explicate_effect(e, cont, basic_blocks): match e: case IfExp(test, body, orelse): ... case Call(func, args): ... case Begin(body, result): ... case _: ... def explicate_assign(rhs, lhs, cont, basic_blocks): match rhs: case IfExp(test, body, orelse): ... case Begin(body, result): ... case _: return [Assign([lhs], rhs)] + cont def explicate_pred(cnd, thn, els, basic_blocks): match cnd: case Compare(left, [op], [right]): goto_thn = create_block(thn, basic_blocks) goto_els = create_block(els, basic_blocks) return [If(cnd, goto_thn, goto_els)] case Constant(True): return thn; case Constant(False): return els; case UnaryOp(Not(), operand): ... case IfExp(test, body, orelse): ... case Begin(body, result): ... case _: return [If(Compare(cnd, [Eq()], [Constant(False)]), create_block(els, basic_blocks), create_block(thn, basic_blocks))] def explicate_stmt(s, cont, basic_blocks): match s: case Assign([lhs], rhs): return explicate_assign(rhs, lhs, cont, basic_blocks) case Expr(value): return explicate_effect(value, cont, basic_blocks) case If(test, body, orelse): ... def explicate_control(p): match p: case Module(body): new_body = [Return(Constant(0))] basic_blocks = {} for s in reversed(body): new_body = explicate_stmt(s, new_body, basic_blocks) basic_blocks[label_name('start')] = new_body return CProgram(basic_blocks) \end{lstlisting} \end{tcolorbox} \caption{Skeleton for the \code{explicate\_control} pass.} \label{fig:explicate-control-Lif} \end{figure} \fi} {\if\edition\racketEd \subsection{Explicate Tail and Assign} The \code{explicate\_tail} and \code{explicate\_assign} functions need additional cases for Boolean constants and \key{if}. 
The cases for \code{if} should recursively compile the two branches using either \code{explicate\_tail} or \code{explicate\_assign}, respectively. The cases should then invoke \code{explicate\_pred} on the condition expression, passing in the generated code for the two branches. For example, consider the following program with an \code{if} in tail position. % cond_test_6.rkt \begin{lstlisting} (let ([x (read)]) (if (eq? x 0) 42 777)) \end{lstlisting} The two branches are recursively compiled to return statements. We then delegate to \code{explicate\_pred}, passing the condition \code{(eq? x 0)} and the two return statements. We return to this example shortly when we discuss \code{explicate\_pred}. Next let us consider a program with an \code{if} on the right-hand side of a \code{let}. \begin{lstlisting} (let ([y (read)]) (let ([x (if (eq? y 0) 40 777)]) (+ x 2))) \end{lstlisting} Note that the body of the inner \code{let} will have already been compiled to \code{return (+ x 2);} and passed as the \code{cont} parameter of \code{explicate\_assign}. We'll need to use \code{cont} to recursively process both branches of the \code{if}, and we do not want to duplicate code, so we generate the following block using an auxiliary function named \code{create\_block}, discussed in the next section. \begin{lstlisting} block_6: return (+ x 2) \end{lstlisting} We then use \code{goto block\_6;} as the \code{cont} argument for compiling the branches. So the two branches compile to \begin{center} \begin{minipage}{0.2\textwidth} \begin{lstlisting} x = 40; goto block_6; \end{lstlisting} \end{minipage} \hspace{0.5in} and \hspace{0.5in} \begin{minipage}{0.2\textwidth} \begin{lstlisting} x = 777; goto block_6; \end{lstlisting} \end{minipage} \end{center} Finally, we delegate to \code{explicate\_pred}, passing the condition \code{(eq? y 0)} and the previously presented code for the branches. \subsection{Create Block} We recommend implementing the \code{create\_block} auxiliary function as follows, using a global variable \code{basic-blocks} to store a dictionary that maps labels to $\Tail$ expressions. The main idea is that \code{create\_block} generates a new label and then associates the given \code{tail} with the new label in the \code{basic-blocks} dictionary. The result of \code{create\_block} is a \code{Goto} to the new label. However, if the given \code{tail} is already a \code{Goto}, then there is no need to generate a new label and entry in \code{basic-blocks}; we can simply return that \code{Goto}. % \begin{lstlisting} (define (create_block tail) (match tail [(Goto label) (Goto label)] [else (let ([label (gensym 'block)]) (set! basic-blocks (cons (cons label tail) basic-blocks)) (Goto label))])) \end{lstlisting} \fi} {\if\edition\racketEd \subsection{Explicate Predicate} The skeleton for the \code{explicate\_pred} function is given in figure~\ref{fig:explicate-pred}. It takes three parameters: (1) \code{cnd}, the condition expression of the \code{if}; (2) \code{thn}, the code generated by explicate for the \emph{then} branch; and (3) \code{els}, the code generated by explicate for the \emph{else} branch. The \code{explicate\_pred} function should match on \code{cnd} with a case for every kind of expression that can have type \BOOLTY{}. \begin{figure}[tbp] \begin{tcolorbox}[colback=white] \begin{lstlisting} (define (explicate_pred cnd thn els) (match cnd [(Var x) ___] [(Let x rhs body) ___] [(Prim 'not (list e)) ___] [(Prim op es) #:when (or (eq? op 'eq?) (eq? 
op '<)) (IfStmt (Prim op es) (create_block thn) (create_block els))] [(Bool b) (if b thn els)] [(If cnd^ thn^ els^) ___] [else (error "explicate_pred unhandled case" cnd)])) \end{lstlisting} \end{tcolorbox} \caption{Skeleton for the \key{explicate\_pred} auxiliary function.} \label{fig:explicate-pred} \end{figure} \fi} % {\if\edition\pythonEd\pythonColor The \code{explicate\_pred} function has four parameters: (1) the condition expression, (2) the generated statements for the ``then'' branch, (3) the generated statements for the ``else'' branch, and (4) the dictionary of basic blocks. The \code{explicate\_pred} function returns a list of \LangCIf{} statements, and it may add to the dictionary of basic blocks. \fi} Consider the case for comparison operators. We translate the comparison to an \code{if} statement whose branches are \code{goto} statements created by applying \code{create\_block} to the code generated for the \code{thn} and \code{els} branches. Let us illustrate this translation by returning to the program with an \code{if} expression in tail position, shown next. We invoke \code{explicate\_pred} on its condition \racket{\code{(eq? x 0)}}\python{\code{x == 0}}. % {\if\edition\racketEd \begin{lstlisting} (let ([x (read)]) (if (eq? x 0) 42 777)) \end{lstlisting} \fi} % {\if\edition\pythonEd\pythonColor \begin{lstlisting} x = input_int() 42 if x == 0 else 777 \end{lstlisting} \fi} % \noindent The two branches \code{42} and \code{777} were already compiled to \code{return} statements, from which we now create the following blocks: % \begin{center} \begin{minipage}{\textwidth} \begin{lstlisting} block_1: return 42; block_2: return 777; \end{lstlisting} \end{minipage} \end{center} % After that, \code{explicate\_pred} compiles the comparison \racket{\code{(eq? x 0)}} \python{\code{x == 0}} to the following \code{if} statement: % {\if\edition\racketEd \begin{center} \begin{minipage}{\textwidth} \begin{lstlisting} if (eq? x 0) goto block_1; else goto block_2; \end{lstlisting} \end{minipage} \end{center} \fi} {\if\edition\pythonEd\pythonColor \begin{center} \begin{minipage}{\textwidth} \begin{lstlisting} if x == 0: goto block_1 else: goto block_2 \end{lstlisting} \end{minipage} \end{center} \fi} Next consider the case for Boolean constants. We perform a kind of partial evaluation\index{subject}{partialevaluation@partial evaluation} and output either the \code{thn} or \code{els} branch, depending on whether the constant is \TRUE{} or \FALSE{}. Let us illustrate this with the following program: {\if\edition\racketEd \begin{lstlisting} (if #t 42 777) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} 42 if True else 777 \end{lstlisting} \fi} % \noindent Again, the two branches \code{42} and \code{777} were compiled to \code{return} statements, so \code{explicate\_pred} compiles the constant \racket{\code{\#t}} \python{\code{True}} to the code for the \emph{then} branch. \begin{lstlisting} return 42; \end{lstlisting} This case demonstrates that we sometimes discard the \code{thn} or \code{els} blocks that are input to \code{explicate\_pred}. The case for \key{if} expressions in \code{explicate\_pred} is particularly illuminating because it deals with the challenges discussed previously regarding nested \key{if} expressions (figure~\ref{fig:explicate-control-s1-38}). The \racket{\lstinline{thn^}}\python{\code{body}} and \racket{\lstinline{els^}}\python{\code{orelse}} branches of the \key{if} inherit their context from the current one, that is, predicate context.
So, you should recursively apply \code{explicate\_pred} to the \racket{\lstinline{thn^}}\python{\code{body}} and \racket{\lstinline{els^}}\python{\code{orelse}} branches. For both of those recursive calls, pass \code{thn} and \code{els} as the extra parameters. Thus, \code{thn} and \code{els} may be used twice, once inside each recursive call. As discussed previously, to avoid duplicating code, we need to add them to the dictionary of basic blocks so that we can instead refer to them by name and execute them with a \key{goto}. {\if\edition\pythonEd\pythonColor % The last of the auxiliary functions is \code{explicate\_stmt}. It has three parameters: (1) the statement to be compiled, (2) the code for its continuation, and (3) the dictionary of basic blocks. The \code{explicate\_stmt} returns a list of statements, and it may add to the dictionary of basic blocks. The cases for assignment and an expression-statement are given in full in the skeleton code: they simply dispatch to \code{explicate\_assign} and \code{explicate\_effect}, respectively. The case for \code{if} statements is not given; it is similar to the case for \code{if} expressions. The \code{explicate\_control} function itself is given in figure~\ref{fig:explicate-control-Lif}. It applies \code{explicate\_stmt} to each statement in the program, from back to front. Thus, the result so far, stored in \code{new\_body}, can be used as the continuation parameter in the next call to \code{explicate\_stmt}. The \code{new\_body} is initialized to a \code{Return} statement. Once complete, we add the \code{new\_body} to the dictionary of basic blocks, labeling it the ``start'' block. % \fi} %% Getting back to the case for \code{if} in \code{explicate\_pred}, we %% make the recursive calls to \code{explicate\_pred} on the ``then'' and %% ``else'' branches with the arguments \code{(create_block} $B_1$\code{)} %% and \code{(create_block} $B_2$\code{)}. Let $B_3$ and $B_4$ be the %% results from the two recursive calls. We complete the case for %% \code{if} by recursively apply \code{explicate\_pred} to the condition %% of the \code{if} with the promised blocks $B_3$ and $B_4$ to obtain %% the result $B_5$. %% \[ %% (\key{if}\; \itm{cnd}\; \itm{thn}\; \itm{els}) %% \quad\Rightarrow\quad %% B_5 %% \] %% In the case for \code{if} in \code{explicate\_tail}, the two branches %% inherit the current context, so they are in tail position. Thus, the %% recursive calls on the ``then'' and ``else'' branch should be calls to %% \code{explicate\_tail}. %% % %% We need to pass $B_0$ as the accumulator argument for both of these %% recursive calls, but we need to be careful not to duplicate $B_0$. %% Thus, we first apply \code{create_block} to $B_0$ so that it gets added %% to the control-flow graph and obtain a promised goto $G_0$. %% % %% Let $B_1$ be the result of \code{explicate\_tail} on the ``then'' %% branch and $G_0$ and let $B_2$ be the result of \code{explicate\_tail} %% on the ``else'' branch and $G_0$. Let $B_3$ be the result of applying %% \code{explicate\_pred} to the condition of the \key{if}, $B_1$, and %% $B_2$. Then the \key{if} as a whole translates to promise $B_3$. %% \[ %% (\key{if}\; \itm{cnd}\; \itm{thn}\; \itm{els}) \quad\Rightarrow\quad B_3 %% \] %% In the above discussion, we use the metavariables $B_1$, $B_2$, and %% $B_3$ to refer to blocks for the purposes of our discussion, but they %% should not be confused with the labels for the blocks that appear in %% the generated code. 
We initially construct unlabeled blocks; we only %% attach labels to blocks when we add them to the control-flow graph, as %% we see in the next case. %% Next consider the case for \key{if} in the \code{explicate\_assign} %% function. The context of the \key{if} is an assignment to some %% variable $x$ and then the control continues to some promised block %% $B_1$. The code that we generate for both the ``then'' and ``else'' %% branches needs to continue to $B_1$, so to avoid duplicating $B_1$ we %% apply \code{create_block} to it and obtain a promised goto $G_1$. The %% branches of the \key{if} inherit the current context, so they are in %% assignment positions. Let $B_2$ be the result of applying %% \code{explicate\_assign} to the ``then'' branch, variable $x$, and %% $G_1$. Let $B_3$ be the result of applying \code{explicate\_assign} to %% the ``else'' branch, variable $x$, and $G_1$. Finally, let $B_4$ be %% the result of applying \code{explicate\_pred} to the predicate %% $\itm{cnd}$ and the promises $B_2$ and $B_3$. The \key{if} as a whole %% translates to the promise $B_4$. %% \[ %% (\key{if}\; \itm{cnd}\; \itm{thn}\; \itm{els}) \quad\Rightarrow\quad B_4 %% \] %% This completes the description of \code{explicate\_control} for \LangIf{}. Figure~\ref{fig:explicate-control-s1-38} shows the output of the \code{remove\_complex\_operands} pass and then the \code{explicate\_control} pass on the example program. We walk through the output program. % Following the order of evaluation in the output of \code{remove\_complex\_operands}, we first have two calls to \CREAD{} and then the comparison \racket{\code{(< x 1)}}\python{\code{x < 1}} in the predicate of the inner \key{if}. In the output of \code{explicate\_control}, in the block labeled \code{start}, two assignment statements are followed by an \code{if} statement that branches to \code{block\_4} or \code{block\_5}. The blocks associated with those labels contain the translations of the code \racket{\code{(eq? x 0)}}\python{\code{x == 0}} and \racket{\code{(eq? x 2)}}\python{\code{x == 2}}, respectively. In particular, we start \code{block\_4} with the comparison \racket{\code{(eq? x 0)}}\python{\code{x == 0}} and then branch to \code{block\_2} or \code{block\_3}, which correspond to the two branches of the outer \key{if}, that is, \racket{\code{(+ y 2)}}\python{\code{y + 2}} and \racket{\code{(+ y 10)}}\python{\code{y + 10}}. % The story for \code{block\_5} is similar to that of \code{block\_4}. % \python{The \code{block\_1} corresponds to the \code{print} statement at the end of the program.} {\if\edition\racketEd \subsection{Interactions between Explicate and Shrink} The way in which the \code{shrink} pass transforms logical operations such as \code{and} and \code{or} can impact the quality of code generated by \code{explicate\_control}. For example, consider the following program: % cond_test_21.rkt, and_eq_input.py \begin{lstlisting} (if (and (eq? (read) 0) (eq? (read) 1)) 0 42) \end{lstlisting} The \code{and} operation should transform into something that the \code{explicate\_pred} function can analyze and descend through to reach the underlying \code{eq?} conditions. Ideally, for this program your \code{explicate\_control} pass should generate code similar to the following: \begin{center} \begin{minipage}{\textwidth} \begin{lstlisting} start: tmp1 = (read); if (eq? tmp1 0) goto block40; else goto block39; block40: tmp2 = (read); if (eq? 
tmp2 1) goto block38; else goto block39; block38: return 0; block39: return 42; \end{lstlisting} \end{minipage} \end{center} \fi} \begin{exercise}\normalfont\normalsize \racket{ Implement the pass \code{explicate\_control} by adding the cases for Boolean constants and \key{if} to the \code{explicate\_tail} and \code{explicate\_assign} functions. Implement the auxiliary function \code{explicate\_pred} for predicate contexts.} \python{Implement \code{explicate\_control} pass with its four auxiliary functions.} % Create test cases that exercise all the new cases in the code for this pass. % {\if\edition\racketEd Add the following entry to the list of \code{passes} in \code{run-tests.rkt}: \begin{lstlisting} (list "explicate_control" explicate_control interp-Cif type-check-Cif) \end{lstlisting} and then run \code{run-tests.rkt} to test your compiler. \fi} \end{exercise} \section{Select Instructions} \label{sec:select-Lif} \index{subject}{select instructions} The \code{select\_instructions} pass translates \LangCIf{} to \LangXIfVar{}. % \racket{Recall that we implement this pass using three auxiliary functions, one for each of the nonterminals $\Atm$, $\Stmt$, and $\Tail$ in \LangCIf{} (figure~\ref{fig:c1-syntax}).} % \racket{For $\Atm$, we have new cases for the Booleans.} % \python{We begin with the Boolean constants.} As previously discussed, we encode them as integers. \[ \TRUE{} \quad\Rightarrow\quad \key{1} \qquad\qquad \FALSE{} \quad\Rightarrow\quad \key{0} \] For translating statements, we discuss some of the cases. The \code{not} operation can be implemented in terms of \code{xorq}, as we discussed at the beginning of this section. Given an assignment, if the left-hand-side variable is the same as the argument of \code{not}, then just the \code{xorq} instruction suffices. \[ \CASSIGN{\Var}{ \CUNIOP{\key{not}}{\Var} } \quad\Rightarrow\quad \key{xorq}~\key{\$}1\key{,}~\Var \] Otherwise, a \key{movq} is needed to adapt to the update-in-place semantics of x86. In the following translation, let $\Arg$ be the result of translating $\Atm$ to x86. \[ \CASSIGN{\Var}{ \CUNIOP{\key{not}}{\Atm} } \quad\Rightarrow\quad \begin{array}{l} \key{movq}~\Arg\key{,}~\Var\\ \key{xorq}~\key{\$}1\key{,}~\Var \end{array} \] Next consider the cases for equality comparisons. Translating this operation to x86 is slightly involved due to the unusual nature of the \key{cmpq} instruction that we discussed in section~\ref{sec:x86-if}. We recommend translating an assignment with an equality on the right-hand side into a sequence of three instructions. \\ \begin{tabular}{lll} \begin{minipage}{0.4\textwidth} $\CASSIGN{\Var}{ \LP\CEQ{\Atm_1}{\Atm_2} \RP }$ \end{minipage} & $\Rightarrow$ & \begin{minipage}{0.4\textwidth} \begin{lstlisting} cmpq |$\Arg_2$|, |$\Arg_1$| sete %al movzbq %al, |$\Var$| \end{lstlisting} \end{minipage} \end{tabular} \\ The translations for the other comparison operators are similar to this but use different condition codes for the \code{set} instruction. \racket{Regarding the $\Tail$ nonterminal, we have two new cases: \key{goto} and \key{if} statements. Both are straightforward to translate to x86.} % A \key{goto} statement becomes a jump instruction. 
\[ \key{goto}\; \ell\racket{\key{;}} \quad \Rightarrow \quad \key{jmp}\;\ell \] % An \key{if} statement becomes a compare instruction followed by a conditional jump (for the \emph{then} branch), and the fall-through is to a regular jump (for the \emph{else} branch).\\ \begin{tabular}{lll} \begin{minipage}{0.4\textwidth} \begin{lstlisting} if |$\CEQ{\Atm_1}{\Atm_2}$||$\python{\key{:}}$| goto |$\ell_1$||$\racket{\key{;}}$| else|$\python{\key{:}}$| goto |$\ell_2$||$\racket{\key{;}}$| \end{lstlisting} \end{minipage} & $\Rightarrow$ & \begin{minipage}{0.4\textwidth} \begin{lstlisting} cmpq |$\Arg_2$|, |$\Arg_1$| je |$\ell_1$| jmp |$\ell_2$| \end{lstlisting} \end{minipage} \end{tabular} \\ Again, the translations for the other comparison operators are similar to this but use different condition codes for the conditional jump instruction. \python{Regarding the \key{return} statement, we recommend treating it as an assignment to the \key{rax} register followed by a jump to the conclusion of the \code{main} function. (See section~\ref{sec:prelude-conclusion-cond} for more about the conclusion of \code{main}.)} \begin{exercise}\normalfont\normalsize Expand your \code{select\_instructions} pass to handle the new features of the \LangCIf{} language. % {\if\edition\racketEd Add the following entry to the list of \code{passes} in \code{run-tests.rkt} \begin{lstlisting} (list "select_instructions" select_instructions interp-pseudo-x86-1) \end{lstlisting} \fi} % Run the script to test your compiler on all the test programs. \end{exercise} \section{Register Allocation} \label{sec:register-allocation-Lif} \index{subject}{register allocation} The changes required for compiling \LangIf{} affect liveness analysis, building the interference graph, and assigning homes, but the graph coloring algorithm itself does not change. \subsection{Liveness Analysis} \label{sec:liveness-analysis-Lif} \index{subject}{liveness analysis} Recall that for \LangVar{} we implemented liveness analysis for a single basic block (section~\ref{sec:liveness-analysis-Lvar}). With the addition of \key{if} expressions to \LangIf{}, \code{explicate\_control} produces many basic blocks. %% We recommend that you create a new auxiliary function named %% \code{uncover\_live\_CFG} that applies liveness analysis to a %% control-flow graph. The first question is, in what order should we process the basic blocks? Recall that to perform liveness analysis on a basic block we need to know the live-after set for the last instruction in the block. If a basic block has no successors (i.e., contains no jumps to other blocks), then it has an empty live-after set and we can immediately apply liveness analysis to it. If a basic block has some successors, then we need to complete liveness analysis on those blocks first. These ordering constraints are the reverse of a \emph{topological order}\index{subject}{topological order} on a graph representation of the program. In particular, the \emph{control flow graph} (CFG)\index{subject}{control-flow graph}~\citep{Allen:1970uq} of a program has a node for each basic block and an edge for each jump from one block to another. It is straightforward to generate a CFG from the dictionary of basic blocks. One then transposes the CFG and applies the topological sort algorithm. 
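{\if\edition\pythonEd\pythonColor
To make the ordering concrete, the following self-contained sketch represents the control-flow graph simply as a dictionary that maps each label to the labels of its successor blocks (it does not use the support code) and produces an order in which every block appears after all of its successors:
\begin{lstlisting}
def blocks_in_liveness_order(cfg):
    # A depth-first postorder lists a block only after every block
    # reachable from it; for an acyclic CFG this is a reverse
    # topological order, which is the order needed for liveness.
    visited, order = set(), []
    def dfs(label):
        if label in visited:
            return
        visited.add(label)
        for successor in cfg.get(label, []):
            dfs(successor)
        order.append(label)
    for label in cfg:
        dfs(label)
    return order
\end{lstlisting}
For the program of figure~\ref{fig:explicate-control-s1-38}, this orders \code{block\_1} before the blocks that jump to it and places \code{start} last.
\fi}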
% % \racket{We recommend using the \code{tsort} and \code{transpose} functions of the Racket \code{graph} package to accomplish this.} % \python{We provide implementations of \code{topological\_sort} and \code{transpose} in the file \code{graph.py} of the support code.} % As an aside, a topological ordering is only guaranteed to exist if the graph does not contain any cycles. This is the case for the control-flow graphs that we generate from \LangIf{} programs. However, in chapter~\ref{ch:Lwhile} we add loops to create \LangLoop{} and learn how to handle cycles in the control-flow graph. \racket{You need to construct a directed graph to represent the control-flow graph. Do not use the \code{directed-graph} of the \code{graph} package because that allows at most one edge between each pair of vertices, whereas a control-flow graph may have multiple edges between a pair of vertices. The \code{multigraph.rkt} file in the support code implements a graph representation that allows multiple edges between a pair of vertices.} {\if\edition\racketEd The next question is how to analyze jump instructions. Recall that in section~\ref{sec:liveness-analysis-Lvar} we maintain an alist named \code{label->live} that maps each label to the set of live locations at the beginning of its block. We use \code{label->live} to determine the live-before set for each $\JMP{\itm{label}}$ instruction. Now that we have many basic blocks, \code{label->live} needs to be updated as we process the blocks. In particular, after performing liveness analysis on a block, we take the live-before set of its first instruction and associate that with the block's label in the \code{label->live} alist. \fi} % {\if\edition\pythonEd\pythonColor % The next question is how to analyze jump instructions. The locations that are live before a \code{jmp} should be the locations in $L_{\mathsf{before}}$ at the target of the jump. So we recommend maintaining a dictionary named \code{live\_before\_block} that maps each label to the $L_{\mathsf{before}}$ for the first instruction in its block. After performing liveness analysis on each block, we take the live-before set of its first instruction and associate that with the block's label in the \code{live\_before\_block} dictionary. % \fi} In \LangXIfVar{} we also have the conditional jump $\JMPIF{\itm{cc}}{\itm{label}}$ to deal with. Liveness analysis for this instruction is particularly interesting because during compilation, we do not know which way a conditional jump will go. Thus we do not know whether to use the live-before set for the block associated with the $\itm{label}$ or the live-before set for the following instruction. So we use both, by taking the union of the live-before sets from the following instruction and from the mapping for $\itm{label}$ in \racket{\code{label->live}}\python{\code{live\_before\_block}}. The auxiliary functions for computing the variables in an instruction's argument and for computing the variables read-from ($R$) or written-to ($W$) by an instruction need to be updated to handle the new kinds of arguments and instructions in \LangXIfVar{}. \begin{exercise}\normalfont\normalsize {\if\edition\racketEd % Update the \code{uncover\_live} pass to apply liveness analysis to every basic block in the program. 
% Add the following entry to the list of \code{passes} in the \code{run-tests.rkt} script: \begin{lstlisting} (list "uncover_live" uncover_live interp-pseudo-x86-1) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor % Update the \code{uncover\_live} function to perform liveness analysis, in reverse topological order, on all the basic blocks in the program. % \fi} % Check that the live-after sets that you generate for % example X matches the following... -Jeremy \end{exercise} \subsection{Build the Interference Graph} \label{sec:build-interference-Lif} Many of the new instructions in \LangXIfVar{} can be handled in the same way as the instructions in \LangXVar{}. % Thus, if your code was % already quite general, it will not need to be changed to handle the % new instructions. If your code is not general enough, we recommend that % you change your code to be more general. For example, you can factor % out the computing of the the read and write sets for each kind of % instruction into auxiliary functions. % Some instructions, such as the \key{movzbq} instruction, require special care, similar to the \key{movq} instruction. Refer to rule number 1 in section~\ref{sec:build-interference}. \begin{exercise}\normalfont\normalsize Update the \code{build\_interference} pass for \LangXIfVar{}. {\if\edition\racketEd Add the following entries to the list of \code{passes} in the \code{run-tests.rkt} script: \begin{lstlisting} (list "build_interference" build_interference interp-pseudo-x86-1) (list "allocate_registers" allocate_registers interp-pseudo-x86-1) \end{lstlisting} \fi} % Check that the interference graph that you generate for % example X matches the following graph G... -Jeremy \end{exercise} \section{Patch Instructions} The new instructions \key{cmpq} and \key{movzbq} have some special restrictions that need to be handled in the \code{patch\_instructions} pass. % The second argument of the \key{cmpq} instruction must not be an immediate value (such as an integer). So, if you are comparing two immediates, we recommend inserting a \key{movq} instruction to put the second argument in \key{rax}. On the other hand, if you implemented the partial evaluator (section~\ref{sec:pe-Lvar}), you could update it for \LangIf{} and then this situation would not arise. % As usual, \key{cmpq} may have at most one memory reference. % The second argument of the \key{movzbq} must be a register. \begin{exercise}\normalfont\normalsize % Update \code{patch\_instructions} pass for \LangXIfVar{}. % {\if\edition\racketEd Add the following entry to the list of \code{passes} in \code{run-tests.rkt}, and then run this script to test your compiler. \begin{lstlisting} (list "patch_instructions" patch_instructions interp-x86-1) \end{lstlisting} \fi} \end{exercise} {\if\edition\pythonEd\pythonColor \section{Prelude and Conclusion} \label{sec:prelude-conclusion-cond} The generation of the \code{main} function with its prelude and conclusion must change to accommodate how the program now consists of one or more basic blocks. After the prelude in \code{main}, jump to the \code{start} block. Place the conclusion in a basic block labeled with \code{conclusion}. \fi} Figure~\ref{fig:if-example-x86} shows a simple example program in \LangIf{} translated to x86, showing the results of \code{explicate\_control}, \code{select\_instructions}, and the final x86 assembly. 
\begin{figure}[tbp] \begin{tcolorbox}[colback=white] {\if\edition\racketEd \begin{tabular}{lll} \begin{minipage}{0.4\textwidth} % cond_test_20.rkt, eq_input.py \begin{lstlisting} (if (eq? (read) 1) 42 0) \end{lstlisting} $\Downarrow$ \begin{lstlisting} start: tmp7951 = (read); if (eq? tmp7951 1) goto block7952; else goto block7953; block7952: return 42; block7953: return 0; \end{lstlisting} $\Downarrow$ \begin{lstlisting} start: callq read_int movq %rax, tmp7951 cmpq $1, tmp7951 je block7952 jmp block7953 block7953: movq $0, %rax jmp conclusion block7952: movq $42, %rax jmp conclusion \end{lstlisting} \end{minipage} & $\Rightarrow\qquad$ \begin{minipage}{0.4\textwidth} \begin{lstlisting} start: callq read_int movq %rax, %rcx cmpq $1, %rcx je block7952 jmp block7953 block7953: movq $0, %rax jmp conclusion block7952: movq $42, %rax jmp conclusion .globl main main: pushq %rbp movq %rsp, %rbp pushq %r13 pushq %r12 pushq %rbx pushq %r14 subq $0, %rsp jmp start conclusion: addq $0, %rsp popq %r14 popq %rbx popq %r12 popq %r13 popq %rbp retq \end{lstlisting} \end{minipage} \end{tabular} \fi} {\if\edition\pythonEd\pythonColor \begin{tabular}{lll} \begin{minipage}{0.4\textwidth} % cond_test_20.rkt, eq_input.py \begin{lstlisting} print(42 if input_int() == 1 else 0) \end{lstlisting} $\Downarrow$ \begin{lstlisting} start: tmp_0 = input_int() if tmp_0 == 1: goto block_3 else: goto block_4 block_3: tmp_1 = 42 goto block_2 block_4: tmp_1 = 0 goto block_2 block_2: print(tmp_1) return 0 \end{lstlisting} $\Downarrow$ \begin{lstlisting} start: callq read_int movq %rax, tmp_0 cmpq 1, tmp_0 je block_3 jmp block_4 block_3: movq 42, tmp_1 jmp block_2 block_4: movq 0, tmp_1 jmp block_2 block_2: movq tmp_1, %rdi callq print_int movq 0, %rax jmp conclusion \end{lstlisting} \end{minipage} & $\Rightarrow\qquad$ \begin{minipage}{0.4\textwidth} \begin{lstlisting} .globl main main: pushq %rbp movq %rsp, %rbp subq $0, %rsp jmp start start: callq read_int movq %rax, %rcx cmpq $1, %rcx je block_3 jmp block_4 block_3: movq $42, %rcx jmp block_2 block_4: movq $0, %rcx jmp block_2 block_2: movq %rcx, %rdi callq print_int movq $0, %rax jmp conclusion conclusion: addq $0, %rsp popq %rbp retq \end{lstlisting} \end{minipage} \end{tabular} \fi} \end{tcolorbox} \caption{Example compilation of an \key{if} expression to x86, showing the results of \code{explicate\_control}, \code{select\_instructions}, and the final x86 assembly code. 
} \label{fig:if-example-x86} \end{figure} \begin{figure}[tbp] \begin{tcolorbox}[colback=white] {\if\edition\racketEd \begin{tikzpicture}[baseline=(current bounding box.center),scale=0.90] \node (Lif-2) at (0,2) {\large \LangIf{}}; \node (Lif-3) at (3,2) {\large \LangIf{}}; \node (Lif-4) at (6,2) {\large \LangIf{}}; \node (Lif-5) at (10,2) {\large \LangIfANF{}}; \node (C1-1) at (0,0) {\large \LangCIf{}}; \node (x86-2) at (0,-2) {\large \LangXIfVar{}}; \node (x86-2-1) at (0,-4) {\large \LangXIfVar{}}; \node (x86-2-2) at (4,-4) {\large \LangXIfVar{}}; \node (x86-3) at (4,-2) {\large \LangXIfVar{}}; \node (x86-4) at (8,-2) {\large \LangXIf{}}; \node (x86-5) at (8,-4) {\large \LangXIf{}}; \path[->,bend left=15] (Lif-2) edge [above] node {\ttfamily\footnotesize shrink} (Lif-3); \path[->,bend left=15] (Lif-3) edge [above] node {\ttfamily\footnotesize uniquify} (Lif-4); \path[->,bend left=15] (Lif-4) edge [above] node {\ttfamily\footnotesize remove\_complex\_operands} (Lif-5); \path[->,bend left=10] (Lif-5) edge [right] node {\ttfamily\footnotesize \ \ \ explicate\_control} (C1-1); \path[->,bend right=15] (C1-1) edge [right] node {\ttfamily\footnotesize select\_instructions} (x86-2); \path[->,bend left=15] (x86-2) edge [right] node {\ttfamily\footnotesize uncover\_live} (x86-2-1); \path[->,bend right=15] (x86-2-1) edge [below] node {\ttfamily\footnotesize build\_interference} (x86-2-2); \path[->,bend right=15] (x86-2-2) edge [right] node {\ttfamily\footnotesize allocate\_registers} (x86-3); \path[->,bend left=15] (x86-3) edge [above] node {\ttfamily\footnotesize patch\_instructions} (x86-4); \path[->,bend left=15] (x86-4) edge [right] node {\ttfamily\footnotesize prelude\_and\_conclusion } (x86-5); \end{tikzpicture} \fi} {\if\edition\pythonEd\pythonColor \begin{tikzpicture}[baseline=(current bounding box.center),scale=0.90] \node (Lif-1) at (0,2) {\large \LangIf{}}; \node (Lif-2) at (4,2) {\large \LangIf{}}; \node (Lif-3) at (8,2) {\large \LangIfANF{}}; \node (C-1) at (0,0) {\large \LangCIf{}}; \node (x86-1) at (0,-2) {\large \LangXIfVar{}}; \node (x86-2) at (4,-2) {\large \LangXIfVar{}}; \node (x86-3) at (8,-2) {\large \LangXIf{}}; \node (x86-4) at (12,-2) {\large \LangXIf{}}; \path[->,bend left=15] (Lif-1) edge [above] node {\ttfamily\footnotesize shrink} (Lif-2); \path[->,bend left=15] (Lif-2) edge [above] node {\ttfamily\footnotesize remove\_complex\_operands} (Lif-3); \path[->,bend left=15] (Lif-3) edge [right] node {\ttfamily\footnotesize \ \ explicate\_control} (C-1); \path[->,bend right=15] (C-1) edge [right] node {\ttfamily\footnotesize select\_instructions} (x86-1); \path[->,bend right=15] (x86-1) edge [below] node {\ttfamily\footnotesize assign\_homes} (x86-2); \path[->,bend left=15] (x86-2) edge [above] node {\ttfamily\footnotesize patch\_instructions} (x86-3); \path[->,bend right=15] (x86-3) edge [below] node {\ttfamily\footnotesize prelude\_and\_conclusion} (x86-4); \end{tikzpicture} \fi} \end{tcolorbox} \caption{Diagram of the passes for \LangIf{}, a language with conditionals.} \label{fig:Lif-passes} \end{figure} Figure~\ref{fig:Lif-passes} lists all the passes needed for the compilation of \LangIf{}. \section{Challenge: Optimize Blocks and Remove Jumps} \label{sec:opt-jumps} We discuss two challenges that involve optimizing the control-flow of the program. \subsection{Optimize Blocks} The algorithm for \code{explicate\_control} that we discussed in section~\ref{sec:explicate-control-Lif} sometimes generates too many blocks. 
It creates a block whenever a continuation \emph{might} get used more than once (for example, whenever the \code{cont} parameter is passed into two or more recursive calls). However, some continuation arguments may not be used at all. Consider the case for the constant \TRUE{} in \code{explicate\_pred}, in which we discard the \code{els} continuation. % {\if\edition\racketEd The following example program falls into this case, and it creates two unused blocks. \begin{center} \begin{tabular}{lll} \begin{minipage}{0.4\textwidth} % cond_test_82.rkt \begin{lstlisting} (let ([y (if #t (read) (if (eq? (read) 0) 777 (let ([x (read)]) (+ 1 x))))]) (+ y 2)) \end{lstlisting} \end{minipage} & $\Rightarrow$ & \begin{minipage}{0.55\textwidth} \begin{lstlisting} start: y = (read); goto block_5; block_5: return (+ y 2); block_6: y = 777; goto block_5; block_7: x = (read); y = (+ 1 x2); goto block_5; \end{lstlisting} \end{minipage} \end{tabular} \end{center} \fi} The question is, how can we decide whether to create a basic block? \emph{Lazy evaluation}\index{subject}{lazy evaluation}~\citep{Friedman:1976aa} can solve this conundrum by delaying the creation of a basic block until the point in time at which we know that it will be used. % {\if\edition\racketEd % Racket provides support for lazy evaluation with the \href{https://docs.racket-lang.org/reference/Delayed_Evaluation.html}{\code{racket/promise}} package. The expression \key{(delay} $e_1 \ldots e_n$\key{)} \index{subject}{delay} creates a \emph{promise}\index{subject}{promise} in which the evaluation of the expressions is postponed. When \key{(force} $p$\key{)}\index{subject}{force} is applied to a promise $p$ for the first time, the expressions $e_1 \ldots e_n$ are evaluated and the result of $e_n$ is cached in the promise and returned. If \code{force} is applied again to the same promise, then the cached result is returned. If \code{force} is applied to an argument that is not a promise, \code{force} simply returns the argument. % \fi} % {\if\edition\pythonEd\pythonColor % Although Python does not provide direct support for lazy evaluation, it is easy to mimic. We \emph{delay} the evaluation of a computation by wrapping it inside a function with no parameters. We \emph{force} its evaluation by calling the function. However, we might need to force multiple times, so we store the result of calling the function instead of recomputing it each time. The following \code{Promise} class handles this memoization process. % \begin{lstlisting} @dataclass class Promise: fun : typing.Any cache : list[stmt] = None def force(self): if self.cache is None: self.cache = self.fun(); return self.cache else: return self.cache \end{lstlisting} % However, in some cases of \code{explicate\_pred}, we return a list of statements, and in other cases we return a function that computes a list of statements. To uniformly deal with both regular data and promises, we define the following \code{force} function that checks whether its input is delayed (i.e., whether it is a \code{Promise}) and then either (1) forces the promise or (2) returns the input. % \begin{lstlisting} def force(promise): if isinstance(promise, Promise): return promise.force() else: return promise \end{lstlisting} % \fi} We use promises for the input and output of the functions \code{explicate\_pred}, \code{explicate\_assign}, % \racket{ and \code{explicate\_tail}}\python{ \code{explicate\_effect}, and \code{explicate\_stmt}}. 
% So, instead of taking and returning \racket{$\Tail$ expressions}\python{lists of statements}, they take and return promises. Furthermore, when we come to a situation in which a continuation might be used more than once, as in the case for \code{if} in \code{explicate\_pred}, we create a delayed computation that creates a basic block for each continuation (if there is not already one) and then returns a \code{goto} statement to that basic block. When we come to a situation in which we have a promise but need an actual piece of code, for example, to create a larger piece of code with a constructor such as \code{Seq}, then insert a call to \code{force}. % {\if\edition\racketEd % Also, we must modify the \code{create\_block} function to begin with \code{delay} to create a promise. When forced, this promise forces the original promise. If that returns a \code{Goto} (because the block was already added to \code{basic-blocks}), then we return the \code{Goto}. Otherwise, we add the block to \code{basic-blocks} and return a \code{Goto} to the new label. \begin{center} \begin{minipage}{\textwidth} \begin{lstlisting} (define (create_block tail) (delay (define t (force tail)) (match t [(Goto label) (Goto label)] [else (let ([label (gensym 'block)]) (set! basic-blocks (cons (cons label t) basic-blocks)) (Goto label))]))) \end{lstlisting} \end{minipage} \end{center} \fi} {\if\edition\pythonEd\pythonColor % Here is the new version of the \code{create\_block} auxiliary function that works on promises and that checks whether the block consists of a solitary \code{goto} statement.\\ \begin{minipage}{\textwidth} \begin{lstlisting} def create_block(promise, basic_blocks): def delay(): stmts = force(promise) match stmts: case [Goto(l)]: return [Goto(l)] case _: label = label_name(generate_name('block')) basic_blocks[label] = stmts return [Goto(label)] return Promise(delay) \end{lstlisting} \end{minipage} \fi} Figure~\ref{fig:explicate-control-challenge} shows the output of improved \code{explicate\_control} on this example. As you can see, the number of basic blocks has been reduced from four blocks (see figure~\ref{fig:explicate-control-s1-38}) to two blocks. \begin{figure}[tbp] \begin{tcolorbox}[colback=white] {\if\edition\racketEd \begin{tabular}{lll} \begin{minipage}{0.4\textwidth} % cond_test_82.rkt \begin{lstlisting} (let ([y (if #t (read) (if (eq? 
(read) 0) 777 (let ([x (read)]) (+ 1 x))))]) (+ y 2)) \end{lstlisting} \end{minipage} & $\Rightarrow$ & \begin{minipage}{0.55\textwidth} \begin{lstlisting} start: y = (read); goto block_5; block_5: return (+ y 2); \end{lstlisting} \end{minipage} \end{tabular} \fi} {\if\edition\pythonEd\pythonColor \begin{tabular}{lll} \begin{minipage}{0.4\textwidth} % cond_test_41.rkt \begin{lstlisting} x = input_int() y = input_int() print(y + 2 \ if (x == 0 \ if x < 1 \ else x == 2) \ else y + 10) \end{lstlisting} \end{minipage} & $\Rightarrow$ & \begin{minipage}{0.55\textwidth} \begin{lstlisting} start: x = input_int() y = input_int() if x < 1: goto block_4 else: goto block_5 block_4: if x == 0: goto block_2 else: goto block_3 block_5: if x == 2: goto block_2 else: goto block_3 block_2: tmp_0 = y + 2 goto block_1 block_3: tmp_0 = y + 10 goto block_1 block_1: print(tmp_0) return 0 \end{lstlisting} \end{minipage} \end{tabular} \fi} \end{tcolorbox} \caption{Translation from \LangIf{} to \LangCIf{} via the improved \code{explicate\_control}.} \label{fig:explicate-control-challenge} \end{figure} %% Recall that in the example output of \code{explicate\_control} in %% figure~\ref{fig:explicate-control-s1-38}, \code{block57} through %% \code{block60} are trivial blocks, they do nothing but jump to another %% block. The first goal of this challenge assignment is to remove those %% blocks. Figure~\ref{fig:optimize-jumps} repeats the result of %% \code{explicate\_control} on the left and shows the result of bypassing %% the trivial blocks on the right. Let us focus on \code{block61}. The %% \code{then} branch jumps to \code{block57}, which in turn jumps to %% \code{block55}. The optimized code on the right of %% figure~\ref{fig:optimize-jumps} bypasses \code{block57}, with the %% \code{then} branch jumping directly to \code{block55}. The story is %% similar for the \code{else} branch, as well as for the two branches in %% \code{block62}. After the jumps in \code{block61} and \code{block62} %% have been optimized in this way, there are no longer any jumps to %% blocks \code{block57} through \code{block60}, so they can be removed. %% \begin{figure}[tbp] %% \begin{tabular}{lll} %% \begin{minipage}{0.4\textwidth} %% \begin{lstlisting} %% block62: %% tmp54 = (read); %% if (eq? tmp54 2) then %% goto block59; %% else %% goto block60; %% block61: %% tmp53 = (read); %% if (eq? tmp53 0) then %% goto block57; %% else %% goto block58; %% block60: %% goto block56; %% block59: %% goto block55; %% block58: %% goto block56; %% block57: %% goto block55; %% block56: %% return (+ 700 77); %% block55: %% return (+ 10 32); %% start: %% tmp52 = (read); %% if (eq? tmp52 1) then %% goto block61; %% else %% goto block62; %% \end{lstlisting} %% \end{minipage} %% & %% $\Rightarrow$ %% & %% \begin{minipage}{0.55\textwidth} %% \begin{lstlisting} %% block62: %% tmp54 = (read); %% if (eq? tmp54 2) then %% goto block55; %% else %% goto block56; %% block61: %% tmp53 = (read); %% if (eq? tmp53 0) then %% goto block55; %% else %% goto block56; %% block56: %% return (+ 700 77); %% block55: %% return (+ 10 32); %% start: %% tmp52 = (read); %% if (eq? tmp52 1) then %% goto block61; %% else %% goto block62; %% \end{lstlisting} %% \end{minipage} %% \end{tabular} %% \caption{Optimize jumps by removing trivial blocks.} %% \label{fig:optimize-jumps} %% \end{figure} %% The name of this pass is \code{optimize-jumps}. We recommend %% implementing this pass in two phases. The first phrase builds a hash %% table that maps labels to possibly improved labels. 
%% The second phase changes the target of each \code{goto} to use the
%% improved label. If the label is for a trivial block, then the hash
%% table should map the label to the first non-trivial block that can
%% be reached from this label by jumping through trivial blocks. If the
%% label is for a non-trivial block, then the hash table should map the
%% label to itself; we do not want to change jumps to non-trivial blocks.
%% The first phase can be accomplished by constructing an empty hash
%% table, call it \code{short-cut}, and then iterating over the control
%% flow graph. Each time you encounter a block that is just a \code{goto},
%% then update the hash table, mapping the block's source to the target
%% of the \code{goto}. Also, the hash table may already have mapped some
%% labels to the block's source, so you must iterate through the hash
%% table and update all of those so that they instead map to the target
%% of the \code{goto}.
%% For the second phase, we recommend iterating through the $\Tail$ of
%% each block in the program, updating the target of every \code{goto}
%% according to the mapping in \code{short-cut}.
\begin{exercise}\normalfont\normalsize
Implement the improvements to the \code{explicate\_control} pass. Check
that it removes trivial blocks in a few example programs. Then check
that your compiler still passes all your tests.
\end{exercise}

\subsection{Remove Jumps}

There is an opportunity for removing jumps that is apparent in the
example of
\racket{figure~\ref{fig:explicate-control-challenge}}\python{figure~\ref{fig:if-example-x86}}.
The \code{start} block ends with a jump to
\racket{\code{block\_5}}\python{\code{block\_4}}, and there are no other
jumps to \racket{\code{block\_5}}\python{\code{block\_4}} in the rest of
the program. In this situation we can avoid the runtime overhead of this
jump by merging \racket{\code{block\_5}}\python{\code{block\_4}} into
the preceding block, which in this case is the \code{start} block.
Figure~\ref{fig:remove-jumps} shows the output of
\racket{\code{allocate\_registers}}\python{\code{select\_instructions}}
on the left and the result of this optimization on the right.

\begin{figure}[tbp]
\begin{tcolorbox}[colback=white]
{\if\edition\racketEd
\begin{tabular}{lll}
\begin{minipage}{0.5\textwidth}
% cond_test_82.rkt
\begin{lstlisting}
start:
    callq read_int
    movq %rax, %rcx
    jmp block_5
block_5:
    movq %rcx, %rax
    addq $2, %rax
    jmp conclusion
\end{lstlisting}
\end{minipage}
&
$\Rightarrow\qquad$
\begin{minipage}{0.4\textwidth}
\begin{lstlisting}
start:
    callq read_int
    movq %rax, %rcx
    movq %rcx, %rax
    addq $2, %rax
    jmp conclusion
\end{lstlisting}
\end{minipage}
\end{tabular}
\fi}
{\if\edition\pythonEd\pythonColor
\begin{tabular}{lll}
\begin{minipage}{0.5\textwidth}
% cond_test_20.rkt
\begin{lstlisting}
start:
    callq read_int
    movq %rax, tmp_0
    cmpq 1, tmp_0
    je block_3
    jmp block_4
block_3:
    movq 42, tmp_1
    jmp block_2
block_4:
    movq 0, tmp_1
    jmp block_2
block_2:
    movq tmp_1, %rdi
    callq print_int
    movq 0, %rax
    jmp conclusion
\end{lstlisting}
\end{minipage}
&
$\Rightarrow\qquad$
\begin{minipage}{0.4\textwidth}
\begin{lstlisting}
start:
    callq read_int
    movq %rax, tmp_0
    cmpq 1, tmp_0
    je block_3
    movq 0, tmp_1
    jmp block_2
block_3:
    movq 42, tmp_1
    jmp block_2
block_2:
    movq tmp_1, %rdi
    callq print_int
    movq 0, %rax
    jmp conclusion
\end{lstlisting}
\end{minipage}
\end{tabular}
\fi}
\end{tcolorbox}
\caption{Merging basic blocks by removing unnecessary jumps.}
\label{fig:remove-jumps}
\end{figure}

\begin{exercise}\normalfont\normalsize
%
Implement a pass named \code{remove\_jumps} that merges basic blocks
into their preceding basic block, when there is only one preceding
block. The pass should translate from \LangXIfVar{} to \LangXIfVar{}.
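{\if\edition\pythonEd\pythonColor
%
One possible approach is sketched below; it is only an illustration and
makes several assumptions about the support code (an \code{X86Program}
whose \code{body} is a dictionary of basic blocks, and the \code{Jump}
and \code{JumpIf} instruction classes). It counts how many jump
instructions target each label and then merges a block into its
predecessor when the only reference to that block is the \code{jmp} at
the end of the predecessor.
\begin{lstlisting}
def remove_jumps(self, p: X86Program) -> X86Program:
    blocks = dict(p.body)
    while True:
        # Count how many jump instructions target each block label.
        refs = {lbl: 0 for lbl in blocks}
        for instrs in blocks.values():
            for i in instrs:
                match i:
                    case Jump(lbl) | JumpIf(_, lbl) if lbl in refs:
                        refs[lbl] += 1
        # Look for a block whose only reference is the trailing jmp
        # of some other block.
        merge = None
        for src, instrs in blocks.items():
            match instrs:
                case [*front, Jump(lbl)] if (lbl in blocks and lbl != src
                                             and refs[lbl] == 1):
                    merge = (src, front, lbl)
                    break
        if merge is None:
            return X86Program(blocks)
        src, front, lbl = merge
        blocks[src] = front + blocks[lbl]   # splice the block in
        del blocks[lbl]
\end{lstlisting}
\fi}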
%
{\if\edition\racketEd
In the \code{run-tests.rkt} script, add the following entry to the list of
\code{passes} between \code{allocate\_registers} and \code{patch\_instructions}:
\begin{lstlisting}
(list "remove_jumps" remove_jumps interp-pseudo-x86-1)
\end{lstlisting}
\fi}
%
Run the script to test your compiler.
%
Check that \code{remove\_jumps} accomplishes the goal of merging basic
blocks on several test programs.
\end{exercise}

\section{Further Reading}
\label{sec:cond-further-reading}

The algorithm for the \code{explicate\_control} pass is based on the
\code{expose-basic-blocks} pass in the course notes of \citet{Dybvig:2010aa}.
%
It has similarities to the algorithms of \citet{Danvy:2003fk} and
\citet{Appel:2003fk}, and is related to translations into
continuation-passing
style~\citep{Wijngaarden:1966,Fischer:1972,reynolds72:_def_interp,Plotkin:1975,Friedman:2001}.
%
The treatment of conditionals in the \code{explicate\_control} pass is
similar to short-cut Boolean
evaluation~\citep{Logothetis:1981,Aho:2006wb,Clarke:1989,Danvy:2003fk}
and the case-of-case transformation~\citep{PeytonJones:1998}.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\chapter{Loops and Dataflow Analysis}
\label{ch:Lwhile}
\setcounter{footnote}{0}

% TODO: define R'_8
% TODO: multi-graph

{\if\edition\racketEd
%
In this chapter we study two features that are the hallmarks of
imperative programming languages: loops and assignments to local
variables. The following example demonstrates these new features by
computing the sum of the first five positive integers:
% similar to loop_test_1.rkt
\begin{lstlisting}
(let ([sum 0])
  (let ([i 5])
    (begin
      (while (> i 0)
        (begin
          (set! sum (+ sum i))
          (set! i (- i 1))))
      sum)))
\end{lstlisting}
The \code{while} loop consists of a condition and a body.\footnote{The
  \code{while} loop is not a built-in feature of the Racket language,
  but Racket includes many looping constructs and it is straightforward
  to define \code{while} as a macro.} The body is evaluated repeatedly
so long as the condition remains true.
%
The \code{set!} consists of a variable and a right-hand side expression.
The \code{set!} updates the value of the variable to the value of the
right-hand side.
%
The primary purpose of both the \code{while} loop and \code{set!} is to
cause side effects, so they do not give a meaningful result value.
Instead, their result is the \code{\#<void>} value. The expression
\code{(void)} is an explicit way to create the \code{\#<void>} value,
and it has type \code{Void}. The \code{\#<void>} value can be passed
around just like other values inside an \LangLoop{} program, and it can
be compared for equality with another \code{\#<void>} value. However,
there are no other operations specific to the \code{\#<void>} value in
\LangLoop{}. In contrast, Racket defines the \code{void?} predicate that
returns \code{\#t} when applied to \code{\#<void>} and \code{\#f}
otherwise.%
%
\footnote{Racket's \code{Void} type corresponds to what is often called
  the \code{Unit} type. Racket's \code{Void} type is inhabited by a
  single value \code{\#<void>}, which corresponds to \code{unit} or
  \code{()} in the literature~\citep{Pierce:2002hj}.}
%
With the addition of side effect-producing features such as the
\code{while} loop and \code{set!}, it is helpful to include a language
feature for sequencing side effects: the \code{begin} expression. It
consists of one or more subexpressions that are evaluated left to right.
% \fi} {\if\edition\pythonEd\pythonColor % In this chapter we study loops, one of the hallmarks of imperative programming languages. The following example demonstrates the \code{while} loop by computing the sum of the first five positive integers. \begin{lstlisting} sum = 0 i = 5 while i > 0: sum = sum + i i = i - 1 print(sum) \end{lstlisting} The \code{while} loop consists of a condition expression and a body (a sequence of statements). The body is evaluated repeatedly so long as the condition remains true. % \fi} \section{The \LangLoop{} Language} \newcommand{\LwhileGrammarRacket}{ \begin{array}{lcl} \Type &::=& \key{Void}\\ \Exp &::=& \CSETBANG{\Var}{\Exp} \MID \CBEGIN{\Exp^{*}}{\Exp} \MID \CWHILE{\Exp}{\Exp} \MID \LP\key{void}\RP \end{array} } \newcommand{\LwhileASTRacket}{ \begin{array}{lcl} \Type &::=& \key{Void}\\ \Exp &::=& \SETBANG{\Var}{\Exp} \MID \BEGIN{\Exp^{*}}{\Exp} \MID \WHILE{\Exp}{\Exp} \MID \VOID{} \end{array} } \newcommand{\LwhileGrammarPython}{ \begin{array}{rcl} \Stmt &::=& \key{while}~ \Exp \key{:}~ \Stmt^{+} \end{array} } \newcommand{\LwhileASTPython}{ \begin{array}{lcl} \Stmt{} &::=& \WHILESTMT{\Exp}{\Stmt^{+}} \end{array} } \begin{figure}[tp] \centering \begin{tcolorbox}[colback=white] \small {\if\edition\racketEd \[ \begin{array}{l} \gray{\LintGrammarRacket{}} \\ \hline \gray{\LvarGrammarRacket{}} \\ \hline \gray{\LifGrammarRacket{}} \\ \hline \LwhileGrammarRacket \\ \begin{array}{lcl} \LangLoopM{} &::=& \Exp \end{array} \end{array} \] \fi} {\if\edition\pythonEd\pythonColor \[ \begin{array}{l} \gray{\LintGrammarPython} \\ \hline \gray{\LvarGrammarPython} \\ \hline \gray{\LifGrammarPython} \\ \hline \LwhileGrammarPython \\ \begin{array}{rcl} \LangLoopM{} &::=& \Stmt^{*} \end{array} \end{array} \] \fi} \end{tcolorbox} \caption{The concrete syntax of \LangLoop{}, extending \LangIf{} (figure~\ref{fig:Lif-concrete-syntax}).} \label{fig:Lwhile-concrete-syntax} \end{figure} \begin{figure}[tp] \centering \begin{tcolorbox}[colback=white] \small {\if\edition\racketEd \[ \begin{array}{l} \gray{\LintOpAST} \\ \hline \gray{\LvarASTRacket{}} \\ \hline \gray{\LifASTRacket{}} \\ \hline \LwhileASTRacket{} \\ \begin{array}{lcl} \LangLoopM{} &::=& \gray{ \PROGRAM{\code{'()}}{\Exp} } \end{array} \end{array} \] \fi} {\if\edition\pythonEd\pythonColor \[ \begin{array}{l} \gray{\LintASTPython} \\ \hline \gray{\LvarASTPython} \\ \hline \gray{\LifASTPython} \\ \hline \LwhileASTPython \\ \begin{array}{lcl} \LangLoopM{} &::=& \PROGRAM{\code{'()}}{\Stmt^{*}} \end{array} \end{array} \] \fi} \end{tcolorbox} \python{ \index{subject}{While@\texttt{While}} } \caption{The abstract syntax of \LangLoop{}, extending \LangIf{} (figure~\ref{fig:Lif-syntax}).} \label{fig:Lwhile-syntax} \end{figure} Figure~\ref{fig:Lwhile-concrete-syntax} shows the definition of the concrete syntax of \LangLoop{}, and figure~\ref{fig:Lwhile-syntax} shows the definition of its abstract syntax. % The definitional interpreter for \LangLoop{} is shown in figure~\ref{fig:interp-Lwhile}. % {\if\edition\racketEd % We add new cases for \code{SetBang}, \code{WhileLoop}, \code{Begin}, and \code{Void}, and we make changes to the cases for \code{Var} and \code{Let} regarding variables. To support assignment to variables and to make their lifetimes indefinite (see the second example in section~\ref{sec:assignment-scoping}), we box the value that is bound to each variable (in \code{Let}). The case for \code{Var} unboxes the value. % Now we discuss the new cases. 
For \code{SetBang}, we find the variable in the environment to obtain a
boxed value, and then we change it using \code{set-box!} to the result
of evaluating the right-hand side. The result value of a \code{SetBang}
is \code{\#<void>}.
%
For the \code{WhileLoop}, we repeatedly (1) evaluate the condition, and
if the result is true, (2) evaluate the body. The result value of a
\code{while} loop is also \code{\#<void>}.
%
The $\BEGIN{\itm{es}}{\itm{body}}$ expression evaluates the
subexpressions \itm{es} for their effects and then evaluates and returns
the result from \itm{body}.
%
The $\VOID{}$ expression produces the \code{\#<void>} value.
%
\fi}
{\if\edition\pythonEd\pythonColor
%
We add a new case for \code{While} to the \code{interp\_stmt} method, in
which we repeatedly interpret the \code{body} so long as the \code{test}
expression remains true.
%
\fi}

\begin{figure}[tbp]
\begin{tcolorbox}[colback=white]
{\if\edition\racketEd
\begin{lstlisting}[basicstyle=\ttfamily\footnotesize]
(define interp-Lwhile-class
  (class interp-Lif-class
    (super-new)
    (define/override ((interp-exp env) e)
      (define recur (interp-exp env))
      (match e
        [(Let x e body)
         (define new-env (dict-set env x (box (recur e))))
         ((interp-exp new-env) body)]
        [(Var x) (unbox (dict-ref env x))]
        [(SetBang x rhs)
         (set-box! (dict-ref env x) (recur rhs))]
        [(WhileLoop cnd body)
         (define (loop)
           (cond [(recur cnd) (recur body) (loop)]
                 [else (void)]))
         (loop)]
        [(Begin es body)
         (for ([e es]) (recur e))
         (recur body)]
        [(Void) (void)]
        [else ((super interp-exp env) e)]))
    ))

(define (interp-Lwhile p)
  (send (new interp-Lwhile-class) interp-program p))
\end{lstlisting}
\fi}
{\if\edition\pythonEd\pythonColor
\begin{lstlisting}
class InterpLwhile(InterpLif):
  def interp_stmt(self, s, env, cont):
    match s:
      case While(test, body, []):
        if self.interp_exp(test, env):
          return self.interp_stmts(body + [s] + cont, env)
        else:
          return self.interp_stmts(cont, env)
      case _:
        return super().interp_stmt(s, env, cont)
\end{lstlisting}
\fi}
\end{tcolorbox}
\caption{Interpreter for \LangLoop{}.}
\label{fig:interp-Lwhile}
\end{figure}

The definition of the type checker for \LangLoop{} is shown in
figure~\ref{fig:type-check-Lwhile}.
%
{\if\edition\racketEd
%
The type checking of the \code{SetBang} expression requires the type of
the variable and the right-hand side to agree. The result type is
\code{Void}. For \code{while}, the condition must be a \BOOLTY{} and the
result type is \code{Void}. For \code{Begin}, the result type is the
type of its last subexpression.
%
\fi}
%
{\if\edition\pythonEd\pythonColor
%
A \code{while} loop is well typed if the type of the \code{test}
expression is \code{bool} and the statements in the \code{body} are well
typed.
%
\fi}

\begin{figure}[tbp]
\begin{tcolorbox}[colback=white]
{\if\edition\racketEd
\begin{lstlisting}[basicstyle=\ttfamily\footnotesize]
(define type-check-Lwhile-class
  (class type-check-Lif-class
    (super-new)
    (inherit check-type-equal?)
    (define/override (type-check-exp env)
      (lambda (e)
        (define recur (type-check-exp env))
        (match e
          [(SetBang x rhs)
           (define-values (rhs^ rhsT) (recur rhs))
           (define varT (dict-ref env x))
           (check-type-equal? rhsT varT e)
           (values (SetBang x rhs^) 'Void)]
          [(WhileLoop cnd body)
           (define-values (cnd^ Tc) (recur cnd))
           (check-type-equal?
Tc 'Boolean e) (define-values (body^ Tbody) ((type-check-exp env) body)) (values (WhileLoop cnd^ body^) 'Void)] [(Begin es body) (define-values (es^ ts) (for/lists (l1 l2) ([e es]) (recur e))) (define-values (body^ Tbody) (recur body)) (values (Begin es^ body^) Tbody)] [else ((super type-check-exp env) e)]))) )) (define (type-check-Lwhile p) (send (new type-check-Lwhile-class) type-check-program p)) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} class TypeCheckLwhile(TypeCheckLif): def type_check_stmts(self, ss, env): if len(ss) == 0: return match ss[0]: case While(test, body, []): test_t = self.type_check_exp(test, env) check_type_equal(bool, test_t, test) body_t = self.type_check_stmts(body, env) return self.type_check_stmts(ss[1:], env) case _: return super().type_check_stmts(ss, env) \end{lstlisting} \fi} \end{tcolorbox} \caption{Type checker for the \LangLoop{} language.} \label{fig:type-check-Lwhile} \end{figure} {\if\edition\racketEd % At first glance, the translation of these language features to x86 seems straightforward because the \LangCIf{} intermediate language already supports all the ingredients that we need: assignment, \code{goto}, conditional branching, and sequencing. However, complications arise, which we discuss in the next section. After that we introduce the changes necessary to the existing passes. % \fi} {\if\edition\pythonEd\pythonColor % At first glance, the translation of \code{while} loops to x86 seems straightforward because the \LangCIf{} intermediate language already supports \code{goto} and conditional branching. However, there are complications that arise, which we discuss in the next section. After that we introduce the changes necessary to the existing passes. % \fi} \section{Cyclic Control Flow and Dataflow Analysis} \label{sec:dataflow-analysis} Up until this point, the programs generated in \code{explicate\_control} were guaranteed to be acyclic. However, each \code{while} loop introduces a cycle. Does that matter? % Indeed, it does. Recall that for register allocation, the compiler performs liveness analysis to determine which variables can share the same register. To accomplish this, we analyzed the control-flow graph in reverse topological order (section~\ref{sec:liveness-analysis-Lif}), but topological order is well defined only for acyclic graphs. Let us return to the example of computing the sum of the first five positive integers. Here is the program after instruction selection\index{subject}{instruction selection} but before register allocation. \begin{center} {\if\edition\racketEd \begin{minipage}{0.45\textwidth} \begin{lstlisting} (define (main) : Integer mainstart: movq $0, sum movq $5, i jmp block5 block5: movq i, tmp3 cmpq tmp3, $0 jl block7 jmp block8 \end{lstlisting} \end{minipage} \begin{minipage}{0.45\textwidth} \begin{lstlisting} block7: addq i, sum movq $1, tmp4 negq tmp4 addq tmp4, i jmp block5 block8: movq $27, %rax addq sum, %rax jmp mainconclusion) \end{lstlisting} \end{minipage} \fi} {\if\edition\pythonEd\pythonColor \begin{minipage}{0.45\textwidth} \begin{lstlisting} mainstart: movq $0, sum movq $5, i jmp block5 block5: cmpq $0, i jg block7 jmp block8 \end{lstlisting} \end{minipage} \begin{minipage}{0.45\textwidth} \begin{lstlisting} block7: addq i, sum subq $1, i jmp block5 block8: movq sum, %rdi callq print_int movq $0, %rax jmp mainconclusion \end{lstlisting} \end{minipage} \fi} \end{center} Recall that liveness analysis works backward, starting at the end of each function. 
For this example we could start with \code{block8} because we know what is live at the beginning of the conclusion: only \code{rax} and \code{rsp}. So the live-before set for \code{block8} is \code{\{rsp,sum\}}. % Next we might try to analyze \code{block5} or \code{block7}, but \code{block5} jumps to \code{block7} and vice versa, so it seems that we are stuck. The way out of this impasse is to realize that we can compute an underapproximation of each live-before set by starting with empty live-after sets. By \emph{underapproximation}, we mean that the set contains only variables that are live for some execution of the program, but the set may be missing some variables that are live. Next, the underapproximations for each block can be improved by (1) updating the live-after set for each block using the approximate live-before sets from the other blocks, and (2) performing liveness analysis again on each block. In fact, by iterating this process, the underapproximations eventually become the correct solutions! % This approach of iteratively analyzing a control-flow graph is applicable to many static analysis problems and goes by the name \emph{dataflow analysis}\index{subject}{dataflow analysis}. It was invented by \citet{Kildall:1973vn} in his PhD thesis at the University of Washington. Let us apply this approach to the previously presented example. We use the empty set for the initial live-before set for each block. Let $m_0$ be the following mapping from label names to sets of locations (variables and registers): \begin{center} \begin{lstlisting} mainstart: {}, block5: {}, block7: {}, block8: {} \end{lstlisting} \end{center} Using the above live-before approximations, we determine the live-after for each block and then apply liveness analysis to each block. This produces our next approximation $m_1$ of the live-before sets. \begin{center} \begin{lstlisting} mainstart: {}, block5: {i}, block7: {i, sum}, block8: {rsp, sum} \end{lstlisting} \end{center} For the second round, the live-after for \code{mainstart} is the current live-before for \code{block5}, which is \code{\{i\}}. Therefore the liveness analysis for \code{mainstart} computes the empty set. The live-after for \code{block5} is the union of the live-before sets for \code{block7} and \code{block8}, which is \code{\{i, rsp, sum\}}. So the liveness analysis for \code{block5} computes \code{\{i, rsp, sum\}}. The live-after for \code{block7} is the live-before for \code{block5} (from the previous iteration), which is \code{\{i\}}. So the liveness analysis for \code{block7} remains \code{\{i, sum\}}. Together these yield the following approximation $m_2$ of the live-before sets: \begin{center} \begin{lstlisting} mainstart: {}, block5: {i, rsp, sum}, block7: {i, sum}, block8: {rsp, sum} \end{lstlisting} \end{center} In the preceding iteration, only \code{block5} changed, so we can limit our attention to \code{mainstart} and \code{block7}, the two blocks that jump to \code{block5}. As a result, the live-before sets for \code{mainstart} and \code{block7} are updated to include \code{rsp}, yielding the following approximation $m_3$: \begin{center} \begin{lstlisting} mainstart: {rsp}, block5: {i,rsp,sum}, block7: {i,rsp,sum}, block8: {rsp,sum} \end{lstlisting} \end{center} Because \code{block7} changed, we analyze \code{block5} once more, but its live-before set remains \code{\{i,rsp,sum\}}. At this point our approximations have converged, so $m_3$ is the solution. 
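{\if\edition\pythonEd\pythonColor
%
The following self-contained sketch carries out this iteration on the
example. It is an illustration only, not part of the compiler: the
per-block \code{gen} (read before written) and \code{kill} (written)
sets are transcribed by hand from the instruction sequences shown above
(caller-saved registers other than \code{rax} and \code{rdi} are omitted
from \code{block8}'s kill set because they do not affect this example),
and the registers demanded by the conclusion are supplied separately.
Starting from the empty mapping $m_0$, it converges to the least fixed
point $m_3$.
\begin{lstlisting}
gen  = {'mainstart': set(),        'block5': {'i'},
        'block7': {'i', 'sum'},    'block8': {'sum'}}
kill = {'mainstart': {'i', 'sum'}, 'block5': set(),
        'block7': {'i', 'sum'},    'block8': {'rax', 'rdi'}}
succ = {'mainstart': ['block5'],   'block5': ['block7', 'block8'],
        'block7': ['block5'],      'block8': []}
# The jump to the conclusion makes rax and rsp live after block8.
extra_after = {'block8': {'rax', 'rsp'}}

live_before = {lbl: set() for lbl in gen}       # the mapping m_0
changed = True
while changed:                                  # iterate to a fixed point
    changed = False
    for lbl in gen:
        after = set().union(*(live_before[s] for s in succ[lbl]),
                            extra_after.get(lbl, set()))
        before = gen[lbl] | (after - kill[lbl])
        if before != live_before[lbl]:
            live_before[lbl] = before
            changed = True
# live_before is now m_3: mainstart {rsp}, block5 {i,rsp,sum},
# block7 {i,rsp,sum}, block8 {rsp,sum}
\end{lstlisting}
\fi}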
This iteration process is guaranteed to converge to a solution by the Kleene fixed-point theorem, a general theorem about functions on lattices~\citep{Kleene:1952aa}. Roughly speaking, a \emph{lattice} is any collection that comes with a partial ordering\index{subject}{partialordering@partial ordering} $\sqsubseteq$ on its elements, a least element $\bot$ (pronounced \emph{bottom}), and a join operator $\sqcup$.\index{subject}{lattice}\index{subject}{bottom}\index{subject}{join}\footnote{Technically speaking, we will be working with join semilattices.} When two elements are ordered $m_i \sqsubseteq m_j$, it means that $m_j$ contains at least as much information as $m_i$, so we can think of $m_j$ as a better-than-or-equal-to approximation in relation to $m_i$. The bottom element $\bot$ represents the complete lack of information, that is, the worst approximation. The join operator takes two lattice elements and combines their information; that is, it produces the least upper bound of the two.\index{subject}{least upper bound} A dataflow analysis typically involves two lattices: one lattice to represent abstract states and another lattice that aggregates the abstract states of all the blocks in the control-flow graph. For liveness analysis, an abstract state is a set of locations. We form the lattice $L$ by taking its elements to be sets of locations, the ordering to be set inclusion ($\subseteq$), the bottom to be the empty set, and the join operator to be set union. % We form a second lattice $M$ by taking its elements to be mappings from the block labels to sets of locations (elements of $L$). We order the mappings point-wise, using the ordering of $L$. So, given any two mappings $m_i$ and $m_j$, $m_i \sqsubseteq_M m_j$ when $m_i(\ell) \subseteq m_j(\ell)$ for every block label $\ell$ in the program. The bottom element of $M$ is the mapping $\bot_M$ that sends every label to the empty set; that is, $\bot_M(\ell) = \emptyset$. We can think of one iteration of liveness analysis applied to the whole program as being a function $f$ on the lattice $M$. It takes a mapping as input and computes a new mapping. \[ f(m_i) = m_{i+1} \] Next let us think for a moment about what a final solution $m_s$ should look like. If we perform liveness analysis using the solution $m_s$ as input, we should get $m_s$ again as the output. That is, the solution should be a \emph{fixed point} of the function $f$.\index{subject}{fixed point} \[ f(m_s) = m_s \] Furthermore, the solution should include only locations that are forced to be there by performing liveness analysis on the program, so the solution should be the \emph{least} fixed point.\index{subject}{least fixed point} The Kleene fixed-point theorem states that if a function $f$ is monotone (better inputs produce better outputs), then the least fixed point of $f$ is the least upper bound of the \emph{ascending Kleene chain} obtained by starting at $\bot$ and iterating $f$, as follows:\index{subject}{Kleene fixed-point theorem} \[ \bot \sqsubseteq f(\bot) \sqsubseteq f(f(\bot)) \sqsubseteq \cdots \sqsubseteq f^n(\bot) \sqsubseteq \cdots \] When a lattice contains only finitely long ascending chains, then every Kleene chain tops out at some fixed point after some number of iterations of $f$. 
\[
\bot \sqsubseteq f(\bot) \sqsubseteq f(f(\bot)) \sqsubseteq \cdots
\sqsubseteq f^k(\bot) = f^{k+1}(\bot) = m_s
\]
The liveness analysis is indeed a monotone function and the lattice $M$
has finitely long ascending chains because there are only a finite
number of variables and blocks in the program. Thus we are guaranteed
that iteratively applying liveness analysis to all blocks in the program
will eventually produce the least fixed point solution.

Next let us consider dataflow analysis in general and discuss the
generic work list algorithm (figure~\ref{fig:generic-dataflow}).
%
The algorithm has four parameters: the control-flow graph \code{G}, a
function \code{transfer} that applies the analysis to one block, and the
\code{bottom} and \code{join} operators for the lattice of abstract
states. The \code{analyze\_dataflow} function is formulated as a
\emph{forward} dataflow analysis; that is, the inputs to the transfer
function come from the predecessor nodes in the control-flow graph.
However, liveness analysis is a \emph{backward} dataflow analysis, so in
that case one must supply the \code{analyze\_dataflow} function with the
transpose of the control-flow graph.

The algorithm begins by creating the bottom mapping, represented by a
hash table. It then pushes all the nodes in the control-flow graph onto
the work list (a queue). The algorithm repeats the \code{while} loop as
long as there are items in the work list. In each iteration, a node is
popped from the work list and processed. The \code{input} for the node
is computed by taking the join of the abstract states of all the
predecessor nodes. The \code{transfer} function is then applied to
obtain the \code{output} abstract state. If the output differs from the
previous state for this block, the mapping for this block is updated and
its successor nodes are pushed onto the work list.

\begin{figure}[tb]
\begin{tcolorbox}[colback=white]
{\if\edition\racketEd
\begin{lstlisting}
(define (analyze_dataflow G transfer bottom join)
  (define mapping (make-hash))
  (for ([v (in-vertices G)])
    (dict-set! mapping v bottom))
  (define worklist (make-queue))
  (for ([v (in-vertices G)])
    (enqueue! worklist v))
  (define trans-G (transpose G))
  (while (not (queue-empty? worklist))
    (define node (dequeue! worklist))
    (define input (for/fold ([state bottom])
                            ([pred (in-neighbors trans-G node)])
                    (join state (dict-ref mapping pred))))
    (define output (transfer node input))
    (cond [(not (equal? output (dict-ref mapping node)))
           (dict-set! mapping node output)
           (for ([v (in-neighbors G node)])
             (enqueue! worklist v))]))
  mapping)
\end{lstlisting}
\fi}
{\if\edition\pythonEd\pythonColor
\begin{lstlisting}
def analyze_dataflow(G, transfer, bottom, join):
    trans_G = transpose(G)
    mapping = dict((v, bottom) for v in G.vertices())
    worklist = deque(G.vertices())
    while worklist:
        node = worklist.pop()
        inputs = [mapping[v] for v in trans_G.adjacent(node)]
        input = reduce(join, inputs, bottom)
        output = transfer(node, input)
        if output != mapping[node]:
            mapping[node] = output
            worklist.extend(G.adjacent(node))
    return mapping
\end{lstlisting}
\fi}
\end{tcolorbox}
\caption{Generic work list algorithm for dataflow analysis.}
\label{fig:generic-dataflow}
\end{figure}

{\if\edition\racketEd
\section{Mutable Variables and Remove Complex Operands}

There is a subtle interaction between the \code{remove\_complex\_operands}
pass, the addition of \code{set!}, and the left-to-right order of
evaluation of Racket. Consider the following example:
\begin{lstlisting}
(let ([x 2])
  (+ x (begin (set!
x 40) x))) \end{lstlisting} The result of this program is \code{42} because the first read from \code{x} produces \code{2} and the second produces \code{40}. However, if we naively apply the \code{remove\_complex\_operands} pass to this example we obtain the following program whose result is \code{80}! \begin{lstlisting} (let ([x 2]) (let ([tmp (begin (set! x 40) x)]) (+ x tmp))) \end{lstlisting} The problem is that with mutable variables, the ordering between reads and writes is important, and the \code{remove\_complex\_operands} pass moved the \code{set!} to happen before the first read of \code{x}. We recommend solving this problem by giving special treatment to reads from mutable variables, that is, variables that occur on the left-hand side of a \code{set!}. We mark each read from a mutable variable with the form \code{get!} (\code{GetBang} in abstract syntax) to indicate that the read operation is effectful in that it can produce different results at different points in time. Let's apply this idea to the following variation that also involves a variable that is not mutated: % loop_test_24.rkt \begin{lstlisting} (let ([x 2]) (let ([y 0]) (+ y (+ x (begin (set! x 40) x))))) \end{lstlisting} We first analyze this program to discover that variable \code{x} is mutable but \code{y} is not. We then transform the program as follows, replacing each occurrence of \code{x} with \code{(get! x)}: \begin{lstlisting} (let ([x 2]) (let ([y 0]) (+ y (+ (get! x) (begin (set! x 40) (get! x)))))) \end{lstlisting} Now that we have a clear distinction between reads from mutable and immutable variables, we can apply the \code{remove\_complex\_operands} pass, where reads from immutable variables are still classified as atomic expressions but reads from mutable variables are classified as complex. Thus, \code{remove\_complex\_operands} yields the following program:\\ \begin{minipage}{\textwidth} \begin{lstlisting} (let ([x 2]) (let ([y 0]) (let ([t1 x]) (let ([t2 (begin (set! x 40) x)]) (let ([t3 (+ t1 t2)]) (+ y t3)))))) \end{lstlisting} \end{minipage} The temporary variable \code{t1} gets the value of \code{x} before the \code{set!}, so it is \code{2}. The temporary variable \code{t2} gets the value of \code{x} after the \code{set!}, so it is \code{40}. We do not generate a temporary variable for the occurrence of \code{y} because it's an immutable variable. We want to avoid such unnecessary extra temporaries because they would needlessly increase the number of variables, making it more likely for some of them to be spilled. The result of this program is \code{42}, the same as the result prior to \code{remove\_complex\_operands}. The approach that we've sketched requires only a small modification to \code{remove\_complex\_operands} to handle \code{get!}. However, it requires a new pass, called \code{uncover-get!}, that we discuss in section~\ref{sec:uncover-get-bang}. As an aside, this problematic interaction between \code{set!} and the pass \code{remove\_complex\_operands} is particular to Racket and not its predecessor, the Scheme language. The key difference is that Scheme does not specify an order of evaluation for the arguments of an operator or function call~\citep{SPERBER:2009aa}. Thus, a compiler for Scheme is free to choose any ordering: both \code{42} and \code{80} would be correct results for the example program. 
Interestingly, Racket is implemented on top of the Chez Scheme compiler~\citep{Dybvig:2006aa} and an approach similar to the one presented in this section (using extra \code{let} bindings to control the order of evaluation) is used in the translation from Racket to Scheme~\citep{Flatt:2019tb}. \fi} % racket Having discussed the complications that arise from adding support for assignment and loops, we turn to discussing the individual compilation passes. {\if\edition\racketEd \section{Uncover \texttt{get!}} \label{sec:uncover-get-bang} The goal of this pass is to mark uses of mutable variables so that \code{remove\_complex\_operands} can treat them as complex expressions and thereby preserve their ordering relative to the side effects in other operands. So, the first step is to collect all the mutable variables. We recommend creating an auxiliary function for this, named \code{collect-set!}, that recursively traverses expressions, returning the set of all variables that occur on the left-hand side of a \code{set!}. Here's an excerpt of its implementation. \begin{center} \begin{minipage}{\textwidth} \begin{lstlisting} (define (collect-set! e) (match e [(Var x) (set)] [(Int n) (set)] [(Let x rhs body) (set-union (collect-set! rhs) (collect-set! body))] [(SetBang var rhs) (set-union (set var) (collect-set! rhs))] ...)) \end{lstlisting} \end{minipage} \end{center} By placing this pass after \code{uniquify}, we need not worry about variable shadowing, and our logic for \code{Let} can remain simple, as in this excerpt. The second step is to mark the occurrences of the mutable variables with the new \code{GetBang} AST node (\code{get!} in concrete syntax). The following is an excerpt of the \code{uncover-get!-exp} function, which takes two parameters: the set of mutable variables \code{set!-vars} and the expression \code{e} to be processed. The case for \code{(Var x)} replaces it with \code{(GetBang x)} if it is a mutable variable or leaves it alone if not. \begin{center} \begin{minipage}{\textwidth} \begin{lstlisting} (define ((uncover-get!-exp set!-vars) e) (match e [(Var x) (if (set-member? set!-vars x) (GetBang x) (Var x))] ...)) \end{lstlisting} \end{minipage} \end{center} To wrap things up, define the \code{uncover-get!} function for processing a whole program, using \code{collect-set!} to obtain the set of mutable variables and then \code{uncover-get!-exp} to replace their occurrences with \code{GetBang}. \fi} \section{Remove Complex Operands} \label{sec:rco-loop} {\if\edition\racketEd % The new language forms, \code{get!}, \code{set!}, \code{begin}, and \code{while} are all complex expressions. The subexpressions of \code{set!}, \code{begin}, and \code{while} are allowed to be complex. % \fi} {\if\edition\pythonEd\pythonColor % The change needed for this pass is to add a case for the \code{while} statement. The condition of a \code{while} loop is allowed to be a complex expression, just like the condition of the \code{if} statement. % \fi} % Figure~\ref{fig:Lwhile-anf-syntax} defines the output language \LangLoopANF{} of this pass. 
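{\if\edition\pythonEd\pythonColor
%
The following sketch of the new case is an illustration only; it assumes
the conventions of the earlier chapters, namely that \code{rco\_exp}
returns a possibly complex expression together with a list of pairs of
temporary variables and their initializing expressions, and that the
\code{Begin} expression from the support code is available. Any
temporaries needed by the condition are wrapped in a \code{Begin} inside
the loop (rather than hoisted above it) so that they are re-evaluated on
every iteration.
\begin{lstlisting}
def rco_stmt(self, s: stmt) -> list[stmt]:
    match s:
        case While(test, body, []):
            new_test, temps = self.rco_exp(test, False)
            if temps:
                new_test = Begin([Assign([lhs], rhs) for lhs, rhs in temps],
                                 new_test)
            new_body = [new_s for st in body for new_s in self.rco_stmt(st)]
            return [While(new_test, new_body, [])]
        case _:
            # the other statements are handled as in the previous chapters
            ...
\end{lstlisting}
\fi}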
\newcommand{\LwhileMonadASTRacket}{ \begin{array}{rcl} \Atm &::=& \VOID{} \\ \Exp &::=& \GETBANG{\Var} \MID \SETBANG{\Var}{\Exp} \MID \BEGIN{\LP\Exp\ldots\RP}{\Exp} \\ &\MID& \WHILE{\Exp}{\Exp} \end{array} } \newcommand{\LwhileMonadASTPython}{ \begin{array}{rcl} \Stmt{} &::=& \WHILESTMT{\Exp}{\Stmt^{+}} \end{array} } \begin{figure}[tp] \centering \begin{tcolorbox}[colback=white] \small {\if\edition\racketEd \[ \begin{array}{l} \gray{\LvarMonadASTRacket} \\ \hline \gray{\LifMonadASTRacket} \\ \hline \LwhileMonadASTRacket \\ \begin{array}{rcl} \LangLoopANF &::=& \PROGRAM{\code{'()}}{\Exp} \end{array} \end{array} \] \fi} {\if\edition\pythonEd\pythonColor \[ \begin{array}{l} \gray{\LvarMonadASTPython} \\ \hline \gray{\LifMonadASTPython} \\ \hline \LwhileMonadASTPython \\ \begin{array}{rcl} \LangLoopANF &::=& \PROGRAM{\code{()}}{\Stmt^{*}} \end{array} \end{array} \] \fi} \end{tcolorbox} \caption{\LangLoopANF{} is \LangLoop{} in monadic normal form.} \label{fig:Lwhile-anf-syntax} \end{figure} {\if\edition\racketEd % As usual, when a complex expression appears in a grammar position that needs to be atomic, such as the argument of a primitive operator, we must introduce a temporary variable and bind it to the complex expression. This approach applies, unchanged, to handle the new language forms. For example, in the following code there are two \code{begin} expressions appearing as arguments to the \code{+} operator. The output of \code{rco\_exp} is then shown, in which the \code{begin} expressions have been bound to temporary variables. Recall that \code{let} expressions in \LangLoopANF{} are allowed to have arbitrary expressions in their right-hand side expression, so it is fine to place \code{begin} there. % \begin{center} \begin{tabular}{lcl} \begin{minipage}{0.4\textwidth} \begin{lstlisting} (let ([x2 10]) (let ([y3 0]) (+ (+ (begin (set! y3 (read)) (get! x2)) (begin (set! x2 (read)) (get! y3))) (get! x2)))) \end{lstlisting} \end{minipage} & $\Rightarrow$ & \begin{minipage}{0.4\textwidth} \begin{lstlisting} (let ([x2 10]) (let ([y3 0]) (let ([tmp4 (begin (set! y3 (read)) x2)]) (let ([tmp5 (begin (set! x2 (read)) y3)]) (let ([tmp6 (+ tmp4 tmp5)]) (let ([tmp7 x2]) (+ tmp6 tmp7))))))) \end{lstlisting} \end{minipage} \end{tabular} \end{center} \fi} \section{Explicate Control \racket{and \LangCLoop{}}} \label{sec:explicate-loop} \newcommand{\CloopASTRacket}{ \begin{array}{lcl} \Atm &::=& \VOID \\ \Stmt &::=& \READ{} \end{array} } {\if\edition\racketEd Recall that in the \code{explicate\_control} pass we define one helper function for each kind of position in the program. For the \LangVar{} language of integers and variables, we needed assignment and tail positions. The \code{if} expressions of \LangIf{} introduced predicate positions. For \LangLoop{}, the \code{begin} expression introduces yet another kind of position: effect position. Except for the last subexpression, the subexpressions inside a \code{begin} are evaluated only for their effect. Their result values are discarded. We can generate better code by taking this fact into account. The output language of \code{explicate\_control} is \LangCLoop{} (figure~\ref{fig:c7-syntax}), which is nearly identical to \LangCIf{}. The only syntactic differences are the addition of \VOID{} and that \code{read} may appear as a statement. 
The most significant difference between the programs generated by \code{explicate\_control} in chapter~\ref{ch:Lif} versus \code{explicate\_control} in this chapter is that the control-flow graphs of the latter may contain cycles. \begin{figure}[tp] \begin{tcolorbox}[colback=white] \small \[ \begin{array}{l} \gray{\CvarASTRacket} \\ \hline \gray{\CifASTRacket} \\ \hline \CloopASTRacket \\ \begin{array}{lcl} \LangCLoopM{} & ::= & \CPROGRAM{\itm{info}}{\LP\LP\itm{label}\,\key{.}\,\Tail\RP\ldots\RP} \end{array} \end{array} \] \end{tcolorbox} \caption{The abstract syntax of \LangCLoop{}, extending \LangCIf{} (figure~\ref{fig:c1-syntax}).} \label{fig:c7-syntax} \end{figure} The new auxiliary function \code{explicate\_effect} takes an expression (in an effect position) and the code for its continuation. The function returns a $\Tail$ that includes the generated code for the input expression followed by the continuation. If the expression is obviously pure, that is, never causes side effects, then the expression can be removed, so the result is just the continuation. % The case for $\WHILE{\itm{cnd}}{\itm{body}}$ expressions is interesting; the generated code is depicted in the following diagram: \begin{center} \begin{minipage}{0.3\textwidth} \xymatrix{ *+[F=]{\txt{\code{goto} \itm{loop}}} \ar[r] & *+[F]{\txt{\itm{loop}: \\ \itm{cnd'}}} \ar[r]^{else} \ar[d]^{then} & *+[F]{\txt{\itm{cont}}} \\ & *+[F]{\txt{\itm{body'} \\ \code{goto} \itm{loop}}} \ar@/^50pt/[u] } \end{minipage} \end{center} We start by creating a fresh label $\itm{loop}$ for the top of the loop. Next, recursively process the \itm{body} (in effect position) with a \code{goto} to $\itm{loop}$ as the continuation, producing \itm{body'}. Process the \itm{cnd} (in predicate position) with \itm{body'} as the \emph{then} branch and the continuation block as the \emph{else} branch. The result should be added to the dictionary of \code{basic-blocks} with the label \itm{loop}. The result for the whole \code{while} loop is a \code{goto} to the \itm{loop} label. The auxiliary functions for tail, assignment, and predicate positions need to be updated. The three new language forms, \code{while}, \code{set!}, and \code{begin}, can appear in assignment and tail positions. Only \code{begin} may appear in predicate positions; the other two have result type \code{Void}. \fi} % {\if\edition\pythonEd\pythonColor % The output of this pass is the language \LangCIf{}. No new language features are needed in the output, because a \code{while} loop can be expressed in terms of \code{goto} and \code{if} statements, which are already in \LangCIf{}. % Add a case for the \code{while} statement to the \code{explicate\_stmt} method, using \code{explicate\_pred} to process the condition expression. % \fi} {\if\edition\racketEd \section{Select Instructions} \label{sec:select-instructions-loop} \index{subject}{select instructions} Only two small additions are needed in the \code{select\_instructions} pass to handle the changes to \LangCLoop{}. First, to handle the addition of \VOID{} we simply translate it to \code{0}. Second, \code{read} may appear as a stand-alone statement instead of appearing only on the right-hand side of an assignment statement. The code generation is nearly identical to the one for assignment; just leave off the instruction for moving the result into the left-hand side. 
\fi} \section{Register Allocation} \label{sec:register-allocation-loop} As discussed in section~\ref{sec:dataflow-analysis}, the presence of loops in \LangLoop{} means that the control-flow graphs may contain cycles, which complicates the liveness analysis needed for register allocation. % We recommend using the generic \code{analyze\_dataflow} function that was presented at the end of section~\ref{sec:dataflow-analysis} to perform liveness analysis, replacing the code in \code{uncover\_live} that processed the basic blocks in topological order (section~\ref{sec:liveness-analysis-Lif}). The \code{analyze\_dataflow} function has the following four parameters. \begin{enumerate} \item The first parameter \code{G} should be passed the transpose of the control-flow graph. \item The second parameter \code{transfer} should be passed a function that applies liveness analysis to a basic block. It takes two parameters: the label for the block to analyze and the live-after set for that block. The transfer function should return the live-before set for the block. % \racket{Also, as a side effect, it should update the block's $\itm{info}$ with the liveness information for each instruction.} % \python{Also, as a side effect, it should update the live-before and live-after sets for each instruction.} % To implement the \code{transfer} function, you should be able to reuse the code you already have for analyzing basic blocks. \item The third and fourth parameters of \code{analyze\_dataflow} are \code{bottom} and \code{join} for the lattice of abstract states, that is, sets of locations. For liveness analysis, the bottom of the lattice is the empty set, and the join operator is set union. \end{enumerate} \begin{figure}[tp] \begin{tcolorbox}[colback=white] {\if\edition\racketEd \begin{tikzpicture}[baseline=(current bounding box.center),scale=0.90] \node (Lfun) at (0,2) {\large \LangLoop{}}; \node (Lfun-2) at (3,2) {\large \LangLoop{}}; \node (F1-4) at (6,2) {\large \LangLoop{}}; \node (F1-5) at (9,2) {\large \LangLoop{}}; \node (F1-6) at (9,0) {\large \LangLoopANF{}}; \node (C3-2) at (0,0) {\large \racket{\LangCLoop{}}\python{\LangCIf{}}}; \node (x86-2) at (0,-2) {\large \LangXIfVar{}}; \node (x86-2-1) at (0,-4) {\large \LangXIfVar{}}; \node (x86-2-2) at (4,-4) {\large \LangXIfVar{}}; \node (x86-3) at (4,-2) {\large \LangXIfVar{}}; \node (x86-4) at (8,-2) {\large \LangXIf{}}; \node (x86-5) at (8,-4) {\large \LangXIf{}}; \path[->,bend left=15] (Lfun) edge [above] node {\ttfamily\footnotesize shrink} (Lfun-2); \path[->,bend left=15] (Lfun-2) edge [above] node {\ttfamily\footnotesize uniquify} (F1-4); \path[->,bend left=15] (F1-4) edge [above] node {\ttfamily\footnotesize uncover\_get!} (F1-5); \path[->,bend left=15] (F1-5) edge [left] node {\ttfamily\footnotesize remove\_complex\_operands} (F1-6); \path[->,bend left=10] (F1-6) edge [above] node {\ttfamily\footnotesize explicate\_control} (C3-2); \path[->,bend left=15] (C3-2) edge [right] node {\ttfamily\footnotesize select\_instructions} (x86-2); \path[->,bend right=15] (x86-2) edge [right] node {\ttfamily\footnotesize uncover\_live} (x86-2-1); \path[->,bend right=15] (x86-2-1) edge [below] node {\ttfamily\footnotesize build\_interference} (x86-2-2); \path[->,bend right=15] (x86-2-2) edge [right] node {\ttfamily\footnotesize allocate\_registers} (x86-3); \path[->,bend left=15] (x86-3) edge [above] node {\ttfamily\footnotesize patch\_instructions} (x86-4); \path[->,bend left=15] (x86-4) edge [right] node {\ttfamily\footnotesize prelude\_and\_conclusion} 
(x86-5); \end{tikzpicture} \fi} {\if\edition\pythonEd\pythonColor \begin{tikzpicture}[baseline=(current bounding box.center),scale=0.90] \node (Lfun) at (0,2) {\large \LangLoop{}}; \node (Lfun-2) at (4,2) {\large \LangLoop{}}; \node (F1-6) at (8,2) {\large \LangLoopANF{}}; \node (C3-2) at (0,0) {\large \racket{\LangCLoop{}}\python{\LangCIf{}}}; \node (x86-2) at (0,-2) {\large \LangXIfVar{}}; \node (x86-3) at (4,-2) {\large \LangXIfVar{}}; \node (x86-4) at (8,-2) {\large \LangXIf{}}; \node (x86-5) at (12,-2) {\large \LangXIf{}}; \path[->,bend left=15] (Lfun) edge [above] node {\ttfamily\footnotesize shrink} (Lfun-2); \path[->,bend left=15] (Lfun-2) edge [above] node {\ttfamily\footnotesize remove\_complex\_operands} (F1-6); \path[->,bend left=10] (F1-6) edge [right] node {\ttfamily\footnotesize \ \ explicate\_control} (C3-2); \path[->,bend right=15] (C3-2) edge [right] node {\ttfamily\footnotesize select\_instructions} (x86-2); \path[->,bend right=15] (x86-2) edge [below] node {\ttfamily\footnotesize assign\_homes} (x86-3); \path[->,bend left=15] (x86-3) edge [above] node {\ttfamily\footnotesize patch\_instructions} (x86-4); \path[->,bend right=15] (x86-4) edge [below] node {\ttfamily\footnotesize prelude\_and\_conclusion} (x86-5); \end{tikzpicture} \fi} \end{tcolorbox} \caption{Diagram of the passes for \LangLoop{}.} \label{fig:Lwhile-passes} \end{figure} Figure~\ref{fig:Lwhile-passes} provides an overview of all the passes needed for the compilation of \LangLoop{}. % Further Reading: dataflow analysis %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \chapter{Tuples and Garbage Collection} \label{ch:Lvec} \index{subject}{tuple} \index{subject}{vector} \setcounter{footnote}{0} %% \margincomment{\scriptsize To do: Flesh out this chapter, e.g., make sure %% all the IR grammars are spelled out! \\ --Jeremy} %% \margincomment{\scriptsize Be more explicit about how to deal with %% the root stack. \\ --Jeremy} In this chapter we study the implementation of tuples\racket{, called vectors in Racket}. A tuple is a fixed-length sequence of elements in which each element may have a different type. % This language feature is the first to use the computer's \emph{heap}\index{subject}{heap}, because the lifetime of a tuple is indefinite; that is, a tuple lives forever from the programmer's viewpoint. Of course, from an implementer's viewpoint, it is important to reclaim the space associated with a tuple when it is no longer needed, which is why we also study \emph{garbage collection} \index{subject}{garbage collection} techniques in this chapter. Section~\ref{sec:r3} introduces the \LangVec{} language, including its interpreter and type checker. The \LangVec{} language extends the \LangLoop{} language (chapter~\ref{ch:Lwhile}) with tuples. % Section~\ref{sec:GC} describes a garbage collection algorithm based on copying live tuples back and forth between two halves of the heap. The garbage collector requires coordination with the compiler so that it can find all the live tuples. % Sections~\ref{sec:expose-allocation} through \ref{sec:print-x86-gc} discuss the necessary changes and additions to the compiler passes, including a new compiler pass named \code{expose\_allocation}. \section{The \LangVec{} Language} \label{sec:r3} Figure~\ref{fig:Lvec-concrete-syntax} shows the definition of the concrete syntax for \LangVec{}, and figure~\ref{fig:Lvec-syntax} shows the definition of the abstract syntax. 
% \racket{The \LangVec{} language includes the forms \code{vector} for creating a tuple, \code{vector-ref} for reading an element of a tuple, \code{vector-set!} for writing to an element of a tuple, and \code{vector-length} for obtaining the number of elements of a tuple.} % \python{The \LangVec{} language adds (1) tuple creation via a comma-separated list of expressions; (2) accessing an element of a tuple with the square bracket notation (i.e., \code{t[n]} returns the element at index \code{n} of tuple \code{t}); (3) the \code{is} comparison operator; and (4) obtaining the number of elements (the length) of a tuple. In this chapter, we restrict access indices to constant integers.} % The following program shows an example of the use of tuples. It creates a tuple \code{t} containing the elements \code{40}, \racket{\code{\#t}}\python{\code{True}}, and another tuple that contains just \code{2}. The element at index $1$ of \code{t} is \racket{\code{\#t}}\python{\code{True}}, so the \emph{then} branch of the \key{if} is taken. The element at index $0$ of \code{t} is \code{40}, to which we add \code{2}, the element at index $0$ of the tuple. The result of the program is \code{42}. % {\if\edition\racketEd \begin{lstlisting} (let ([t (vector 40 #t (vector 2))]) (if (vector-ref t 1) (+ (vector-ref t 0) (vector-ref (vector-ref t 2) 0)) 44)) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} t = 40, True, (2,) print(t[0] + t[2][0] if t[1] else 44) \end{lstlisting} \fi} \newcommand{\LtupGrammarRacket}{ \begin{array}{lcl} \Type &::=& \LP\key{Vector}\;\Type^{*}\RP \\ \Exp &::=& \LP\key{vector}\;\Exp^{*}\RP \MID \LP\key{vector-length}\;\Exp\RP \\ &\MID& \LP\key{vector-ref}\;\Exp\;\Int\RP \MID \LP\key{vector-set!}\;\Exp\;\Int\;\Exp\RP \end{array} } \newcommand{\LtupASTRacket}{ \begin{array}{lcl} \Type &::=& \LP\key{Vector}\;\Type^{*}\RP \\ \itm{op} &::=& \code{vector} \MID \code{vector-length} \\ \Exp &::=& \VECREF{\Exp}{\INT{\Int}} \\ &\MID& \VECSET{\Exp}{\INT{\Int}}{\Exp} % &\MID& \LP\key{HasType}~\Exp~\Type \RP \end{array} } \newcommand{\LtupGrammarPython}{ \begin{array}{rcl} \itm{cmp} &::= & \key{is} \\ \Exp &::=& \Exp \key{,} \ldots \key{,} \Exp \MID \CGET{\Exp}{\Int} \MID \CLEN{\Exp} \end{array} } \newcommand{\LtupASTPython}{ \begin{array}{lcl} \itm{cmp} &::= & \code{Is()} \\ \Exp &::=& \TUPLE{\Exp^{+}} \MID \GET{\Exp}{\INT{\Int}} \\ &\MID& \LEN{\Exp} \end{array} } \begin{figure}[tbp] \centering \begin{tcolorbox}[colback=white] \small {\if\edition\racketEd \[ \begin{array}{l} \gray{\LintGrammarRacket{}} \\ \hline \gray{\LvarGrammarRacket{}} \\ \hline \gray{\LifGrammarRacket{}} \\ \hline \gray{\LwhileGrammarRacket} \\ \hline \LtupGrammarRacket \\ \begin{array}{lcl} \LangVecM{} &::=& \Exp \end{array} \end{array} \] \fi} {\if\edition\pythonEd\pythonColor \[ \begin{array}{l} \gray{\LintGrammarPython{}} \\ \hline \gray{\LvarGrammarPython{}} \\ \hline \gray{\LifGrammarPython{}} \\ \hline \gray{\LwhileGrammarPython} \\ \hline \LtupGrammarPython \\ \begin{array}{rcl} \LangVecM{} &::=& \Stmt^{*} \end{array} \end{array} \] \fi} \end{tcolorbox} \caption{The concrete syntax of \LangVec{}, extending \LangLoop{} (figure~\ref{fig:Lwhile-concrete-syntax}).} \label{fig:Lvec-concrete-syntax} \end{figure} \begin{figure}[tp] \centering \begin{tcolorbox}[colback=white] \small {\if\edition\racketEd \[ \begin{array}{l} \gray{\LintOpAST} \\ \hline \gray{\LvarASTRacket{}} \\ \hline \gray{\LifASTRacket{}} \\ \hline \gray{\LwhileASTRacket{}} \\ \hline \LtupASTRacket{} \\ \begin{array}{lcl} 
\LangVecM{} &::=& \PROGRAM{\key{'()}}{\Exp} \end{array} \end{array} \] \fi} {\if\edition\pythonEd\pythonColor \[ \begin{array}{l} \gray{\LintASTPython} \\ \hline \gray{\LvarASTPython} \\ \hline \gray{\LifASTPython} \\ \hline \gray{\LwhileASTPython} \\ \hline \LtupASTPython \\ \begin{array}{lcl} \LangVecM{} &::=& \PROGRAM{\code{'()}}{\Stmt^{*}} \end{array} \end{array} \] \fi} \end{tcolorbox} \caption{The abstract syntax of \LangVec{}.} \label{fig:Lvec-syntax} \end{figure} Tuples raise several interesting new issues. First, variable binding performs a shallow copy in dealing with tuples, which means that different variables can refer to the same tuple; that is, two variables can be \emph{aliases}\index{subject}{alias} for the same entity. Consider the following example, in which \code{t1} and \code{t2} refer to the same tuple value and \code{t3} refers to a different tuple value with equal elements. The result of the program is \code{42}. \begin{center} \begin{minipage}{0.96\textwidth} {\if\edition\racketEd \begin{lstlisting} (let ([t1 (vector 3 7)]) (let ([t2 t1]) (let ([t3 (vector 3 7)]) (if (and (eq? t1 t2) (not (eq? t1 t3))) 42 0)))) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} t1 = 3, 7 t2 = t1 t3 = 3, 7 print(42 if (t1 is t2) and not (t1 is t3) else 0) \end{lstlisting} \fi} \end{minipage} \end{center} {\if\edition\racketEd Whether two variables are aliased or not affects what happens when the underlying tuple is mutated\index{subject}{mutation}. Consider the following example in which \code{t1} and \code{t2} again refer to the same tuple value. \begin{center} \begin{minipage}{0.96\textwidth} \begin{lstlisting} (let ([t1 (vector 3 7)]) (let ([t2 t1]) (let ([_ (vector-set! t2 0 42)]) (vector-ref t1 0)))) \end{lstlisting} \end{minipage} \end{center} The mutation through \code{t2} is visible in referencing the tuple from \code{t1}, so the result of this program is \code{42}. \fi} The next issue concerns the lifetime of tuples. When does a tuple's lifetime end? Notice that \LangVec{} does not include an operation for deleting tuples. Furthermore, the lifetime of a tuple is not tied to any notion of static scoping. % {\if\edition\racketEd % For example, the following program returns \code{42} even though the variable \code{w} goes out of scope prior to the \code{vector-ref} that reads from the vector to which it was bound. \begin{center} \begin{minipage}{0.96\textwidth} \begin{lstlisting} (let ([v (vector (vector 44))]) (let ([x (let ([w (vector 42)]) (let ([_ (vector-set! v 0 w)]) 0))]) (+ x (vector-ref (vector-ref v 0) 0)))) \end{lstlisting} \end{minipage} \end{center} \fi} % {\if\edition\pythonEd\pythonColor % For example, the following program returns \code{42} even though the variable \code{x} goes out of scope when the function returns, prior to reading the tuple element at index $0$. (We study the compilation of functions in chapter~\ref{ch:Lfun}.) % \begin{center} \begin{minipage}{0.96\textwidth} \begin{lstlisting} def f(): x = 42, 43 return x t = f() print(t[0]) \end{lstlisting} \end{minipage} \end{center} \fi} % From the perspective of programmer-observable behavior, tuples live forever. However, if they really lived forever then many long-running programs would run out of memory. To solve this problem, the language's runtime system performs automatic garbage collection. Figure~\ref{fig:interp-Lvec} shows the definitional interpreter for the \LangVec{} language. 
% \racket{We define the \code{vector}, \code{vector-ref}, \code{vector-set!}, and \code{vector-length} operations for \LangVec{} in terms of the corresponding operations in Racket. One subtle point is that the \code{vector-set!} operation returns the \code{\#} value.} % \python{We represent tuples with Python lists in the interpreter because we need to write to them (section~\ref{sec:expose-allocation}). (Python tuples are immutable.) We define element access, the \code{is} operator, and the \code{len} operator for \LangVec{} in terms of the corresponding operations in Python.} \begin{figure}[tbp] \begin{tcolorbox}[colback=white] {\if\edition\racketEd \begin{lstlisting} (define interp-Lvec-class (class interp-Lwhile-class (super-new) (define/override (interp-op op) (match op ['eq? (lambda (v1 v2) (cond [(or (and (fixnum? v1) (fixnum? v2)) (and (boolean? v1) (boolean? v2)) (and (vector? v1) (vector? v2)) (and (void? v1) (void? v2))) (eq? v1 v2)]))] ['vector vector] ['vector-length vector-length] ['vector-ref vector-ref] ['vector-set! vector-set!] [else (super interp-op op)] )) (define/override ((interp-exp env) e) (match e [(HasType e t) ((interp-exp env) e)] [else ((super interp-exp env) e)] )) )) (define (interp-Lvec p) (send (new interp-Lvec-class) interp-program p)) \end{lstlisting} \fi} % {\if\edition\pythonEd\pythonColor \begin{lstlisting} class InterpLtup(InterpLwhile): def interp_cmp(self, cmp): match cmp: case Is(): return lambda x, y: x is y case _: return super().interp_cmp(cmp) def interp_exp(self, e, env): match e: case Tuple(es, Load()): return tuple([self.interp_exp(e, env) for e in es]) case Subscript(tup, index, Load()): t = self.interp_exp(tup, env) n = self.interp_exp(index, env) return t[n] case _: return super().interp_exp(e, env) \end{lstlisting} \fi} \end{tcolorbox} \caption{Interpreter for the \LangVec{} language.} \label{fig:interp-Lvec} \end{figure} Figure~\ref{fig:type-check-Lvec} shows the type checker for \LangVec{}. % The type of a tuple is a \racket{\code{Vector}}\python{\code{TupleType}} type that contains a type for each of its elements. % \racket{To create the s-expression for the \code{Vector} type, we use the \href{https://docs.racket-lang.org/reference/quasiquote.html}{unquote-splicing operator} \code{,@} to insert the list \code{t*} without its usual start and end parentheses. \index{subject}{unquote-splicing}} % The type of accessing the ith element of a tuple is the ith element type of the tuple's type, if there is one. If not, an error is signaled. Note that the index \code{i} is required to be a constant integer (and not, for example, a call to \racket{\code{read}}\python{input\_int}) so that the type checker can determine the element's type given the tuple type. % \racket{ Regarding writing an element to a tuple, the element's type must be equal to the ith element type of the tuple's type. The result type is \code{Void}.} %% When allocating a tuple, %% we need to know which elements of the tuple are themselves tuples for %% the purposes of garbage collection. We can obtain this information %% during type checking. The type checker shown in %% figure~\ref{fig:type-check-Lvec} not only computes the type of an %% expression; it also %% % %% \racket{wraps every tuple creation with the form $(\key{HasType}~e~T)$, %% where $T$ is the tuple's type. % %records the type of each tuple expression in a new field named \code{has\_type}. 
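{\if\edition\pythonEd\pythonColor
For example, a program such as the following is not a valid \LangVec{}
program: the subscript is a call to \code{input\_int} rather than a constant
integer, so there is no way to determine the type of the element being
accessed.
\begin{lstlisting}
t = 1, 2, 3
print(t[input_int()])
\end{lstlisting}
\fi}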
\begin{figure}[tp] \begin{tcolorbox}[colback=white] {\if\edition\racketEd \begin{lstlisting}[basicstyle=\ttfamily\footnotesize] (define type-check-Lvec-class (class type-check-Lif-class (super-new) (inherit check-type-equal?) (define/override (type-check-exp env) (lambda (e) (define recur (type-check-exp env)) (match e [(Prim 'vector es) (define-values (e* t*) (for/lists (e* t*) ([e es]) (recur e))) (define t `(Vector ,@t*)) (values (Prim 'vector e*) t)] [(Prim 'vector-ref (list e1 (Int i))) (define-values (e1^ t) (recur e1)) (match t [`(Vector ,ts ...) (unless (and (0 . <= . i) (i . < . (length ts))) (error 'type-check "index ~a out of bounds\nin ~v" i e)) (values (Prim 'vector-ref (list e1^ (Int i))) (list-ref ts i))] [else (error 'type-check "expect Vector, not ~a\nin ~v" t e)])] [(Prim 'vector-set! (list e1 (Int i) elt) ) (define-values (e-vec t-vec) (recur e1)) (define-values (e-elt^ t-elt) (recur elt)) (match t-vec [`(Vector ,ts ...) (unless (and (0 . <= . i) (i . < . (length ts))) (error 'type-check "index ~a out of bounds\nin ~v" i e)) (check-type-equal? (list-ref ts i) t-elt e) (values (Prim 'vector-set! (list e-vec (Int i) e-elt^)) 'Void)] [else (error 'type-check "expect Vector, not ~a\nin ~v" t-vec e)])] [(Prim 'vector-length (list e)) (define-values (e^ t) (recur e)) (match t [`(Vector ,ts ...) (values (Prim 'vector-length (list e^)) 'Integer)] [else (error 'type-check "expect Vector, not ~a\nin ~v" t e)])] [(Prim 'eq? (list arg1 arg2)) (define-values (e1 t1) (recur arg1)) (define-values (e2 t2) (recur arg2)) (match* (t1 t2) [(`(Vector ,ts1 ...) `(Vector ,ts2 ...)) (void)] [(other wise) (check-type-equal? t1 t2 e)]) (values (Prim 'eq? (list e1 e2)) 'Boolean)] [else ((super type-check-exp env) e)] ))) )) (define (type-check-Lvec p) (send (new type-check-Lvec-class) type-check-program p)) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} class TypeCheckLtup(TypeCheckLwhile): def type_check_exp(self, e, env): match e: case Compare(left, [cmp], [right]) if isinstance(cmp, Is): l = self.type_check_exp(left, env) r = self.type_check_exp(right, env) check_type_equal(l, r, e) return bool case Tuple(es, Load()): ts = [self.type_check_exp(e, env) for e in es] e.has_type = TupleType(ts) return e.has_type case Subscript(tup, Constant(i), Load()): tup_ty = self.type_check_exp(tup, env) i_ty = self.type_check_exp(Constant(i), env) check_type_equal(i_ty, int, i) match tup_ty: case TupleType(ts): return ts[i] case _: raise Exception('error: expected a tuple, not ' + repr(tup_ty)) case _: return super().type_check_exp(e, env) \end{lstlisting} \fi} \end{tcolorbox} \caption{Type checker for the \LangVec{} language.} \label{fig:type-check-Lvec} \end{figure} \section{Garbage Collection} \label{sec:GC} Garbage collection is a runtime technique for reclaiming space on the heap that will not be used in the future of the running program. We use the term \emph{object}\index{subject}{object} to refer to any value that is stored in the heap, which for now includes only tuples.% % \footnote{The term \emph{object} as it is used in the context of object-oriented programming has a more specific meaning than the way in which we use the term here.} % Unfortunately, it is impossible to know precisely which objects will be accessed in the future and which will not. Instead, garbage collectors overapproximate the set of objects that will be accessed by identifying which objects can possibly be accessed. 
The running program can directly access objects that are in registers and on the procedure call stack. It can also transitively access the elements of tuples, starting with a tuple whose address is in a register or on the procedure call stack. We define the \emph{root set}\index{subject}{root set} to be all the tuple addresses that are in registers or on the procedure call stack. We define the \emph{live objects}\index{subject}{live objects} to be the objects that are reachable from the root set. Garbage collectors reclaim the space that is allocated to objects that are no longer live. \index{subject}{allocate} That means that some objects may not get reclaimed as soon as they could be, but at least garbage collectors do not reclaim the space dedicated to objects that will be accessed in the future! The programmer can influence which objects get reclaimed by causing them to become unreachable. So the goal of the garbage collector is twofold: \begin{enumerate} \item to preserve all the live objects, and \item to reclaim the memory of everything else, that is, the \emph{garbage}. \end{enumerate} \subsection{Two-Space Copying Collector} Here we study a relatively simple algorithm for garbage collection that is the basis of many state-of-the-art garbage collectors~\citep{Lieberman:1983aa,Ungar:1984aa,Jones:1996aa,Detlefs:2004aa,Dybvig:2006aa,Tene:2011kx}. In particular, we describe a two-space copying collector~\citep{Wilson:1992fk} that uses Cheney's algorithm to perform the copy~\citep{Cheney:1970aa}. \index{subject}{copying collector} \index{subject}{two-space copying collector} Figure~\ref{fig:copying-collector} gives a coarse-grained depiction of what happens in a two-space collector, showing two time steps, prior to garbage collection (on the top) and after garbage collection (on the bottom). In a two-space collector, the heap is divided into two parts named the FromSpace\index{subject}{FromSpace} and the ToSpace\index{subject}{ToSpace}. Initially, all allocations go to the FromSpace until there is not enough room for the next allocation request. At that point, the garbage collector goes to work to make room for the next allocation. A copying collector makes more room by copying all the live objects from the FromSpace into the ToSpace and then performs a sleight of hand, treating the ToSpace as the new FromSpace and the old FromSpace as the new ToSpace. In the example shown in figure~\ref{fig:copying-collector}, the root set consists of three pointers, one in a register and two on the stack. All the live objects have been copied to the ToSpace (the right-hand side of figure~\ref{fig:copying-collector}) in a way that preserves the pointer relationships. For example, the pointer in the register still points to a tuple that in turn points to two other tuples. There are four tuples that are not reachable from the root set and therefore do not get copied into the ToSpace. The exact situation shown in figure~\ref{fig:copying-collector} cannot be created by a well-typed program in \LangVec{} because it contains a cycle. However, creating cycles will be possible once we get to \LangDyn{} (chapter~\ref{ch:Ldyn}). We design the garbage collector to deal with cycles to begin with, so we will not need to revisit this issue. 
\begin{figure}[tbp]
\centering
\begin{tcolorbox}[colback=white]
\racket{\includegraphics[width=\textwidth]{figs/copy-collect-1}}
\python{\includegraphics[width=\textwidth]{figs/copy-collect-1-python}}
\\[5ex]
\racket{\includegraphics[width=\textwidth]{figs/copy-collect-2}}
\python{\includegraphics[width=\textwidth]{figs/copy-collect-2-python}}
\end{tcolorbox}
\caption{A copying collector in action.}
\label{fig:copying-collector}
\end{figure}
\subsection{Graph Copying via Cheney's Algorithm}
\label{sec:cheney}
\index{subject}{Cheney's algorithm}
Let us take a closer look at the copying of the live objects. The
allocated\index{subject}{allocate} objects and pointers can be viewed as a
graph, and we need to copy the part of the graph that is reachable from the
root set. To make sure that we copy all the reachable vertices in the graph,
we need an exhaustive graph traversal algorithm, such as depth-first search
or breadth-first search~\citep{Moore:1959aa,Cormen:2001uq}. Recall that such
algorithms take into account the possibility of cycles by marking which
vertices have already been visited, to ensure termination of the algorithm.
These search algorithms also use a data structure such as a stack or queue
as a to-do list to keep track of the vertices that need to be visited. We
use breadth-first search and a trick due to \citet{Cheney:1970aa} for
simultaneously representing the queue and copying tuples into the ToSpace.
Figure~\ref{fig:cheney} shows several snapshots of the ToSpace as the copy
progresses. The queue is represented by a chunk of contiguous memory at the
beginning of the ToSpace, using two pointers to track the front and the back
of the queue: the \emph{scan pointer} marks the front (the next tuple to be
processed), and the \emph{free pointer} marks the back (where newly copied
tuples are placed). The algorithm starts by copying all tuples that are
immediately reachable from the root set into the ToSpace to form the initial
queue. When we copy a tuple, we mark the old tuple to indicate that it has
been visited. We discuss how this marking is accomplished in
section~\ref{sec:data-rep-gc}. Note that any pointers inside the copied
tuples in the queue still point back to the FromSpace. Once the initial
queue has been created, the algorithm enters a loop in which it repeatedly
processes the tuple at the front of the queue and pops it off the queue. To
process a tuple, the algorithm copies all the objects that are directly
reachable from it to the ToSpace, placing them at the back of the queue. The
algorithm then updates the pointers in the popped tuple so that they point
to the newly copied objects.
\begin{figure}[tbp]
\centering
\begin{tcolorbox}[colback=white]
\racket{\includegraphics[width=0.8\textwidth]{figs/cheney}}
\python{\includegraphics[width=0.8\textwidth]{figs/cheney-python}}
\end{tcolorbox}
\caption{Depiction of the Cheney algorithm copying the live tuples.}
\label{fig:cheney}
\end{figure}
As shown in figure~\ref{fig:cheney}, in the first step we copy the tuple
whose second element is $42$ to the back of the queue. The other pointer
goes to a tuple that has already been copied, so we do not need to copy it
again, but we do need to update the pointer to the new location. This can be
accomplished by storing a \emph{forwarding pointer}\index{subject}{forwarding pointer}
to the new location in the old tuple when we initially copy the tuple into
the ToSpace. This completes one step of the algorithm. The algorithm
continues in this way until the queue is empty; that is, when the scan
pointer catches up with the free pointer.
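{\if\edition\pythonEd\pythonColor
To make the algorithm more concrete, the following Python sketch models the
copy performed by Cheney's algorithm. It is only an illustration: heap
tuples are modeled as Python lists, the ToSpace is a Python list that also
serves as the queue, and a dictionary named \code{forward} stands in for the
forwarding pointers that the real collector stores in the old tuples. (The
collector used by your compiler is implemented in C as part of the runtime;
see section~\ref{sec:organize-gz}.)
\begin{lstlisting}
def cheney_copy(roots):
    tospace = []   # the ToSpace; it doubles as the queue
    forward = {}   # id(old object) -> its copy (the forwarding pointers)

    def copy(value):
        # Copy a heap object into the ToSpace unless it was already copied.
        if not isinstance(value, list):
            return value                 # not a pointer; leave it alone
        if id(value) not in forward:
            new = list(value)            # fields still point into the FromSpace
            forward[id(value)] = new
            tospace.append(new)          # place the copy at the free pointer
        return forward[id(value)]

    # Copy the objects that are immediately reachable from the root set.
    new_roots = [copy(r) for r in roots]

    # Process the queue until the scan pointer catches the free pointer.
    scan = 0
    while scan < len(tospace):
        tup = tospace[scan]
        for i, field in enumerate(tup):
            tup[i] = copy(field)         # copy children, update the pointer
        scan += 1
    return new_roots, tospace

inner = [42]
outer = [1, inner]
garbage = [7]                            # unreachable, so never copied
roots, tospace = cheney_copy([outer])
print(roots[0][1][0])                    # prints 42
\end{lstlisting}
\fi}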
\subsection{Data Representation} \label{sec:data-rep-gc} The garbage collector places some requirements on the data representations used by our compiler. First, the garbage collector needs to distinguish between pointers and other kinds of data such as integers. The following are several ways to accomplish this: \begin{enumerate} \item Attach a tag to each object that identifies what type of object it is~\citep{McCarthy:1960dz}. \item Store different types of objects in different regions~\citep{Steele:1977ab}. \item Use type information from the program to either (a) generate type-specific code for collecting, or (b) generate tables that guide the collector~\citep{Appel:1989aa,Goldberg:1991aa,Diwan:1992aa}. \end{enumerate} Dynamically typed languages, such as \racket{Racket}\python{Python}, need to tag objects in any case, so option 1 is a natural choice for those languages. However, \LangVec{} is a statically typed language, so it would be unfortunate to require tags on every object, especially small and pervasive objects like integers and Booleans. Option 3 is the best-performing choice for statically typed languages, but it comes with a relatively high implementation complexity. To keep this chapter within a reasonable scope of complexity, we recommend a combination of options 1 and 2, using separate strategies for the stack and the heap. Regarding the stack, we recommend using a separate stack for pointers, which we call the \emph{root stack}\index{subject}{root stack} (aka \emph{shadow stack})~\citep{Siebert:2001aa,Henderson:2002aa,Baker:2009aa}. That is, when a local variable needs to be spilled and is of type \racket{\code{Vector}}\python{\code{TupleType}}, we put it on the root stack instead of putting it on the procedure call stack. Furthermore, we always spill tuple-typed variables if they are live during a call to the collector, thereby ensuring that no pointers are in registers during a collection. Figure~\ref{fig:shadow-stack} reproduces the example shown in figure~\ref{fig:copying-collector} and contrasts it with the data layout using a root stack. The root stack contains the two pointers from the regular stack and also the pointer in the second register. \begin{figure}[tbp] \centering \begin{tcolorbox}[colback=white] \racket{\includegraphics[width=0.60\textwidth]{figs/root-stack}} \python{\includegraphics[width=0.60\textwidth]{figs/root-stack-python}} \end{tcolorbox} \caption{Maintaining a root stack to facilitate garbage collection.} \label{fig:shadow-stack} \end{figure} The problem of distinguishing between pointers and other kinds of data also arises inside each tuple on the heap. We solve this problem by attaching a tag, an extra 64 bits, to each tuple. Figure~\ref{fig:tuple-rep} shows a zoomed-in view of the tags for two of the tuples in the example given in figure~\ref{fig:copying-collector}. Note that we have drawn the bits in a big-endian way, from right to left, with bit location 0 (the least significant bit) on the far right, which corresponds to the direction of the x86 shifting instructions \key{salq} (shift left) and \key{sarq} (shift right). Part of each tag is dedicated to specifying which elements of the tuple are pointers, the part labeled \emph{pointer mask}. Within the pointer mask, a 1 bit indicates that there is a pointer, and a 0 bit indicates some other kind of data. The pointer mask starts at bit location 7. 
We limit tuples to a maximum size of fifty elements, so we need 50 bits for the pointer mask.% % \footnote{A production-quality compiler would handle arbitrarily sized tuples and use a more complex approach.} % The tag also contains two other pieces of information. The length of the tuple (number of elements) is stored in bits at locations 1 through 6. Finally, the bit at location 0 indicates whether the tuple has yet to be copied to the ToSpace. If the bit has value 1, then this tuple has not yet been copied. If the bit has value 0, then the entire tag is a forwarding pointer. (The lower 3 bits of a pointer are always zero in any case, because our tuples are 8-byte aligned.) \begin{figure}[tbp] \centering \begin{tcolorbox}[colback=white] \includegraphics[width=0.8\textwidth]{figs/tuple-rep} \end{tcolorbox} \caption{Representation of tuples in the heap.} \label{fig:tuple-rep} \end{figure} \subsection{Implementation of the Garbage Collector} \label{sec:organize-gz} \index{subject}{prelude} An implementation of the copying collector is provided in the \code{runtime.c} file. Figure~\ref{fig:gc-header} defines the interface to the garbage collector that is used by the compiler. The \code{initialize} function creates the FromSpace, ToSpace, and root stack and should be called in the prelude of the \code{main} function. The arguments of \code{initialize} are the root stack size and the heap size. Both need to be multiples of sixty-four, and $16,384$ is a good choice for both. The \code{initialize} function puts the address of the beginning of the FromSpace into the global variable \code{free\_ptr}. The global variable \code{fromspace\_end} points to the address that is one past the last element of the FromSpace. We use half-open intervals to represent chunks of memory~\citep{Dijkstra:1982aa}. The \code{rootstack\_begin} variable points to the first element of the root stack. As long as there is room left in the FromSpace, your generated code can allocate\index{subject}{allocate} tuples simply by moving the \code{free\_ptr} forward. % The amount of room left in the FromSpace is the difference between the \code{fromspace\_end} and the \code{free\_ptr}. The \code{collect} function should be called when there is not enough room left in the FromSpace for the next allocation. The \code{collect} function takes a pointer to the current top of the root stack (one past the last item that was pushed) and the number of bytes that need to be allocated. The \code{collect} function performs the copying collection and leaves the heap in a state such that there is enough room for the next allocation. \begin{figure}[tbp] \begin{tcolorbox}[colback=white] \begin{lstlisting} void initialize(uint64_t rootstack_size, uint64_t heap_size); void collect(int64_t** rootstack_ptr, uint64_t bytes_requested); int64_t* free_ptr; int64_t* fromspace_begin; int64_t* fromspace_end; int64_t** rootstack_begin; \end{lstlisting} \end{tcolorbox} \caption{The compiler's interface to the garbage collector.} \label{fig:gc-header} \end{figure} %% \begin{exercise} %% In the file \code{runtime.c} you will find the implementation of %% \code{initialize} and a partial implementation of \code{collect}. %% The \code{collect} function calls another function, \code{cheney}, %% to perform the actual copy, and that function is left to the reader %% to implement. The following is the prototype for \code{cheney}. 
%% \begin{lstlisting} %% static void cheney(int64_t** rootstack_ptr); %% \end{lstlisting} %% The parameter \code{rootstack\_ptr} is a pointer to the top of the %% rootstack (which is an array of pointers). The \code{cheney} function %% also communicates with \code{collect} through the global %% variables \code{fromspace\_begin} and \code{fromspace\_end} %% mentioned in figure~\ref{fig:gc-header} as well as the pointers for %% the ToSpace: %% \begin{lstlisting} %% static int64_t* tospace_begin; %% static int64_t* tospace_end; %% \end{lstlisting} %% The job of the \code{cheney} function is to copy all the live %% objects (reachable from the root stack) into the ToSpace, update %% \code{free\_ptr} to point to the next unused spot in the ToSpace, %% update the root stack so that it points to the objects in the %% ToSpace, and finally to swap the global pointers for the FromSpace %% and ToSpace. %% \end{exercise} The introduction of garbage collection has a nontrivial impact on our compiler passes. We introduce a new compiler pass named \code{expose\_allocation} that elaborates the code for allocating tuples. We also make significant changes to \code{select\_instructions}, \code{build\_interference}, \code{allocate\_registers}, and \code{prelude\_and\_conclusion} and make minor changes in several more passes. The following program serves as our running example. It creates two tuples, one nested inside the other. Both tuples have length one. The program accesses the element in the inner tuple. % tests/vectors_test_17.rkt {\if\edition\racketEd \begin{lstlisting} (vector-ref (vector-ref (vector (vector 42)) 0) 0) \end{lstlisting} \fi} % tests/tuple/get_get.py {\if\edition\pythonEd\pythonColor \begin{lstlisting} v1 = (42,) v2 = (v1,) print(v2[0][0]) \end{lstlisting} \fi} %% {\if\edition\racketEd %% \section{Shrink} %% \label{sec:shrink-Lvec} %% Recall that the \code{shrink} pass translates the primitives operators %% into a smaller set of primitives. %% % %% This pass comes after type checking, and the type checker adds a %% \code{HasType} AST node around each \code{vector} AST node, so you'll %% need to add a case for \code{HasType} to the \code{shrink} pass. %% \fi} \section{Expose Allocation} \label{sec:expose-allocation} The pass \code{expose\_allocation} lowers tuple creation into making a conditional call to the collector followed by allocating the appropriate amount of memory and initializing it. We choose to place the \code{expose\_allocation} pass before \code{remove\_complex\_operands} because it generates code that contains complex operands. The output of \code{expose\_allocation} is a language \LangAlloc{} that replaces tuple creation with new lower-level forms that we use in the translation of tuple creation. % {\if\edition\racketEd \[ \begin{array}{lcl} \Exp &::=& (\key{collect} \,\itm{int}) \MID (\key{allocate} \,\itm{int}\,\itm{type}) \MID (\key{global-value} \,\itm{name}) \end{array} \] \fi} {\if\edition\pythonEd\pythonColor \[ \begin{array}{lcl} \Exp &::=& \cdots\\ &\MID& \key{collect}(\itm{int}) \MID \key{allocate}(\itm{int},\itm{type}) \MID \key{global\_value}(\itm{name}) \\ \Stmt &::= & \CASSIGN{\CPUT{\Exp}{\itm{int}}}{\Exp} \end{array} \] \fi} % The \CCOLLECT{$n$} form runs the garbage collector, requesting that it make sure that there are $n$ bytes ready to be allocated. During instruction selection\index{subject}{instruction selection}, the \CCOLLECT{$n$} form will become a call to the \code{collect} function in \code{runtime.c}. 
% The \CALLOCATE{$n$}{$\itm{type}$} form obtains memory for $n$ elements (and space at the front for the 64-bit tag), but the elements are not initialized. \index{subject}{allocate} The $\itm{type}$ parameter is the type of the tuple: % \VECTY{\racket{$\Type_1 \ldots \Type_n$}\python{$\Type_1, \ldots, \Type_n$}} % where $\Type_i$ is the type of the $i$th element. % The \CGLOBALVALUE{\itm{name}} form reads the value of a global variable, such as \code{free\_ptr}. \racket{ The type information that you need for \CALLOCATE{$n$}{$\itm{type}$} can be obtained by running the \code{type-check-Lvec-has-type} type checker immediately before the \code{expose\_allocation} pass. This version of the type checker places a special AST node of the form $(\key{HasType}~e~\itm{type})$ around each tuple creation. The concrete syntax for \code{HasType} is \code{has-type}.} The following shows the transformation of tuple creation into (1) a sequence of temporary variable bindings for the initializing expressions, (2) a conditional call to \code{collect}, (3) a call to \code{allocate}, and (4) the initialization of the tuple. The \itm{len} placeholder refers to the length of the tuple, and \itm{bytes} is the total number of bytes that need to be allocated for the tuple, which is 8 for the tag plus \itm{len} times 8. % \python{The \itm{type} needed for the second argument of the \code{allocate} form can be obtained from the \code{has\_type} field of the tuple AST node, which is stored there by running the type checker for \LangVec{} immediately before this pass.} % \begin{center} \begin{minipage}{\textwidth} {\if\edition\racketEd \begin{lstlisting} (has-type (vector |$e_0 \ldots e_{n-1}$|) |\itm{type}|) |$\Longrightarrow$| (let ([|$x_0$| |$e_0$|]) ... (let ([|$x_{n-1}$| |$e_{n-1}$|]) (let ([_ (if (< (+ (global-value free_ptr) |\itm{bytes}|) (global-value fromspace_end)) (void) (collect |\itm{bytes}|))]) (let ([|$v$| (allocate |\itm{len}| |\itm{type}|)]) (let ([_ (vector-set! |$v$| |$0$| |$x_0$|)]) ... (let ([_ (vector-set! |$v$| |$n-1$| |$x_{n-1}$|)]) |$v$|) ... )))) ...) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} (|$e_0$|, |$\ldots$|, |$e_{n-1}$|) |$\Longrightarrow$| begin: |$x_0$| = |$e_0$| |$\vdots$| |$x_{n-1}$| = |$e_{n-1}$| if global_value(free_ptr) + |\itm{bytes}| < global_value(fromspace_end): 0 else: collect(|\itm{bytes}|) |$v$| = allocate(|\itm{len}|, |\itm{type}|) |$v$|[0] = |$x_0$| |$\vdots$| |$v$|[|$n-1$|] = |$x_{n-1}$| |$v$| \end{lstlisting} \fi} \end{minipage} \end{center} % \noindent The sequencing of the initializing expressions $e_0,\ldots,e_{n-1}$ prior to the \code{allocate} is important because they may trigger garbage collection and we cannot have an allocated but uninitialized tuple on the heap during a collection. Figure~\ref{fig:expose-alloc-output} shows the output of the \code{expose\_allocation} pass on our running example. \begin{figure}[tbp] \begin{tcolorbox}[colback=white] % tests/s2_17.rkt {\if\edition\racketEd \begin{lstlisting} (vector-ref (vector-ref (let ([vecinit6 (let ([_4 (if (< (+ (global-value free_ptr) 16) (global-value fromspace_end)) (void) (collect 16))]) (let ([alloc2 (allocate 1 (Vector Integer))]) (let ([_3 (vector-set! alloc2 0 42)]) alloc2)))]) (let ([_8 (if (< (+ (global-value free_ptr) 16) (global-value fromspace_end)) (void) (collect 16))]) (let ([alloc5 (allocate 1 (Vector (Vector Integer)))]) (let ([_7 (vector-set! 
alloc5 0 vecinit6)]) alloc5)))) 0) 0) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} v1 = begin: init.514 = 42 if (free_ptr + 16) < fromspace_end: else: collect(16) alloc.513 = allocate(1,tuple[int]) alloc.513[0] = init.514 alloc.513 v2 = begin: init.516 = v1 if (free_ptr + 16) < fromspace_end: else: collect(16) alloc.515 = allocate(1,tuple[tuple[int]]) alloc.515[0] = init.516 alloc.515 print(v2[0][0]) \end{lstlisting} \fi} \end{tcolorbox} \caption{Output of the \code{expose\_allocation} pass.} \label{fig:expose-alloc-output} \end{figure} \section{Remove Complex Operands} \label{sec:remove-complex-opera-Lvec} {\if\edition\racketEd % The forms \code{collect}, \code{allocate}, and \code{global\_value} should be treated as complex operands. % \fi} % {\if\edition\pythonEd\pythonColor % The expressions \code{allocate}, \code{global\_value}, \code{begin}, and tuple access should be treated as complex operands. The subexpressions of tuple access must be atomic. % \fi} %% A new case for %% \code{HasType} is needed and the case for \code{Prim} needs to be %% handled carefully to prevent the \code{Prim} node from being separated %% from its enclosing \code{HasType}. Figure~\ref{fig:Lvec-anf-syntax} shows the grammar for the output language \LangAllocANF{} of this pass, which is \LangAlloc{} in monadic normal form. \newcommand{\LtupMonadASTRacket}{ \begin{array}{rcl} \Exp &::=& \COLLECT{\Int} \RP \MID \ALLOCATE{\Int}{\Type} \MID \GLOBALVALUE{\Var} \end{array} } \newcommand{\LtupMonadASTPython}{ \begin{array}{rcl} \Exp &::=& \GET{\Atm}{\Atm} \\ &\MID& \LEN{\Atm}\\ &\MID& \ALLOCATE{\Int}{\Type} \MID \GLOBALVALUE{\Var} \\ \Stmt{} &::=& \ASSIGN{\PUT{\Atm}{\Atm}}{\Atm} \\ &\MID& \COLLECT{\Int} \end{array} } \begin{figure}[tp] \centering \begin{tcolorbox}[colback=white] \small {\if\edition\racketEd \[ \begin{array}{l} \gray{\LvarMonadASTRacket} \\ \hline \gray{\LifMonadASTRacket} \\ \hline \gray{\LwhileMonadASTRacket} \\ \hline \LtupMonadASTRacket \\ \begin{array}{rcl} \LangAllocANFM{} &::=& \PROGRAM{\code{'()}}{\Exp} \end{array} \end{array} \] \fi} {\if\edition\pythonEd\pythonColor \[ \begin{array}{l} \gray{\LvarMonadASTPython} \\ \hline \gray{\LifMonadASTPython} \\ \hline \gray{\LwhileMonadASTPython} \\ \hline \LtupMonadASTPython \\ \begin{array}{rcl} \LangAllocANFM{} &::=& \PROGRAM{\code{'()}}{\Stmt^{*}} \end{array} \end{array} \] \fi} \end{tcolorbox} \caption{\LangAllocANF{} is \LangAlloc{} in monadic normal form.} \label{fig:Lvec-anf-syntax} \end{figure} \section{Explicate Control and the \LangCVec{} Language} \label{sec:explicate-control-r3} \newcommand{\CtupASTRacket}{ \begin{array}{lcl} \Exp &::= & \LP\key{Allocate} \,\itm{int}\,\itm{type}\RP \\ &\MID& \VECREF{\Atm}{\INT{\Int}} \\ &\MID& \VECSET{\Atm}{\INT{\Int}}{\Atm} \\ &\MID& \VECLEN{\Atm} \\ &\MID& \GLOBALVALUE{\Var} \\ \Stmt &::=& \VECSET{\Atm}{\INT{\Int}}{\Atm} \\ &\MID& \LP\key{Collect} \,\itm{int}\RP \end{array} } \newcommand{\CtupASTPython}{ \begin{array}{lcl} \Exp &::= & \GET{\Atm}{\Atm} \MID \ALLOCATE{\Int}{\Type} \\ &\MID& \GLOBALVALUE{\Var} \MID \LEN{\Atm} \\ \Stmt &::=& \COLLECT{\Int} \\ &\MID& \ASSIGN{\PUT{\Atm}{\Atm}}{\Atm} \end{array} } \begin{figure}[tp] \begin{tcolorbox}[colback=white] \small {\if\edition\racketEd \[ \begin{array}{l} \gray{\CvarASTRacket} \\ \hline \gray{\CifASTRacket} \\ \hline \gray{\CloopASTRacket} \\ \hline \CtupASTRacket \\ \begin{array}{lcl} \LangCVecM{} & ::= & \CPROGRAM{\itm{info}}{\LP\LP\itm{label}\,\key{.}\,\Tail\RP\ldots\RP} \end{array} \end{array} \] \fi} 
{\if\edition\pythonEd\pythonColor
\[
\begin{array}{l}
\gray{\CifASTPython} \\ \hline
\CtupASTPython \\
\begin{array}{lcl}
\LangCVecM{} & ::= & \CPROGRAM{\itm{info}}{\LC\itm{label}\key{:}\,\Stmt^{*}\;\Tail, \ldots \RC}
\end{array}
\end{array}
\]
\fi}
\end{tcolorbox}
\caption{The abstract syntax of \LangCVec{}, extending \racket{\LangCLoop{} (figure~\ref{fig:c7-syntax})}\python{\LangCIf{} (figure~\ref{fig:c1-syntax})}.}
\label{fig:c2-syntax}
\end{figure}
The output of \code{explicate\_control} is a program in the intermediate
language \LangCVec{}, for which figure~\ref{fig:c2-syntax} shows the
definition of the abstract syntax.
%
%% \racket{(The concrete syntax is defined in
%% figure~\ref{fig:c2-concrete-syntax} of the Appendix.)}
%
The new expressions of \LangCVec{} include \key{allocate},
%
\racket{\key{vector-ref}, and \key{vector-set!},}
%
\python{accessing tuple elements,}
%
and \key{global\_value}.
%
\python{\LangCVec{} also includes the \code{collect} statement and assignment to a tuple element.}
%
\racket{\LangCVec{} also includes the new \code{collect} statement.}
%
The \code{explicate\_control} pass can treat these new forms much like the
other forms that we've already encountered. The output of the
\code{explicate\_control} pass on the running example is shown on the left
side of figure~\ref{fig:select-instr-output-gc} in the next section.
\section{Select Instructions and the \LangXGlobal{} Language}
\label{sec:select-instructions-gc}
\index{subject}{select instructions}
%% void (rep as zero)
%% allocate
%% collect (callq collect)
%% vector-ref
%% vector-set!
%% vector-length
%% global (postpone)
In this pass we generate x86 code for most of the new operations that are
needed to compile tuples, including \code{Allocate}, \code{Collect},
accessing tuple elements, and the \code{Is} comparison.
%
We compile \code{GlobalValue} to \code{Global} because the latter has a
different concrete syntax (see figures~\ref{fig:x86-2-concrete} and
\ref{fig:x86-2}). \index{subject}{x86}
The tuple read and write forms translate into \code{movq} instructions. (The
$+1$ in the offset serves to move past the tag at the beginning of the tuple
representation.)
%
\begin{center}
\begin{minipage}{\textwidth}
{\if\edition\racketEd
\begin{lstlisting}
|$\itm{lhs}$| = (vector-ref |$\itm{tup}$| |$n$|);
|$\Longrightarrow$|
movq |$\itm{tup}'$|, %r11
movq |$8(n+1)$|(%r11), |$\itm{lhs'}$|

|$\itm{lhs}$| = (vector-set! |$\itm{tup}$| |$n$| |$\itm{rhs}$|);
|$\Longrightarrow$|
movq |$\itm{tup}'$|, %r11
movq |$\itm{rhs}'$|, |$8(n+1)$|(%r11)
movq $0, |$\itm{lhs'}$|
\end{lstlisting}
\fi}
{\if\edition\pythonEd\pythonColor
\begin{lstlisting}
|$\itm{lhs}$| = |$\itm{tup}$|[|$n$|]
|$\Longrightarrow$|
movq |$\itm{tup}'$|, %r11
movq |$8(n+1)$|(%r11), |$\itm{lhs'}$|

|$\itm{tup}$|[|$n$|] = |$\itm{rhs}$|
|$\Longrightarrow$|
movq |$\itm{tup}'$|, %r11
movq |$\itm{rhs}'$|, |$8(n+1)$|(%r11)
\end{lstlisting}
\fi}
\end{minipage}
\end{center}
\racket{The $\itm{lhs}'$, $\itm{tup}'$, and $\itm{rhs}'$}
\python{The $\itm{tup}'$ and $\itm{rhs}'$}
are obtained by translating from \LangCVec{} to x86.
%
The move of $\itm{tup}'$ to register \code{r11} ensures that the offset
expression \code{$8(n+1)$(\%r11)} contains a register operand. This requires
removing \code{r11} from consideration by the register allocator. Why not
use \code{rax} instead of \code{r11}? Suppose that we instead used \code{rax}.
Then the generated code for tuple assignment would be \begin{lstlisting} movq |$\itm{tup}'$|, %rax movq |$\itm{rhs}'$|, |$8(n+1)$|(%rax) \end{lstlisting} Next, suppose that $\itm{rhs}'$ ends up as a stack location, so \code{patch\_instructions} would insert a move through \code{rax} as follows: \begin{lstlisting} movq |$\itm{tup}'$|, %rax movq |$\itm{rhs}'$|, %rax movq %rax, |$8(n+1)$|(%rax) \end{lstlisting} However, this sequence of instructions does not work because we're trying to use \code{rax} for two different values ($\itm{tup}'$ and $\itm{rhs}'$) at the same time! The \racket{\code{vector-length}}\python{\code{len}} operation should be translated into a sequence of instructions that read the tag of the tuple and extract the 6 bits that represent the tuple length, which are the bits starting at index 1 and going up to and including bit 6. The x86 instructions \code{andq} (for bitwise-and) and \code{sarq} (shift right) can be used to accomplish this. We compile the \code{allocate} form to operations on the \code{free\_ptr}, as shown next. This approach is called \emph{inline allocation} because it implements allocation without a function call by simply incrementing the allocation pointer. It is much more efficient than calling a function for each allocation. The address in the \code{free\_ptr} is the next free address in the FromSpace, so we copy it into \code{r11} and then move it forward by enough space for the tuple being allocated, which is $8(\itm{len}+1)$ bytes because each element is 8 bytes (64 bits) and we use 8 bytes for the tag. We then initialize the \itm{tag} and finally copy the address in \code{r11} to the left-hand side. Refer to figure~\ref{fig:tuple-rep} to see how the tag is organized. % \racket{We recommend using the Racket operations \code{bitwise-ior} and \code{arithmetic-shift} to compute the tag during compilation.} % \python{We recommend using the bitwise-or operator \code{|} and the shift-left operator \code{<<} to compute the tag during compilation.} % The type annotation in the \code{allocate} form is used to determine the pointer mask region of the tag. % The addressing mode \verb!free_ptr(%rip)! essentially stands for the address of the \code{free\_ptr} global variable using a special instruction-pointer-relative addressing mode of the x86-64 processor. In particular, the assembler computes the distance $d$ between the address of \code{free\_ptr} and where the \code{rip} would be at that moment and then changes the \code{free\_ptr(\%rip)} argument to \code{$d$(\%rip)}, which at runtime will compute the address of \code{free\_ptr}. % {\if\edition\racketEd \begin{lstlisting} |$\itm{lhs}$| = (allocate |$\itm{len}$| (Vector |$\itm{type} \ldots$|)); |$\Longrightarrow$| movq free_ptr(%rip), %r11 addq |$8(\itm{len}+1)$|, free_ptr(%rip) movq $|$\itm{tag}$|, 0(%r11) movq %r11, |$\itm{lhs}'$| \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} |$\itm{lhs}$| = allocate(|$\itm{len}$|, TupleType([|$\itm{type}, \ldots$])|); |$\Longrightarrow$| movq free_ptr(%rip), %r11 addq |$8(\itm{len}+1)$|, free_ptr(%rip) movq $|$\itm{tag}$|, 0(%r11) movq %r11, |$\itm{lhs}'$| \end{lstlisting} \fi} % The \code{collect} form is compiled to a call to the \code{collect} function in the runtime. The arguments to \code{collect} are (1) the top of the root stack, and (2) the number of bytes that need to be allocated. We use another dedicated register, \code{r15}, to store the pointer to the top of the root stack. 
Therefore \code{r15} is not available for use by the register allocator. % {\if\edition\racketEd \begin{lstlisting} (collect |$\itm{bytes}$|) |$\Longrightarrow$| movq %r15, %rdi movq $|\itm{bytes}|, %rsi callq collect \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} collect(|$\itm{bytes}$|) |$\Longrightarrow$| movq %r15, %rdi movq $|\itm{bytes}|, %rsi callq collect \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor The \code{is} comparison is compiled similarly to the other comparison operators, using the \code{cmpq} instruction. Because the value of a tuple is its address, we can translate \code{is} into a simple check for equality using the \code{e} condition code. \\ \begin{tabular}{lll} \begin{minipage}{0.4\textwidth} $\CASSIGN{\Var}{ \LP\CIS{\Atm_1}{\Atm_2} \RP }$ \end{minipage} & $\Rightarrow$ & \begin{minipage}{0.4\textwidth} \begin{lstlisting} cmpq |$\Arg_2$|, |$\Arg_1$| sete %al movzbq %al, |$\Var$| \end{lstlisting} \end{minipage} \end{tabular} \fi} \newcommand{\GrammarXGlobal}{ \begin{array}{lcl} \Arg &::=& \itm{label} \key{(\%rip)} \end{array} } \newcommand{\ASTXGlobalRacket}{ \begin{array}{lcl} \Arg &::=& \GLOBAL{\itm{label}} \end{array} } \begin{figure}[tp] \begin{tcolorbox}[colback=white] \[ \begin{array}{l} \gray{\GrammarXInt} \\ \hline \gray{\GrammarXIf} \\ \hline \GrammarXGlobal \\ \begin{array}{lcl} \LangXGlobalM{} &::= & \key{.globl main} \\ & & \key{main:} \; \Instr^{*} \end{array} \end{array} \] \end{tcolorbox} \caption{The concrete syntax of \LangXGlobal{} (extends \LangXIf{} shown in figure~\ref{fig:x86-1-concrete}).} \label{fig:x86-2-concrete} \end{figure} \begin{figure}[tp] \begin{tcolorbox}[colback=white] \small {\if\edition\racketEd \[ \begin{array}{l} \gray{\ASTXIntRacket} \\ \hline \gray{\ASTXIfRacket} \\ \hline \ASTXGlobalRacket \\ \begin{array}{lcl} \LangXGlobalM{} &::= & \XPROGRAM{\itm{info}}{\LP\LP\itm{label} \,\key{.}\, \Block \RP\ldots\RP} \end{array} \end{array} \] \fi} {\if\edition\pythonEd\pythonColor \[ \begin{array}{l} \gray{\ASTXIntPython} \\ \hline \gray{\ASTXIfPython} \\ \hline \ASTXGlobalRacket \\ \begin{array}{lcl} \LangXGlobalM{} &::= & \XPROGRAM{\itm{info}}{\LC\itm{label} \,\key{:}\, \Block \key{,} \ldots \RC } \end{array} \end{array} \] \fi} \end{tcolorbox} \caption{The abstract syntax of \LangXGlobal{} (extends \LangXIf{} shown in figure~\ref{fig:x86-1}).} \label{fig:x86-2} \end{figure} The definitions of the concrete and abstract syntax of the \LangXGlobal{} language are shown in figures~\ref{fig:x86-2-concrete} and \ref{fig:x86-2}. It differs from \LangXIf{} only in the addition of global variables. % Figure~\ref{fig:select-instr-output-gc} shows the output of the \code{select\_instructions} pass on the running example. \begin{figure}[tbp] \centering \begin{tcolorbox}[colback=white] {\if\edition\racketEd % tests/s2_17.rkt \begin{tabular}{lll} \begin{minipage}{0.5\textwidth} \begin{lstlisting}[basicstyle=\ttfamily\scriptsize] start: tmp9 = (global-value free_ptr); tmp0 = (+ tmp9 16); tmp1 = (global-value fromspace_end); if (< tmp0 tmp1) goto block0; else goto block1; block0: _4 = (void); goto block9; block1: (collect 16) goto block9; block9: alloc2 = (allocate 1 (Vector Integer)); _3 = (vector-set! alloc2 0 42); vecinit6 = alloc2; tmp2 = (global-value free_ptr); tmp3 = (+ tmp2 16); tmp4 = (global-value fromspace_end); if (< tmp3 tmp4) goto block7; else goto block8; block7: _8 = (void); goto block6; block8: (collect 16) goto block6; block6: alloc5 = (allocate 1 (Vector (Vector Integer))); _7 = (vector-set! 
alloc5 0 vecinit6); tmp5 = (vector-ref alloc5 0); return (vector-ref tmp5 0); \end{lstlisting} \end{minipage} &$\Rightarrow$& \begin{minipage}{0.4\textwidth} \begin{lstlisting}[basicstyle=\ttfamily\scriptsize] start: movq free_ptr(%rip), tmp9 movq tmp9, tmp0 addq $16, tmp0 movq fromspace_end(%rip), tmp1 cmpq tmp1, tmp0 jl block0 jmp block1 block0: movq $0, _4 jmp block9 block1: movq %r15, %rdi movq $16, %rsi callq collect jmp block9 block9: movq free_ptr(%rip), %r11 addq $16, free_ptr(%rip) movq $3, 0(%r11) movq %r11, alloc2 movq alloc2, %r11 movq $42, 8(%r11) movq $0, _3 movq alloc2, vecinit6 movq free_ptr(%rip), tmp2 movq tmp2, tmp3 addq $16, tmp3 movq fromspace_end(%rip), tmp4 cmpq tmp4, tmp3 jl block7 jmp block8 block7: movq $0, _8 jmp block6 block8: movq %r15, %rdi movq $16, %rsi callq collect jmp block6 block6: movq free_ptr(%rip), %r11 addq $16, free_ptr(%rip) movq $131, 0(%r11) movq %r11, alloc5 movq alloc5, %r11 movq vecinit6, 8(%r11) movq $0, _7 movq alloc5, %r11 movq 8(%r11), tmp5 movq tmp5, %r11 movq 8(%r11), %rax jmp conclusion \end{lstlisting} \end{minipage} \end{tabular} \fi} {\if\edition\pythonEd % tests/tuple/get_get.py \begin{tabular}{lll} \begin{minipage}{0.5\textwidth} \begin{lstlisting}[basicstyle=\ttfamily\scriptsize] start: init.514 = 42 tmp.517 = free_ptr tmp.518 = (tmp.517 + 16) tmp.519 = fromspace_end if tmp.518 < tmp.519: goto block.529 else: goto block.530 block.529: goto block.528 block.530: collect(16) goto block.528 block.528: alloc.513 = allocate(1,tuple[int]) alloc.513:tuple[int][0] = init.514 v1 = alloc.513 init.516 = v1 tmp.520 = free_ptr tmp.521 = (tmp.520 + 16) tmp.522 = fromspace_end if tmp.521 < tmp.522: goto block.526 else: goto block.527 block.526: goto block.525 block.527: collect(16) goto block.525 block.525: alloc.515 = allocate(1,tuple[tuple[int]]) alloc.515:tuple[tuple[int]][0] = init.516 v2 = alloc.515 tmp.523 = v2[0] tmp.524 = tmp.523[0] print(tmp.524) return 0 \end{lstlisting} \end{minipage} &$\Rightarrow$& \begin{minipage}{0.4\textwidth} \begin{lstlisting}[basicstyle=\ttfamily\scriptsize] start: movq $42, init.514 movq free_ptr(%rip), tmp.517 movq tmp.517, tmp.518 addq $16, tmp.518 movq fromspace_end(%rip), tmp.519 cmpq tmp.519, tmp.518 jl block.529 jmp block.530 block.529: jmp block.528 block.530: movq %r15, %rdi movq $16, %rsi callq collect jmp block.528 block.528: movq free_ptr(%rip), %r11 addq $16, free_ptr(%rip) movq $3, 0(%r11) movq %r11, alloc.513 movq alloc.513, %r11 movq init.514, 8(%r11) movq alloc.513, v1 movq v1, init.516 movq free_ptr(%rip), tmp.520 movq tmp.520, tmp.521 addq $16, tmp.521 movq fromspace_end(%rip), tmp.522 cmpq tmp.522, tmp.521 jl block.526 jmp block.527 block.526: jmp block.525 block.527: movq %r15, %rdi movq $16, %rsi callq collect jmp block.525 block.525: movq free_ptr(%rip), %r11 addq $16, free_ptr(%rip) movq $131, 0(%r11) movq %r11, alloc.515 movq alloc.515, %r11 movq init.516, 8(%r11) movq alloc.515, v2 movq v2, %r11 movq 8(%r11), %r11 movq %r11, tmp.523 movq tmp.523, %r11 movq 8(%r11), %r11 movq %r11, tmp.524 movq tmp.524, %rdi callq print_int movq $0, %rax jmp conclusion \end{lstlisting} \end{minipage} \end{tabular} \fi} \end{tcolorbox} \caption{Output of \code{explicate\_control} (\emph{left}) and \code{select\_instructions} (\emph{right}) on the running example.} \label{fig:select-instr-output-gc} \end{figure} \clearpage \section{Register Allocation} \label{sec:reg-alloc-gc} \index{subject}{register allocation} As discussed previously in this chapter, the garbage collector needs to access all the 
pointers in the root set, that is, all variables that are tuples. It will be
the responsibility of the register allocator to make sure that
\begin{enumerate}
\item the root stack is used for spilling tuple-typed variables, and
\item a tuple-typed variable that is live during a call to the collector is
spilled, to ensure that it is visible to the collector.
\end{enumerate}
The latter responsibility can be handled during construction of the
interference graph, by adding interference edges between the call-live
tuple-typed variables and all the callee-saved registers. (They already
interfere with the caller-saved registers.)
%
\racket{The type information for variables is in the \code{Program} form, so
we recommend adding another parameter to the \code{build\_interference}
function to communicate this alist.}
%
\python{The type information for variables is generated by the type checker
for \LangCVec{}, stored in a field named \code{var\_types} in the
\code{CProgram} AST node. You'll need to propagate that information so that
it is available in this pass.}
The spilling of tuple-typed variables to the root stack can be handled after
graph coloring, in choosing how to assign the colors (integers) to registers
and stack locations. The \racket{\code{Program}}\python{\code{CProgram}}
output of this pass changes to also record the number of spills to the root
stack.
% build-interference
%
% callq
% extra parameter for var->type assoc. list
% update 'program' and 'if'
% allocate-registers
% allocate spilled vectors to the rootstack
% don't change color-graph
% TODO:
%\section{Patch Instructions}
%[mention that global variables are memory references]
\section{Prelude and Conclusion}
\label{sec:print-x86-gc}
\label{sec:prelude-conclusion-x86-gc}
\index{subject}{prelude}\index{subject}{conclusion}
Figure~\ref{fig:print-x86-output-gc} shows the output of the
\code{prelude\_and\_conclusion} pass on the running example. In the prelude
of the \code{main} function, we allocate space on the root stack to make
room for the spills of tuple-typed variables. We do so by incrementing the
root stack pointer (\code{r15}), taking care that the root stack grows up
instead of down. For the running example, there is just one spill, so we
increment \code{r15} by 8 bytes. In the conclusion we subtract 8 bytes from
\code{r15}.
One issue that deserves special care is that there may be a call to
\code{collect} prior to the initializing assignments for all the variables
in the root stack. We do not want the garbage collector to mistakenly
determine that some uninitialized variable is a pointer that needs to be
followed. Thus, we zero out all locations on the root stack in the prelude
of \code{main}. In figure~\ref{fig:print-x86-output-gc}, the instruction
%
\lstinline{movq $0, 0(%r15)}
%
is sufficient to accomplish this task because there is only one spill. In
general, we have to clear as many words as there are spills of tuple-typed
variables. The garbage collector tests each root to see if it is null prior
to dereferencing it.
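{\if\edition\pythonEd\pythonColor
The following sketch (our own helper functions, not part of the support
code) shows one way to generate just the root-stack portion of the prelude
and conclusion, given the number of root-stack spills. It emits the
instructions as plain strings; in your compiler you would instead build the
corresponding x86 AST nodes, and the sizes passed to \code{initialize} here
are merely default choices.
\begin{lstlisting}
def rootstack_prelude(num_root_spills, rootstack_size=65536, heap_size=65536):
    lines = [f'movq ${rootstack_size}, %rdi',
             f'movq ${heap_size}, %rsi',
             'callq initialize',
             'movq rootstack_begin(%rip), %r15']
    # Zero out one 8-byte slot per root-stack spill so that the collector
    # never follows an uninitialized root.
    for i in range(num_root_spills):
        lines.append(f'movq $0, {8 * i}(%r15)')
    # Bump r15 past the slots; the root stack grows upward.
    lines.append(f'addq ${8 * num_root_spills}, %r15')
    return lines

def rootstack_conclusion(num_root_spills):
    # Undo the bump performed in the prelude.
    return [f'subq ${8 * num_root_spills}, %r15']

print('\n'.join(rootstack_prelude(1)))
\end{lstlisting}
For one spill, this produces the same root-stack manipulation as the prelude
and conclusion shown in figure~\ref{fig:print-x86-output-gc}.
\fi}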
\begin{figure}[htbp] \begin{tcolorbox}[colback=white] {\if\edition\racketEd \begin{minipage}[t]{0.5\textwidth} \begin{lstlisting}[basicstyle=\ttfamily\footnotesize] .globl main main: pushq %rbp movq %rsp, %rbp subq $0, %rsp movq $65536, %rdi movq $65536, %rsi callq initialize movq rootstack_begin(%rip), %r15 movq $0, 0(%r15) addq $8, %r15 jmp start conclusion: subq $8, %r15 addq $0, %rsp popq %rbp retq \end{lstlisting} \end{minipage} \fi} {\if\edition\pythonEd \begin{minipage}[t]{0.5\textwidth} \begin{lstlisting}[basicstyle=\ttfamily\footnotesize] .globl main main: pushq %rbp movq %rsp, %rbp pushq %rbx subq $8, %rsp movq $65536, %rdi movq $16, %rsi callq initialize movq rootstack_begin(%rip), %r15 movq $0, 0(%r15) addq $8, %r15 jmp start conclusion: subq $8, %r15 addq $8, %rsp popq %rbx popq %rbp retq \end{lstlisting} \end{minipage} \fi} \end{tcolorbox} \caption{The prelude and conclusion for the running example.} \label{fig:print-x86-output-gc} \end{figure} \begin{figure}[tbp] \begin{tcolorbox}[colback=white] {\if\edition\racketEd \begin{tikzpicture}[baseline=(current bounding box.center),scale=0.90] \node (Lvec) at (0,2) {\large \LangVec{}}; \node (Lvec-2) at (3,2) {\large \LangVec{}}; \node (Lvec-3) at (6,2) {\large \LangVec{}}; \node (Lvec-4) at (10,2) {\large \LangAlloc{}}; \node (Lvec-5) at (10,0) {\large \LangAlloc{}}; \node (Lvec-6) at (5,0) {\large \LangAllocANF{}}; \node (C2-4) at (0,0) {\large \LangCVec{}}; \node (x86-2) at (0,-2) {\large \LangXGlobalVar{}}; \node (x86-2-1) at (0,-4) {\large \LangXGlobalVar{}}; \node (x86-2-2) at (4,-4) {\large \LangXGlobalVar{}}; \node (x86-3) at (4,-2) {\large \LangXGlobalVar{}}; \node (x86-4) at (8,-2) {\large \LangXGlobal{}}; \node (x86-5) at (8,-4) {\large \LangXGlobal{}}; \path[->,bend left=15] (Lvec) edge [above] node {\ttfamily\footnotesize shrink} (Lvec-2); \path[->,bend left=15] (Lvec-2) edge [above] node {\ttfamily\footnotesize uniquify} (Lvec-3); \path[->,bend left=15] (Lvec-3) edge [above] node {\ttfamily\footnotesize expose\_allocation} (Lvec-4); \path[->,bend left=15] (Lvec-4) edge [right] node {\ttfamily\footnotesize uncover\_get!} (Lvec-5); \path[->,bend left=10] (Lvec-5) edge [below] node {\ttfamily\footnotesize remove\_complex\_operands} (Lvec-6); \path[->,bend right=10] (Lvec-6) edge [above] node {\ttfamily\footnotesize explicate\_control} (C2-4); \path[->,bend left=15] (C2-4) edge [right] node {\ttfamily\footnotesize select\_instructions} (x86-2); \path[->,bend right=15] (x86-2) edge [right] node {\ttfamily\footnotesize uncover\_live} (x86-2-1); \path[->,bend right=15] (x86-2-1) edge [below] node {\ttfamily\footnotesize build\_interference} (x86-2-2); \path[->,bend right=15] (x86-2-2) edge [right] node {\ttfamily\footnotesize allocate\_registers} (x86-3); \path[->,bend left=10] (x86-3) edge [above] node {\ttfamily\footnotesize patch\_instructions} (x86-4); \path[->,bend left=15] (x86-4) edge [right] node {\ttfamily\footnotesize prelude\_and\_conclusion} (x86-5); \end{tikzpicture} \fi} {\if\edition\pythonEd\pythonColor \begin{tikzpicture}[baseline=(current bounding box.center),scale=0.85] \node (Lvec) at (0,2) {\large \LangVec{}}; \node (Lvec-2) at (4,2) {\large \LangVec{}}; \node (Lvec-5) at (8,2) {\large \LangAlloc{}}; \node (Lvec-6) at (12,2) {\large \LangAllocANF{}}; \node (C2-4) at (0,0) {\large \LangCVec{}}; \node (x86-2) at (0,-2) {\large \LangXGlobalVar{}}; \node (x86-3) at (4,-2) {\large \LangXGlobalVar{}}; \node (x86-4) at (8,-2) {\large \LangXGlobal{}}; \node (x86-5) at (12,-2) {\large \LangXGlobal{}}; 
\path[->,bend left=15] (Lvec) edge [above] node {\ttfamily\footnotesize shrink} (Lvec-2); \path[->,bend left=15] (Lvec-2) edge [above] node {\ttfamily\footnotesize expose\_allocation} (Lvec-5); \path[->,bend left=15] (Lvec-5) edge [above] node {\ttfamily\footnotesize remove\_complex\_operands} (Lvec-6); \path[->,bend left=10] (Lvec-6) edge [right] node {\ttfamily\footnotesize \ \ \ explicate\_control} (C2-4); \path[->,bend left=15] (C2-4) edge [right] node {\ttfamily\footnotesize select\_instructions} (x86-2); \path[->,bend right=15] (x86-2) edge [below] node {\ttfamily\footnotesize assign\_homes} (x86-3); \path[->,bend left=15] (x86-3) edge [above] node {\ttfamily\footnotesize patch\_instructions} (x86-4); \path[->,bend right=15] (x86-4) edge [below] node {\ttfamily\footnotesize prelude\_and\_conclusion} (x86-5); \end{tikzpicture} \fi} \end{tcolorbox} \caption{Diagram of the passes for \LangVec{}, a language with tuples.} \label{fig:Lvec-passes} \end{figure} Figure~\ref{fig:Lvec-passes} gives an overview of all the passes needed for the compilation of \LangVec{}. \clearpage {\if\edition\racketEd \section{Challenge: Simple Structures} \label{sec:simple-structures} \index{subject}{struct} \index{subject}{structure} The language \LangStruct{} extends \LangVec{} with support for simple structures. The definition of its concrete syntax is shown in figure~\ref{fig:Lstruct-concrete-syntax}, and the abstract syntax is shown in figure~\ref{fig:Lstruct-syntax}. Recall that a \code{struct} in Typed Racket is a user-defined data type that contains named fields and that is heap allocated\index{subject}{heap allocated}, similarly to a vector. The following is an example of a structure definition, in this case the definition of a \code{point} type: \begin{lstlisting} (struct point ([x : Integer] [y : Integer]) #:mutable) \end{lstlisting} \newcommand{\LstructGrammarRacket}{ \begin{array}{lcl} \Type &::=& \Var \\ \Exp &::=& (\Var\;\Exp \ldots)\\ \Def &::=& (\key{struct}\; \Var \; ([\Var \,\key{:}\, \Type] \ldots)\; \code{\#:mutable})\\ \end{array} } \newcommand{\LstructASTRacket}{ \begin{array}{lcl} \Type &::=& \VAR{\Var} \\ \Exp &::=& \APPLY{\Var}{\Exp\ldots} \\ \Def &::=& \LP\key{StructDef}\; \Var \; \LP\LS\Var \,\key{:}\, \Type\RS \ldots\RP\RP \end{array} } \begin{figure}[tbp] \centering \begin{tcolorbox}[colback=white] \[ \begin{array}{l} \gray{\LintGrammarRacket{}} \\ \hline \gray{\LvarGrammarRacket{}} \\ \hline \gray{\LifGrammarRacket{}} \\ \hline \gray{\LwhileGrammarRacket} \\ \hline \gray{\LtupGrammarRacket} \\ \hline \LstructGrammarRacket \\ \begin{array}{lcl} \LangStruct{} &::=& \Def \ldots \; \Exp \end{array} \end{array} \] \end{tcolorbox} \caption{The concrete syntax of \LangStruct{}, extending \LangVec{} (figure~\ref{fig:Lvec-concrete-syntax}).} \label{fig:Lstruct-concrete-syntax} \end{figure} \begin{figure}[tbp] \centering \begin{tcolorbox}[colback=white] \small \[ \begin{array}{l} \gray{\LintASTRacket{}} \\ \hline \gray{\LvarASTRacket{}} \\ \hline \gray{\LifASTRacket{}} \\ \hline \gray{\LwhileASTRacket} \\ \hline \gray{\LtupASTRacket} \\ \hline \LstructASTRacket \\ \begin{array}{lcl} \LangStruct{} &::=& \PROGRAMDEFSEXP{\code{'()}}{\LP\Def\ldots\RP)}{\Exp} \end{array} \end{array} \] \end{tcolorbox} \caption{The abstract syntax of \LangStruct{}, extending \LangVec{} (figure~\ref{fig:Lvec-syntax}).} \label{fig:Lstruct-syntax} \end{figure} An instance of a structure is created using function-call syntax, with the name of the structure in the function position, as follows: \begin{lstlisting} 
(point 7 12) \end{lstlisting} Function-call syntax is also used to read a field of a structure. The function name is formed by the structure name, a dash, and the field name. The following example uses \code{point-x} and \code{point-y} to access the \code{x} and \code{y} fields of two point instances: \begin{center} \begin{lstlisting} (let ([pt1 (point 7 12)]) (let ([pt2 (point 4 3)]) (+ (- (point-x pt1) (point-x pt2)) (- (point-y pt1) (point-y pt2))))) \end{lstlisting} \end{center} Similarly, to write to a field of a structure, use its set function, whose name starts with \code{set-}, followed by the structure name, then a dash, then the field name, and finally an exclamation mark. The following example uses \code{set-point-x!} to change the \code{x} field from \code{7} to \code{42}: \begin{center} \begin{lstlisting} (let ([pt (point 7 12)]) (let ([_ (set-point-x! pt 42)]) (point-x pt))) \end{lstlisting} \end{center} \begin{exercise}\normalfont\normalsize Create a type checker for \LangStruct{} by extending the type checker for \LangVec{}. Extend your compiler with support for simple structures, compiling \LangStruct{} to x86 assembly code. Create five new test cases that use structures, and test your compiler. \end{exercise} % TODO: create an interpreter for L_struct \clearpage \fi} \section{Challenge: Arrays} \label{sec:arrays} % TODO mention trapped-error In this chapter we have studied tuples, that is, heterogeneous sequences of elements whose length is determined at compile time. This challenge is also about sequences, but this time the length is determined at runtime and all the elements have the same type (they are homogeneous). We use the term \emph{array} for this latter kind of sequence. % \racket{ The Racket language does not distinguish between tuples and arrays; they are both represented by vectors. However, Typed Racket distinguishes between tuples and arrays: the \code{Vector} type is for tuples, and the \code{Vectorof} type is for arrays.}% \python{Arrays correspond to the \code{list} type in the Python language.} Figure~\ref{fig:Lvecof-concrete-syntax} presents the definition of the concrete syntax for \LangArray{}, and figure~\ref{fig:Lvecof-syntax} presents the definition of the abstract syntax, extending \LangVec{} with the \racket{\code{Vectorof}}\python{\code{list}} type and the \racket{\code{make-vector} primitive operator for creating an array, whose arguments are the length of the array and an initial value for all the elements in the array.}% \python{bracket notation for creating an array literal.} \racket{The \code{vector-length}, \code{vector-ref}, and \code{vector-set!} operators that we defined for tuples become overloaded for use with arrays.} \python{ The subscript operator becomes overloaded for use with arrays and tuples and now may appear on the left-hand side of an assignment. Note that the index of the subscript, when applied to an array, may be an arbitrary expression and not exclusively a constant integer. The \code{len} function is also applicable to arrays. } % We include integer multiplication in \LangArray{} because it is useful in many examples involving arrays, such as computing the inner product of two arrays (figure~\ref{fig:inner_product}).
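{\if\edition\pythonEd\pythonColor
For example, the following short \LangArray{} program (included here just for illustration) exercises the new features: a list literal, a subscript with a nonconstant index, a subscript on the left-hand side of an assignment, \code{len}, and multiplication.
\begin{lstlisting}
A = [5, 5, 5]
i = 1
A[i + 1] = 7
print(A[i] * A[i + 1] + len(A))
\end{lstlisting}
The assignment writes \code{7} into the last element, so the program prints \code{38}.
\fi}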
\newcommand{\LarrayGrammarRacket}{ \begin{array}{lcl} \Type &::=& \LP \key{Vectorof}~\Type \RP \\ \Exp &::=& \CMUL{\Exp}{\Exp} \MID \CMAKEVEC{\Exp}{\Exp} \end{array} } \newcommand{\LarrayASTRacket}{ \begin{array}{lcl} \Type &::=& \LP \key{Vectorof}~\Type \RP \\ \Exp &::=& \MUL{\Exp}{\Exp} \MID \MAKEVEC{\Exp}{\Exp} \end{array} } \newcommand{\LarrayGrammarPython}{ \begin{array}{lcl} \Type &::=& \key{list}\LS\Type\RS \\ \Exp &::=& \CMUL{\Exp}{\Exp} \MID \CGET{\Exp}{\Exp} \MID \LS \Exp \code{,} \ldots \RS \\ \Stmt &::= & \CGET{\Exp}{\Exp} \mathop{\key{=}}\Exp \end{array} } \newcommand{\LarrayASTPython}{ \begin{array}{lcl} \Type &::=& \key{ListType}\LP\Type\RP \\ \Exp &::=& \MUL{\Exp}{\Exp} \MID \GET{\Exp}{\Exp} \\ &\MID& \key{List}\LP \Exp \code{,} \ldots \code{, } \code{Load()} \RP \\ \Stmt &::= & \ASSIGN{ \PUT{\Exp}{\Exp} }{\Exp} \end{array} } \begin{figure}[tp] \centering \begin{tcolorbox}[colback=white] \small {\if\edition\racketEd \[ \begin{array}{l} \gray{\LintGrammarRacket{}} \\ \hline \gray{\LvarGrammarRacket{}} \\ \hline \gray{\LifGrammarRacket{}} \\ \hline \gray{\LwhileGrammarRacket} \\ \hline \gray{\LtupGrammarRacket} \\ \hline \LarrayGrammarRacket \\ \begin{array}{lcl} \LangArray{} &::=& \Exp \end{array} \end{array} \] \fi} {\if\edition\pythonEd\pythonColor \[ \begin{array}{l} \gray{\LintGrammarPython{}} \\ \hline \gray{\LvarGrammarPython{}} \\ \hline \gray{\LifGrammarPython{}} \\ \hline \gray{\LwhileGrammarPython} \\ \hline \gray{\LtupGrammarPython} \\ \hline \LarrayGrammarPython \\ \begin{array}{rcl} \LangArrayM{} &::=& \Stmt^{*} \end{array} \end{array} \] \fi} \end{tcolorbox} \caption{The concrete syntax of \LangArray{}, extending \LangVec{} (figure~\ref{fig:Lvec-concrete-syntax}).} \label{fig:Lvecof-concrete-syntax} \end{figure} \begin{figure}[tp] \centering \begin{tcolorbox}[colback=white] \small {\if\edition\racketEd \[ \begin{array}{l} \gray{\LintASTRacket{}} \\ \hline \gray{\LvarASTRacket{}} \\ \hline \gray{\LifASTRacket{}} \\ \hline \gray{\LwhileASTRacket} \\ \hline \gray{\LtupASTRacket} \\ \hline \LarrayASTRacket \\ \begin{array}{lcl} \LangArray{} &::=& \Exp \end{array} \end{array} \] \fi} {\if\edition\pythonEd\pythonColor \[ \begin{array}{l} \gray{\LintASTPython{}} \\ \hline \gray{\LvarASTPython{}} \\ \hline \gray{\LifASTPython{}} \\ \hline \gray{\LwhileASTPython} \\ \hline \gray{\LtupASTPython} \\ \hline \LarrayASTPython \\ \begin{array}{rcl} \LangArrayM{} &::=& \Stmt^{*} \end{array} \end{array} \] \fi} \end{tcolorbox} \caption{The abstract syntax of \LangArray{}, extending \LangVec{} (figure~\ref{fig:Lvec-syntax}).} \label{fig:Lvecof-syntax} \end{figure} \begin{figure}[tp] \begin{tcolorbox}[colback=white] {\if\edition\racketEd % TODO: remove the function from the following example, like the python version -Jeremy \begin{lstlisting} (let ([A (make-vector 2 2)]) (let ([B (make-vector 2 3)]) (let ([i 0]) (let ([prod 0]) (begin (while (< i (vector-length A)) (begin (set! prod (+ prod (* (vector-ref A i) (vector-ref B i)))) (set! i (+ i 1)))) prod))))) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} A = [2, 2] B = [3, 3] i = 0 prod = 0 while i != len(A): prod = prod + A[i] * B[i] i = i + 1 print(prod) \end{lstlisting} \fi} \end{tcolorbox} \caption{Example program that computes the inner product.} \label{fig:inner_product} \end{figure} {\if\edition\racketEd % Figure~\ref{fig:type-check-Lvecof} shows the definition of the type checker for \LangArray{}.
The result type of \code{make-vector} is \code{(Vectorof T)}, where \code{T} is the type of the initializing expression. The length expression is required to have type \code{Integer}. The type checking of the operators \code{vector-length}, \code{vector-ref}, and \code{vector-set!} is updated to handle the situation in which the vector has type \code{Vectorof}. In these cases we translate the operators to their \code{vectorof} form so that later passes can easily distinguish between operations on tuples versus arrays. We override the \code{operator-types} method to provide the type signature for multiplication: it takes two integers and returns an integer. \fi} {\if\edition\pythonEd\pythonColor % The type checker for \LangArray{} is defined in figure~\ref{fig:type-check-Lvecof}. The result type of a list literal is \code{list[T]}, where \code{T} is the type of the initializing expressions. The type checking of the \code{len} function and the subscript operator are updated to handle lists. The type checker now also handles a subscript on the left-hand side of an assignment. Regarding multiplication, it takes two integers and returns an integer. % \fi} \begin{figure}[tbp] \begin{tcolorbox}[colback=white] {\if\edition\racketEd \begin{lstlisting}[basicstyle=\ttfamily\footnotesize] (define type-check-Lvecof-class (class type-check-Lvec-class (super-new) (inherit check-type-equal?) (define/override (operator-types) (append '((* . ((Integer Integer) . Integer))) (super operator-types))) (define/override (type-check-exp env) (lambda (e) (define recur (type-check-exp env)) (match e [(Prim 'make-vector (list e1 e2)) (define-values (e1^ t1) (recur e1)) (define-values (e2^ elt-type) (recur e2)) (define vec-type `(Vectorof ,elt-type)) (values (Prim 'make-vector (list e1^ e2^)) vec-type)] [(Prim 'vector-ref (list e1 e2)) (define-values (e1^ t1) (recur e1)) (define-values (e2^ t2) (recur e2)) (match* (t1 t2) [(`(Vectorof ,elt-type) 'Integer) (values (Prim 'vectorof-ref (list e1^ e2^)) elt-type)] [(other wise) ((super type-check-exp env) e)])] [(Prim 'vector-set! (list e1 e2 e3) ) (define-values (e-vec t-vec) (recur e1)) (define-values (e2^ t2) (recur e2)) (define-values (e-arg^ t-arg) (recur e3)) (match t-vec [`(Vectorof ,elt-type) (check-type-equal? elt-type t-arg e) (values (Prim 'vectorof-set! 
(list e-vec e2^ e-arg^)) 'Void)] [else ((super type-check-exp env) e)])] [(Prim 'vector-length (list e1)) (define-values (e1^ t1) (recur e1)) (match t1 [`(Vectorof ,t) (values (Prim 'vectorof-length (list e1^)) 'Integer)] [else ((super type-check-exp env) e)])] [else ((super type-check-exp env) e)]))) )) (define (type-check-Lvecof p) (send (new type-check-Lvecof-class) type-check-program p)) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting}[basicstyle=\ttfamily\footnotesize] class TypeCheckLarray(TypeCheckLtup): def type_check_exp(self, e, env): match e: case ast.List(es, Load()): ts = [self.type_check_exp(e, env) for e in es] elt_ty = ts[0] for (ty, elt) in zip(ts, es): self.check_type_equal(elt_ty, ty, elt) e.has_type = ListType(elt_ty) return e.has_type case Call(Name('len'), [tup]): tup_t = self.type_check_exp(tup, env) tup.has_type = tup_t match tup_t: case TupleType(ts): return IntType() case ListType(ty): return IntType() case _: raise Exception('len expected a tuple, not ' + repr(tup_t)) case Subscript(tup, index, Load()): tup_ty = self.type_check_exp(tup, env) index_ty = self.type_check_exp(index, env) self.check_type_equal(index_ty, IntType(), index) match tup_ty: case TupleType(ts): match index: case Constant(i): return ts[i] case _: raise Exception('subscript required constant integer index') case ListType(ty): return ty case _: raise Exception('subscript expected a tuple, not ' + repr(tup_ty)) case BinOp(left, Mult(), right): l = self.type_check_exp(left, env) self.check_type_equal(l, IntType(), left) r = self.type_check_exp(right, env) self.check_type_equal(r, IntType(), right) return IntType() case _: return super().type_check_exp(e, env) \end{lstlisting} \fi} \end{tcolorbox} \caption{Type checker for the \LangArray{} language\python{, part 1}.} \label{fig:type-check-Lvecof} \end{figure} {\if\edition\pythonEd \begin{figure}[tbp] \begin{tcolorbox}[colback=white] \begin{lstlisting}[basicstyle=\ttfamily\footnotesize] def type_check_stmts(self, ss, env): if len(ss) == 0: return VoidType() match ss[0]: case Assign([Subscript(tup, index, Store())], value): tup_t = self.type_check_exp(tup, env) value_t = self.type_check_exp(value, env) index_ty = self.type_check_exp(index, env) self.check_type_equal(index_ty, IntType(), index) match tup_t: case ListType(ty): self.check_type_equal(ty, value_t, ss[0]) case TupleType(ts): return self.type_check_stmts(ss, env) case _: raise Exception('type_check_stmts: ' 'expected tuple or list, not ' + repr(tup_t)) return self.type_check_stmts(ss[1:], env) case _: return super().type_check_stmts(ss, env) \end{lstlisting} \end{tcolorbox} \caption{Type checker for the \LangArray{} language, part 2.} \label{fig:type-check-Lvecof-part2} \end{figure} \fi} The definition of the interpreter for \LangArray{} is shown in \racket{figure~\ref{fig:interp-Lvecof}} \python{figures~\ref{fig:interp-Lvecof} and \ref{fig:type-check-Lvecof-part2}}. \racket{The \code{make-vector} operator is interpreted using Racket's \code{make-vector} function, and multiplication is interpreted using \code{fx*}, which is multiplication for \code{fixnum} integers. In the \code{resolve} pass (section~\ref{sec:array-resolution}) we translate array access operations into \code{vectorof-ref} and \code{vectorof-set!} operations, which we interpret using \code{vector} operations with additional bounds checks that signal a \code{trapped-error}. 
} % \python{We implement list creation with a Python list comprehension, and multiplication is implemented with 64-bit multiplication. We add a case to handle a subscript on the left-hand side of assignment. Other uses of subscript can be handled by the existing code for tuples.} \begin{figure}[tbp] \begin{tcolorbox}[colback=white] {\if\edition\racketEd \begin{lstlisting}[basicstyle=\ttfamily\footnotesize] (define interp-Lvecof-class (class interp-Lvec-class (super-new) (define/override (interp-op op) (match op ['make-vector make-vector] ['vectorof-length vector-length] ['vectorof-ref (lambda (v i) (if (< i (vector-length v)) (vector-ref v i) (error 'trapped-error "index ~a out of bounds\nin ~v" i v)))] ['vectorof-set! (lambda (v i e) (if (< i (vector-length v)) (vector-set! v i e) (error 'trapped-error "index ~a out of bounds\nin ~v" i v)))] [else (super interp-op op)])) )) (define (interp-Lvecof p) (send (new interp-Lvecof-class) interp-program p)) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting}[basicstyle=\ttfamily\footnotesize] class InterpLarray(InterpLtup): def interp_exp(self, e, env): match e: case ast.List(es, Load()): return [self.interp_exp(e, env) for e in es] case BinOp(left, Mult(), right): l = self.interp_exp(left, env) r = self.interp_exp(right, env) return mul64(l, r) case Subscript(tup, index, Load()): t = self.interp_exp(tup, env) n = self.interp_exp(index, env) if n < len(t): return t[n] else: raise TrappedError('array index out of bounds') case _: return super().interp_exp(e, env) def interp_stmt(self, s, env, cont): match s: case Assign([Subscript(tup, index)], value): t = self.interp_exp(tup, env) n = self.interp_exp(index, env) if n < len(t): t[n] = self.interp_exp(value, env) else: raise TrappedError('array index out of bounds') return self.interp_stmts(cont, env) case _: return super().interp_stmt(s, env, cont) \end{lstlisting} \fi} \end{tcolorbox} \caption{Interpreter for \LangArray{}.} \label{fig:interp-Lvecof} \end{figure} \subsection{Data Representation} \label{sec:array-rep} Just as with tuples, we store arrays on the heap, which means that the garbage collector will need to inspect arrays. An immediate thought is to use the same representation for arrays that we use for tuples. However, we limit tuples to a length of fifty so that their length and pointer mask can fit into the 64-bit tag at the beginning of each tuple (section~\ref{sec:data-rep-gc}). We intend arrays to allow millions of elements, so we need more bits to store the length. However, because arrays are homogeneous, we need only 1 bit for the pointer mask instead of 1 bit per array element. Finally, the garbage collector must be able to distinguish between tuples and arrays, so we need to reserve one bit for that purpose. We arrive at the following layout for the 64-bit tag at the beginning of an array: \begin{itemize} \item The right-most bit is the forwarding bit, just as in a tuple. A $0$ indicates that it is a forwarding pointer, and a $1$ indicates that it is not. \item The next bit to the left is the pointer mask. A $0$ indicates that none of the elements are pointers to the heap, and a $1$ indicates that all the elements are pointers. \item The next $60$ bits store the length of the array. \item The bit at position $62$ distinguishes between a tuple ($0$) and an array ($1$). \item The left-most bit is reserved as explained in chapter~\ref{ch:Lgrad}. 
\end{itemize} %% Recall that in chapter~\ref{ch:Ldyn}, we use a $3$-bit tag to %% differentiate the kinds of values that have been injected into the %% \code{Any} type. We use the bit pattern \code{110} (or $6$ in decimal) %% to indicate that the value is an array. In the following subsections we provide hints regarding how to update the passes to handle arrays. \subsection{Overload Resolution} \label{sec:array-resolution} As noted previously, with the addition of arrays, several operators have become \emph{overloaded}; that is, they can be applied to values of more than one type. In this case, the element access and length operators can be applied to both tuples and arrays. This kind of overloading is quite common in programming languages, so many compilers perform \emph{overload resolution}\index{subject}{overload resolution} to handle it. The idea is to translate each overloaded operator into different operators for the different types. Implement a new pass named \code{resolve}. Translate the reading of an array element into a call to \racket{\code{vectorof-ref}}\python{\code{array\_load}} and the writing of an array element into a call to \racket{\code{vectorof-set!}}\python{\code{array\_store}}. Translate calls to \racket{\code{vector-length}}\python{\code{len}} into \racket{\code{vectorof-length}}\python{\code{array\_len}}. When these operators are applied to tuples, leave them as is. % \python{The type checker for \LangArray{} adds a \code{has\_type} field, which can be inspected to determine whether the operator is applied to a tuple or an array.} \subsection{Bounds Checking} Recall that the interpreter for \LangArray{} signals a \code{trapped-error} when there is an array access that is out of bounds. Therefore your compiler is obliged to also catch these errors during execution and halt, signaling an error. We recommend inserting a new pass named \code{check\_bounds} that inserts code around each \racket{\code{vectorof-ref} and \code{vectorof-set!}} \python{subscript} operation to ensure that the index is greater than or equal to zero and less than the array's length. If not, the program should halt, for which we recommend using a new primitive operation named \code{exit}. %% \subsection{Reveal Casts} %% The array-access operators \code{vectorof-ref} and %% \code{vectorof-set!} are similar to the \code{any-vector-ref} and %% \code{any-vector-set!} operators of chapter~\ref{ch:Ldyn} in %% that the type checker cannot tell whether the index will be in bounds, %% so the bounds check must be performed at run time. Recall that the %% \code{reveal-casts} pass (section~\ref{sec:reveal-casts-Rany}) wraps %% an \code{If} around a vector reference for update to check whether %% the index is less than the length. You should do the same for %% \code{vectorof-ref} and \code{vectorof-set!} . %% In addition, the handling of the \code{any-vector} operators in %% \code{reveal-casts} needs to be updated to account for arrays that are %% injected to \code{Any}. For the \code{any-vector-length} operator, the %% generated code should test whether the tag is for tuples (\code{010}) %% or arrays (\code{110}) and then dispatch to either %% \code{any-vector-length} or \code{any-vectorof-length}. For the latter %% we add a case in \code{select\_instructions} to generate the %% appropriate instructions for accessing the array length from the %% header of an array.
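{\if\edition\pythonEd\pythonColor
To make the intent of \code{check\_bounds} concrete, the following ordinary Python function (an illustration, not compiler code) models the guard that the pass inserts around an array read. The exit status of \code{255} matches the treatment of the \code{exit} primitive in the select-instructions hints below.
\begin{lstlisting}[basicstyle=\ttfamily\footnotesize]
import sys

def checked_load(A: list, i: int) -> int:
    # The index must be nonnegative and below the array length;
    # otherwise the program halts.
    if 0 <= i and i < len(A):
        return A[i]
    else:
        sys.exit(255)
\end{lstlisting}
Your pass inserts the analogous test around each subscript in the intermediate language, with the \code{exit} primitive in the failure branch.
\fi}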
%% For the \code{any-vector-ref} and \code{any-vector-set!} operators, %% the generated code needs to check that the index is less than the %% vector length, so like the code for \code{any-vector-length}, check %% the tag to determine whether to use \code{any-vector-length} or %% \code{any-vectorof-length} for this purpose. Once the bounds checking %% is complete, the generated code can use \code{any-vector-ref} and %% \code{any-vector-set!} for both tuples and arrays because the %% instructions used for those operators do not look at the tag at the %% front of the tuple or array. \subsection{Expose Allocation} This pass should translate array creation into lower-level operations. In particular, the new AST node \ALLOCARRAY{\Exp}{\Type} is analogous to the \code{Allocate} AST node for tuples. The $\Type$ argument must be \ARRAYTY{T}, where $T$ is the element type for the array. The \code{AllocateArray} AST node allocates an array of the length specified by the $\Exp$ (of type \INTTY), but does not initialize the elements of the array. Generate code in this pass to initialize the elements analogous to the case for tuples. {\if\edition\racketEd \subsection{Uncover \texttt{get!}} \label{sec:uncover-get-bang-vecof} Add cases for \code{AllocateArray} to \code{collect-set!} and \code{uncover-get!-exp}. \fi} \subsection{Remove Complex Operands} Add cases in the \code{rco\_atom} and \code{rco\_exp} for \code{AllocateArray}. In particular, an \code{AllocateArray} node is complex, and its subexpression must be atomic. \subsection{Explicate Control} Add cases for \code{AllocateArray} to \code{explicate\_tail} and \code{explicate\_assign}. \subsection{Select Instructions} \index{subject}{select instructions} Generate instructions for \code{AllocateArray} similar to those for \code{Allocate} given in section~\ref{sec:select-instructions-gc} except that the tag at the front of the array should instead use the representation discussed in section~\ref{sec:array-rep}. Regarding \racket{\code{vectorof-length}}\python{\code{array\_len}}, extract the length from the tag. The instructions generated for accessing an element of an array differ from those for a tuple (section~\ref{sec:select-instructions-gc}) in that the index is not a constant so you need to generate instructions that compute the offset at runtime. Compile the \code{exit} primitive into a call to the \code{exit} function of the C standard library, with an argument of $255$. %% Also, note that assignment to an array element may appear in %% as a stand-alone statement, so make sure to handle that situation in %% this pass. %% Finally, the instructions for \code{any-vectorof-length} should be %% similar to those for \code{vectorof-length}, except that one must %% first project the array by writing zeroes into the $3$-bit tag \begin{exercise}\normalfont\normalsize Implement a compiler for the \LangArray{} language by extending your compiler for \LangLoop{}. Test your compiler on a half dozen new programs, including the one shown in figure~\ref{fig:inner_product} and also a program that multiplies two matrices. Note that although matrices are two-dimensional arrays, they can be encoded into one-dimensional arrays by laying out each row in the array, one after the next. \end{exercise} {\if\edition\racketEd \section{Challenge: Generational Collection} The copying collector described in section~\ref{sec:GC} can incur significant runtime overhead because the call to \code{collect} takes time proportional to all the live data. 
One way to reduce this overhead is to reduce how much data is inspected in each call to \code{collect}. In particular, researchers have observed that recently allocated data is more likely to become garbage than data that has survived one or more previous calls to \code{collect}. This insight motivated the creation of \emph{generational garbage collectors} \index{subject}{generational garbage collector} that (1) segregate data according to its age into two or more generations; (2) allocate less space for younger generations, so collecting them is faster, and more space for the older generations; and (3) perform collection on the younger generations more frequently than on older generations~\citep{Wilson:1992fk}. For this challenge assignment, the goal is to adapt the copying collector implemented in \code{runtime.c} to use two generations, one for young data and one for old data. Each generation consists of a FromSpace and a ToSpace. The following is a sketch of how to adapt the \code{collect} function to use the two generations: \begin{enumerate} \item Copy the young generation's FromSpace to its ToSpace and then switch the role of the ToSpace and FromSpace. \item If there is enough space for the requested number of bytes in the young FromSpace, then return from \code{collect}. \item If there is not enough space in the young FromSpace for the requested bytes, then move the data from the young generation to the old one with the following steps: \begin{enumerate} \item[a.] If there is enough room in the old FromSpace, copy the young FromSpace to the old FromSpace and then return. \item[b.] If there is not enough room in the old FromSpace, then collect the old generation by copying the old FromSpace to the old ToSpace and swap the roles of the old FromSpace and ToSpace. \item[c.] If there is enough room now, copy the young FromSpace to the old FromSpace and return. Otherwise, allocate a larger FromSpace and ToSpace for the old generation. Copy the young FromSpace and the old FromSpace into the larger FromSpace for the old generation and then return. \end{enumerate} \end{enumerate} We recommend that you generalize the \code{cheney} function so that it can be used for all the copies mentioned: between the young FromSpace and ToSpace, between the old FromSpace and ToSpace, and between the young FromSpace and old FromSpace. This can be accomplished by adding parameters to \code{cheney} that replace its use of the global variables \code{fromspace\_begin}, \code{fromspace\_end}, \code{tospace\_begin}, and \code{tospace\_end}. Note that the collection of the young generation does not traverse the old generation. This introduces a potential problem: there may be young data that is reachable only through pointers in the old generation. If these pointers are not taken into account, the collector could throw away young data that is live! One solution, called \emph{pointer recording}, is to maintain a set of all the pointers from the old generation into the new generation and consider this set as part of the root set. To maintain this set, the compiler must insert extra instructions around every \code{vector-set!}. If the vector being modified is in the old generation, and if the value being written is a pointer into the new generation, then that pointer must be added to the set. Also, if the value being overwritten was a pointer into the new generation, then that pointer should be removed from the set.
\begin{exercise}\normalfont\normalsize Adapt the \code{collect} function in \code{runtime.c} to implement generational garbage collection, as outlined in this section. Update the code generation for \code{vector-set!} to implement pointer recording. Make sure that your new compiler and runtime execute without error on your test suite. \end{exercise} \fi} \section{Further Reading} \citet{Appel90} describes many data representation approaches including the ones used in the compilation of Standard ML. There are many alternatives to copying collectors (and their bigger siblings, the generational collectors) with regard to garbage collection, such as mark-and-sweep~\citep{McCarthy:1960dz} and reference counting~\citep{Collins:1960aa}. The strengths of copying collectors are that allocation is fast (just a comparison and pointer increment), there is no fragmentation, cyclic garbage is collected, and the time complexity of collection depends only on the amount of live data and not on the amount of garbage~\citep{Wilson:1992fk}. The main disadvantages of a two-space copying collector are that it uses a lot of extra space and takes a long time to perform the copy, though these problems are ameliorated in generational collectors. \racket{Racket}\python{Object-oriented} programs tend to allocate many small objects and generate a lot of garbage, so copying and generational collectors are a good fit\python{~\citep{Dieckmann99}}. Garbage collection is an active research topic, especially concurrent garbage collection~\citep{Tene:2011kx}. Researchers are continuously developing new techniques and revisiting old trade-offs~\citep{Blackburn:2004aa,Jones:2011aa,Shahriyar:2013aa,Cutler:2015aa,Shidal:2015aa,Osterlund:2016aa,Jacek:2019aa,Gamari:2020aa}. Researchers meet every year at the International Symposium on Memory Management to present these findings. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \chapter{Functions} \label{ch:Lfun} \index{subject}{function} \setcounter{footnote}{0} This chapter studies the compilation of a subset of \racket{Typed Racket}\python{Python} in which only top-level function definitions are allowed. This kind of function appears in the C programming language, and it serves as an important stepping-stone to implementing lexically scoped functions in the form of \key{lambda}\index{subject}{lambda} abstractions, which is the topic of chapter~\ref{ch:Llambda}. \section{The \LangFun{} Language} The concrete syntax and abstract syntax for function definitions and function application are shown in figures~\ref{fig:Lfun-concrete-syntax} and \ref{fig:Lfun-syntax}, with which we define the \LangFun{} language. Programs in \LangFun{} begin with zero or more function definitions. The function names from these definitions are in scope for the entire program, including all the function definitions, and therefore the ordering of function definitions does not matter. % \python{The abstract syntax for function parameters in figure~\ref{fig:Lfun-syntax} is a list of pairs, each of which consists of a parameter name and its type. This design differs from Python's \code{ast} module, which has a more complex structure for function parameters to handle keyword parameters, defaults, and so on. The type checker in \code{type\_check\_Lfun} converts the complex Python abstract syntax into the simpler syntax shown in figure~\ref{fig:Lfun-syntax}.
The fourth and sixth parameters of the \code{FunctionDef} constructor are for decorators and a type comment, neither of which are used by our compiler. We recommend replacing them with \code{None} in the \code{shrink} pass. } % The concrete syntax for function application \index{subject}{function application} is \python{$\CAPPLY{\Exp}{\Exp\code{,} \ldots}$}\racket{$\CAPPLY{\Exp}{\Exp \ldots}$}, where the first expression must evaluate to a function and the remaining expressions are the arguments. The abstract syntax for function application is $\APPLY{\Exp}{\Exp^*}$. %% The syntax for function application does not include an explicit %% keyword, which is error prone when using \code{match}. To alleviate %% this problem, we translate the syntax from $(\Exp \; \Exp \ldots)$ to %% $(\key{app}\; \Exp \; \Exp \ldots)$ during type checking. Functions are first-class in the sense that a function pointer \index{subject}{function pointer} is data and can be stored in memory or passed as a parameter to another function. Thus, there is a function type, written {\if\edition\racketEd \begin{lstlisting} (|$\Type_1$| |$\cdots$| |$\Type_n$| -> |$\Type_r$|) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} Callable[[|$\Type_1$|,|$\cdots$|,|$\Type_n$|], |$\Type_R$|] \end{lstlisting} \fi} % \noindent for a function whose $n$ parameters have the types $\Type_1$ through $\Type_n$ and whose return type is $\Type_R$. The main limitation of these functions (with respect to \racket{Racket}\python{Python} functions) is that they are not lexically scoped. That is, the only external entities that can be referenced from inside a function body are other globally defined functions. The syntax of \LangFun{} prevents function definitions from being nested inside each other. 
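{\if\edition\pythonEd\pythonColor
For example, under this notation the type of a function that receives an integer and a tuple of two integers and returns a Boolean is written as follows (shown only as an illustration of the syntax):
\begin{lstlisting}
Callable[[int, tuple[int,int]], bool]
\end{lstlisting}
\fi}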
\newcommand{\LfunGrammarRacket}{ \begin{array}{lcl} \Type &::=& (\Type \ldots \; \key{->}\; \Type) \\ \Exp &::=& \LP\Exp \; \Exp \ldots\RP \\ \Def &::=& \CDEF{\Var}{\LS\Var \key{:} \Type\RS \ldots}{\Type}{\Exp} \\ \end{array} } \newcommand{\LfunASTRacket}{ \begin{array}{lcl} \Type &::=& (\Type \ldots \; \key{->}\; \Type) \\ \Exp &::=& \APPLY{\Exp}{\Exp\ldots}\\ \Def &::=& \FUNDEF{\Var}{\LP[\Var \code{:} \Type]\ldots\RP}{\Type}{\code{'()}}{\Exp} \end{array} } \newcommand{\LfunGrammarPython}{ \begin{array}{lcl} \Type &::=& \key{int} \MID \key{bool} \MID \key{void} \MID \key{tuple}\LS \Type^+ \RS \MID \key{Callable}\LS \LS \Type \key{,} \ldots \RS \key{, } \Type \RS \\ \Exp &::=& \CAPPLY{\Exp}{\Exp\code{,} \ldots} \\ \Stmt &::=& \CRETURN{\Exp} \\ \Def &::=& \CDEF{\Var}{\Var \key{:} \Type\key{,} \ldots}{\Type}{\Stmt^{+}} \end{array} } \newcommand{\LfunASTPython}{ \begin{array}{lcl} \Type &::=& \key{IntType()} \MID \key{BoolType()} \MID \key{VoidType()} \MID \key{TupleType}\LS\Type^+\RS\\ &\MID& \key{FunctionType}\LP \Type^{*} \key{, } \Type \RP \\ \Exp &::=& \CALL{\Exp}{\Exp^{*}}\\ \Stmt &::=& \RETURN{\Exp} \\ \Params &::=& \LP\Var\key{,}\Type\RP^* \\ \Def &::=& \FUNDEF{\Var}{\Params}{\Type}{}{\Stmt^{+}} \end{array} } \begin{figure}[tp] \centering \begin{tcolorbox}[colback=white] \small {\if\edition\racketEd \[ \begin{array}{l} \gray{\LintGrammarRacket{}} \\ \hline \gray{\LvarGrammarRacket{}} \\ \hline \gray{\LifGrammarRacket{}} \\ \hline \gray{\LwhileGrammarRacket} \\ \hline \gray{\LtupGrammarRacket} \\ \hline \LfunGrammarRacket \\ \begin{array}{lcl} \LangFunM{} &::=& \Def \ldots \; \Exp \end{array} \end{array} \] \fi} {\if\edition\pythonEd\pythonColor \[ \begin{array}{l} \gray{\LintGrammarPython{}} \\ \hline \gray{\LvarGrammarPython{}} \\ \hline \gray{\LifGrammarPython{}} \\ \hline \gray{\LwhileGrammarPython} \\ \hline \gray{\LtupGrammarPython} \\ \hline \LfunGrammarPython \\ \begin{array}{rcl} \LangFunM{} &::=& \Def\ldots \Stmt\ldots \end{array} \end{array} \] \fi} \end{tcolorbox} \caption{The concrete syntax of \LangFun{}, extending \LangVec{} (figure~\ref{fig:Lvec-concrete-syntax}).} \label{fig:Lfun-concrete-syntax} \end{figure} \begin{figure}[tp] \centering \begin{tcolorbox}[colback=white] \small {\if\edition\racketEd \[ \begin{array}{l} \gray{\LintOpAST} \\ \hline \gray{\LvarASTRacket{}} \\ \hline \gray{\LifASTRacket{}} \\ \hline \gray{\LwhileASTRacket{}} \\ \hline \gray{\LtupASTRacket{}} \\ \hline \LfunASTRacket \\ \begin{array}{lcl} \LangFunM{} &::=& \PROGRAMDEFSEXP{\code{'()}}{\LP\Def\ldots\RP)}{\Exp} \end{array} \end{array} \] \fi} {\if\edition\pythonEd\pythonColor \[ \begin{array}{l} \gray{\LintASTPython{}} \\ \hline \gray{\LvarASTPython{}} \\ \hline \gray{\LifASTPython{}} \\ \hline \gray{\LwhileASTPython} \\ \hline \gray{\LtupASTPython} \\ \hline \LfunASTPython \\ \begin{array}{rcl} \LangFunM{} &::=& \PROGRAM{}{\LS \Def \ldots \Stmt \ldots \RS} \end{array} \end{array} \] \fi} \end{tcolorbox} \caption{The abstract syntax of \LangFun{}, extending \LangVec{} (figure~\ref{fig:Lvec-syntax}).} \label{fig:Lfun-syntax} \end{figure} The program shown in figure~\ref{fig:Lfun-function-example} is a representative example of defining and using functions in \LangFun{}. We define a function \code{map} that applies some other function \code{f} to both elements of a tuple and returns a new tuple containing the results. We also define a function \code{inc}. The program applies \code{map} to \code{inc} and % \racket{\code{(vector 0 41)}}\python{\code{(0, 41)}}. 
% The result is \racket{\code{(vector 1 42)}}\python{\code{(1, 42)}}, % from which we return \code{42}. \begin{figure}[tbp] \begin{tcolorbox}[colback=white] {\if\edition\racketEd \begin{lstlisting} (define (map [f : (Integer -> Integer)] [v : (Vector Integer Integer)]) : (Vector Integer Integer) (vector (f (vector-ref v 0)) (f (vector-ref v 1)))) (define (inc [x : Integer]) : Integer (+ x 1)) (vector-ref (map inc (vector 0 41)) 1) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} def map(f : Callable[[int], int], v : tuple[int,int]) -> tuple[int,int]: return f(v[0]), f(v[1]) def inc(x : int) -> int: return x + 1 print(map(inc, (0, 41))[1]) \end{lstlisting} \fi} \end{tcolorbox} \caption{Example of using functions in \LangFun{}.} \label{fig:Lfun-function-example} \end{figure} The definitional interpreter for \LangFun{} is shown in figure~\ref{fig:interp-Lfun}. The case for the % \racket{\code{ProgramDefsExp}}\python{\code{Module}} % AST is responsible for setting up the mutual recursion between the top-level function definitions. % \racket{We use the classic back-patching \index{subject}{back-patching} approach that uses mutable variables and makes two passes over the function definitions~\citep{Kelsey:1998di}. In the first pass we set up the top-level environment using a mutable cons cell for each function definition. Note that the \code{lambda}\index{subject}{lambda} value for each function is incomplete; it does not yet include the environment. Once the top-level environment has been constructed, we iterate over it and update the \code{lambda} values to use the top-level environment.} % \python{We create a dictionary named \code{env} and fill it in by mapping each function name to a new \code{Function} value, each of which stores a reference to the \code{env}. (We define the class \code{Function} for this purpose.)} % To interpret a function \racket{application}\python{call}, we match the result of the function expression to obtain a function value. We then extend the function's environment with the mapping of parameters to argument values. Finally, we interpret the body of the function in this extended environment. \begin{figure}[tp] \begin{tcolorbox}[colback=white] {\if\edition\racketEd \begin{lstlisting} (define interp-Lfun-class (class interp-Lvec-class (super-new) (define/override ((interp-exp env) e) (define recur (interp-exp env)) (match e [(Apply fun args) (define fun-val (recur fun)) (define arg-vals (for/list ([e args]) (recur e))) (match fun-val [`(function (,xs ...) ,body ,fun-env) (define params-args (for/list ([x xs] [arg arg-vals]) (cons x (box arg)))) (define new-env (append params-args fun-env)) ((interp-exp new-env) body)] [else (error 'interp-exp "expected function, not ~a" fun-val)])] [else ((super interp-exp env) e)] )) (define/public (interp-def d) (match d [(Def f (list `[,xs : ,ps] ...) rt _ body) (cons f (box `(function ,xs ,body ())))])) (define/override (interp-program p) (match p [(ProgramDefsExp info ds body) (let ([top-level (for/list ([d ds]) (interp-def d))]) (for/list ([f (in-dict-values top-level)]) (set-box! 
f (match (unbox f) [`(function ,xs ,body ()) `(function ,xs ,body ,top-level)]))) ((interp-exp top-level) body))])) )) (define (interp-Lfun p) (send (new interp-Lfun-class) interp-program p)) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} class InterpLfun(InterpLtup): def apply_fun(self, fun, args, e): match fun: case Function(name, xs, body, env): new_env = env.copy().update(zip(xs, args)) return self.interp_stmts(body, new_env) case _: raise Exception('apply_fun: unexpected: ' + repr(fun)) def interp_exp(self, e, env): match e: case Call(Name('input_int'), []): return super().interp_exp(e, env) case Call(func, args): f = self.interp_exp(func, env) vs = [self.interp_exp(arg, env) for arg in args] return self.apply_fun(f, vs, e) case _: return super().interp_exp(e, env) def interp_stmt(self, s, env, cont): match s: case Return(value): return self.interp_exp(value, env) case FunctionDef(name, params, bod, dl, returns, comment): if isinstance(params, ast.arguments): ps = [p.arg for p in params.args] else: ps = [x for (x,t) in params] env[name] = Function(name, ps, bod, env) return self.interp_stmts(cont, env) case _: return super().interp_stmt(s, env, cont) def interp(self, p): match p: case Module(ss): env = {} self.interp_stmts(ss, env) if 'main' in env.keys(): self.apply_fun(env['main'], [], None) case _: raise Exception('interp: unexpected ' + repr(p)) \end{lstlisting} \fi} \end{tcolorbox} \caption{Interpreter for the \LangFun{} language.} \label{fig:interp-Lfun} \end{figure} %\margincomment{TODO: explain type checker} The type checker for \LangFun{} is shown in figure~\ref{fig:type-check-Lfun}. % \python{(We omit the code that parses function parameters into the simpler abstract syntax.)} % Similarly to the interpreter, the case for the \racket{\code{ProgramDefsExp}}\python{\code{Module}} % AST is responsible for setting up the mutual recursion between the top-level function definitions. We begin by create a mapping \code{env} from every function name to its type. We then type check the program using this mapping. % In the case for function \racket{application}\python{call}, we match the type of the function expression to a function type and check that the types of the argument expressions are equal to the function's parameter types. The type of the \racket{application}\python{call} as a whole is the return type from the function type. \begin{figure}[tp] \begin{tcolorbox}[colback=white] {\if\edition\racketEd \begin{lstlisting}[basicstyle=\ttfamily\footnotesize] (define type-check-Lfun-class (class type-check-Lvec-class (super-new) (inherit check-type-equal?) (define/public (type-check-apply env e es) (define-values (e^ ty) ((type-check-exp env) e)) (define-values (e* ty*) (for/lists (e* ty*) ([e (in-list es)]) ((type-check-exp env) e))) (match ty [`(,ty^* ... -> ,rt) (for ([arg-ty ty*] [param-ty ty^*]) (check-type-equal? arg-ty param-ty (Apply e es))) (values e^ e* rt)])) (define/override (type-check-exp env) (lambda (e) (match e [(FunRef f n) (values (FunRef f n) (dict-ref env f))] [(Apply e es) (define-values (e^ es^ rt) (type-check-apply env e es)) (values (Apply e^ es^) rt)] [(Call e es) (define-values (e^ es^ rt) (type-check-apply env e es)) (values (Call e^ es^) rt)] [else ((super type-check-exp env) e)]))) (define/public (type-check-def env) (lambda (e) (match e [(Def f (and p:t* (list `[,xs : ,ps] ...)) rt info body) (define new-env (append (map cons xs ps) env)) (define-values (body^ ty^) ((type-check-exp new-env) body)) (check-type-equal? 
ty^ rt body) (Def f p:t* rt info body^)]))) (define/public (fun-def-type d) (match d [(Def f (list `[,xs : ,ps] ...) rt info body) `(,@ps -> ,rt)])) (define/override (type-check-program e) (match e [(ProgramDefsExp info ds body) (define env (for/list ([d ds]) (cons (Def-name d) (fun-def-type d)))) (define ds^ (for/list ([d ds]) ((type-check-def env) d))) (define-values (body^ ty) ((type-check-exp env) body)) (check-type-equal? ty 'Integer body) (ProgramDefsExp info ds^ body^)])))) (define (type-check-Lfun p) (send (new type-check-Lfun-class) type-check-program p)) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} class TypeCheckLfun(TypeCheckLtup): def type_check_exp(self, e, env): match e: case Call(Name('input_int'), []): return super().type_check_exp(e, env) case Call(func, args): func_t = self.type_check_exp(func, env) args_t = [self.type_check_exp(arg, env) for arg in args] match func_t: case FunctionType(params_t, return_t): for (arg_t, param_t) in zip(args_t, params_t): check_type_equal(param_t, arg_t, e) return return_t case _: raise Exception('type_check_exp: in call, unexpected ' + repr(func_t)) case _: return super().type_check_exp(e, env) def type_check_stmts(self, ss, env): if len(ss) == 0: return match ss[0]: case FunctionDef(name, params, body, dl, returns, comment): new_env = env.copy().update(params) rt = self.type_check_stmts(body, new_env) check_type_equal(returns, rt, ss[0]) return self.type_check_stmts(ss[1:], env) case Return(value): return self.type_check_exp(value, env) case _: return super().type_check_stmts(ss, env) def type_check(self, p): match p: case Module(body): env = {} for s in body: match s: case FunctionDef(name, params, bod, dl, returns, comment): if name in env: raise Exception('type_check: function ' + repr(name) + ' defined twice') params_t = [t for (x,t) in params] env[name] = FunctionType(params_t, returns) self.type_check_stmts(body, env) case _: raise Exception('type_check: unexpected ' + repr(p)) \end{lstlisting} \fi} \end{tcolorbox} \caption{Type checker for the \LangFun{} language.} \label{fig:type-check-Lfun} \end{figure} \clearpage \section{Functions in x86} \label{sec:fun-x86} %% \margincomment{\tiny Make sure callee-saved registers are discussed %% in enough depth, especially updating Fig 6.4 \\ --Jeremy } %% \margincomment{\tiny Talk about the return address on the %% stack and what callq and retq does.\\ --Jeremy } The x86 architecture provides a few features to support the implementation of functions. We have already seen that there are labels in x86 so that one can refer to the location of an instruction, as is needed for jump instructions. Labels can also be used to mark the beginning of the instructions for a function. Going further, we can obtain the address of a label by using the \key{leaq} instruction. For example, the following puts the address of the \code{inc} label into the \code{rbx} register: \begin{lstlisting} leaq inc(%rip), %rbx \end{lstlisting} Recall from section~\ref{sec:select-instructions-gc} that \verb!inc(%rip)! is an example of instruction-pointer-relative addressing. In section~\ref{sec:x86} we used the \code{callq} instruction to jump to functions whose locations were given by a label, such as \code{read\_int}. To support function calls in this chapter we instead jump to functions whose location are given by an address in a register; that is, we use \emph{indirect function calls}. 
The x86 syntax for this is a \code{callq} instruction that requires an asterisk before the register name.\index{subject}{indirect function call} \begin{lstlisting} callq *%rbx \end{lstlisting} \subsection{Calling Conventions} \label{sec:calling-conventions-fun} \index{subject}{calling conventions} The \code{callq} instruction provides partial support for implementing functions: it pushes the return address on the stack and it jumps to the target. However, \code{callq} does not handle \begin{enumerate} \item parameter passing, \item pushing frames on the procedure call stack and popping them off, or \item determining how registers are shared by different functions. \end{enumerate} Regarding parameter passing, recall that the x86-64 calling convention for Unix-based systems uses the following six registers to pass arguments to a function, in the given order: \begin{lstlisting} rdi rsi rdx rcx r8 r9 \end{lstlisting} If there are more than six arguments, then the calling convention mandates using space on the frame of the caller for the rest of the arguments. However, to ease the implementation of efficient tail calls (section~\ref{sec:tail-call}), we arrange never to need more than six arguments. % The return value of the function is stored in register \code{rax}. Regarding frames \index{subject}{frame} and the procedure call stack, \index{subject}{procedure call stack} recall from section~\ref{sec:x86} that the stack grows down and each function call uses a chunk of space on the stack called a frame. The caller sets the stack pointer, register \code{rsp}, to the last data item in its frame. The callee must not change anything in the caller's frame, that is, anything that is at or above the stack pointer. The callee is free to use locations that are below the stack pointer. Recall that we store variables of tuple type on the root stack. So, the prelude\index{subject}{prelude} of a function needs to move the root stack pointer \code{r15} up according to the number of variables of tuple type and the conclusion\index{subject}{conclusion} needs to move the root stack pointer back down. Also, the prelude must initialize to \code{0} this frame's slots in the root stack to signal to the garbage collector that those slots do not yet contain a valid pointer. Otherwise the garbage collector will interpret the garbage bits in those slots as memory addresses and try to traverse them, causing serious mayhem! Regarding the sharing of registers between different functions, recall from section~\ref{sec:calling-conventions} that the registers are divided into two groups, the caller-saved registers and the callee-saved registers. The caller should assume that all the caller-saved registers are overwritten with arbitrary values by the callee. For that reason we recommend in section~\ref{sec:calling-conventions} that variables that are live during a function call should not be assigned to caller-saved registers. On the flip side, if the callee wants to use a callee-saved register, the callee must save the contents of those registers on their stack frame and then put them back prior to returning to the caller. For that reason we recommend in section~\ref{sec:calling-conventions} that if the register allocator assigns a variable to a callee-saved register, then the prelude of the \code{main} function must save that register to the stack and the conclusion of \code{main} must restore it. This recommendation now generalizes to all functions. 
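{\if\edition\pythonEd\pythonColor
As a small illustration of the parameter-passing convention, the following sketch moves up to six argument values into the argument-passing registers in the order listed above. The function name \code{pass\_arguments} is hypothetical, and the x86 AST classes \code{Instr} and \code{Reg} are assumed to be along the lines of the support code.
\begin{lstlisting}[basicstyle=\ttfamily\footnotesize]
# Instr and Reg are assumed from the x86 AST helpers.
arg_registers = ['rdi', 'rsi', 'rdx', 'rcx', 'r8', 'r9']

def pass_arguments(args: list) -> list:
    # One movq per argument, in the order dictated by the convention.
    # We arrange never to need more than six arguments.
    assert len(args) <= 6
    return [Instr('movq', [arg, Reg(reg)])
            for (arg, reg) in zip(args, arg_registers)]
\end{lstlisting}
After the \code{callq}, the caller finds the return value in \code{rax}.
\fi}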
Recall that the base pointer, register \code{rbp}, is used as a point of reference within a frame, so that each local variable can be accessed at a fixed offset from the base pointer (section~\ref{sec:x86}). % Figure~\ref{fig:call-frames} shows the layout of the caller and callee frames. \begin{figure}[tbp] \centering \begin{tcolorbox}[colback=white] \begin{tabular}{r|r|l|l} \hline Caller View & Callee View & Contents & Frame \\ \hline 8(\key{\%rbp}) & & return address & \multirow{5}{*}{Caller}\\ 0(\key{\%rbp}) & & old \key{rbp} \\ -8(\key{\%rbp}) & & callee-saved $1$ \\ \ldots & & \ldots \\ $-8j$(\key{\%rbp}) & & callee-saved $j$ \\ $-8(j+1)$(\key{\%rbp}) & & local variable $1$ \\ \ldots & & \ldots \\ $-8(j+k)$(\key{\%rbp}) & & local variable $k$ \\ %% & & \\ %% $8n-8$\key{(\%rsp)} & $8n+8$(\key{\%rbp})& argument $n$ \\ %% & \ldots & \ldots \\ %% 0\key{(\%rsp)} & 16(\key{\%rbp}) & argument $1$ & \\ \hline & 8(\key{\%rbp}) & return address & \multirow{5}{*}{Callee}\\ & 0(\key{\%rbp}) & old \key{rbp} \\ & -8(\key{\%rbp}) & callee-saved $1$ \\ & \ldots & \ldots \\ & $-8n$(\key{\%rbp}) & callee-saved $n$ \\ & $-8(n+1)$(\key{\%rbp}) & local variable $1$ \\ & \ldots & \ldots \\ & $-8(n+m)$(\key{\%rbp}) & local variable $m$\\ \hline \end{tabular} \end{tcolorbox} \caption{Memory layout of caller and callee frames.} \label{fig:call-frames} \end{figure} %% Recall from section~\ref{sec:x86} that the stack is also used for %% local variables and for storing the values of callee-saved registers %% (we shall refer to all of these collectively as ``locals''), and that %% at the beginning of a function we move the stack pointer \code{rsp} %% down to make room for them. %% We recommend storing the local variables %% first and then the callee-saved registers, so that the local variables %% can be accessed using \code{rbp} the same as before the addition of %% functions. %% To make additional room for passing arguments, we shall %% move the stack pointer even further down. We count how many stack %% arguments are needed for each function call that occurs inside the %% body of the function and find their maximum. Adding this number to the %% number of locals gives us how much the \code{rsp} should be moved at %% the beginning of the function. In preparation for a function call, we %% offset from \code{rsp} to set up the stack arguments. We put the first %% stack argument in \code{0(\%rsp)}, the second in \code{8(\%rsp)}, and %% so on. %% Upon calling the function, the stack arguments are retrieved by the %% callee using the base pointer \code{rbp}. The address \code{16(\%rbp)} %% is the location of the first stack argument, \code{24(\%rbp)} is the %% address of the second, and so on. Figure~\ref{fig:call-frames} shows %% the layout of the caller and callee frames. Notice how important it is %% that we correctly compute the maximum number of arguments needed for %% function calls; if that number is too small then the arguments and %% local variables will smash into each other! \subsection{Efficient Tail Calls} \label{sec:tail-call} In general, the amount of stack space used by a program is determined by the longest chain of nested function calls. That is, if function $f_1$ calls $f_2$, $f_2$ calls $f_3$, and so on to $f_n$, then the amount of stack space is linear in $n$. The depth $n$ can grow quite large if functions are recursive. However, in some cases we can arrange to use only a constant amount of space for a long chain of nested function calls. 
A \emph{tail call}\index{subject}{tail call} is a function call that happens as the last action in a function body. For example, in the following program, the recursive call to \code{tail\_sum} is a tail call: \begin{center} {\if\edition\racketEd \begin{lstlisting} (define (tail_sum [n : Integer] [r : Integer]) : Integer (if (eq? n 0) r (tail_sum (- n 1) (+ n r)))) (+ (tail_sum 3 0) 36) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} def tail_sum(n : int, r : int) -> int: if n == 0: return r else: return tail_sum(n - 1, n + r) print(tail_sum(3, 0) + 36) \end{lstlisting} \fi} \end{center} At a tail call, the frame of the caller is no longer needed, so we can pop the caller's frame before making the tail call. With this approach, a recursive function that makes only tail calls ends up using a constant amount of stack space. Functional languages like Racket rely heavily on recursive functions, so the definition of Racket \emph{requires} that all tail calls be optimized in this way. \index{subject}{frame} Some care is needed with regard to argument passing in tail calls. As mentioned, for arguments beyond the sixth, the convention is to use space in the caller's frame for passing arguments. However, for a tail call we pop the caller's frame and can no longer use it. An alternative is to use space in the callee's frame for passing arguments. However, this option is also problematic because the caller and callee's frames overlap in memory. As we begin to copy the arguments from their sources in the caller's frame, the target locations in the callee's frame might collide with the sources for later arguments! We solve this problem by using the heap instead of the stack for passing more than six arguments (section~\ref{sec:limit-functions-r4}). As mentioned, for a tail call we pop the caller's frame prior to making the tail call. The instructions for popping a frame are the instructions that we usually place in the conclusion of a function. Thus, we also need to place such code immediately before each tail call. These instructions include restoring the callee-saved registers, so it is fortunate that the argument passing registers are all caller-saved registers. One note remains regarding which instruction to use to make the tail call. When the callee is finished, it should not return to the current function but instead return to the function that called the current one. Thus, the return address that is already on the stack is the right one, and we should not use \key{callq} to make the tail call because that would overwrite the return address. Instead we simply use the \key{jmp} instruction. As with the indirect function call, we write an \emph{indirect jump}\index{subject}{indirect jump} with a register prefixed with an asterisk. We recommend using \code{rax} to hold the jump target because the conclusion can overwrite just about everything else. \begin{lstlisting} jmp *%rax \end{lstlisting} \section{Shrink \LangFun{}} \label{sec:shrink-r4} The \code{shrink} pass performs a minor modification to ease the later passes. This pass introduces an explicit \code{main} function that gobbles up all the top-level statements of the module. 
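{\if\edition\pythonEd\pythonColor
For example, viewed at the source level, the effect of \code{shrink} on a small program can be sketched as follows (the pass itself operates on the AST, as described next):
\begin{lstlisting}
print(40 + 2)
|$\Rightarrow$|
def main() -> int:
    print(40 + 2)
    return 0
\end{lstlisting}
\fi}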
% \racket{It also changes the top \code{ProgramDefsExp} form to \code{ProgramDefs}.} {\if\edition\racketEd \begin{lstlisting} (ProgramDefsExp |$\itm{info}$| (|$\Def\ldots$|) |$\Exp$|) |$\Rightarrow$| (ProgramDefs |$\itm{info}$| (|$\Def\ldots$| |$\itm{mainDef}$|)) \end{lstlisting} where $\itm{mainDef}$ is \begin{lstlisting} (Def 'main '() 'Integer '() |$\Exp'$|) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} Module(|$\Def\ldots\Stmt\ldots$|) |$\Rightarrow$| Module(|$\Def\ldots\itm{mainDef}$|) \end{lstlisting} where $\itm{mainDef}$ is \begin{lstlisting} FunctionDef('main', [], int, None, |$\Stmt\ldots$|Return(Constant(0)), None) \end{lstlisting} \fi} \section{Reveal Functions and the \LangFunRef{} Language} \label{sec:reveal-functions-r4} The syntax of \LangFun{} is inconvenient for purposes of compilation in that it conflates the use of function names and local variables. This is a problem because we need to compile the use of a function name differently from the use of a local variable. In particular, we use \code{leaq} to convert the function name (a label in x86) to an address in a register. Thus, we create a new pass that changes function references from $\VAR{f}$ to $\FUNREF{f}{n}$ where $n$ is the arity of the function.\python{\footnote{The arity is not needed in this chapter but is used in chapter~\ref{ch:Ldyn}.}} This pass is named \code{reveal\_functions} and the output language is \LangFunRef{}. %is defined in figure~\ref{fig:f1-syntax}. %% The concrete syntax for a %% function reference is $\CFUNREF{f}$. %% \begin{figure}[tp] %% \centering %% \fbox{ %% \begin{minipage}{0.96\textwidth} %% {\if\edition\racketEd %% \[ %% \begin{array}{lcl} %% \Exp &::=& \ldots \MID \FUNREF{\Var}{\Int}\\ %% \Def &::=& \gray{ \FUNDEF{\Var}{([\Var \code{:} \Type]\ldots)}{\Type}{\code{'()}}{\Exp} }\\ %% \LangFunRefM{} &::=& \PROGRAMDEFS{\code{'()}}{\LP \Def\ldots \RP} %% \end{array} %% \] %% \fi} %% {\if\edition\pythonEd\pythonColor %% \[ %% \begin{array}{lcl} %% \Exp &::=& \FUNREF{\Var}{\Int}\\ %% \LangFunRefM{} &::=& \PROGRAM{}{\LS \Def \code{,} \ldots \RS} %% \end{array} %% \] %% \fi} %% \end{minipage} %% } %% \caption{The abstract syntax \LangFunRef{}, an extension of \LangFun{} %% (figure~\ref{fig:Lfun-syntax}).} %% \label{fig:f1-syntax} %% \end{figure} %% Distinguishing between calls in tail position and non-tail position %% requires the pass to have some notion of context. We recommend using %% two mutually recursive functions, one for processing expressions in %% tail position and another for the rest. \racket{Placing this pass after \code{uniquify} will make sure that there are no local variables and functions that share the same name.} % The \code{reveal\_functions} pass should come before the \code{remove\_complex\_operands} pass because function references should be categorized as complex expressions. \section{Limit Functions} \label{sec:limit-functions-r4} Recall that we wish to limit the number of function parameters to six so that we do not need to use the stack for argument passing, which makes it easier to implement efficient tail calls. However, because the input language \LangFun{} supports arbitrary numbers of function arguments, we have some work to do! 
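{\if\edition\pythonEd\pythonColor
As a source-level preview of the transformation described next, a function with seven parameters and a call with seven arguments are rewritten along the following lines (a sketch in concrete syntax; the pass itself operates on the AST):
\begin{lstlisting}
def f(x1:int, x2:int, x3:int, x4:int, x5:int, x6:int, x7:int) -> int:
    return x1 + x7

print(f(1, 2, 3, 4, 5, 6, 7))
|$\Rightarrow$|
def f(x1:int, x2:int, x3:int, x4:int, x5:int, tup:tuple[int,int]) -> int:
    return x1 + tup[1]

print(f(1, 2, 3, 4, 5, (6, 7)))
\end{lstlisting}
\fi}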
The \code{limit\_functions} pass transforms functions and function calls that involve more than six arguments: the first five arguments are passed as usual, but the remaining arguments are packed into a tuple that is passed as the sixth argument.\footnote{The implementation of this pass can be postponed until last because you can test the rest of the passes on functions with six or fewer parameters.} Each function definition with seven or more parameters is transformed as follows:
{\if\edition\racketEd
\begin{lstlisting}
(Def |$f$| ([|$x_1$|:|$T_1$|] |$\ldots$| [|$x_n$|:|$T_n$|]) |$T_r$| |$\itm{info}$| |$\itm{body}$|)
|$\Rightarrow$|
(Def |$f$| ([|$x_1$|:|$T_1$|] |$\ldots$| [|$x_5$|:|$T_5$|]
     [tup : (Vector |$T_6 \ldots T_n$|)]) |$T_r$| |$\itm{info}$| |$\itm{body}'$|)
\end{lstlisting}
\fi}
{\if\edition\pythonEd\pythonColor
\begin{lstlisting}
FunctionDef(|$f$|, [(|$x_1$|,|$T_1$|),|$\ldots$|,(|$x_n$|,|$T_n$|)], |$T_r$|, None, |$\itm{body}$|, None)
|$\Rightarrow$|
FunctionDef(|$f$|, [(|$x_1$|,|$T_1$|),|$\ldots$|,(|$x_5$|,|$T_5$|),(tup,TupleType([|$T_6, \ldots, T_n$|]))],
            |$T_r$|, None, |$\itm{body}'$|, None)
\end{lstlisting}
\fi}
%
\noindent where the $\itm{body}$ is transformed into $\itm{body}'$ by replacing the occurrences of each parameter $x_i$ where $i > 5$ with the $k$th element of the tuple, where $k = i - 6$ (so the indices start at $0$).
%
{\if\edition\racketEd
\begin{lstlisting}
(Var |$x_i$|)
|$\Rightarrow$|
(Prim 'vector-ref (list tup (Int |$k$|)))
\end{lstlisting}
\fi}
{\if\edition\pythonEd\pythonColor
\begin{lstlisting}
Name(|$x_i$|)
|$\Rightarrow$|
Subscript(tup, Constant(|$k$|), Load())
\end{lstlisting}
\fi}
For function calls with too many arguments, the \code{limit\_functions} pass transforms them in the following way:
\begin{tabular}{lll}
\begin{minipage}{0.3\textwidth}
{\if\edition\racketEd
\begin{lstlisting}
(|$e_0$| |$e_1$| |$\ldots$| |$e_n$|)
\end{lstlisting}
\fi}
{\if\edition\pythonEd\pythonColor
\begin{lstlisting}
Call(|$e_0$|, [|$e_1,\ldots,e_n$|])
\end{lstlisting}
\fi}
\end{minipage}
& $\Rightarrow$ &
\begin{minipage}{0.5\textwidth}
{\if\edition\racketEd
\begin{lstlisting}
(|$e_0$| |$e_1 \ldots e_5$| (vector |$e_6 \ldots e_n$|))
\end{lstlisting}
\fi}
{\if\edition\pythonEd\pythonColor
\begin{lstlisting}
Call(|$e_0$|, [|$e_1,\ldots,e_5$|,Tuple([|$e_6,\ldots,e_n$|])])
\end{lstlisting}
\fi}
\end{minipage}
\end{tabular}
\section{Remove Complex Operands}
\label{sec:rco-r4}
The primary decisions to make for this pass are whether to classify \code{FunRef} and \racket{\code{Apply}}\python{\code{Call}} as atomic or complex expressions. Recall that an atomic expression ends up as an immediate argument of an x86 instruction. Function application translates to a sequence of instructions, so \racket{\code{Apply}}\python{\code{Call}} must be classified as a complex expression. On the other hand, the arguments of \racket{\code{Apply}}\python{\code{Call}} should be atomic expressions.
%
Regarding \code{FunRef}, as discussed previously, the function label needs to be converted to an address using the \code{leaq} instruction. Thus, even though \code{FunRef} seems rather simple, it needs to be classified as a complex expression so that we generate an assignment statement with a left-hand side that can serve as the target of the \code{leaq}.
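{\if\edition\pythonEd\pythonColor
For example, in the concrete-syntax style used in figure~\ref{fig:add-fun}, a call whose function part and arguments are complex is flattened roughly as follows (a sketch; the temporary names are arbitrary):
\begin{lstlisting}
x = add(40, input_int())
|$\Rightarrow$|
fun.0 = add
tmp.1 = input_int()
x = fun.0(40, tmp.1)
\end{lstlisting}
The assignment \code{fun.0 = add} is the one whose left-hand side later becomes the target of the \code{leaq} instruction.
\fi}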
The output of this pass, \LangFunANF{} (figure~\ref{fig:Lfun-anf-syntax}), extends \LangAllocANF{} (figure~\ref{fig:Lvec-anf-syntax}) with \code{FunRef} and \racket{\code{Apply}}\python{\code{Call}} in the grammar for expressions and augments programs to include a list of function definitions. % \python{Also, \LangFunANF{} adds \code{Return} to the grammar for statements.} \newcommand{\LfunMonadASTRacket}{ \begin{array}{lcl} \Type &::=& (\Type \ldots \; \key{->}\; \Type) \\ \Exp &::=& \FUNREF{\itm{label}}{\Int} \MID \APPLY{\Atm}{\Atm\ldots}\\ \Def &::=& \FUNDEF{\Var}{\LP[\Var \code{:} \Type]\ldots\RP}{\Type}{\code{'()}}{\Exp} \end{array} } \newcommand{\LfunMonadASTPython}{ \begin{array}{lcl} \Type &::=& \key{IntType()} \MID \key{BoolType()} \MID \key{VoidType()} \MID \key{TupleType}\LS\Type^+\RS\\ &\MID& \key{FunctionType}\LP \Type^{*} \key{, } \Type \RP \\ \Exp &::=& \FUNREF{\itm{label}}{\Int} \MID \CALL{\Atm}{\Atm^{*}}\\ \Stmt &::=& \RETURN{\Exp} \\ \Params &::=& \LP\Var\key{,}\Type\RP^* \\ \Def &::=& \FUNDEF{\Var}{\Params}{\Type}{}{\Stmt^{+}} \end{array} } \begin{figure}[tp] \centering \begin{tcolorbox}[colback=white] \small {\if\edition\racketEd \[ \begin{array}{l} \gray{\LvarMonadASTRacket} \\ \hline \gray{\LifMonadASTRacket} \\ \hline \gray{\LwhileMonadASTRacket} \\ \hline \gray{\LtupMonadASTRacket} \\ \hline \LfunMonadASTRacket \\ \begin{array}{rcl} \LangFunANFM{} &::=& \PROGRAMDEFSEXP{\code{'()}}{\LP\Def\ldots\RP)}{\Exp} \end{array} \end{array} \] \fi} {\if\edition\pythonEd\pythonColor \[ \begin{array}{l} \gray{\LvarMonadASTPython} \\ \hline \gray{\LifMonadASTPython} \\ \hline \gray{\LwhileMonadASTPython} \\ \hline \gray{\LtupMonadASTPython} \\ \hline \LfunMonadASTPython \\ \begin{array}{rcl} \LangFunANFM{} &::=& \PROGRAM{}{\LS \Def \ldots \Stmt \ldots \RS} \end{array} \end{array} \] \fi} \end{tcolorbox} \caption{\LangFunANF{} is \LangFunRef{} in monadic normal form.} \label{fig:Lfun-anf-syntax} \end{figure} %% Figure~\ref{fig:Lfun-anf-syntax} defines the output language %% \LangFunANF{} of this pass. %% \begin{figure}[tp] %% \centering %% \fbox{ %% \begin{minipage}{0.96\textwidth} %% \small %% \[ %% \begin{array}{rcl} %% \Atm &::=& \gray{ \INT{\Int} \MID \VAR{\Var} \MID \BOOL{\itm{bool}} %% \MID \VOID{} } \\ %% \Exp &::=& \gray{ \Atm \MID \READ{} } \\ %% &\MID& \gray{ \NEG{\Atm} \MID \ADD{\Atm}{\Atm} } \\ %% &\MID& \gray{ \LET{\Var}{\Exp}{\Exp} } \\ %% &\MID& \gray{ \UNIOP{\key{'not}}{\Atm} } \\ %% &\MID& \gray{ \BINOP{\itm{cmp}}{\Atm}{\Atm} \MID \IF{\Exp}{\Exp}{\Exp} }\\ %% &\MID& \gray{ \LP\key{Collect}~\Int\RP \MID \LP\key{Allocate}~\Int~\Type\RP %% \MID \LP\key{GlobalValue}~\Var\RP }\\ %% &\MID& \FUNREF{\Var} \MID \APPLY{\Atm}{\Atm\ldots}\\ %% \Def &::=& \gray{ \FUNDEF{\Var}{([\Var \code{:} \Type]\ldots)}{\Type}{\code{'()}}{\Exp} }\\ %% R^{\dagger}_4 &::=& \gray{ \PROGRAMDEFS{\code{'()}}{\Def} } %% \end{array} %% \] %% \end{minipage} %% } %% \caption{\LangFunANF{} is \LangFunRefAlloc{} in monadic normal form.} %% \label{fig:Lfun-anf-syntax} %% \end{figure} \section{Explicate Control and the \LangCFun{} Language} \label{sec:explicate-control-r4} Figure~\ref{fig:c3-syntax} defines the abstract syntax for \LangCFun{}, the output of \code{explicate\_control}. 
%
%% \racket{(The concrete syntax is given in
%% figure~\ref{fig:c3-concrete-syntax} of the Appendix.)}
%
The auxiliary functions for assignment\racket{ and tail contexts} should be updated with cases for \racket{\code{Apply}}\python{\code{Call}} and \code{FunRef} and the function for predicate context should be updated for \racket{\code{Apply}}\python{\code{Call}} but not \code{FunRef}. (A \code{FunRef} cannot be a Boolean.) \racket{In assignment and predicate contexts, \code{Apply} becomes \code{Call}, whereas in tail position \code{Apply} becomes \code{TailCall}.}\python{In assignment and predicate contexts, a \code{Call} stays a \code{Call}, whereas in tail position a \code{Call} becomes a \code{TailCall}.} We recommend defining a new auxiliary function for processing function definitions. This code is similar to the case for \racket{\code{Program}}\python{\code{Module}} in \LangVec{}. The top-level \code{explicate\_control} function that handles the \racket{\code{ProgramDefs}}\python{\code{Module}} form of \LangFun{} can then apply this new function to all the function definitions.
{\if\edition\pythonEd\pythonColor
The translation of \code{Return} statements requires a new auxiliary function to handle expressions in tail context, called \code{explicate\_tail}. The function should take an expression and the dictionary of basic blocks and produce a list of statements in the \LangCFun{} language. The \code{explicate\_tail} function should include cases for \code{Begin}, \code{IfExp}, and \code{Call}, and a default case for other kinds of expressions. The default case should produce a \code{Return} statement. The case for \code{Call} should change it into \code{TailCall}. The other cases should recursively process their subexpressions and statements, choosing the appropriate explicate functions for the various contexts.
\fi}
\newcommand{\CfunASTRacket}{
\begin{array}{lcl}
\Exp &::= & \FUNREF{\itm{label}}{\Int} \MID \CALL{\Atm}{\LP\Atm\ldots\RP} \\
\Tail &::= & \TAILCALL{\Atm}{\Atm\ldots} \\
\Def &::=& \DEF{\itm{label}}{\LP[\Var\key{:}\Type]\ldots\RP}{\Type}{\itm{info}}{\LP\LP\itm{label}\,\key{.}\,\Tail\RP\ldots\RP}
\end{array}
}
\newcommand{\CfunASTPython}{
\begin{array}{lcl}
\Exp &::= & \FUNREF{\itm{label}}{\Int} \MID \CALL{\Atm}{\Atm^{*}} \\
\Tail &::= & \TAILCALL{\Atm}{\Atm^{*}} \\
\Params &::=& \LS\LP\Var\key{,}\Type\RP\code{,}\ldots\RS \\
\Block &::=& \itm{label}\key{:} \Stmt^{*}\;\Tail \\
\Blocks &::=& \LC\Block\code{,}\ldots\RC \\
\Def &::=& \DEF{\itm{label}}{\Params}{\Blocks}{\key{None}}{\Type}{\key{None}}
\end{array}
}
\begin{figure}[tp]
\begin{tcolorbox}[colback=white]
\small
{\if\edition\racketEd
\[
\begin{array}{l}
\gray{\CvarASTRacket} \\ \hline
\gray{\CifASTRacket} \\ \hline
\gray{\CloopASTRacket} \\ \hline
\gray{\CtupASTRacket} \\ \hline
\CfunASTRacket \\
\begin{array}{lcl}
\LangCFunM{} & ::= & \PROGRAMDEFS{\itm{info}}{\LP\Def\ldots\RP}
\end{array}
\end{array}
\]
\fi}
{\if\edition\pythonEd\pythonColor
\[
\begin{array}{l}
\gray{\CifASTPython} \\ \hline
\gray{\CtupASTPython} \\ \hline
\CfunASTPython \\
\begin{array}{lcl}
\LangCFunM{} & ::= & \CPROGRAMDEFS{\LS\Def\code{,}\ldots\RS}
\end{array}
\end{array}
\]
\fi}
\end{tcolorbox}
\caption{The abstract syntax of \LangCFun{}, extending \LangCVec{} (figure~\ref{fig:c2-syntax}).}
\label{fig:c3-syntax}
\end{figure}
\clearpage
\section{Select Instructions and the \LangXIndCall{} Language}
\label{sec:select-r4}
\index{subject}{select instructions}
The output of select instructions is a program in the \LangXIndCall{} language; the definition of its concrete syntax is shown in figure~\ref{fig:x86-3-concrete}, and the definition of its abstract syntax is shown in figure~\ref{fig:x86-3}.
We use the \code{align} directive on the labels of function definitions to make sure the bottom three bits are zero, which we put to use in chapter~\ref{ch:Ldyn}. We discuss the new instructions as needed in this section. \index{subject}{x86} \newcommand{\GrammarXIndCall}{ \begin{array}{lcl} \Instr &::=& \key{callq}\;\key{*}\Arg \MID \key{tailjmp}\;\Arg \MID \key{leaq}\;\Arg\key{,}\;\key{\%}\Reg \\ \Block &::= & \Instr^{+} \\ \Def &::= & \code{.globl}\,\code{.align 8}\,\itm{label}\; (\itm{label}\key{:}\, \Block)^{*} \end{array} } \newcommand{\ASTXIndCallRacket}{ \begin{array}{lcl} \Instr &::=& \INDCALLQ{\Arg}{\itm{int}} \MID \TAILJMP{\Arg}{\itm{int}}\\ &\MID& \BININSTR{\code{'leaq}}{\Arg}{\REG{\Reg}}\\ \Block &::= & \BLOCK{\itm{info}}{\LP\Instr\ldots\RP}\\ \Def &::= & \DEF{\itm{label}}{\code{'()}}{\Type}{\itm{info}}{\LP\LP\itm{label}\,\key{.}\,\Block\RP\ldots\RP} \end{array} } \begin{figure}[tp] \begin{tcolorbox}[colback=white] \small \[ \begin{array}{l} \gray{\GrammarXInt} \\ \hline \gray{\GrammarXIf} \\ \hline \gray{\GrammarXGlobal} \\ \hline \GrammarXIndCall \\ \begin{array}{lcl} \LangXIndCallM{} &::= & \Def^{*} \end{array} \end{array} \] \end{tcolorbox} \caption{The concrete syntax of \LangXIndCall{} (extends \LangXGlobal{} of figure~\ref{fig:x86-2-concrete}).} \label{fig:x86-3-concrete} \end{figure} \begin{figure}[tp] \begin{tcolorbox}[colback=white] \small {\if\edition\racketEd \[\arraycolsep=3pt \begin{array}{l} \gray{\ASTXIntRacket} \\ \hline \gray{\ASTXIfRacket} \\ \hline \gray{\ASTXGlobalRacket} \\ \hline \ASTXIndCallRacket \\ \begin{array}{lcl} \LangXIndCallM{} &::= & \XPROGRAMDEFS{\itm{info}}{\LP\Def\ldots\RP} \end{array} \end{array} \] \fi} {\if\edition\pythonEd\pythonColor \[ \begin{array}{lcl} \Arg &::=& \gray{ \INT{\Int} \MID \REG{\Reg} \MID \DEREF{\Reg}{\Int} \MID \BYTEREG{\Reg} } \\ &\MID& \gray{ \GLOBAL{\itm{label}} } \MID \FUNREF{\itm{label}}{\Int} \\ \Instr &::=& \ldots \MID \INDCALLQ{\Arg}{\itm{int}} \MID \TAILJMP{\Arg}{\itm{int}}\\ &\MID& \BININSTR{\scode{leaq}}{\Arg}{\REG{\Reg}}\\ \Block &::=&\itm{label}\key{:}\,\Instr^{*} \\ \Blocks &::= & \LC\Block\code{,}\ldots\RC\\ \Def &::= & \DEF{\itm{label}}{\LS\RS}{\Blocks}{\_}{\Type}{\_} \\ \LangXIndCallM{} &::= & \XPROGRAMDEFS{\LS\Def\code{,}\ldots\RS} \end{array} \] \fi} \end{tcolorbox} \caption{The abstract syntax of \LangXIndCall{} (extends \LangXGlobal{} of figure~\ref{fig:x86-2}).} \label{fig:x86-3} \end{figure} An assignment of a function reference to a variable becomes a load-effective-address instruction as follows, where $\itm{lhs}'$ is the translation of $\itm{lhs}$ from \Atm{} in \LangCFun{} to \Arg{} in \LangXIndCallVar{}. The \code{FunRef} becomes a \code{Global} AST node, whose concrete syntax is instruction-pointer-relative addressing. \begin{center} \begin{tabular}{lcl} \begin{minipage}{0.35\textwidth} {\if\edition\racketEd \begin{lstlisting} |$\itm{lhs}$| = (fun-ref |$f$| |$n$|); \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} |$\itm{lhs}$| = FunRef(|$f$| |$n$|); \end{lstlisting} \fi} \end{minipage} & $\Rightarrow$\qquad\qquad & \begin{minipage}{0.3\textwidth} \begin{lstlisting} leaq |$f$|(%rip), |$\itm{lhs}'$| \end{lstlisting} \end{minipage} \end{tabular} \end{center} Regarding function definitions, we need to remove the parameters and instead perform parameter passing using the conventions discussed in section~\ref{sec:fun-x86}. That is, the arguments are passed in registers. 
We recommend turning the parameters into local variables and generating instructions at the beginning of the function to move from the argument-passing registers (section~\ref{sec:calling-conventions-fun}) to these local variables. {\if\edition\racketEd \begin{lstlisting} (Def |$f$| '([|$x_1$| : |$T_1$|] [|$x_2$| : |$T_2$|] |$\ldots$| ) |$T_r$| |$\itm{info}$| |$B$|) |$\Rightarrow$| (Def |$f$| '() 'Integer |$\itm{info}'$| |$B'$|) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} FunctionDef(|$f$|, [|$(x_1,T_1),\ldots$|], |$B$|, _, |$T_r$|, _) |$\Rightarrow$| FunctionDef(|$f$|, [], |$B'$|, _, int, _) \end{lstlisting} \fi} The basic blocks $B'$ are the same as $B$ except that the \code{start} block is modified to add the instructions for moving from the argument registers to the parameter variables. So the \code{start} block of $B$ shown on the left of the following is changed to the code on the right: \begin{center} \begin{minipage}{0.3\textwidth} \begin{lstlisting} start: |$\itm{instr}_1$| |$\cdots$| |$\itm{instr}_n$| \end{lstlisting} \end{minipage} $\Rightarrow$ \begin{minipage}{0.3\textwidth} \begin{lstlisting} |$f$|start: movq %rdi, |$x_1$| movq %rsi, |$x_2$| |$\cdots$| |$\itm{instr}_1$| |$\cdots$| |$\itm{instr}_n$| \end{lstlisting} \end{minipage} \end{center} Recall that we use the label \code{start} for the initial block of a program, and in section~\ref{sec:select-Lvar} we recommend labeling the conclusion of the program with \code{conclusion}, so that $\RETURN{Arg}$ can be compiled to an assignment to \code{rax} followed by a jump to \code{conclusion}. With the addition of function definitions, there is a start block and conclusion for each function, but their labels need to be unique. We recommend prepending the function's name to \code{start} and \code{conclusion}, respectively, to obtain unique labels. \racket{The interpreter for \LangXIndCall{} needs to be given the number of parameters the function expects, but the parameters are no longer in the syntax of function definitions. Instead, add an entry to $\itm{info}$ that maps \code{num-params} to the number of parameters to construct $\itm{info}'$.} By changing the parameters to local variables, we are giving the register allocator control over which registers or stack locations to use for them. If you implement the move-biasing challenge (section~\ref{sec:move-biasing}), the register allocator will try to assign the parameter variables to the corresponding argument register, in which case the \code{patch\_instructions} pass will remove the \code{movq} instruction. This happens in the example translation given in figure~\ref{fig:add-fun} in section~\ref{sec:functions-example}, in the \code{add} function. % Also, note that the register allocator will perform liveness analysis on this sequence of move instructions and build the interference graph. So, for example, $x_1$ will be marked as interfering with \code{rsi}, and that will prevent the mapping of $x_1$ to \code{rsi}, which is good because otherwise the first \code{movq} would overwrite the argument in \code{rsi} that is needed for $x_2$. Next, consider the compilation of function calls. In the mirror image of the handling of parameters in function definitions, the arguments are moved to the argument-passing registers. Note that the function is not given as a label, but its address is produced by the argument $\itm{arg}_0$. So, we translate the call into an indirect function call. 
The return value from the function is stored in \code{rax}, so it needs to be moved into the \itm{lhs}. \begin{lstlisting} |\itm{lhs}| = |$\CALL{\itm{arg}_0}{\itm{arg}_1~\itm{arg}_2 \ldots}$| |$\Rightarrow$| movq |$\itm{arg}_1$|, %rdi movq |$\itm{arg}_2$|, %rsi |$\vdots$| callq *|$\itm{arg}_0$| movq %rax, |$\itm{lhs}$| \end{lstlisting} The \code{IndirectCallq} AST node includes an integer for the arity of the function, that is, the number of parameters. That information is useful in the \code{uncover\_live} pass for determining which argument-passing registers are potentially read during the call. For tail calls, the parameter passing is the same as non-tail calls: generate instructions to move the arguments into the argument-passing registers. After that we need to pop the frame from the procedure call stack. However, we do not yet know how big the frame is; that gets determined during register allocation. So, instead of generating those instructions here, we invent a new instruction that means ``pop the frame and then do an indirect jump,'' which we name \code{TailJmp}. The abstract syntax for this instruction includes an argument that specifies where to jump and an integer that represents the arity of the function being called. \section{Register Allocation} \label{sec:register-allocation-r4} The addition of functions requires some changes to all three aspects of register allocation, which we discuss in the following subsections. \subsection{Liveness Analysis} \label{sec:liveness-analysis-r4} \index{subject}{liveness analysis} %% The rest of the passes need only minor modifications to handle the new %% kinds of AST nodes: \code{fun-ref}, \code{indirect-callq}, and %% \code{leaq}. The \code{IndirectCallq} instruction should be treated like \code{Callq} regarding its written locations $W$, in that they should include all the caller-saved registers. Recall that the reason for that is to force variables that are live across a function call to be assigned to callee-saved registers or to be spilled to the stack. Regarding the set of read locations $R$, the arity fields of \code{TailJmp} and \code{IndirectCallq} determine how many of the argument-passing registers should be considered as read by those instructions. Also, the target field of \code{TailJmp} and \code{IndirectCallq} should be included in the set of read locations $R$. \subsection{Build Interference Graph} \label{sec:build-interference-r4} With the addition of function definitions, we compute a separate interference graph for each function (not just one for the whole program). Recall that in section~\ref{sec:reg-alloc-gc} we discussed the need to spill tuple-typed variables that are live during a call to \code{collect}, the garbage collector. With the addition of functions to our language, we need to revisit this issue. Functions that perform allocation contain calls to the collector. Thus, we should not only spill a tuple-typed variable when it is live during a call to \code{collect}, but we should spill the variable if it is live during a call to any user-defined function. Thus, in the \code{build\_interference} pass, we recommend adding interference edges between call-live tuple-typed variables and the callee-saved registers (in addition to creating edges between call-live variables and the caller-saved registers). 
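{\if\edition\pythonEd\pythonColor
The following is a minimal sketch of these extra edges, written against a plain adjacency-map representation rather than the graph data structure in the support code, so the helper names and the \code{var\_types} dictionary are ours. The callee-saved list names only the callee-saved registers that this book's allocator hands out (\code{rsp}, \code{rbp}, and \code{r15} are reserved).
\begin{lstlisting}
CALLER_SAVED = ['rax', 'rcx', 'rdx', 'rsi', 'rdi', 'r8', 'r9', 'r10', 'r11']
CALLEE_SAVED = ['rbx', 'r12', 'r13', 'r14']

def add_edge(graph, u, v):
    graph.setdefault(u, set()).add(v)
    graph.setdefault(v, set()).add(u)

def add_call_edges(graph, live_after, var_types):
    # edges for the locations live across a Callq or IndirectCallq
    for v in live_after:
        for r in CALLER_SAVED:          # the callee may overwrite these
            add_edge(graph, v, r)
        if var_types.get(v) == 'tuple': # call-live tuple-typed variable:
            for r in CALLEE_SAVED:      # conflict with every register, so
                add_edge(graph, v, r)   # it is spilled to the root stack
\end{lstlisting}
\fi}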
\subsection{Allocate Registers}
The primary change to the \code{allocate\_registers} pass is adding an auxiliary function for handling definitions (the \Def{} nonterminal shown in figure~\ref{fig:x86-3}) with one case for function definitions. The logic is the same as described in chapter~\ref{ch:register-allocation-Lvar} except that now register allocation is performed many times, once for each function definition, instead of just once for the whole program.
\section{Patch Instructions}
In \code{patch\_instructions}, you should deal with the x86 idiosyncrasy that the destination argument of \code{leaq} must be a register. Additionally, you should ensure that the argument of \code{TailJmp} is \code{rax}, our reserved register, because we trample many other registers before the tail call, as explained in the next section.
\section{Prelude and Conclusion}
Now that register allocation is complete, we can translate the \code{TailJmp} into a sequence of instructions. A naive translation of \code{TailJmp} would simply be \code{jmp *$\itm{arg}$}. However, before the jump we need to pop the current frame to achieve efficient tail calls. This sequence of instructions is the same as the code for the conclusion of a function, except that the \code{retq} is replaced with \code{jmp *$\itm{arg}$}.
Regarding function definitions, we generate a prelude and conclusion for each one. This code is similar to the prelude and conclusion generated for the \code{main} function presented in chapter~\ref{ch:Lvec}. To review, the prelude of every function should carry out the following steps:
% TODO: .align the functions!
\begin{enumerate}
%% \item Start with \code{.global} and \code{.align} directives followed
%% by the label for the function. (See figure~\ref{fig:add-fun} for an
%% example.)
\item Push \code{rbp} to the stack and set \code{rbp} to the current stack pointer.
\item Push to the stack all the callee-saved registers that were used for register allocation.
\item Move the stack pointer \code{rsp} down to make room for the regular spills (aligned to 16 bytes).
\item Move the root stack pointer \code{r15} up by the size of the root-stack frame for this function, which depends on the number of spilled tuple-typed variables. \label{root-stack-init}
\item Initialize to zero all new entries in the root-stack frame.
\item Jump to the start block.
\end{enumerate}
The prelude of the \code{main} function has an additional task: call the \code{initialize} function to set up the garbage collector, and then move the value of the global \code{rootstack\_begin} into \code{r15}. This initialization should happen before step \ref{root-stack-init}, which depends on \code{r15}.
The conclusion of every function should do the following:
\begin{enumerate}
\item Move the stack pointer back up past the regular spills.
\item Restore the callee-saved registers by popping them from the stack.
\item Move the root stack pointer back down by the size of the root-stack frame for this function.
\item Restore \code{rbp} by popping it from the stack.
\item Return to the caller with the \code{retq} instruction.
\end{enumerate}
The output of this pass is \LangXIndCallFlat{}, which differs from \LangXIndCall{} in that there is no longer an AST node for function definitions. Instead, a program is just an association list of basic blocks, as in \LangXGlobal{}.
So we have the following grammar rule: \[ \LangXIndCallFlatM{} ::= \XPROGRAM{\itm{info}}{\LP\LP\itm{label} \,\key{.}\, \Block \RP\ldots\RP} \] Figure~\ref{fig:Lfun-passes} gives an overview of the passes for compiling \LangFun{} to x86. \begin{exercise}\normalfont\normalsize Expand your compiler to handle \LangFun{} as outlined in this chapter. Create eight new programs that use functions including examples that pass functions and return functions from other functions, recursive functions, functions that create vectors, and functions that make tail calls. Test your compiler on these new programs and all your previously created test programs. \end{exercise} \begin{figure}[tbp] \begin{tcolorbox}[colback=white] {\if\edition\racketEd \begin{tikzpicture}[baseline=(current bounding box.center),scale=0.90] \node (Lfun) at (0,2) {\large \LangFun{}}; \node (Lfun-1) at (4,2) {\large \LangFun{}}; \node (Lfun-2) at (7,2) {\large \LangFun{}}; \node (F1-1) at (11,2) {\large \LangFunRef{}}; \node (F1-2) at (11,0) {\large \LangFunRef{}}; \node (F1-3) at (7,0) {\large \LangFunRefAlloc{}}; \node (F1-4) at (4,0) {\large \LangFunRefAlloc{}}; \node (F1-5) at (0,0) {\large \LangFunANF{}}; \node (C3-2) at (0,-2) {\large \LangCFun{}}; \node (x86-2) at (0,-4) {\large \LangXIndCallVar{}}; \node (x86-3) at (4,-4) {\large \LangXIndCallVar{}}; \node (x86-4) at (8,-4) {\large \LangXIndCall{}}; \node (x86-5) at (8,-6) {\large \LangXIndCallFlat{}}; \node (x86-2-1) at (0,-6) {\large \LangXIndCallVar{}}; \node (x86-2-2) at (4,-6) {\large \LangXIndCallVar{}}; \path[->,bend left=15] (Lfun) edge [above] node {\ttfamily\footnotesize shrink} (Lfun-1); \path[->,bend left=15] (Lfun-1) edge [above] node {\ttfamily\footnotesize uniquify} (Lfun-2); \path[->,bend left=15] (Lfun-2) edge [above] node {\ttfamily\footnotesize ~~reveal\_functions} (F1-1); \path[->,bend left=15] (F1-1) edge [left] node {\ttfamily\footnotesize limit\_functions} (F1-2); \path[->,bend left=15] (F1-2) edge [below] node {\ttfamily\footnotesize expose\_allocation} (F1-3); \path[->,bend left=15] (F1-3) edge [below] node {\ttfamily\footnotesize uncover\_get!} (F1-4); \path[->,bend right=15] (F1-4) edge [above] node {\ttfamily\footnotesize remove\_complex\_operands} (F1-5); \path[->,bend right=15] (F1-5) edge [right] node {\ttfamily\footnotesize explicate\_control} (C3-2); \path[->,bend right=15] (C3-2) edge [right] node {\ttfamily\footnotesize select\_instructions} (x86-2); \path[->,bend left=15] (x86-2) edge [right] node {\ttfamily\footnotesize uncover\_live} (x86-2-1); \path[->,bend right=15] (x86-2-1) edge [below] node {\ttfamily\footnotesize build\_interference} (x86-2-2); \path[->,bend right=15] (x86-2-2) edge [right] node {\ttfamily\footnotesize allocate\_registers} (x86-3); \path[->,bend left=15] (x86-3) edge [above] node {\ttfamily\footnotesize patch\_instructions} (x86-4); \path[->,bend right=15] (x86-4) edge [right] node {\ttfamily\footnotesize prelude\_and\_conclusion} (x86-5); \end{tikzpicture} \fi} {\if\edition\pythonEd\pythonColor \begin{tikzpicture}[baseline=(current bounding box.center),scale=0.85] \node (Lfun) at (0,2) {\large \LangFun{}}; \node (Lfun-2) at (4,2) {\large \LangFun{}}; \node (F1-1) at (8,2) {\large \LangFunRef{}}; \node (F1-2) at (12,2) {\large \LangFunRef{}}; \node (F1-4) at (4,0) {\large \LangFunRefAlloc{}}; \node (F1-5) at (0,0) {\large \LangFunANF{}}; \node (C3-2) at (0,-2) {\large \LangCFun{}}; \node (x86-2) at (0,-4) {\large \LangXIndCallVar{}}; \node (x86-3) at (4,-4) {\large \LangXIndCallVar{}}; \node (x86-4) at (8,-4) 
{\large \LangXIndCall{}}; \node (x86-5) at (12,-4) {\large \LangXIndCallFlat{}}; \path[->,bend left=15] (Lfun) edge [above] node {\ttfamily\footnotesize shrink} (Lfun-2); \path[->,bend left=15] (Lfun-2) edge [above] node {\ttfamily\footnotesize ~~reveal\_functions} (F1-1); \path[->,bend left=15] (F1-1) edge [above] node {\ttfamily\footnotesize limit\_functions} (F1-2); \path[->,bend left=15] (F1-2) edge [right] node {\ttfamily\footnotesize \ \ expose\_allocation} (F1-4); \path[->,bend right=15] (F1-4) edge [above] node {\ttfamily\footnotesize remove\_complex\_operands} (F1-5); \path[->,bend right=15] (F1-5) edge [right] node {\ttfamily\footnotesize explicate\_control} (C3-2); \path[->,bend left=15] (C3-2) edge [right] node {\ttfamily\footnotesize select\_instructions} (x86-2); \path[->,bend right=15] (x86-2) edge [below] node {\ttfamily\footnotesize assign\_homes} (x86-3); \path[->,bend left=15] (x86-3) edge [above] node {\ttfamily\footnotesize patch\_instructions} (x86-4); \path[->,bend right=15] (x86-4) edge [below] node {\ttfamily\footnotesize prelude\_and\_conclusion} (x86-5); \end{tikzpicture} \fi} \end{tcolorbox} \caption{Diagram of the passes for \LangFun{}, a language with functions.} \label{fig:Lfun-passes} \end{figure} \section{An Example Translation} \label{sec:functions-example} Figure~\ref{fig:add-fun} shows an example translation of a simple function in \LangFun{} to x86. The figure includes the results of \code{explicate\_control} and \code{select\_instructions}. \begin{figure}[hbtp] \begin{tcolorbox}[colback=white] \begin{tabular}{ll} \begin{minipage}{0.4\textwidth} % s3_2.rkt {\if\edition\racketEd \begin{lstlisting}[basicstyle=\ttfamily\footnotesize] (define (add [x : Integer] [y : Integer]) : Integer (+ x y)) (add 40 2) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting}[basicstyle=\ttfamily\footnotesize] def add(x:int, y:int) -> int: return x + y print(add(40, 2)) \end{lstlisting} \fi} $\Downarrow$ {\if\edition\racketEd \begin{lstlisting}[basicstyle=\ttfamily\footnotesize] (define (add86 [x87 : Integer] [y88 : Integer]) : Integer add86start: return (+ x87 y88); ) (define (main) : Integer () mainstart: tmp89 = (fun-ref add86 2); (tail-call tmp89 40 2) ) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting}[basicstyle=\ttfamily\footnotesize] def add(x:int, y:int) -> int: addstart: return x + y def main() -> int: mainstart: fun.0 = add tmp.1 = fun.0(40, 2) print(tmp.1) return 0 \end{lstlisting} \fi} \end{minipage} & $\Rightarrow$ \begin{minipage}{0.5\textwidth} {\if\edition\racketEd \begin{lstlisting}[basicstyle=\ttfamily\footnotesize] (define (add86) : Integer add86start: movq %rdi, x87 movq %rsi, y88 movq x87, %rax addq y88, %rax jmp inc1389conclusion ) (define (main) : Integer mainstart: leaq (fun-ref add86 2), tmp89 movq $40, %rdi movq $2, %rsi tail-jmp tmp89 ) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting}[basicstyle=\ttfamily\footnotesize] def add() -> int: addstart: movq %rdi, x movq %rsi, y movq x, %rax addq y, %rax jmp addconclusion def main() -> int: mainstart: leaq add, fun.0 movq $40, %rdi movq $2, %rsi callq *fun.0 movq %rax, tmp.1 movq tmp.1, %rdi callq print_int movq $0, %rax jmp mainconclusion \end{lstlisting} \fi} $\Downarrow$ \end{minipage} \end{tabular} \begin{tabular}{ll} \begin{minipage}{0.3\textwidth} {\if\edition\racketEd \begin{lstlisting}[basicstyle=\ttfamily\footnotesize] .globl add86 .align 8 add86: pushq %rbp movq %rsp, %rbp jmp add86start add86start: movq %rdi, %rax addq 
%rsi, %rax jmp add86conclusion add86conclusion: popq %rbp retq \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting}[basicstyle=\ttfamily\footnotesize] .align 8 add: pushq %rbp movq %rsp, %rbp subq $0, %rsp jmp addstart addstart: movq %rdi, %rdx movq %rsi, %rcx movq %rdx, %rax addq %rcx, %rax jmp addconclusion addconclusion: subq $0, %r15 addq $0, %rsp popq %rbp retq \end{lstlisting} \fi} \end{minipage} & \begin{minipage}{0.5\textwidth} {\if\edition\racketEd \begin{lstlisting}[basicstyle=\ttfamily\footnotesize] .globl main .align 8 main: pushq %rbp movq %rsp, %rbp movq $16384, %rdi movq $16384, %rsi callq initialize movq rootstack_begin(%rip), %r15 jmp mainstart mainstart: leaq add86(%rip), %rcx movq $40, %rdi movq $2, %rsi movq %rcx, %rax popq %rbp jmp *%rax mainconclusion: popq %rbp retq \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting}[basicstyle=\ttfamily\footnotesize] .globl main .align 8 main: pushq %rbp movq %rsp, %rbp subq $0, %rsp movq $65536, %rdi movq $65536, %rsi callq initialize movq rootstack_begin(%rip), %r15 jmp mainstart mainstart: leaq add(%rip), %rcx movq $40, %rdi movq $2, %rsi callq *%rcx movq %rax, %rcx movq %rcx, %rdi callq print_int movq $0, %rax jmp mainconclusion mainconclusion: subq $0, %r15 addq $0, %rsp popq %rbp retq \end{lstlisting} \fi} \end{minipage} \end{tabular} \end{tcolorbox} \caption{Example compilation of a simple function to x86.} \label{fig:add-fun} \end{figure} % Challenge idea: inlining! (simple version) % Further Reading %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \chapter{Lexically Scoped Functions} \label{ch:Llambda} \setcounter{footnote}{0} This chapter studies lexically scoped functions. Lexical scoping\index{subject}{lexical scoping} means that a function's body may refer to variables whose binding site is outside of the function, in an enclosing scope. % Consider the example shown in figure~\ref{fig:lexical-scoping} written in \LangLam{}, which extends \LangFun{} with the \key{lambda}\index{subject}{lambda} form for creating lexically scoped functions. The body of the \key{lambda} refers to three variables: \code{x}, \code{y}, and \code{z}. The binding sites for \code{x} and \code{y} are outside of the \key{lambda}. Variable \code{y} is \racket{bound by the enclosing \key{let}}\python{a local variable of function \code{f}}, and \code{x} is a parameter of function \code{f}. Note that function \code{f} returns the \key{lambda} as its result value. The main expression of the program includes two calls to \code{f} with different arguments for \code{x}: first \code{5} and then \code{3}. The functions returned from \code{f} are bound to variables \code{g} and \code{h}. Even though these two functions were created by the same \code{lambda}, they are really different functions because they use different values for \code{x}. Applying \code{g} to \code{11} produces \code{20} whereas applying \code{h} to \code{15} produces \code{22}, so the result of the program is \code{42}. 
\begin{figure}[btp] \begin{tcolorbox}[colback=white] {\if\edition\racketEd % lambda_test_21.rkt \begin{lstlisting} (define (f [x : Integer]) : (Integer -> Integer) (let ([y 4]) (lambda: ([z : Integer]) : Integer (+ x (+ y z))))) (let ([g (f 5)]) (let ([h (f 3)]) (+ (g 11) (h 15)))) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} def f(x : int) -> Callable[[int], int]: y = 4 return lambda z: x + y + z g = f(5) h = f(3) print(g(11) + h(15)) \end{lstlisting} \fi} \end{tcolorbox} \caption{Example of a lexically scoped function.} \label{fig:lexical-scoping} \end{figure} The approach that we take for implementing lexically scoped functions is to compile them into top-level function definitions, translating from \LangLam{} into \LangFun{}. However, the compiler must give special treatment to variable occurrences such as \code{x} and \code{y} in the body of the \code{lambda} shown in figure~\ref{fig:lexical-scoping}. After all, an \LangFun{} function may not refer to variables defined outside of it. To identify such variable occurrences, we review the standard notion of free variable. \begin{definition}\normalfont A variable is \emph{free in expression} $e$ if the variable occurs inside $e$ but does not have an enclosing definition that is also in $e$.\index{subject}{free variable} \end{definition} For example, in the expression \racket{\code{(+ x (+ y z))}}\python{\code{x + y + z}} the variables \code{x}, \code{y}, and \code{z} are all free. On the other hand, only \code{x} and \code{y} are free in the following expression, because \code{z} is defined by the \code{lambda} {\if\edition\racketEd \begin{lstlisting} (lambda: ([z : Integer]) : Integer (+ x (+ y z))) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} lambda z: x + y + z \end{lstlisting} \fi} % \noindent Thus the free variables of a \code{lambda} are the ones that need special treatment. We need to transport at runtime the values of those variables from the point where the \code{lambda} was created to the point where the \code{lambda} is applied. An efficient solution to the problem, due to \citet{Cardelli:1983aa}, is to bundle the values of the free variables together with a function pointer into a tuple, an arrangement called a \emph{flat closure} (which we shorten to just \emph{closure}).\index{subject}{closure}\index{subject}{flat closure} % By design, we have all the ingredients to make closures: chapter~\ref{ch:Lvec} gave us tuples, and chapter~\ref{ch:Lfun} gave us function pointers. The function pointer resides at index $0$, and the values for the free variables fill in the rest of the tuple. Let us revisit the example shown in figure~\ref{fig:lexical-scoping} to see how closures work. It is a three-step dance. The program calls function \code{f}, which creates a closure for the \code{lambda}. The closure is a tuple whose first element is a pointer to the top-level function that we will generate for the \code{lambda}; the second element is the value of \code{x}, which is \code{5}; and the third element is \code{4}, the value of \code{y}. The closure does not contain an element for \code{z} because \code{z} is not a free variable of the \code{lambda}. Creating the closure is step 1 of the dance. The closure is returned from \code{f} and bound to \code{g}, as shown in figure~\ref{fig:closures}. % The second call to \code{f} creates another closure, this time with \code{3} in the second slot (for \code{x}). 
This closure is also returned from \code{f} but bound to \code{h}, which is also shown in figure~\ref{fig:closures}. \begin{figure}[tbp] \centering \begin{minipage}{0.65\textwidth} \begin{tcolorbox}[colback=white] \includegraphics[width=\textwidth]{figs/closures} \end{tcolorbox} \end{minipage} \caption{Flat closure representations for the two functions produced by the \key{lambda} in figure~\ref{fig:lexical-scoping}.} \label{fig:closures} \end{figure} Continuing with the example, consider the application of \code{g} to \code{11} shown in figure~\ref{fig:lexical-scoping}. To apply a closure, we obtain the function pointer from the first element of the closure and call it, passing in the closure itself and then the regular arguments, in this case \code{11}. This technique for applying a closure is step 2 of the dance. % But doesn't this \code{lambda} take only one argument, for parameter \code{z}? The third and final step of the dance is generating a top-level function for a \code{lambda}. We add an additional parameter for the closure and insert an initialization at the beginning of the function for each free variable, to bind those variables to the appropriate elements from the closure parameter. % This three-step dance is known as \emph{closure conversion}\index{subject}{closure conversion}. We discuss the details of closure conversion in section~\ref{sec:closure-conversion} and show the code generated from the example in section~\ref{sec:example-lambda}. First, we define the syntax and semantics of \LangLam{} in section~\ref{sec:r5}. \section{The \LangLam{} Language} \label{sec:r5} The definitions of the concrete syntax and abstract syntax for \LangLam{}, a language with anonymous functions and lexical scoping, are shown in figures~\ref{fig:Llam-concrete-syntax} and \ref{fig:Llam-syntax}. They add the \key{lambda} form to the grammar for \LangFun{}, which already has syntax for function application. % \python{The syntax also includes an assignment statement that includes a type annotation for the variable on the left-hand side, which facilitates the type checking of \code{lambda} expressions that we discuss later in this section.} % \racket{The \code{procedure-arity} operation returns the number of parameters of a given function, an operation that we need for the translation of dynamic typing that is discussed in chapter~\ref{ch:Ldyn}.} % \python{The \code{arity} operation returns the number of parameters of a given function, an operation that we need for the translation of dynamic typing that is discussed in chapter~\ref{ch:Ldyn}. The \code{arity} operation is not in Python, but the same functionality is available in a more complex form. 
We include \code{arity} in the \LangLam{} source language to enable testing.} \newcommand{\LlambdaGrammarRacket}{ \begin{array}{lcl} \Exp &::=& \CLAMBDA{\LP\LS\Var \key{:} \Type\RS\ldots\RP}{\Type}{\Exp} \\ &\MID& \LP \key{procedure-arity}~\Exp\RP \end{array} } \newcommand{\LlambdaASTRacket}{ \begin{array}{lcl} \Exp &::=& \LAMBDA{\LP\LS\Var\code{:}\Type\RS\ldots\RP}{\Type}{\Exp}\\ \itm{op} &::=& \code{procedure-arity} \end{array} } \newcommand{\LlambdaGrammarPython}{ \begin{array}{lcl} \Exp &::=& \CLAMBDA{\Var\code{, }\ldots}{\Exp} \MID \CARITY{\Exp} \\ \Stmt &::=& \CANNASSIGN{\Var}{\Type}{\Exp} \end{array} } \newcommand{\LlambdaASTPython}{ \begin{array}{lcl} \Exp &::=& \LAMBDA{\Var^{*}}{\Exp} \MID \ARITY{\Exp} \\ \Stmt &::=& \ANNASSIGN{\Var}{\Type}{\Exp} \end{array} } % include AnnAssign in ASTPython \begin{figure}[tp] \centering \begin{tcolorbox}[colback=white] \small {\if\edition\racketEd \[ \begin{array}{l} \gray{\LintGrammarRacket{}} \\ \hline \gray{\LvarGrammarRacket{}} \\ \hline \gray{\LifGrammarRacket{}} \\ \hline \gray{\LwhileGrammarRacket} \\ \hline \gray{\LtupGrammarRacket} \\ \hline \gray{\LfunGrammarRacket} \\ \hline \LlambdaGrammarRacket \\ \begin{array}{lcl} \LangLamM{} &::=& \Def\ldots \; \Exp \end{array} \end{array} \] \fi} {\if\edition\pythonEd\pythonColor \[ \begin{array}{l} \gray{\LintGrammarPython{}} \\ \hline \gray{\LvarGrammarPython{}} \\ \hline \gray{\LifGrammarPython{}} \\ \hline \gray{\LwhileGrammarPython} \\ \hline \gray{\LtupGrammarPython} \\ \hline \gray{\LfunGrammarPython} \\ \hline \LlambdaGrammarPython \\ \begin{array}{lcl} \LangFunM{} &::=& \Def\ldots \Stmt\ldots \end{array} \end{array} \] \fi} \end{tcolorbox} \caption{The concrete syntax of \LangLam{}, extending \LangFun{} (figure~\ref{fig:Lfun-concrete-syntax}) with \key{lambda}.} \label{fig:Llam-concrete-syntax} \end{figure} \begin{figure}[tp] \centering \begin{tcolorbox}[colback=white] \small {\if\edition\racketEd \[\arraycolsep=3pt \begin{array}{l} \gray{\LintOpAST} \\ \hline \gray{\LvarASTRacket{}} \\ \hline \gray{\LifASTRacket{}} \\ \hline \gray{\LwhileASTRacket{}} \\ \hline \gray{\LtupASTRacket{}} \\ \hline \gray{\LfunASTRacket} \\ \hline \LlambdaASTRacket \\ \begin{array}{lcl} \LangLamM{} &::=& \PROGRAMDEFSEXP{\code{'()}}{\LP\Def\ldots\RP}{\Exp} \end{array} \end{array} \] \fi} {\if\edition\pythonEd\pythonColor \[ \begin{array}{l} \gray{\LintASTPython} \\ \hline \gray{\LvarASTPython{}} \\ \hline \gray{\LifASTPython{}} \\ \hline \gray{\LwhileASTPython{}} \\ \hline \gray{\LtupASTPython{}} \\ \hline \gray{\LfunASTPython} \\ \hline \LlambdaASTPython \\ \begin{array}{lcl} \LangLamM{} &::=& \PROGRAM{}{\LS \Def \ldots \Stmt \ldots \RS} \end{array} \end{array} \] \fi} \end{tcolorbox} \caption{The abstract syntax of \LangLam{}, extending \LangFun{} (figure~\ref{fig:Lfun-syntax}).} \label{fig:Llam-syntax} \end{figure} Figure~\ref{fig:interp-Llambda} shows the definitional interpreter\index{subject}{interpreter} for \LangLam{}. The case for \key{Lambda} saves the current environment inside the returned function value. Recall that during function application, the environment stored in the function value, extended with the mapping of parameters to argument values, is used to interpret the body of the function. \begin{figure}[tbp] \begin{tcolorbox}[colback=white] {\if\edition\racketEd \begin{lstlisting} (define interp-Llambda-class (class interp-Lfun-class (super-new) (define/override (interp-op op) (match op ['procedure-arity (lambda (v) (match v [`(function (,xs ...) 
,body ,lam-env) (length xs)] [else (error 'interp-op "expected a function, not ~a" v)]))] [else (super interp-op op)])) (define/override ((interp-exp env) e) (define recur (interp-exp env)) (match e [(Lambda (list `[,xs : ,Ts] ...) rT body) `(function ,xs ,body ,env)] [else ((super interp-exp env) e)])) )) (define (interp-Llambda p) (send (new interp-Llambda-class) interp-program p)) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} class InterpLlambda(InterpLfun): def arity(self, v): match v: case Function(name, params, body, env): return len(params) case _: raise Exception('Llambda arity unexpected ' + repr(v)) def interp_exp(self, e, env): match e: case Call(Name('arity'), [fun]): f = self.interp_exp(fun, env) return self.arity(f) case Lambda(params, body): return Function('lambda', params, [Return(body)], env) case _: return super().interp_exp(e, env) def interp_stmt(self, s, env, cont): match s: case AnnAssign(lhs, typ, value, simple): env[lhs.id] = self.interp_exp(value, env) return self.interp_stmts(cont, env) case Pass(): return self.interp_stmts(cont, env) case _: return super().interp_stmt(s, env, cont) \end{lstlisting} \fi} \end{tcolorbox} \caption{Interpreter for \LangLam{}.} \label{fig:interp-Llambda} \end{figure} {\if\edition\racketEd % Figure~\ref{fig:type-check-Llambda} shows how to type check the new \key{lambda} form. The body of the \key{lambda} is checked in an environment that includes the current environment (because it is lexically scoped) and also includes the \key{lambda}'s parameters. We require the body's type to match the declared return type. % \fi} {\if\edition\pythonEd\pythonColor % Figures~\ref{fig:type-check-Llambda} and \ref{fig:type-check-Llambda-part2} define the type checker for \LangLam{}, which is more complex than one might expect. The reason for the added complexity is that the syntax of \key{lambda} does not include type annotations for the parameters or return type. Instead they must be inferred. There are many approaches to type inference from which to choose, of varying degrees of complexity. We choose one of the simpler approaches, bidirectional type inference~\citep{Pierce:2000,Dunfield:2021}, because the focus of this book is compilation, not type inference. The main idea of bidirectional type inference is to add an auxiliary function, here named \code{check\_exp}, that takes an expected type and checks whether the given expression is of that type. Thus, in \code{check\_exp}, type information flows in a top-down manner with respect to the AST, in contrast to the regular \code{type\_check\_exp} function, where type information flows in a primarily bottom-up manner. % The idea then is to use \code{check\_exp} in all the places where we already know what the type of an expression should be, such as in the \code{return} statement of a top-level function definition or on the right-hand side of an annotated assignment statement. With regard to \code{lambda}, it is straightforward to check a \code{lambda} inside \code{check\_exp} because the expected type provides the parameter types and the return type. On the other hand, inside \code{type\_check\_exp} we disallow \code{lambda}, which means that we do not allow \code{lambda} in contexts in which we don't already know its type. 
This restriction does not incur a loss of expressiveness for \LangLam{} because it is straightforward to modify a program to sidestep the restriction, for example, by using an annotated assignment statement to assign the \code{lambda} to a temporary variable. Note that for the \code{Name} and \code{Lambda} AST nodes, the type checker records their type in a \code{has\_type} field. This type information is used further on in this chapter. % \fi} \begin{figure}[tbp] \begin{tcolorbox}[colback=white] {\if\edition\racketEd \begin{lstlisting} (define (type-check-Llambda env) (lambda (e) (match e [(Lambda (and params `([,xs : ,Ts] ...)) rT body) (define-values (new-body bodyT) ((type-check-exp (append (map cons xs Ts) env)) body)) (define ty `(,@Ts -> ,rT)) (cond [(equal? rT bodyT) (values (HasType (Lambda params rT new-body) ty) ty)] [else (error "mismatch in return type" bodyT rT)])] ... ))) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} class TypeCheckLlambda(TypeCheckLfun): def type_check_exp(self, e, env): match e: case Name(id): e.has_type = env[id] return env[id] case Lambda(params, body): raise Exception('cannot synthesize a type for a lambda') case Call(Name('arity'), [func]): func_t = self.type_check_exp(func, env) match func_t: case FunctionType(params_t, return_t): return IntType() case _: raise Exception('in arity, unexpected ' + repr(func_t)) case _: return super().type_check_exp(e, env) def check_exp(self, e, ty, env): match e: case Lambda(params, body): e.has_type = ty match ty: case FunctionType(params_t, return_t): new_env = env.copy().update(zip(params, params_t)) self.check_exp(body, return_t, new_env) case _: raise Exception('lambda does not have type ' + str(ty)) case Call(func, args): func_t = self.type_check_exp(func, env) match func_t: case FunctionType(params_t, return_t): for (arg, param_t) in zip(args, params_t): self.check_exp(arg, param_t, env) self.check_type_equal(return_t, ty, e) case _: raise Exception('type_check_exp: in call, unexpected ' + \ repr(func_t)) case _: t = self.type_check_exp(e, env) self.check_type_equal(t, ty, e) \end{lstlisting} \fi} \end{tcolorbox} \caption{Type checking \LangLam{}\python{, part 1}.} \label{fig:type-check-Llambda} \end{figure} {\if\edition\pythonEd\pythonColor \begin{figure}[tbp] \begin{tcolorbox}[colback=white] \begin{lstlisting} def check_stmts(self, ss, return_ty, env): if len(ss) == 0: return match ss[0]: case FunctionDef(name, params, body, dl, returns, comment): new_env = env.copy().update(params) rt = self.check_stmts(body, returns, new_env) self.check_stmts(ss[1:], return_ty, env) case Return(value): self.check_exp(value, return_ty, env) case Assign([Name(id)], value): if id in env: self.check_exp(value, env[id], env) else: env[id] = self.type_check_exp(value, env) self.check_stmts(ss[1:], return_ty, env) case Assign([Subscript(tup, Constant(index), Store())], value): tup_t = self.type_check_exp(tup, env) match tup_t: case TupleType(ts): self.check_exp(value, ts[index], env) case _: raise Exception('expected a tuple, not ' + repr(tup_t)) self.check_stmts(ss[1:], return_ty, env) case AnnAssign(Name(id), ty_annot, value, simple): ss[0].annotation = ty_annot if id in env: self.check_type_equal(env[id], ty_annot) else: env[id] = ty_annot self.check_exp(value, ty_annot, env) self.check_stmts(ss[1:], return_ty, env) case _: self.type_check_stmts(ss, env) def type_check(self, p): match p: case Module(body): env = {} for s in body: match s: case FunctionDef(name, params, bod, dl, returns, comment): 
params_t = [t for (x,t) in params] env[name] = FunctionType(params_t, returns) self.check_stmts(body, int, env) \end{lstlisting} \end{tcolorbox} \caption{Type checking the \key{lambda}'s in \LangLam{}, part 2.} \label{fig:type-check-Llambda-part2} \end{figure} \fi} \clearpage \section{Assignment and Lexically Scoped Functions} \label{sec:assignment-scoping} The combination of lexically scoped functions and assignment to variables raises a challenge with the flat-closure approach to implementing lexically scoped functions. Consider the following example in which function \code{f} has a free variable \code{x} that is changed after \code{f} is created but before the call to \code{f}. % loop_test_11.rkt {\if\edition\racketEd \begin{lstlisting} (let ([x 0]) (let ([y 0]) (let ([z 20]) (let ([f (lambda: ([a : Integer]) : Integer (+ a (+ x z)))]) (begin (set! x 10) (set! y 12) (f y)))))) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor % box_free_assign.py \begin{lstlisting} def g(z : int) -> int: x = 0 y = 0 f : Callable[[int],int] = lambda a: a + x + z x = 10 y = 12 return f(y) print(g(20)) \end{lstlisting} \fi} The correct output for this example is \code{42} because the call to \code{f} is required to use the current value of \code{x} (which is \code{10}). Unfortunately, the closure conversion pass (section~\ref{sec:closure-conversion}) generates code for the \code{lambda} that copies the old value of \code{x} into a closure. Thus, if we naively applied closure conversion, the output of this program would be \code{32}. A first attempt at solving this problem would be to save a pointer to \code{x} in the closure and change the occurrences of \code{x} inside the lambda to dereference the pointer. Of course, this would require assigning \code{x} to the stack and not to a register. However, the problem goes a bit deeper. Consider the following example that returns a function that refers to a local variable of the enclosing function: \begin{center} \begin{minipage}{\textwidth} {\if\edition\racketEd \begin{lstlisting} (define (f) : ( -> Integer) (let ([x 0]) (let ([g (lambda: () : Integer x)]) (begin (set! x 42) g)))) ((f)) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor % counter.py \begin{lstlisting} def f(): x = 0 g = lambda: x x = 42 return g print(f()()) \end{lstlisting} \fi} \end{minipage} \end{center} In this example, the lifetime of \code{x} extends beyond the lifetime of the call to \code{f}. Thus, if we were to store \code{x} on the stack frame for the call to \code{f}, it would be gone by the time we called \code{g}, leaving us with dangling pointers for \code{x}. This example demonstrates that when a variable occurs free inside a function, its lifetime becomes indefinite. Thus, the value of the variable needs to live on the heap. The verb \emph{box}\index{subject}{box} is often used for allocating a single value on the heap, producing a pointer, and \emph{unbox}\index{subject}{unbox} for dereferencing the pointer. % We introduce a new pass named \code{convert\_assignments} to address this challenge. % \python{But before diving into that, we have one more problem to discuss.} {\if\edition\pythonEd\pythonColor \section{Uniquify Variables} \label{sec:uniquify-lambda} With the addition of \code{lambda} we have a complication to deal with: name shadowing. Consider the following program with a function \code{f} that has a parameter \code{x}. Inside \code{f} there are two \code{lambda} expressions. The first \code{lambda} has a parameter that is also named \code{x}. 
\begin{lstlisting} def f(x:int, y:int) -> Callable[[int], int]: g : Callable[[int],int] = (lambda x: x + y) h : Callable[[int],int] = (lambda y: x + y) x = input_int() return g print(f(0, 10)(32)) \end{lstlisting} Many of our compiler passes rely on being able to connect variable uses with their definitions using just the name of the variable. However, in the example above, the name of the variable does not uniquely determine its definition. To solve this problem we recommend implementing a pass named \code{uniquify} that renames every variable in the program to make sure that they are all unique. The following shows the result of \code{uniquify} for the example above. The \code{x} parameter of function \code{f} is renamed to \code{x\_0}, and the \code{x} parameter of the first \code{lambda} is renamed to \code{x\_4}. \begin{lstlisting} def f(x_0:int, y_1:int) -> Callable[[int], int] : g_2 : Callable[[int], int] = (lambda x_4: x_4 + y_1) h_3 : Callable[[int], int] = (lambda y_5: x_0 + y_5) x_0 = input_int() return g_2 def main() -> int : print(f(0, 10)(32)) return 0 \end{lstlisting} \fi} % pythonEd %% \section{Reveal Functions} %% \label{sec:reveal-functions-r5} %% \racket{To support the \code{procedure-arity} operator we need to %% communicate the arity of a function to the point of closure %% creation.} %% % %% \python{In chapter~\ref{ch:Ldyn} we need to access the arity of a %% function at runtime. Thus, we need to communicate the arity of a %% function to the point of closure creation.} %% % %% We can accomplish this by replacing the $\FUNREF{\Var}{\Int}$ AST node with %% one that has a second field for the arity: $\FUNREFARITY{\Var}{\Int}$. %% \[ %% \begin{array}{lcl} %% \Exp &::=& \FUNREFARITY{\Var}{\Int} %% \end{array} %% \] \section{Assignment Conversion} \label{sec:convert-assignments} The purpose of the \code{convert\_assignments} pass is to address the challenge regarding the interaction between variable assignments and closure conversion. First we identify which variables need to be boxed, and then we transform the program to box those variables. In general, boxing introduces runtime overhead that we would like to avoid, so we should box as few variables as possible. We recommend boxing the variables in the intersection of the following two sets of variables: \begin{enumerate} \item The variables that are free in a \code{lambda}. \item The variables that appear on the left-hand side of an assignment. \end{enumerate} The first condition is a must but the second condition is conservative. It is possible to develop a more liberal condition using static program analysis. Consider again the first example from section~\ref{sec:assignment-scoping}: % {\if\edition\racketEd \begin{lstlisting} (let ([x 0]) (let ([y 0]) (let ([z 20]) (let ([f (lambda: ([a : Integer]) : Integer (+ a (+ x z)))]) (begin (set! x 10) (set! y 12) (f y)))))) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} def g(z : int) -> int: x = 0 y = 0 f : Callable[[int],int] = lambda a: a + x + z x = 10 y = 12 return f(y) print(g(20)) \end{lstlisting} \fi} % \noindent The variables \code{x} and \code{y} appear on the left-hand side of assignments. The variables \code{x} and \code{z} occur free inside the \code{lambda}. Thus, variable \code{x} needs to be boxed but not \code{y} or \code{z}. 
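{\if\edition\pythonEd\pythonColor
%
To make this analysis concrete, the following sketch computes the set of
variables to box for the example above. It is not part of the compiler:
it uses Python's standard \code{ast} module instead of the AST classes
used in this book, purely to illustrate the intersection of the two sets.
\begin{lstlisting}
import ast

prog = '''
def g(z : int) -> int:
    x = 0
    y = 0
    f : Callable[[int],int] = lambda a: a + x + z
    x = 10
    y = 12
    return f(y)
'''
fundef = ast.parse(prog).body[0]

# Variables that occur free in some lambda: the names read inside a
# lambda body, minus that lambda's own parameters.
free_in_lambda = set()
for node in ast.walk(fundef):
    if isinstance(node, ast.Lambda):
        params = {a.arg for a in node.args.args}
        reads = {n.id for n in ast.walk(node.body)
                 if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Load)}
        free_in_lambda.update(reads - params)

# Variables that appear on the left-hand side of an assignment.
assigned = {n.id for n in ast.walk(fundef)
            if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Store)}

print(free_in_lambda & assigned)   # prints {'x'}: only x needs boxing
\end{lstlisting}
%
\fi}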
The boxing of \code{x} consists of three transformations: initialize \code{x} with a tuple whose elements are uninitialized, replace reads from \code{x} with tuple reads, and replace each assignment to \code{x} with a tuple write. The output of \code{convert\_assignments} for this example is as follows: % {\if\edition\racketEd \begin{lstlisting} (define (main) : Integer (let ([x0 (vector 0)]) (let ([y1 0]) (let ([z2 20]) (let ([f4 (lambda: ([a3 : Integer]) : Integer (+ a3 (+ (vector-ref x0 0) z2)))]) (begin (vector-set! x0 0 10) (set! y1 12) (f4 y1))))))) \end{lstlisting} \fi} % {\if\edition\pythonEd\pythonColor \begin{lstlisting} def g(z : int)-> int: x = (uninitialized(int),) x[0] = 0 y = 0 f : Callable[[int], int] = (lambda a: a + x[0] + z) x[0] = 10 y = 12 return f(y) def main() -> int: print(g(20)) return 0 \end{lstlisting} \fi} To compute the free variables of all the \code{lambda} expressions, we recommend defining the following two auxiliary functions: \begin{enumerate} \item \code{free\_variables} computes the free variables of an expression, and \item \code{free\_in\_lambda} collects all the variables that are free in any of the \code{lambda} expressions, using \code{free\_variables} in the case for each \code{lambda}. \end{enumerate} {\if\edition\racketEd % To compute the variables that are assigned to, we recommend updating the \code{collect-set!} function that we introduced in section~\ref{sec:uncover-get-bang} to include the new AST forms such as \code{Lambda}. % \fi} {\if\edition\pythonEd\pythonColor % To compute the variables that are assigned to, we recommend defining an auxiliary function named \code{assigned\_vars\_stmt} that returns the set of variables that occur in the left-hand side of an assignment statement and otherwise returns the empty set. % \fi} Let $\mathit{AF}$ be the intersection of the set of variables that are free in a \code{lambda} and that are assigned to in the enclosing function definition. Next we discuss the \code{convert\_assignments} pass. In the case for $\VAR{x}$, if $x$ is in $\mathit{AF}$, then unbox it by translating $\VAR{x}$ to a tuple read. % {\if\edition\racketEd \begin{lstlisting} (Var |$x$|) |$\Rightarrow$| (Prim 'vector-ref (list (Var |$x$|) (Int 0))) \end{lstlisting} \fi} % {\if\edition\pythonEd\pythonColor \begin{lstlisting} Name(|$x$|) |$\Rightarrow$| Subscript(Name(|$x$|), Constant(0), Load()) \end{lstlisting} \fi} % \noindent In the case for assignment, recursively process the right-hand side \itm{rhs} to obtain \itm{rhs'}. If the left-hand side $x$ is in $\mathit{AF}$, translate the assignment into a tuple write as follows: % {\if\edition\racketEd \begin{lstlisting} (SetBang |$x$| |$\itm{rhs}$|) |$\Rightarrow$| (Prim 'vector-set! (list (Var |$x$|) (Int 0) |$\itm{rhs'}$|)) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} Assign([Name(|$x$|)],|$\itm{rhs}$|) |$\Rightarrow$| Assign([Subscript(Name(|$x$|), Constant(0), Store())], |$\itm{rhs'}$|) \end{lstlisting} \fi} % {\if\edition\racketEd The case for \code{Lambda} is nontrivial, but it is similar to the case for function definitions, which we discuss next. \fi} % To translate a function definition, we first compute $\mathit{AF}$, the intersection of the variables that are free in a \code{lambda} and that are assigned to. We then apply assignment conversion to the body of the function definition. Finally, we box the parameters of this function definition that are in $\mathit{AF}$. 
For example, the parameter \code{x} of the following function \code{g} needs to be boxed: {\if\edition\racketEd \begin{lstlisting} (define (g [x : Integer]) : Integer (let ([f (lambda: ([a : Integer]) : Integer (+ a x))]) (begin (set! x 10) (f 32)))) \end{lstlisting} \fi} % {\if\edition\pythonEd\pythonColor \begin{lstlisting} def g(x : int) -> int: f : Callable[[int],int] = lambda a: a + x x = 10 return f(32) \end{lstlisting} \fi} % \noindent We box parameter \code{x} by creating a local variable named \code{x} that is initialized to a tuple whose contents is the value of the parameter, which has been renamed to \code{x\_0}. % {\if\edition\racketEd \begin{lstlisting} (define (g [x_0 : Integer]) : Integer (let ([x (vector x_0)]) (let ([f (lambda: ([a : Integer]) : Integer (+ a (vector-ref x 0)))]) (begin (vector-set! x 0 10) (f 32))))) \end{lstlisting} \fi} % {\if\edition\pythonEd\pythonColor \begin{lstlisting} def g(x_0 : int)-> int: x = (x_0,) f : Callable[[int], int] = (lambda a: a + x[0]) x[0] = 10 return f(32) \end{lstlisting} \fi} \section{Closure Conversion} \label{sec:closure-conversion} \index{subject}{closure conversion} The compiling of lexically scoped functions into top-level function definitions and flat closures is accomplished in the pass \code{convert\_to\_closures} that comes after \code{reveal\_functions} and before \code{limit\_functions}. As usual, we implement the pass as a recursive function over the AST. The interesting cases are for \key{lambda} and function application. We transform a \key{lambda} expression into an expression that creates a closure, that is, a tuple for which the first element is a function pointer and the rest of the elements are the values of the free variables of the \key{lambda}. % However, we use the \code{Closure} AST node instead of using a tuple so that we can record the arity. % In the generated code that follows, \itm{fvs} is the free variables of the lambda and \itm{name} is a unique symbol generated to identify the lambda. % \racket{The \itm{arity} is the number of parameters (the length of \itm{ps}).} % {\if\edition\racketEd \begin{lstlisting} (Lambda |\itm{ps}| |\itm{rt}| |\itm{body}|) |$\Rightarrow$| (Closure |\itm{arity}| (cons (FunRef |\itm{name}| |\itm{arity}|) |\itm{fvs}|)) \end{lstlisting} \fi} % {\if\edition\pythonEd\pythonColor \begin{lstlisting} Lambda([|$x_1,\ldots,x_n$|], |\itm{body}|) |$\Rightarrow$| Closure(|$n$|, [FunRef(|\itm{name}|, |$n$|), |\itm{fvs}, \ldots|]) \end{lstlisting} \fi} % In addition to transforming each \key{Lambda} AST node into a tuple, we create a top-level function definition for each \key{Lambda}, as shown next.\\ \begin{minipage}{0.8\textwidth} {\if\edition\racketEd \begin{lstlisting} (Def |\itm{name}| ([clos : (Vector _ |\itm{fvts}| ...)] |\itm{ps'}| ...) |\itm{rt'}| (Let |$\itm{fvs}_1$| (Prim 'vector-ref (list (Var clos) (Int 1))) ... (Let |$\itm{fvs}_n$| (Prim 'vector-ref (list (Var clos) (Int |$n$|))) |\itm{body'}|)...)) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} def |\itm{name}|(clos : |\itm{closTy}|, |\itm{ps'}, \ldots|) -> |\itm{rt'}|: |$\itm{fvs}_1$| = clos[1] |$\ldots$| |$\itm{fvs}_n$| = clos[|$n$|] |\itm{body'}| \end{lstlisting} \fi} \end{minipage}\\ The \code{clos} parameter refers to the closure. Translate the type annotations in \itm{ps} and the return type \itm{rt}, as discussed in the next paragraph, to obtain \itm{ps'} and \itm{rt'}. 
The type \itm{closTy} is a tuple type for which the first element type is \python{\code{Bottom()}}\racket{\code{\_} (the dummy type)} and the rest of the element types are the types of the free variables in the lambda. We use \python{\code{Bottom()}}\racket{\code{\_}} because it is nontrivial to give a type to the function in the closure's type.% % \footnote{To give an accurate type to a closure, we would need to add existential types to the type checker~\citep{Minamide:1996ys}.} % %% The dummy type is considered to be equal to any other type during type %% checking. The free variables become local variables that are initialized with their values in the closure. Closure conversion turns every function into a tuple, so the type annotations in the program must also be translated. We recommend defining an auxiliary recursive function for this purpose. Function types should be translated as follows: % {\if\edition\racketEd \begin{lstlisting} (|$T_1, \ldots, T_n$| -> |$T_r$|) |$\Rightarrow$| (Vector ((Vector) |$T'_1, \ldots, T'_n$| -> |$T'_r$|)) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} FunctionType([|$T_1, \ldots, T_n$|], |$T_r$|) |$\Rightarrow$| TupleType([FunctionType([TupleType([]), |$T'_1, \ldots, T'_n$|], |$T'_r$|)]) \end{lstlisting} \fi} % This type indicates that the first thing in the tuple is a function. The first parameter of the function is a tuple (a closure) and the rest of the parameters are the ones from the original function, with types $T'_1, \ldots, T'_n$. The type for the closure omits the types of the free variables because (1) those types are not available in this context, and (2) we do not need them in the code that is generated for function application. So this type describes only the first component of the closure tuple. At runtime the tuple may have more components, but we ignore them at this point. We transform function application into code that retrieves the function from the closure and then calls the function, passing the closure as the first argument. We place $e'$ in a temporary variable to avoid code duplication. \begin{center} \begin{minipage}{\textwidth} {\if\edition\racketEd \begin{lstlisting} (Apply |$e$| |$\itm{es}$|) |$\Rightarrow$| (Let |$\itm{tmp}$| |$e'$| (Apply (Prim 'vector-ref (list (Var |$\itm{tmp}$|) (Int 0))) (cons (Var |$\itm{tmp}$|) |$\itm{es'}$|))) \end{lstlisting} \fi} % {\if\edition\pythonEd\pythonColor \begin{lstlisting} Call(|$e$|, [|$e_1, \ldots, e_n$|]) |$\Rightarrow$| Begin([Assign([|$\itm{tmp}$|], |$e'$|)], Call(Subscript(Name(|$\itm{tmp}$|), Constant(0)), [|$\itm{tmp}$|, |$e'_1, \ldots, e'_n$|])) \end{lstlisting} \fi} \end{minipage} \end{center} There is also the question of what to do with references to top-level function definitions. To maintain a uniform translation of function application, we turn function references into closures. 
\begin{tabular}{lll} \begin{minipage}{0.2\textwidth} {\if\edition\racketEd \begin{lstlisting} (FunRef |$f$| |$n$|) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} FunRef(|$f$|, |$n$|) \end{lstlisting} \fi} \end{minipage} & $\Rightarrow\qquad$ & \begin{minipage}{0.5\textwidth} {\if\edition\racketEd \begin{lstlisting} (Closure |$n$| (FunRef |$f$| |$n$|) '()) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} Closure(|$n$|, [FunRef(|$f$|, |$n$|)]) \end{lstlisting} \fi} \end{minipage} \end{tabular} \\ We no longer need the annotated assignment statement \code{AnnAssign} to support the type checking of \code{lambda} expressions, so we translate it to a regular \code{Assign} statement. The top-level function definitions need to be updated to take an extra closure parameter, but that parameter is ignored in the body of those functions. \section{An Example Translation} \label{sec:example-lambda} Figure~\ref{fig:lexical-functions-example} shows the result of \code{reveal\_functions} and \code{convert\_to\_closures} for the example program demonstrating lexical scoping that we discussed at the beginning of this chapter. \begin{figure}[tbp] \begin{tcolorbox}[colback=white] \begin{minipage}{0.8\textwidth} {\if\edition\racketEd % tests/lambda_test_6.rkt \begin{lstlisting}[basicstyle=\ttfamily\footnotesize] (define (f6 [x7 : Integer]) : (Integer -> Integer) (let ([y8 4]) (lambda: ([z9 : Integer]) : Integer (+ x7 (+ y8 z9))))) (define (main) : Integer (let ([g0 ((fun-ref f6 1) 5)]) (let ([h1 ((fun-ref f6 1) 3)]) (+ (g0 11) (h1 15))))) \end{lstlisting} $\Rightarrow$ \begin{lstlisting}[basicstyle=\ttfamily\footnotesize] (define (f6 [fvs4 : _] [x7 : Integer]) : (Vector ((Vector _) Integer -> Integer)) (let ([y8 4]) (closure 1 (list (fun-ref lambda2 1) x7 y8)))) (define (lambda2 [fvs3 : (Vector _ Integer Integer)] [z9 : Integer]) : Integer (let ([x7 (vector-ref fvs3 1)]) (let ([y8 (vector-ref fvs3 2)]) (+ x7 (+ y8 z9))))) (define (main) : Integer (let ([g0 (let ([clos5 (closure 1 (list (fun-ref f6 1)))]) ((vector-ref clos5 0) clos5 5))]) (let ([h1 (let ([clos6 (closure 1 (list (fun-ref f6 1)))]) ((vector-ref clos6 0) clos6 3))]) (+ ((vector-ref g0 0) g0 11) ((vector-ref h1 0) h1 15))))) \end{lstlisting} \fi} % {\if\edition\pythonEd\pythonColor % free_var.py \begin{lstlisting} def f(x: int) -> Callable[[int],int]: y = 4 return lambda z: x + y + z g = f(5) h = f(3) print(g(11) + h(15)) \end{lstlisting} $\Rightarrow$ \begin{lstlisting} def lambda_0(fvs_1: tuple[bot,int,tuple[int]], z: int) -> int: x = fvs_1[1] y = fvs_1[2] return (x + y[0] + z) def f(fvs_2: tuple[bot], x: int) -> tuple[Callable[[tuple[],int],int]]: y = (uninitialized(int),) y[0] = 4 return closure{1}({lambda_0}, x, y) def main() -> int: g = (begin: clos_3 = closure{1}({f}) clos_3[0](clos_3, 5)) h = (begin: clos_4 = closure{1}({f}) clos_4[0](clos_4, 3)) print((begin: clos_5 = g clos_5[0](clos_5, 11)) + (begin: clos_6 = h clos_6[0](clos_6, 15))) return 0 \end{lstlisting} \fi} \end{minipage} \end{tcolorbox} \caption{Example of closure conversion.} \label{fig:lexical-functions-example} \end{figure} \begin{exercise}\normalfont\normalsize Expand your compiler to handle \LangLam{} as outlined in this chapter. Create five new programs that use \key{lambda} functions and make use of lexical scoping. Test your compiler on these new programs and all your previously created test programs.
\end{exercise} \section{Expose Allocation} \label{sec:expose-allocation-r5} Compile the $\CLOSURE{\itm{arity}}{\Exp^{*}}$ form into code that allocates and initializes a tuple, similar to the translation of the tuple creation in section~\ref{sec:expose-allocation}. The only difference is replacing the use of \ALLOC{\itm{len}}{\itm{type}} with \ALLOCCLOS{\itm{len}}{\itm{type}}{\itm{arity}}. \section{Explicate Control and \LangCLam{}} \label{sec:explicate-r5} The output language of \code{explicate\_control} is \LangCLam{}; the definition of its abstract syntax is shown in figure~\ref{fig:Clam-syntax}. % \racket{The only differences with respect to \LangCFun{} are the addition of the \code{AllocateClosure} form to the grammar for $\Exp$ and the \code{procedure-arity} operator. The handling of \code{AllocateClosure} in the \code{explicate\_control} pass is similar to the handling of other expressions such as primitive operators.} % \python{The differences with respect to \LangCFun{} are the additions of \code{Uninitialized}, \code{AllocateClosure}, and \code{arity} to the grammar for $\Exp$. The handling of them in the \code{explicate\_control} pass is similar to the handling of other expressions such as primitive operators.} \newcommand{\ClambdaASTRacket}{ \begin{array}{lcl} \Exp &::= & \ALLOCCLOS{\Int}{\Type}{\Int} \\ \itm{op} &::= & \code{procedure-arity} \end{array} } \newcommand{\ClambdaASTPython}{ \begin{array}{lcl} \Exp &::=& \key{Uninitialized}\LP \Type \RP \MID \key{AllocateClosure}\LP\itm{len},\Type, \itm{arity}\RP \\ &\MID& \ARITY{\Atm} \end{array} } \begin{figure}[tp] \begin{tcolorbox}[colback=white] \small {\if\edition\racketEd \[ \begin{array}{l} \gray{\CvarASTRacket} \\ \hline \gray{\CifASTRacket} \\ \hline \gray{\CloopASTRacket} \\ \hline \gray{\CtupASTRacket} \\ \hline \gray{\CfunASTRacket} \\ \hline \ClambdaASTRacket \\ \begin{array}{lcl} \LangCLamM{} & ::= & \PROGRAMDEFS{\itm{info}}{\Def^{*}} \end{array} \end{array} \] \fi} {\if\edition\pythonEd\pythonColor \[ \begin{array}{l} \gray{\CifASTPython} \\ \hline \gray{\CtupASTPython} \\ \hline \gray{\CfunASTPython} \\ \hline \ClambdaASTPython \\ \begin{array}{lcl} \LangCLamM{} & ::= & \CPROGRAMDEFS{\LS\Def\code{,}\ldots\RS} \end{array} \end{array} \] \fi} \end{tcolorbox} \caption{The abstract syntax of \LangCLam{}, extending \LangCFun{} (figure~\ref{fig:c3-syntax}).} \label{fig:Clam-syntax} \end{figure} \section{Select Instructions} \label{sec:select-instructions-Llambda} \index{subject}{select instructions} Compile \ALLOCCLOS{\itm{len}}{\itm{type}}{\itm{arity}} in almost the same way as the \ALLOC{\itm{len}}{\itm{type}} form (section~\ref{sec:select-instructions-gc}). The only difference is that you should place the \itm{arity} in the tag that is stored at position $0$ of the tuple. Recall that in section~\ref{sec:select-instructions-gc} a portion of the 64-bit tag was not used. We store the arity in the $5$ bits starting at position $58$. \racket{Compile the \code{procedure-arity} operator into a sequence of instructions that access the tag from position $0$ of the vector and extract the $5$ bits starting at position $58$ from the tag.} % \python{Compile a call to the \code{arity} operator to a sequence of instructions that access the tag from position $0$ of the tuple (representing a closure) and extract the $5$ bits starting at position $58$ from the tag.} Figure~\ref{fig:Llambda-passes} provides an overview of the passes needed for the compilation of \LangLam{}. 
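{\if\edition\pythonEd\pythonColor
%
To make the tag arithmetic concrete before moving on, here is a small
sketch in Python (not the x86 that your compiler generates, and the names
\code{ARITY\_SHIFT} and \code{ARITY\_MASK} are ours) showing where the
arity lives inside the 64-bit tag and how to extract it:
\begin{lstlisting}
ARITY_SHIFT = 58      # the arity occupies 5 bits starting at bit 58
ARITY_MASK = 0b11111  # five one bits

def add_arity(tag: int, arity: int) -> int:
    # The arity bits of tag are zero, so adding the shifted arity
    # has the same effect as a bitwise or.
    assert 0 <= arity <= ARITY_MASK
    return tag + (arity << ARITY_SHIFT)

def arity_of_tag(tag: int) -> int:
    # Shift the arity down to the low bits, then mask off the rest.
    return (tag >> ARITY_SHIFT) & ARITY_MASK

tag = add_arity(0, 3)   # a closure of arity 3; other tag fields elided
assert arity_of_tag(tag) == 3
\end{lstlisting}
In the generated x86 the same effect is obtained with shift and
bitwise-and instructions applied to the tag loaded from position $0$ of
the closure's tuple.
%
\fi}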
\begin{figure}[bthp] \begin{tcolorbox}[colback=white] {\if\edition\racketEd \begin{tikzpicture}[baseline=(current bounding box.center),scale=0.85] \node (Lfun) at (0,2) {\large \LangLam{}}; \node (Lfun-2) at (4,2) {\large \LangLam{}}; \node (Lfun-3) at (8,2) {\large \LangLam{}}; \node (F1-0) at (12,2) {\large \LangLamFunRef{}}; \node (F1-1) at (12,0) {\large \LangLamFunRef{}}; \node (F1-2) at (8,0) {\large \LangFunRef{}}; \node (F1-3) at (4,0) {\large \LangFunRef{}}; \node (F1-4) at (0,0) {\large \LangFunRefAlloc{}}; \node (F1-5) at (0,-2) {\large \LangFunRefAlloc{}}; \node (F1-6) at (4,-2) {\large \LangFunANF{}}; \node (C3-2) at (8,-2) {\large \LangCFun{}}; \node (x86-2) at (0,-5) {\large \LangXIndCallVar{}}; \node (x86-2-1) at (0,-7) {\large \LangXIndCallVar{}}; \node (x86-2-2) at (4,-7) {\large \LangXIndCallVar{}}; \node (x86-3) at (4,-5) {\large \LangXIndCallVar{}}; \node (x86-4) at (8,-5) {\large \LangXIndCall{}}; \node (x86-5) at (8,-7) {\large \LangXIndCall{}}; \path[->,bend left=15] (Lfun) edge [above] node {\ttfamily\footnotesize shrink} (Lfun-2); \path[->,bend left=15] (Lfun-2) edge [above] node {\ttfamily\footnotesize uniquify} (Lfun-3); \path[->,bend left=15] (Lfun-3) edge [above] node {\ttfamily\footnotesize reveal\_functions} (F1-0); \path[->,bend left=15] (F1-0) edge [left] node {\ttfamily\footnotesize convert\_assignments} (F1-1); \path[->,bend left=15] (F1-1) edge [below] node {\ttfamily\footnotesize convert\_to\_closures} (F1-2); \path[->,bend right=15] (F1-2) edge [above] node {\ttfamily\footnotesize limit\_functions} (F1-3); \path[->,bend right=15] (F1-3) edge [above] node {\ttfamily\footnotesize expose\_allocation} (F1-4); \path[->,bend left=15] (F1-4) edge [right] node {\ttfamily\footnotesize uncover\_get!} (F1-5); \path[->,bend right=15] (F1-5) edge [below] node {\ttfamily\footnotesize remove\_complex\_operands} (F1-6); \path[->,bend left=15] (F1-6) edge [above] node {\ttfamily\footnotesize explicate\_control} (C3-2); \path[->] (C3-2) edge [right] node {\ttfamily\footnotesize \ \ select\_instructions} (x86-2); \path[->,bend right=15] (x86-2) edge [right] node {\ttfamily\footnotesize uncover\_live} (x86-2-1); \path[->,bend right=15] (x86-2-1) edge [below] node {\ttfamily\footnotesize build\_interference} (x86-2-2); \path[->,bend right=15] (x86-2-2) edge [right] node {\ttfamily\footnotesize allocate\_registers} (x86-3); \path[->,bend left=15] (x86-3) edge [above] node {\ttfamily\footnotesize patch\_instructions} (x86-4); \path[->,bend left=15] (x86-4) edge [right] node {\ttfamily\footnotesize prelude\_and\_conclusion} (x86-5); \end{tikzpicture} \fi} {\if\edition\pythonEd\pythonColor \begin{tikzpicture}[baseline=(current bounding box.center),scale=0.85] \node (Lfun) at (0,2) {\large \LangLam{}}; \node (Lfun-2) at (4,2) {\large \LangLam{}}; \node (Lfun-3) at (8,2) {\large \LangLam{}}; \node (F1-0) at (12,2) {\large \LangLamFunRef{}}; \node (F1-1) at (12,0) {\large \LangLamFunRef{}}; \node (F1-2) at (8,0) {\large \LangFunRef{}}; \node (F1-3) at (4,0) {\large \LangFunRef{}}; \node (F1-5) at (0,0) {\large \LangFunRefAlloc{}}; \node (F1-6) at (0,-2) {\large \LangFunANF{}}; \node (C3-2) at (0,-4) {\large \LangCFun{}}; \node (x86-2) at (0,-6) {\large \LangXIndCallVar{}}; \node (x86-3) at (4,-6) {\large \LangXIndCallVar{}}; \node (x86-4) at (8,-6) {\large \LangXIndCall{}}; \node (x86-5) at (12,-6) {\large \LangXIndCall{}}; \path[->,bend left=15] (Lfun) edge [above] node {\ttfamily\footnotesize shrink} (Lfun-2); \path[->,bend left=15] (Lfun-2) edge [above] node 
{\ttfamily\footnotesize uniquify} (Lfun-3); \path[->,bend left=15] (Lfun-3) edge [above] node {\ttfamily\footnotesize reveal\_functions} (F1-0); \path[->,bend left=15] (F1-0) edge [left] node {\ttfamily\footnotesize convert\_assignments} (F1-1); \path[->,bend left=15] (F1-1) edge [below] node {\ttfamily\footnotesize convert\_to\_closures} (F1-2); \path[->,bend left=15] (F1-2) edge [below] node {\ttfamily\footnotesize limit\_functions} (F1-3); \path[->,bend right=15] (F1-3) edge [above] node {\ttfamily\footnotesize expose\_allocation} (F1-5); \path[->,bend right=15] (F1-5) edge [right] node {\ttfamily\footnotesize remove\_complex\_operands} (F1-6); \path[->,bend left=15] (F1-6) edge [right] node {\ttfamily\footnotesize explicate\_control} (C3-2); \path[->,bend right=15] (C3-2) edge [right] node {\ttfamily\footnotesize select\_instructions} (x86-2); \path[->,bend right=15] (x86-2) edge [below] node {\ttfamily\footnotesize assign\_homes} (x86-3); \path[->,bend right=15] (x86-3) edge [below] node {\ttfamily\footnotesize patch\_instructions} (x86-4); \path[->,bend left=15] (x86-4) edge [above] node {\ttfamily\footnotesize prelude\_and\_conclusion} (x86-5); \end{tikzpicture} \fi} \end{tcolorbox} \caption{Diagram of the passes for \LangLam{}, a language with lexically scoped functions.} \label{fig:Llambda-passes} \end{figure} \clearpage \section{Challenge: Optimize Closures} \label{sec:optimize-closures} In this chapter we compile lexically scoped functions into a relatively efficient representation: flat closures. However, even this representation comes with some overhead. For example, consider the following program with a function \code{tail\_sum} that does not have any free variables and where all the uses of \code{tail\_sum} are in applications in which we know that only \code{tail\_sum} is being applied (and not any other functions): \begin{center} \begin{minipage}{0.95\textwidth} {\if\edition\racketEd \begin{lstlisting} (define (tail_sum [n : Integer] [s : Integer]) : Integer (if (eq? n 0) s (tail_sum (- n 1) (+ n s)))) (+ (tail_sum 3 0) 36) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} def tail_sum(n : int, s : int) -> int: if n == 0: return s else: return tail_sum(n - 1, n + s) print(tail_sum(3, 0) + 36) \end{lstlisting} \fi} \end{minipage} \end{center} As described in this chapter, we uniformly apply closure conversion to all functions, obtaining the following output for this program: \begin{center} \begin{minipage}{0.95\textwidth} {\if\edition\racketEd \begin{lstlisting} (define (tail_sum1 [fvs5 : _] [n2 : Integer] [s3 : Integer]) : Integer (if (eq? n2 0) s3 (let ([clos4 (closure (list (fun-ref tail_sum1 2)))]) ((vector-ref clos4 0) clos4 (+ n2 -1) (+ n2 s3))))) (define (main) : Integer (+ (let ([clos6 (closure (list (fun-ref tail_sum1 2)))]) ((vector-ref clos6 0) clos6 3 0)) 27)) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} def tail_sum(fvs_3:bot,n_0:int,s_1:int) -> int : if n_0 == 0: return s_1 else: return (begin: clos_2 = (tail_sum,) clos_2[0](clos_2, n_0 - 1, n_0 + s_1)) def main() -> int : print((begin: clos_4 = (tail_sum,) clos_4[0](clos_4, 3, 0)) + 36) return 0 \end{lstlisting} \fi} \end{minipage} \end{center} If this program were compiled according to the previous chapter, there would be no allocation and the calls to \code{tail\_sum} would be direct calls. In contrast, the program presented here allocates memory for each closure and the calls to \code{tail\_sum} are indirect. 
These two differences incur considerable overhead in a program such as this, in which the allocations and indirect calls occur inside a tight loop. One might think that this problem is trivial to solve: can't we just recognize calls of the form \APPLY{\FUNREF{$f$}{$n$}}{$\mathit{args}$} and compile them to direct calls instead of treating them as calls to closures? We would also drop the new \code{fvs} parameter of \code{tail\_sum}. % However, this problem is not so trivial, because a global function may \emph{escape} and become involved in applications that also involve closures. Consider the following example in which the application \CAPPLY{\code{f}}{\code{41}} needs to be compiled into a closure application because the \code{lambda} may flow into \code{f}, but the \racket{\code{inc}}\python{\code{add1}} function might also flow into \code{f}: \begin{center} \begin{minipage}{\textwidth} % lambda_test_30.rkt {\if\edition\racketEd \begin{lstlisting} (define (inc [x : Integer]) : Integer (+ x 1)) (let ([y (read)]) (let ([f (if (eq? (read) 0) inc (lambda: ([x : Integer]) : Integer (- x y)))]) (f 41))) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} def add1(x : int) -> int: return x + 1 y = input_int() g : Callable[[int], int] = lambda x: x - y f = add1 if input_int() == 0 else g print(f(41)) \end{lstlisting} \fi} \end{minipage} \end{center} If a global function name is used in any way other than as the operator in a direct call, then we say that the function \emph{escapes}. If a global function does not escape, then we do not need to perform closure conversion on the function. \begin{exercise}\normalfont\normalsize Implement an auxiliary function for detecting which global functions escape. Using that function, implement an improved version of closure conversion that does not apply closure conversion to global functions that do not escape but instead compiles them as regular functions. Create several new test cases that check whether your compiler properly detects whether global functions escape or not. \end{exercise} So far we have reduced the overhead of calling global functions, but it would also be nice to reduce the overhead of calling a \code{lambda} when we can determine at compile time which \code{lambda} will be called. We refer to such calls as \emph{known calls}. Consider the following example in which a \code{lambda} is bound to \code{f} and then applied.
{\if\edition\racketEd % lambda_test_9.rkt \begin{lstlisting} (let ([y (read)]) (let ([f (lambda: ([x : Integer]) : Integer (+ x y))]) (f 21))) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} y = input_int() f : Callable[[int],int] = lambda x: x + y print(f(21)) \end{lstlisting} \fi} % \noindent Closure conversion compiles the application \CAPPLY{\code{f}}{\code{21}} into an indirect call, as follows: % {\if\edition\racketEd \begin{lstlisting} (define (lambda5 [fvs6 : (Vector _ Integer)] [x3 : Integer]) : Integer (let ([y2 (vector-ref fvs6 1)]) (+ x3 y2))) (define (main) : Integer (let ([y2 (read)]) (let ([f4 (Closure 1 (list (fun-ref lambda5 1) y2))]) ((vector-ref f4 0) f4 21)))) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} def lambda_3(fvs_4:tuple[bot,tuple[int]], x_2:int) -> int: y_1 = fvs_4[1] return x_2 + y_1[0] def main() -> int: y_1 = (777,) y_1[0] = input_int() f_0 = (lambda_3, y_1) print((let clos_5 = f_0 in clos_5[0](clos_5, 21))) return 0 \end{lstlisting} \fi} % \noindent However, we can instead compile the application \CAPPLY{\code{f}}{\code{21}} into a direct call, as follows: % {\if\edition\racketEd \begin{lstlisting} (define (main) : Integer (let ([y2 (read)]) (let ([f4 (Closure 1 (list (fun-ref lambda5 1) y2))]) ((fun-ref lambda5 1) f4 21)))) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} def main() -> int: y_1 = (777,) y_1[0] = input_int() f_0 = (lambda_3, y_1) print(lambda_3(f_0, 21)) return 0 \end{lstlisting} \fi} The problem of determining which \code{lambda} will be called from a particular application is quite challenging in general and the topic of considerable research~\citep{Shivers:1988aa,Gilray:2016aa}. For the following exercise we recommend that you compile an application to a direct call when the operator is a variable and \racket{the variable is \code{let}-bound to a closure}\python{the previous assignment to the variable is a closure}. This can be accomplished by maintaining an environment that maps variables to function names. Extend the environment whenever you encounter a closure on the right-hand side of \racket{a \code{let}}\python{an assignment}, mapping the variable to the name of the global function for the closure. This pass should come after closure conversion. \begin{exercise}\normalfont\normalsize Implement a compiler pass, named \code{optimize\_known\_calls}, that compiles known calls into direct calls. Verify that your compiler is successful in this regard on several example programs. \end{exercise} These exercises only scratch the surface of closure optimization. A good next step for the interested reader is to look at the work of \citet{Keep:2012ab}. \section{Further Reading} The notion of lexically scoped functions predates modern computers by about a decade. They were invented by \citet{Church:1932aa}, who proposed the lambda calculus as a foundation for logic. Anonymous functions were included in the LISP~\citep{McCarthy:1960dz} programming language but were initially dynamically scoped. The Scheme dialect of LISP adopted lexical scoping, and \citet{Guy-L.-Steele:1978yq} demonstrated how to efficiently compile Scheme programs. However, environments were represented as linked lists, so variable look-up was linear in the size of the environment. \citet{Appel91} gives a detailed description of several closure representations. 
In this chapter we represent environments using flat closures, which were invented by \citet{Cardelli:1983aa,Cardelli:1984aa} for the purpose of compiling the ML language~\citep{Gordon:1978aa,Milner:1990fk}. With flat closures, variable look-up is constant time but the time to create a closure is proportional to the number of its free variables. Flat closures were reinvented by \citet{Dybvig:1987ab} in his PhD thesis and used in Chez Scheme version 1~\citep{Dybvig:2006aa}. % todo: related work on assignment conversion (e.g. orbit and rabbit % compilers) %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \chapter{Dynamic Typing} \label{ch:Ldyn} \index{subject}{dynamic typing} \setcounter{footnote}{0} In this chapter we learn how to compile \LangDyn{}, a dynamically typed language that is a subset of \racket{Racket}\python{Python}. The focus on dynamic typing is in contrast to the previous chapters, which have studied the compilation of statically typed languages. In dynamically typed languages such as \LangDyn{}, a particular expression may produce a value of a different type each time it is executed. Consider the following example with a conditional \code{if} expression that may return a Boolean or an integer depending on the input to the program: % part of dynamic_test_25.rkt {\if\edition\racketEd \begin{lstlisting} (not (if (eq? (read) 1) #f 0)) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} not (False if input_int() == 1 else 0) \end{lstlisting} \fi} Languages that allow expressions to produce different kinds of values are called \emph{polymorphic}, a word composed of the Greek roots \emph{poly}, meaning \emph{many}, and \emph{morph}, meaning \emph{form}. There are several kinds of polymorphism in programming languages, such as subtype polymorphism\index{subject}{subtype polymorphism} and parametric polymorphism\index{subject}{parametric polymorphism} (aka generics)~\citep{Cardelli:1985kx}. The kind of polymorphism that we study in this chapter does not have a special name; it is the kind that arises in dynamically typed languages. Another characteristic of dynamically typed languages is that their primitive operations, such as \code{not}, are often defined to operate on many different types of values. In fact, in \racket{Racket}\python{Python}, the \code{not} operator produces a result for any kind of value: given \FALSE{} it returns \TRUE{}, and given anything else it returns \FALSE{}. Furthermore, even when primitive operations restrict their inputs to values of a certain type, this restriction is enforced at runtime instead of during compilation. For example, the tuple read operation \racket{\code{(vector-ref \#t 0)}}\python{\code{True[0]}} results in a runtime error because the first argument must be a tuple, not a Boolean. 
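{\if\edition\pythonEd\pythonColor
%
You can observe this runtime enforcement directly in Python: the check
happens only when the expression is evaluated.
\begin{lstlisting}
try:
    print(True[0])
except TypeError as err:
    print(err)   # e.g., 'bool' object is not subscriptable
\end{lstlisting}
%
\fi}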
\section{The \LangDyn{} Language} \newcommand{\LdynGrammarRacket}{ \begin{array}{rcl} \Exp &::=& \LP\Exp \; \Exp\ldots\RP \MID \LP\key{lambda}\;\LP\Var\ldots\RP\;\Exp\RP \\ & \MID & \LP\key{boolean?}\;\Exp\RP \MID \LP\key{integer?}\;\Exp\RP\\ & \MID & \LP\key{vector?}\;\Exp\RP \MID \LP\key{procedure?}\;\Exp\RP \MID \LP\key{void?}\;\Exp\RP \\ \Def &::=& \LP\key{define}\; \LP\Var \; \Var\ldots\RP \; \Exp\RP \end{array} } \newcommand{\LdynASTRacket}{ \begin{array}{lcl} \Exp &::=& \APPLY{\Exp}{\Exp\ldots} \MID \LAMBDA{\LP\Var\ldots\RP}{\code{'Any}}{\Exp}\\ \Def &::=& \FUNDEF{\Var}{\LP\Var\ldots\RP}{\code{'Any}}{\code{'()}}{\Exp} \end{array} } \begin{figure}[tp] \centering \begin{tcolorbox}[colback=white] \small {\if\edition\racketEd \[ \begin{array}{l} \gray{\LintGrammarRacket{}} \\ \hline \gray{\LvarGrammarRacket{}} \\ \hline \gray{\LifGrammarRacket{}} \\ \hline \gray{\LwhileGrammarRacket} \\ \hline \gray{\LtupGrammarRacket} \\ \hline \LdynGrammarRacket \\ \begin{array}{rcl} \LangDynM{} &::=& \Def\ldots\; \Exp \end{array} \end{array} \] \fi} {\if\edition\pythonEd\pythonColor \[ \begin{array}{rcl} \itm{cmp} &::= & \key{==} \MID \key{!=} \MID \key{<} \MID \key{<=} \MID \key{>} \MID \key{>=} \MID \key{is} \\ \Exp &::=& \Int \MID \key{input\_int}\LP\RP \MID \key{-}\;\Exp \MID \Exp \; \key{+} \; \Exp \MID \Exp \; \key{-} \; \Exp \MID \LP\Exp\RP \\ &\MID& \Var{} \MID \TRUE \MID \FALSE \MID \CAND{\Exp}{\Exp} \MID \COR{\Exp}{\Exp} \MID \key{not}~\Exp \\ &\MID& \CCMP{\itm{cmp}}{\Exp}{\Exp} \MID \CIF{\Exp}{\Exp}{\Exp} \\ &\MID& \Exp \key{,} \ldots \key{,} \Exp \MID \CGET{\Exp}{\Exp} \MID \CLEN{\Exp} \\ &\MID& \CAPPLY{\Exp}{\Exp\code{,} \ldots} \MID \CLAMBDA{\Var\code{, }\ldots}{\Exp}\\ \Stmt &::=& \key{print}\LP \Exp \RP \MID \Exp \MID \Var\mathop{\key{=}}\Exp \\ &\MID& \key{if}~ \Exp \key{:}~ \Stmt^{+} ~\key{else:}~ \Stmt^{+} \MID \key{while}~ \Exp \key{:}~ \Stmt^{+} \\ &\MID& \CRETURN{\Exp} \\ \Def &::=& \CDEFU{\Var}{\Var{,} \ldots}{\Stmt^{+}} \\ \LangDynM{} &::=& \Def\ldots \Stmt\ldots \end{array} \] \fi} \end{tcolorbox} \caption{Syntax of \LangDyn{}, an untyped language (a subset of \racket{Racket}\python{Python}).} \label{fig:r7-concrete-syntax} \end{figure} \begin{figure}[tp] \centering \begin{tcolorbox}[colback=white] \small {\if\edition\racketEd \[ \begin{array}{l} \gray{\LintASTRacket{}} \\ \hline \gray{\LvarASTRacket{}} \\ \hline \gray{\LifASTRacket{}} \\ \hline \gray{\LwhileASTRacket} \\ \hline \gray{\LtupASTRacket} \\ \hline \LdynASTRacket \\ \begin{array}{lcl} \LangDynM{} &::=& \PROGRAMDEFSEXP{\code{'()}}{\LP\Def\ldots\RP}{\Exp} \end{array} \end{array} \] \fi} {\if\edition\pythonEd\pythonColor \[ \begin{array}{rcl} \itm{boolop} &::=& \code{And()} \MID \code{Or()} \\ \itm{cmp} &::= & \code{Eq()} \MID \code{NotEq()} \MID \code{Lt()} \MID \code{LtE()} \MID \code{Gt()} \MID \code{GtE()} \MID \code{Is()} \\ \itm{bool} &::=& \code{True} \MID \code{False} \\ \Exp{} &::=& \INT{\Int} \MID \READ{} \\ &\MID& \UNIOP{\key{USub()}}{\Exp}\\ &\MID& \BINOP{\Exp}{\key{Add()}}{\Exp} \MID \BINOP{\Exp}{\key{Sub()}}{\Exp} \\ &\MID& \VAR{\Var{}} \MID \BOOL{\itm{bool}} \MID \BOOLOP{\itm{boolop}}{\Exp}{\Exp}\\ &\MID& \CMP{\Exp}{\itm{cmp}}{\Exp} \MID \IF{\Exp}{\Exp}{\Exp} \\ &\MID& \TUPLE{\Exp^{+}} \MID \GET{\Exp}{\Exp} \\ &\MID& \LEN{\Exp} \\ &\MID& \CALL{\Exp}{\Exp^{*}} \MID \LAMBDA{\Var^{*}}{\Exp} \\ \Stmt{} &::=& \PRINT{\Exp} \MID \EXPR{\Exp} \\ &\MID& \ASSIGN{\VAR{\Var}}{\Exp}\\ &\MID& \IFSTMT{\Exp}{\Stmt^{+}}{\Stmt^{+}} \MID \WHILESTMT{\Exp}{\Stmt^{+}}\\ &\MID& \RETURN{\Exp} \\ \Params &::=& 
\LP\Var\key{,}\code{AnyType()}\RP^* \\ \Def &::=& \FUNDEF{\Var}{\Params}{\code{AnyType()}}{}{\Stmt^{+}} \\ \LangDynM{} &::=& \PROGRAM{}{\LS \Def \ldots \Stmt \ldots \RS} \end{array} \] \fi} \end{tcolorbox} \caption{The abstract syntax of \LangDyn{}.} \label{fig:r7-syntax} \end{figure} The definitions of the concrete and abstract syntax of \LangDyn{} are shown in figures~\ref{fig:r7-concrete-syntax} and \ref{fig:r7-syntax}. % There is no type checker for \LangDyn{} because it checks types only at runtime. The definitional interpreter for \LangDyn{} is presented in \racket{figure~\ref{fig:interp-Ldyn}}\python{figures~\ref{fig:interp-Ldyn} and \ref{fig:interp-Ldyn-2}}, and definitions of its auxiliary functions are shown in figure~\ref{fig:interp-Ldyn-aux}. Consider the match case for \INT{n}. Instead of simply returning the integer \code{n} (as in the interpreter for \LangVar{} in figure~\ref{fig:interp-Lvar}), the interpreter for \LangDyn{} creates a \emph{tagged value}\index{subject}{tagged value} that combines an underlying value with a tag that identifies what kind of value it is. We define the following \racket{struct}\python{class} to represent tagged values: % {\if\edition\racketEd \begin{lstlisting} (struct Tagged (value tag) #:transparent) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{minipage}{\textwidth} \begin{lstlisting} @dataclass(eq=True) class Tagged(Value): value : Value tag : str def __str__(self): return str(self.value) \end{lstlisting} \end{minipage} \fi} % \racket{The tags are \code{Integer}, \BOOLTY{}, \code{Void}, \code{Vector}, and \code{Procedure}.} % \python{The tags are \skey{int}, \skey{bool}, \skey{none}, \skey{tuple}, and \skey{function}.} % Tags are closely related to types but do not always capture all the information that a type does. % \racket{For example, a vector of type \code{(Vector Any Any)} is tagged with \code{Vector}, and a procedure of type \code{(Any Any -> Any)} is tagged with \code{Procedure}.} % \python{For example, a tuple of type \code{TupleType([AnyType(),AnyType()])} is tagged with \skey{tuple} and a function of type \code{FunctionType([AnyType(), AnyType()], AnyType())} is tagged with \skey{function}.} Next consider the match case for accessing the element of a tuple. The \racket{\code{check-tag}}\python{\code{untag}} auxiliary function (figure~\ref{fig:interp-Ldyn-aux}) is used to ensure that the first argument is a tuple and the second is an integer. \racket{ If they are not, a \code{trapped-error} is raised. Recall from section~\ref{sec:interp_Lint} that when a definition interpreter raises a \code{trapped-error} error, the compiled code must also signal an error by exiting with return code \code{255}. A \code{trapped-error} is also raised if the index is not less than the length of the vector. } % \python{If they are not, an exception is raised. The compiled code must also signal an error by exiting with return code \code{255}. 
An exception is also raised if the index is not less than the length of the tuple or if it is negative.} \begin{figure}[tbp] \begin{tcolorbox}[colback=white] {\if\edition\racketEd \begin{lstlisting}[basicstyle=\ttfamily\footnotesize] (define ((interp-Ldyn-exp env) ast) (define recur (interp-Ldyn-exp env)) (match ast [(Var x) (dict-ref env x)] [(Int n) (Tagged n 'Integer)] [(Bool b) (Tagged b 'Boolean)] [(Lambda xs rt body) (Tagged `(function ,xs ,body ,env) 'Procedure)] [(Prim 'vector es) (Tagged (apply vector (for/list ([e es]) (recur e))) 'Vector)] [(Prim 'vector-ref (list e1 e2)) (define vec (recur e1)) (define i (recur e2)) (check-tag vec 'Vector ast) (check-tag i 'Integer ast) (unless (< (Tagged-value i) (vector-length (Tagged-value vec))) (error 'trapped-error "index ~a too big\nin ~v" (Tagged-value i) ast)) (vector-ref (Tagged-value vec) (Tagged-value i))] [(Prim 'vector-set! (list e1 e2 e3)) (define vec (recur e1)) (define i (recur e2)) (define arg (recur e3)) (check-tag vec 'Vector ast) (check-tag i 'Integer ast) (unless (< (Tagged-value i) (vector-length (Tagged-value vec))) (error 'trapped-error "index ~a too big\nin ~v" (Tagged-value i) ast)) (vector-set! (Tagged-value vec) (Tagged-value i) arg) (Tagged (void) 'Void)] [(Let x e body) ((interp-Ldyn-exp (cons (cons x (recur e)) env)) body)] [(Prim 'and (list e1 e2)) (recur (If e1 e2 (Bool #f)))] [(Prim 'or (list e1 e2)) (define v1 (recur e1)) (match (Tagged-value v1) [#f (recur e2)] [else v1])] [(Prim 'eq? (list l r)) (Tagged (equal? (recur l) (recur r)) 'Boolean)] [(Prim op (list e1)) #:when (set-member? type-predicates op) (tag-value ((interp-op op) (Tagged-value (recur e1))))] [(Prim op es) (define args (map recur es)) (define tags (for/list ([arg args]) (Tagged-tag arg))) (unless (for/or ([expected-tags (op-tags op)]) (equal? expected-tags tags)) (error 'trapped-error "illegal argument tags ~a\nin ~v" tags ast)) (tag-value (apply (interp-op op) (for/list ([a args]) (Tagged-value a))))] [(If q t f) (match (Tagged-value (recur q)) [#f (recur f)] [else (recur t)])] [(Apply f es) (define new-f (recur f)) (define args (map recur es)) (check-tag new-f 'Procedure ast) (define f-val (Tagged-value new-f)) (match f-val [`(function ,xs ,body ,lam-env) (unless (eq?
(length xs) (length args)) (error 'trapped-error "~a != ~a\nin ~v" (length args) (length xs) ast)) (define new-env (append (map cons xs args) lam-env)) ((interp-Ldyn-exp new-env) body)] [else (error "interp-Ldyn-exp, expected function, not" f-val)])])) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting}[basicstyle=\ttfamily\scriptsize] class InterpLdyn(InterpLlambda): def interp_exp(self, e, env): match e: case Constant(n): return self.tag(super().interp_exp(e, env)) case Tuple(es, Load()): return self.tag(super().interp_exp(e, env)) case Lambda(params, body): return self.tag(super().interp_exp(e, env)) case Call(Name('input_int'), []): return self.tag(super().interp_exp(e, env)) case BinOp(left, Add(), right): l = self.interp_exp(left, env); r = self.interp_exp(right, env) return self.tag(self.untag(l, 'int', e) + self.untag(r, 'int', e)) case BinOp(left, Sub(), right): l = self.interp_exp(left, env); r = self.interp_exp(right, env) return self.tag(self.untag(l, 'int', e) - self.untag(r, 'int', e)) case UnaryOp(USub(), e1): v = self.interp_exp(e1, env) return self.tag(- self.untag(v, 'int', e)) case IfExp(test, body, orelse): v = self.interp_exp(test, env) if self.untag(v, 'bool', e): return self.interp_exp(body, env) else: return self.interp_exp(orelse, env) case UnaryOp(Not(), e1): v = self.interp_exp(e1, env) return self.tag(not self.untag(v, 'bool', e)) case BoolOp(And(), values): left = values[0]; right = values[1] l = self.interp_exp(left, env) if self.untag(l, 'bool', e): return self.interp_exp(right, env) else: return self.tag(False) case BoolOp(Or(), values): left = values[0]; right = values[1] l = self.interp_exp(left, env) if self.untag(l, 'bool', e): return self.tag(True) else: return self.interp_exp(right, env) case Compare(left, [cmp], [right]): l = self.interp_exp(left, env) r = self.interp_exp(right, env) if l.tag == r.tag: return self.tag(self.interp_cmp(cmp)(l.value, r.value)) else: raise Exception('interp Compare unexpected ' + repr(l) + ' ' + repr(r)) case Subscript(tup, index, Load()): t = self.interp_exp(tup, env) n = self.interp_exp(index, env) return self.untag(t, 'tuple', e)[self.untag(n, 'int', e)] case Call(Name('len'), [tup]): t = self.interp_exp(tup, env) return self.tag(len(self.untag(t, 'tuple', e))) case _: return self.tag(super().interp_exp(e, env)) \end{lstlisting} \fi} \end{tcolorbox} \caption{Interpreter for the \LangDyn{} language\python{, part 1}.} \label{fig:interp-Ldyn} \end{figure} {\if\edition\pythonEd\pythonColor \begin{figure}[tbp] \begin{tcolorbox}[colback=white] \begin{lstlisting}[basicstyle=\ttfamily\footnotesize] class InterpLdyn(InterpLlambda): def interp_stmt(self, s, env, cont): match s: case If(test, body, orelse): v = self.interp_exp(test, env) match self.untag(v, 'bool', s): case True: return self.interp_stmts(body + cont, env) case False: return self.interp_stmts(orelse + cont, env) case While(test, body, []): v = self.interp_exp(test, env) if self.untag(v, 'bool', test): self.interp_stmts(body + [s] + cont, env) else: return self.interp_stmts(cont, env) case Assign([Subscript(tup, index)], value): tup = self.interp_exp(tup, env) index = self.interp_exp(index, env) tup_v = self.untag(tup, 'tuple', s) index_v = self.untag(index, 'int', s) tup_v[index_v] = self.interp_exp(value, env) return self.interp_stmts(cont, env) case FunctionDef(name, params, bod, dl, returns, comment): if isinstance(params, ast.arguments): ps = [p.arg for p in params.args] else: ps = [x for (x,t) in params] env[name] = 
self.tag(Function(name, ps, bod, env)) return self.interp_stmts(cont, env) case _: return super().interp_stmt(s, env, cont) \end{lstlisting} \end{tcolorbox} \caption{Interpreter for the \LangDyn{} language\python{, part 2}.} \label{fig:interp-Ldyn-2} \end{figure} \fi} \begin{figure}[tbp] \begin{tcolorbox}[colback=white] {\if\edition\racketEd \begin{lstlisting}[basicstyle=\ttfamily\footnotesize] (define (interp-op op) (match op ['+ fx+] ['- fx-] ['read read-fixnum] ['not (lambda (v) (match v [#t #f] [#f #t]))] ['< (lambda (v1 v2) (cond [(and (fixnum? v1) (fixnum? v2)) (< v1 v2)]))] ['<= (lambda (v1 v2) (cond [(and (fixnum? v1) (fixnum? v2)) (<= v1 v2)]))] ['> (lambda (v1 v2) (cond [(and (fixnum? v1) (fixnum? v2)) (> v1 v2)]))] ['>= (lambda (v1 v2) (cond [(and (fixnum? v1) (fixnum? v2)) (>= v1 v2)]))] ['boolean? boolean?] ['integer? fixnum?] ['void? void?] ['vector? vector?] ['vector-length vector-length] ['procedure? (match-lambda [`(functions ,xs ,body ,env) #t] [else #f])] [else (error 'interp-op "unknown operator" op)])) (define (op-tags op) (match op ['+ '((Integer Integer))] ['- '((Integer Integer) (Integer))] ['read '(())] ['not '((Boolean))] ['< '((Integer Integer))] ['<= '((Integer Integer))] ['> '((Integer Integer))] ['>= '((Integer Integer))] ['vector-length '((Vector))])) (define type-predicates (set 'boolean? 'integer? 'vector? 'procedure? 'void?)) (define (tag-value v) (cond [(boolean? v) (Tagged v 'Boolean)] [(fixnum? v) (Tagged v 'Integer)] [(procedure? v) (Tagged v 'Procedure)] [(vector? v) (Tagged v 'Vector)] [(void? v) (Tagged v 'Void)] [else (error 'tag-value "unidentified value ~a" v)])) (define (check-tag val expected ast) (define tag (Tagged-tag val)) (unless (eq? tag expected) (error 'trapped-error "expected ~a, not ~a\nin ~v" expected tag ast))) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} class InterpLdyn(InterpLlambda): def tag(self, v): if v is True or v is False: return Tagged(v, 'bool') elif isinstance(v, int): return Tagged(v, 'int') elif isinstance(v, Function): return Tagged(v, 'function') elif isinstance(v, tuple): return Tagged(v, 'tuple') elif isinstance(v, type(None)): return Tagged(v, 'none') else: raise Exception('tag: unexpected ' + repr(v)) def untag(self, v, expected_tag, ast): match v: case Tagged(val, tag) if tag == expected_tag: return val case _: raise TrappedError('expected Tagged value with ' + expected_tag + ', not ' + ' ' + repr(v)) def apply_fun(self, fun, args, e): f = self.untag(fun, 'function', e) return super().apply_fun(f, args, e) \end{lstlisting} \fi} \end{tcolorbox} \caption{Auxiliary functions for the \LangDyn{} interpreter.} \label{fig:interp-Ldyn-aux} \end{figure} \clearpage \section{Representation of Tagged Values} The interpreter for \LangDyn{} introduced a new kind of value: the tagged value. To compile \LangDyn{} to x86 we must decide how to represent tagged values at the bit level. Because almost every operation in \LangDyn{} involves manipulating tagged values, the representation must be efficient. Recall that all our values are 64 bits. We shall steal the right-most $3$ bits to encode the tag. We use $001$ to identify integers, $100$ for Booleans, $010$ for tuples, $011$ for procedures, and $101$ for the void value\python{, \key{None}}. 
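{\if\edition\pythonEd\pythonColor
%
As a simplified picture of this encoding, the following Python sketch
injects and projects an integer. The function names are ours, and the
sketch uses Python's unbounded integers rather than 64-bit words; it is
meant only to illustrate where the tag bits go.
\begin{lstlisting}
INT_TAG = 0b001   # the 3-bit tag code for integers

def inject_int(n: int) -> int:
    # Shift the value left by 3 and place the tag in the low bits.
    # The low 3 bits of (n << 3) are zero, so adding the tag has the
    # same effect as a bitwise or.
    return (n << 3) + INT_TAG

def project_int(v: int) -> int:
    # Check the tag, then shift it away to recover the value.
    assert (v & 0b111) == INT_TAG
    return v >> 3

assert project_int(inject_int(42)) == 42
\end{lstlisting}
%
\fi}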
We define the following auxiliary function for mapping types to tag codes: % {\if\edition\racketEd \begin{align*} \itm{tagof}(\key{Integer}) &= 001 \\ \itm{tagof}(\key{Boolean}) &= 100 \\ \itm{tagof}(\LP\key{Vector} \ldots\RP) &= 010 \\ \itm{tagof}(\LP\ldots \key{->} \ldots\RP) &= 011 \\ \itm{tagof}(\key{Void}) &= 101 \end{align*} \fi} {\if\edition\pythonEd\pythonColor \begin{align*} \itm{tagof}(\key{IntType()}) &= 001 \\ \itm{tagof}(\key{BoolType()}) &= 100 \\ \itm{tagof}(\key{TupleType(ts)}) &= 010 \\ \itm{tagof}(\key{FunctionType(ps, rt)}) &= 011 \\ \itm{tagof}(\key{type(None)}) &= 101 \end{align*} \fi} % This stealing of 3 bits comes at some price: integers are now restricted to the range $-2^{60}$ to $2^{60}-1$. The stealing does not adversely affect tuples and procedures because those values are addresses, and our addresses are 8-byte aligned so the rightmost 3 bits are unused; they are always $000$. Thus, we do not lose information by overwriting the rightmost 3 bits with the tag, and we can simply zero out the tag to recover the original address. To make tagged values into first-class entities, we can give them a type called \racket{\code{Any}}\python{\code{AnyType()}} and define operations such as \code{Inject} and \code{Project} for creating and using them, yielding the statically typed \LangAny{} intermediate language. We describe how to compile \LangDyn{} to \LangAny{} in section~\ref{sec:compile-r7}; in the next section we describe the \LangAny{} language in greater detail. \section{The \LangAny{} Language} \label{sec:Rany-lang} \newcommand{\LanyASTRacket}{ \begin{array}{lcl} \Type &::= & \ANYTY \\ \FType &::=& \key{Integer} \MID \key{Boolean} \MID \key{Void} \MID \LP\key{Vector}\; \ANYTY\ldots\RP \MID \LP\ANYTY\ldots \; \key{->}\; \ANYTY\RP\\ \itm{op} &::= & \code{any-vector-length} \MID \code{any-vector-ref} \MID \code{any-vector-set!}\\ &\MID& \code{boolean?} \MID \code{integer?} \MID \code{vector?} \MID \code{procedure?} \MID \code{void?} \\ \Exp &::=& \INJECT{\Exp}{\FType} \MID \PROJECT{\Exp}{\FType} \end{array} } \newcommand{\LanyASTPython}{ \begin{array}{lcl} \Type &::= & \key{AnyType()} \\ \FType &::=& \key{IntType()} \MID \key{BoolType()} \MID \key{VoidType()} \MID \key{TupleType}\LS\key{AnyType()}^+\RS \\ &\MID& \key{FunctionType}\LP \key{AnyType()}^{*}\key{, }\key{AnyType()}\RP \\ \Exp & ::= & \INJECT{\Exp}{\FType} \MID \PROJECT{\Exp}{\FType} \\ &\MID& \CALL{\VAR{\skey{any\_tuple\_load}}}{\LS\Exp\key{, }\Exp\RS}\\ &\MID& \CALL{\VAR{\skey{any\_len}}}{\LS\Exp\RS} \\ &\MID& \CALL{\VAR{\skey{arity}}}{\LS\Exp\RS} \\ &\MID& \CALL{\VAR{\skey{make\_any}}}{\LS\Exp\key{, }\INT{\Int}\RS} %% &\MID& \CALL{\VAR{\skey{is\_int}}}{\Exp} %% \MID \CALL{\VAR{\skey{is\_bool}}}{\Exp} \\ %% &\MID& \CALL{\VAR{\skey{is\_none}}}{\Exp} %% \MID \CALL{\VAR{\skey{is\_tuple}}}{\Exp} \\ %% &\MID& \CALL{\VAR{\skey{is\_function}}}{\Exp} \end{array} } \begin{figure}[tp] \centering \begin{tcolorbox}[colback=white] \small {\if\edition\racketEd \[ \begin{array}{l} \gray{\LintOpAST} \\ \hline \gray{\LvarASTRacket{}} \\ \hline \gray{\LifASTRacket{}} \\ \hline \gray{\LwhileASTRacket{}} \\ \hline \gray{\LtupASTRacket{}} \\ \hline \gray{\LfunASTRacket} \\ \hline \gray{\LlambdaASTRacket} \\ \hline \LanyASTRacket \\ \begin{array}{lcl} \LangAnyM{} &::=& \PROGRAMDEFSEXP{\code{'()}}{\LP\Def\ldots\RP}{\Exp} \end{array} \end{array} \] \fi} {\if\edition\pythonEd\pythonColor \[ \begin{array}{l} \gray{\LintASTPython} \\ \hline \gray{\LvarASTPython{}} \\ \hline \gray{\LifASTPython{}} \\ \hline \gray{\LwhileASTPython{}} 
\\ \hline \gray{\LtupASTPython{}} \\ \hline \gray{\LfunASTPython} \\ \hline \gray{\LlambdaASTPython} \\ \hline \LanyASTPython \\ \begin{array}{lcl} \LangAnyM{} &::=& \PROGRAM{}{\LS \Def \ldots \Stmt \ldots \RS} \end{array} \end{array} \] \fi} \end{tcolorbox} \caption{The abstract syntax of \LangAny{}, extending \LangLam{} (figure~\ref{fig:Llam-syntax}).} \label{fig:Lany-syntax} \end{figure} The definition of the abstract syntax of \LangAny{} is given in figure~\ref{fig:Lany-syntax}. %% \racket{(The concrete syntax of \LangAny{} is in the Appendix, %% figure~\ref{fig:Lany-concrete-syntax}.)} The $\INJECT{e}{T}$ form converts the value produced by expression $e$ of type $T$ into a tagged value. The $\PROJECT{e}{T}$ form either converts the tagged value produced by expression $e$ into a value of type $T$ or halts the program if the type tag does not match $T$. % Note that in both \code{Inject} and \code{Project}, the type $T$ is restricted to be a flat type (the nonterminal $\FType$) which simplifies the implementation and complies with the needs for compiling \LangDyn{}. The \racket{\code{any-vector}} operators \python{\code{any\_tuple\_load} and \code{any\_len}} adapt the tuple operations so that they can be applied to a value of type \racket{\code{Any}}\python{\code{AnyType}}. They also generalize the tuple operations in that the index is not restricted to a literal integer in the grammar but is allowed to be any expression. \racket{The type predicates such as \racket{\key{boolean?}}\python{\key{is\_bool}} expect their argument to produce a tagged value; they return {\TRUE} if the tag corresponds to the predicate and return {\FALSE} otherwise.} The type checker for \LangAny{} is shown in figure~\ref{fig:type-check-Lany} % \racket{ and uses the auxiliary functions presented in figure~\ref{fig:type-check-Lany-aux}}. % The interpreter for \LangAny{} is shown in figure~\ref{fig:interp-Lany} and its auxiliary functions are shown in figure~\ref{fig:interp-Lany-aux}. \begin{figure}[btp] \begin{tcolorbox}[colback=white] {\if\edition\racketEd \begin{lstlisting}[basicstyle=\ttfamily\footnotesize] (define type-check-Lany-class (class type-check-Llambda-class (super-new) (inherit check-type-equal?) (define/override (type-check-exp env) (lambda (e) (define recur (type-check-exp env)) (match e [(Inject e1 ty) (unless (flat-ty? ty) (error 'type-check "may only inject from flat type, not ~a" ty)) (define-values (new-e1 e-ty) (recur e1)) (check-type-equal? e-ty ty e) (values (Inject new-e1 ty) 'Any)] [(Project e1 ty) (unless (flat-ty? ty) (error 'type-check "may only project to flat type, not ~a" ty)) (define-values (new-e1 e-ty) (recur e1)) (check-type-equal? e-ty 'Any e) (values (Project new-e1 ty) ty)] [(Prim 'any-vector-length (list e1)) (define-values (e1^ t1) (recur e1)) (check-type-equal? t1 'Any e) (values (Prim 'any-vector-length (list e1^)) 'Integer)] [(Prim 'any-vector-ref (list e1 e2)) (define-values (e1^ t1) (recur e1)) (define-values (e2^ t2) (recur e2)) (check-type-equal? t1 'Any e) (check-type-equal? t2 'Integer e) (values (Prim 'any-vector-ref (list e1^ e2^)) 'Any)] [(Prim 'any-vector-set! (list e1 e2 e3)) (define-values (e1^ t1) (recur e1)) (define-values (e2^ t2) (recur e2)) (define-values (e3^ t3) (recur e3)) (check-type-equal? t1 'Any e) (check-type-equal? t2 'Integer e) (check-type-equal? t3 'Any e) (values (Prim 'any-vector-set! (list e1^ e2^ e3^)) 'Void)] [(Prim pred (list e1)) #:when (set-member? (type-predicates) pred) (define-values (new-e1 e-ty) (recur e1)) (check-type-equal? 
e-ty 'Any e) (values (Prim pred (list new-e1)) 'Boolean)] [(Prim 'eq? (list arg1 arg2)) (define-values (e1 t1) (recur arg1)) (define-values (e2 t2) (recur arg2)) (match* (t1 t2) [(`(Vector ,ts1 ...) `(Vector ,ts2 ...)) (void)] [(other wise) (check-type-equal? t1 t2 e)]) (values (Prim 'eq? (list e1 e2)) 'Boolean)] [else ((super type-check-exp env) e)]))) )) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} class TypeCheckLany(TypeCheckLlambda): def type_check_exp(self, e, env): match e: case Inject(value, typ): self.check_exp(value, typ, env) return AnyType() case Project(value, typ): self.check_exp(value, AnyType(), env) return typ case Call(Name('any_tuple_load'), [tup, index]): self.check_exp(tup, AnyType(), env) self.check_exp(index, IntType(), env) return AnyType() case Call(Name('any_len'), [tup]): self.check_exp(tup, AnyType(), env) return IntType() case Call(Name('arity'), [fun]): ty = self.type_check_exp(fun, env) match ty: case FunctionType(ps, rt): return IntType() case TupleType([FunctionType(ps,rs)]): return IntType() case _: raise Exception('type check arity unexpected ' + repr(ty)) case Call(Name('make_any'), [value, tag]): self.type_check_exp(value, env) self.check_exp(tag, IntType(), env) return AnyType() case AnnLambda(params, returns, body): new_env = {x:t for (x,t) in env.items()} for (x,t) in params: new_env[x] = t return_t = self.type_check_exp(body, new_env) self.check_type_equal(returns, return_t, e) return FunctionType([t for (x,t) in params], return_t) case _: return super().type_check_exp(e, env) \end{lstlisting} \fi} \end{tcolorbox} \caption{Type checker for the \LangAny{} language.} \label{fig:type-check-Lany} \end{figure} {\if\edition\racketEd \begin{figure}[tbp] \begin{tcolorbox}[colback=white] \begin{lstlisting} (define/override (operator-types) (append '((integer? . ((Any) . Boolean)) (vector? . ((Any) . Boolean)) (procedure? . ((Any) . Boolean)) (void? . ((Any) . Boolean))) (super operator-types))) (define/public (type-predicates) (set 'boolean? 'integer? 'vector? 'procedure? 'void?)) (define/public (flat-ty? ty) (match ty [(or `Integer `Boolean `Void) #t] [`(Vector ,ts ...) (for/and ([t ts]) (eq? t 'Any))] [`(,ts ... -> ,rt) (and (eq? rt 'Any) (for/and ([t ts]) (eq? t 'Any)))] [else #f])) \end{lstlisting} \end{tcolorbox} \caption{Auxiliary methods for type checking \LangAny{}.} \label{fig:type-check-Lany-aux} \end{figure} \fi} \begin{figure}[btp] \begin{tcolorbox}[colback=white] {\if\edition\racketEd \begin{lstlisting}[basicstyle=\ttfamily\footnotesize] (define interp-Lany-class (class interp-Llambda-class (super-new) (define/override (interp-op op) (match op ['boolean? (match-lambda [`(tagged ,v1 ,tg) (equal? tg (any-tag 'Boolean))] [else #f])] ['integer? (match-lambda [`(tagged ,v1 ,tg) (equal? tg (any-tag 'Integer))] [else #f])] ['vector? (match-lambda [`(tagged ,v1 ,tg) (equal? tg (any-tag `(Vector Any)))] [else #f])] ['procedure? (match-lambda [`(tagged ,v1 ,tg) (equal? tg (any-tag `(Any -> Any)))] [else #f])] ['eq? (match-lambda* [`((tagged ,v1^ ,tg1) (tagged ,v2^ ,tg2)) (and (eq? v1^ v2^) (equal? tg1 tg2))] [ls (apply (super interp-op op) ls)])] ['any-vector-ref (lambda (v i) (match v [`(tagged ,v^ ,tg) (vector-ref v^ i)]))] ['any-vector-set! (lambda (v i a) (match v [`(tagged ,v^ ,tg) (vector-set! 
v^ i a)]))] ['any-vector-length (lambda (v) (match v [`(tagged ,v^ ,tg) (vector-length v^)]))] [else (super interp-op op)])) (define/override ((interp-exp env) e) (define recur (interp-exp env)) (match e [(Inject e ty) `(tagged ,(recur e) ,(any-tag ty))] [(Project e ty2) (apply-project (recur e) ty2)] [else ((super interp-exp env) e)])) )) (define (interp-Lany p) (send (new interp-Lany-class) interp-program p)) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} class InterpLany(InterpLlambda): def interp_exp(self, e, env): match e: case Inject(value, typ): v = self.interp_exp(value, env) return Tagged(v, self.type_to_tag(typ)) case Project(value, typ): v = self.interp_exp(value, env) match v: case Tagged(val, tag) if self.type_to_tag(typ) == tag: return val case _: raise Exception('interp project to ' + repr(typ) + ' unexpected ' + repr(v)) case Call(Name('any_tuple_load'), [tup, index]): tv = self.interp_exp(tup, env) n = self.interp_exp(index, env) match tv: case Tagged(v, tag): return v[n] case _: raise Exception('in any_tuple_load unexpected ' + repr(tv)) case Call(Name('any_len'), [value]): v = self.interp_exp(value, env) match v: case Tagged(value, tag): return len(value) case _: raise Exception('interp any_len unexpected ' + repr(v)) case Call(Name('arity'), [fun]): f = self.interp_exp(fun, env) return self.arity(f) case _: return super().interp_exp(e, env) \end{lstlisting} \fi} \end{tcolorbox} \caption{Interpreter for \LangAny{}.} \label{fig:interp-Lany} \end{figure} \begin{figure}[tbp] \begin{tcolorbox}[colback=white] {\if\edition\racketEd \begin{lstlisting}[basicstyle=\ttfamily\footnotesize] (define/public (apply-inject v tg) (Tagged v tg)) (define/public (apply-project v ty2) (define tag2 (any-tag ty2)) (match v [(Tagged v1 tag1) (cond [(eq? tag1 tag2) (match ty2 [`(Vector ,ts ...) (define l1 ((interp-op 'vector-length) v1)) (cond [(eq? l1 (length ts)) v1] [else (error 'apply-project "vector length mismatch, ~a != ~a" l1 (length ts))])] [`(,ts ... -> ,rt) (match v1 [`(function ,xs ,body ,env) (cond [(eq? (length xs) (length ts)) v1] [else (error 'apply-project "arity mismatch ~a != ~a" (length xs) (length ts))])] [else (error 'apply-project "expected function not ~a" v1)])] [else v1])] [else (error 'apply-project "tag mismatch ~a != ~a" tag1 tag2)])] [else (error 'apply-project "expected tagged value, not ~a" v)])) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} class InterpLany(InterpLlambda): def type_to_tag(self, typ): match typ: case FunctionType(params, rt): return 'function' case TupleType(fields): return 'tuple' case t if t == int: return 'int' case t if t == bool: return 'bool' case IntType(): return 'int' case BoolType(): return 'bool' case _: raise Exception('type_to_tag unexpected ' + repr(typ)) def arity(self, v): match v: case Function(name, params, body, env): return len(params) case ClosureTuple(args, arity): return arity case _: raise Exception('Lany arity unexpected ' + repr(v)) \end{lstlisting} \fi} \end{tcolorbox} \caption{Auxiliary functions for interpreting \LangAny{}.} \label{fig:interp-Lany-aux} \end{figure} \clearpage \section{Cast Insertion: Compiling \LangDyn{} to \LangAny{}} \label{sec:compile-r7} The \code{cast\_insert} pass compiles from \LangDyn{} to \LangAny{}. Figure~\ref{fig:compile-r7-Lany} shows the compilation of many of the \LangDyn{} forms into \LangAny{}.
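
{\if\edition\pythonEd\pythonColor
%
To give a feel for how such a pass can be organized, the following is a
minimal sketch of a cast-insertion function for just Boolean constants,
integer constants, and addition. It uses simplified stand-in classes for
\code{Inject}, \code{Project}, and the type constructors rather than the
compiler's actual infrastructure, so it illustrates the translation rules of
figure~\ref{fig:compile-r7-Lany} but is not the pass itself.
\begin{lstlisting}
from ast import BinOp, Add, Constant
from dataclasses import dataclass

# Simplified stand-ins for the Lany AST classes (illustrative only).
@dataclass
class IntType: pass
@dataclass
class BoolType: pass
@dataclass
class Inject:
    value: object
    typ: object
@dataclass
class Project:
    value: object
    typ: object

def cast_insert_exp(e):
    # Every result has type Any: constants are injected, and the operands
    # of + are projected to int before the sum is injected again.
    match e:
        case Constant(value) if isinstance(value, bool):
            return Inject(e, BoolType())
        case Constant(value) if isinstance(value, int):
            return Inject(e, IntType())
        case BinOp(left, Add(), right):
            new_left = Project(cast_insert_exp(left), IntType())
            new_right = Project(cast_insert_exp(right), IntType())
            return Inject(BinOp(new_left, Add(), new_right), IntType())
        case _:
            raise Exception('cast_insert_exp: unhandled ' + repr(e))
\end{lstlisting}
%
\fi}
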
An important invariant of this pass is that given any subexpression $e$ in the \LangDyn{} program, the pass will produce an expression $e'$ in \LangAny{} that has type \ANYTY{}. For example, the first row in figure~\ref{fig:compile-r7-Lany} shows the compilation of the Boolean \TRUE{}, which must be injected to produce an expression of type \ANYTY{}. % The compilation of addition is shown in the second row of figure~\ref{fig:compile-r7-Lany}. The compilation of addition is representative of many primitive operations: the arguments have type \ANYTY{} and must be projected to \INTTYPE{} before the addition can be performed. The compilation of \key{lambda} (third row of figure~\ref{fig:compile-r7-Lany}) shows what happens when we need to produce type annotations: we simply use \ANYTY{}. % % TODO:update the following for python, and the tests and interpreter. -Jeremy \racket{The compilation of \code{if} and \code{eq?} demonstrate how this pass has to account for some differences in behavior between \LangDyn{} and \LangAny{}. The \LangDyn{} language is more permissive than \LangAny{} regarding what kind of values can be used in various places. For example, the condition of an \key{if} does not have to be a Boolean. For \key{eq?}, the arguments need not be of the same type (in that case the result is \code{\#f}).} \begin{figure}[btp] \centering \begin{tcolorbox}[colback=white] {\if\edition\racketEd \begin{tabular}{lll} \begin{minipage}{0.27\textwidth} \begin{lstlisting} #t \end{lstlisting} \end{minipage} & $\Rightarrow$ & \begin{minipage}{0.65\textwidth} \begin{lstlisting} (inject #t Boolean) \end{lstlisting} \end{minipage} \\[2ex]\hline \begin{minipage}{0.27\textwidth} \begin{lstlisting} (+ |$e_1$| |$e_2$|) \end{lstlisting} \end{minipage} & $\Rightarrow$ & \begin{minipage}{0.65\textwidth} \begin{lstlisting} (inject (+ (project |$e'_1$| Integer) (project |$e'_2$| Integer)) Integer) \end{lstlisting} \end{minipage} \\[2ex]\hline \begin{minipage}{0.27\textwidth} \begin{lstlisting} (lambda (|$x_1 \ldots$|) |$e$|) \end{lstlisting} \end{minipage} & $\Rightarrow$ & \begin{minipage}{0.65\textwidth} \begin{lstlisting} (inject (lambda: ([|$x_1$|:Any]|$\ldots$|):Any |$e'$|) (Any|$\ldots$|Any -> Any)) \end{lstlisting} \end{minipage} \\[2ex]\hline \begin{minipage}{0.27\textwidth} \begin{lstlisting} (|$e_0$| |$e_1 \ldots e_n$|) \end{lstlisting} \end{minipage} & $\Rightarrow$ & \begin{minipage}{0.65\textwidth} \begin{lstlisting} ((project |$e'_0$| (Any|$\ldots$|Any -> Any)) |$e'_1 \ldots e'_n$|) \end{lstlisting} \end{minipage} \\[2ex]\hline \begin{minipage}{0.27\textwidth} \begin{lstlisting} (vector-ref |$e_1$| |$e_2$|) \end{lstlisting} \end{minipage} & $\Rightarrow$ & \begin{minipage}{0.65\textwidth} \begin{lstlisting} (any-vector-ref |$e_1'$| (project |$e'_2$| Integer)) \end{lstlisting} \end{minipage} \\[2ex]\hline \begin{minipage}{0.27\textwidth} \begin{lstlisting} (if |$e_1$| |$e_2$| |$e_3$|) \end{lstlisting} \end{minipage} & $\Rightarrow$ & \begin{minipage}{0.65\textwidth} \begin{lstlisting} (if (eq? |$e'_1$| (inject #f Boolean)) |$e'_3$| |$e'_2$|) \end{lstlisting} \end{minipage} \\[2ex]\hline \begin{minipage}{0.27\textwidth} \begin{lstlisting} (eq? |$e_1$| |$e_2$|) \end{lstlisting} \end{minipage} & $\Rightarrow$ & \begin{minipage}{0.65\textwidth} \begin{lstlisting} (inject (eq? 
|$e'_1$| |$e'_2$|) Boolean) \end{lstlisting} \end{minipage} \\[2ex]\hline \begin{minipage}{0.27\textwidth} \begin{lstlisting} (not |$e_1$|) \end{lstlisting} \end{minipage} & $\Rightarrow$ & \begin{minipage}{0.65\textwidth} \begin{lstlisting} (if (eq? |$e'_1$| (inject #f Boolean)) (inject #t Boolean) (inject #f Boolean)) \end{lstlisting} \end{minipage} \end{tabular} \fi} {\if\edition\pythonEd\pythonColor \hspace{-0.8em}\begin{tabular}{|lll|} \hline \begin{minipage}{0.23\textwidth} \begin{lstlisting} True \end{lstlisting} \end{minipage} & $\Rightarrow$ & \begin{minipage}{0.7\textwidth} \begin{lstlisting} Inject(True, BoolType()) \end{lstlisting} \end{minipage} \\[2ex]\hline \begin{minipage}{0.23\textwidth} \begin{lstlisting} |$e_1$| + |$e_2$| \end{lstlisting} \end{minipage} & $\Rightarrow$ & \begin{minipage}{0.7\textwidth} \begin{lstlisting} Inject(Project(|$e'_1$|, IntType()) + Project(|$e'_2$|, IntType()), IntType()) \end{lstlisting} \end{minipage} \\[2ex]\hline \begin{minipage}{0.23\textwidth} \begin{lstlisting} lambda |$x_1 \ldots$|: |$e$| \end{lstlisting} \end{minipage} & $\Rightarrow$ & \begin{minipage}{0.7\textwidth} \begin{lstlisting} Inject(Lambda([(|$x_1$|,AnyType),|$\ldots$|], |$e'$|) FunctionType([AnyType(),|$\ldots$|], AnyType())) \end{lstlisting} \end{minipage} \\[2ex]\hline \begin{minipage}{0.23\textwidth} \begin{lstlisting} |$e_0$|(|$e_1 \ldots e_n$|) \end{lstlisting} \end{minipage} & $\Rightarrow$ & \begin{minipage}{0.7\textwidth} \begin{lstlisting} Call(Project(|$e'_0$|, FunctionType([AnyType(),|$\ldots$|], AnyType())), |$e'_1, \ldots, e'_n$|) \end{lstlisting} \end{minipage} \\[2ex]\hline \begin{minipage}{0.23\textwidth} \begin{lstlisting} |$e_1$|[|$e_2$|] \end{lstlisting} \end{minipage} & $\Rightarrow$ & \begin{minipage}{0.7\textwidth} \begin{lstlisting} Call(Name('any_tuple_load'), [|$e_1'$|, Project(|$e_2'$|, IntType())]) \end{lstlisting} \end{minipage} %% \begin{minipage}{0.23\textwidth} %% \begin{lstlisting} %% |$e_2$| if |$e_1$| else |$e_3$| %% \end{lstlisting} %% \end{minipage} %% & %% $\Rightarrow$ %% & %% \begin{minipage}{0.7\textwidth} %% \begin{lstlisting} %% (if (eq? |$e'_1$| (inject #f Boolean)) |$e'_3$| |$e'_2$|) %% \end{lstlisting} %% \end{minipage} %% \\[2ex]\hline %% \begin{minipage}{0.23\textwidth} %% \begin{lstlisting} %% (eq? |$e_1$| |$e_2$|) %% \end{lstlisting} %% \end{minipage} %% & %% $\Rightarrow$ %% & %% \begin{minipage}{0.7\textwidth} %% \begin{lstlisting} %% (inject (eq? |$e'_1$| |$e'_2$|) Boolean) %% \end{lstlisting} %% \end{minipage} %% \\[2ex]\hline %% \begin{minipage}{0.23\textwidth} %% \begin{lstlisting} %% (not |$e_1$|) %% \end{lstlisting} %% \end{minipage} %% & %% $\Rightarrow$ %% & %% \begin{minipage}{0.7\textwidth} %% \begin{lstlisting} %% (if (eq? |$e'_1$| (inject #f Boolean)) %% (inject #t Boolean) (inject #f Boolean)) %% \end{lstlisting} %% \end{minipage} %% \\[2ex]\hline \\\hline \end{tabular} \fi} \end{tcolorbox} \caption{Cast insertion.} \label{fig:compile-r7-Lany} \end{figure} \section{Reveal Casts} \label{sec:reveal-casts-Lany} % TODO: define R'_6 In the \code{reveal\_casts} pass, we recommend compiling \code{Project} into a conditional expression that checks whether the value's tag matches the target type; if it does, the value is converted to a value of the target type by removing the tag; if it does not, the program exits. % {\if\edition\racketEd % To perform these actions we need a new primitive operation, \code{tag-of-any}, and a new form, \code{ValueOf}. 
The \code{tag-of-any} operation retrieves the type tag from a tagged value of type \code{Any}. The \code{ValueOf} form retrieves the underlying value from a tagged value. The \code{ValueOf} form includes the type for the underlying value that is used by the type checker. % \fi} % {\if\edition\pythonEd\pythonColor % To perform these actions we need two new AST classes: \code{TagOf} and \code{ValueOf}. The \code{TagOf} operation retrieves the type tag from a tagged value of type \ANYTY{}. The \code{ValueOf} operation retrieves the underlying value from a tagged value. The \code{ValueOf} operation includes the type for the underlying value that is used by the type checker. % \fi} If the target type of the projection is \BOOLTY{} or \INTTY{}, then \code{Project} can be translated as follows: \begin{center} \begin{minipage}{1.0\textwidth} {\if\edition\racketEd \begin{lstlisting} (Project |$e$| |$\FType$|) |$\Rightarrow$| (Let |$\itm{tmp}$| |$e'$| (If (Prim 'eq? (list (Prim 'tag-of-any (list (Var |$\itm{tmp}$|))) (Int |$\itm{tagof}(\FType)$|))) (ValueOf |$\itm{tmp}$| |$\FType$|) (Exit))) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} Project(|$e$|, |$\FType$|) |$\Rightarrow$| Begin([Assign([|$\itm{tmp}$|], |$e'$|)], IfExp(Compare(TagOf(|$\itm{tmp}$|), [Eq()], [Constant(|$\itm{tagof}(\FType)$|)]), ValueOf(|$\itm{tmp}$|, |$\FType$|), Call(Name('exit'), []))) \end{lstlisting} \fi} \end{minipage} \end{center} If the target type of the projection is a tuple or function type, then there is a bit more work to do. For tuples, check that the length of the tuple type matches the length of the tuple. For functions, check that the number of parameters in the function type matches the function's arity. Regarding \code{Inject}, we recommend compiling it to a slightly lower-level primitive operation named \racket{\code{make-any}}\python{\code{make\_any}}. This operation takes a tag instead of a type. \begin{center} \begin{minipage}{1.0\textwidth} {\if\edition\racketEd \begin{lstlisting} (Inject |$e$| |$\FType$|) |$\Rightarrow$| (Prim 'make-any (list |$e'$| (Int |$\itm{tagof}(\FType)$|))) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} Inject(|$e$|, |$\FType$|) |$\Rightarrow$| Call(Name('make_any'), [|$e'$|, Constant(|$\itm{tagof}(\FType)$|)]) \end{lstlisting} \fi} \end{minipage} \end{center} {\if\edition\pythonEd\pythonColor % The introduction of \code{make\_any} makes it difficult to use bidirectional type checking because we no longer have an expected type to use for type checking the expression $e'$. Thus, we run into difficulty if $e'$ is a \code{Lambda} expression. We recommend translating \code{Lambda} to a new AST class \code{AnnLambda} (for annotated lambda) that contains its return type and the types of its parameters. % \fi} \racket{The type predicates (\code{boolean?}, etc.) can be translated into uses of \code{tag-of-any} and \code{eq?} in a similar way as in the translation of \code{Project}.} {\if\edition\racketEd The \code{any-vector-ref} and \code{any-vector-set!} operations combine the projection action with the vector operation. Also, the read and write operations allow arbitrary expressions for the index, so the type checker for \LangAny{} (figure~\ref{fig:type-check-Lany}) cannot guarantee that the index is within bounds. Thus, we insert code to perform bounds checking at runtime.
The translation for \code{any-vector-ref} is as follows, and the other two operations are translated in a similar way: \begin{center} \begin{minipage}{0.95\textwidth} \begin{lstlisting} (Prim 'any-vector-ref (list |$e_1$| |$e_2$|)) |$\Rightarrow$| (Let |$v$| |$e'_1$| (Let |$i$| |$e'_2$| (If (Prim 'eq? (list (Prim 'tag-of-any (list (Var |$v$|))) (Int 2))) (If (Prim '< (list (Var |$i$|) (Prim 'any-vector-length (list (Var |$v$|))))) (Prim 'any-vector-ref (list (Var |$v$|) (Var |$i$|))) (Exit)) (Exit)))) \end{lstlisting} \end{minipage} \end{center} \fi} % {\if\edition\pythonEd\pythonColor % The \code{any\_tuple\_load} operation combines the projection action with the load operation. Also, the load operation allows arbitrary expressions for the index, so the type checker for \LangAny{} (figure~\ref{fig:type-check-Lany}) cannot guarantee that the index is within bounds. Thus, we insert code to perform bounds checking at runtime. The translation for \code{any\_tuple\_load} is as follows. \begin{lstlisting} Call(Name('any_tuple_load'), [|$e_1$|,|$e_2$|]) |$\Rightarrow$| Block([Assign([|$t$|], |$e'_1$|), Assign([|$i$|], |$e'_2$|)], IfExp(Compare(TagOf(|$t$|), [Eq()], [Constant(2)]), IfExp(Compare(|$i$|, [Lt()], [Call(Name('any_len'), [|$t$|])]), Call(Name('any_tuple_load_unsafe'), [|$t$|, |$i$|]), Call(Name('exit'), [])), Call(Name('exit'), []))) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \section{Assignment Conversion} \label{sec:convert-assignments-Lany} Update this pass to handle the \code{TagOf}, \code{ValueOf}, and \code{AnnLambda} AST classes. \section{Closure Conversion} \label{sec:closure-conversion-Lany} Update this pass to handle the \code{TagOf}, \code{ValueOf}, and \code{AnnLambda} AST classes. \fi} \section{Remove Complex Operands} \label{sec:rco-Lany} \racket{The \code{ValueOf} and \code{Exit} forms are both complex expressions. The subexpression of \code{ValueOf} must be atomic.} % \python{The \code{ValueOf} and \code{TagOf} operations are both complex expressions. Their subexpressions must be atomic.} \section{Explicate Control and \LangCAny{}} \label{sec:explicate-Lany} The output of \code{explicate\_control} is the \LangCAny{} language, whose syntax definition is shown in figure~\ref{fig:c5-syntax}. % \racket{The \code{ValueOf} form that we added to \LangAny{} remains an expression and the \code{Exit} expression becomes a $\Tail$. Also, note that the index argument of \code{vector-ref} and \code{vector-set!} is an $\Atm$, instead of an integer as it was in \LangCVec{} (figure~\ref{fig:c2-syntax}).} % \python{Update the auxiliary functions \code{explicate\_tail}, \code{explicate\_effect}, and \code{explicate\_pred} as appropriate to handle the new expressions in \LangCAny{}. 
} \newcommand{\CanyASTPython}{ \begin{array}{lcl} \Exp &::=& \CALL{\VAR{\skey{make\_any}}}{\LS \Atm,\Atm \RS}\\ &\MID& \key{TagOf}\LP \Atm \RP \MID \key{ValueOf}\LP \Atm , \FType \RP \\ &\MID& \CALL{\VAR{\skey{any\_tuple\_load\_unsafe}}}{\LS \Atm,\Atm \RS}\\ &\MID& \CALL{\VAR{\skey{any\_len}}}{\LS \Atm \RS} \\ &\MID& \CALL{\VAR{\skey{exit}}}{\LS\RS} \end{array} } \newcommand{\CanyASTRacket}{ \begin{array}{lcl} \Exp &::= & \BINOP{\key{'any-vector-ref}}{\Atm}{\Atm} \\ &\MID& (\key{Prim}~\key{'any-vector-set!}\,(\key{list}\,\Atm\,\Atm\,\Atm))\\ &\MID& \VALUEOF{\Atm}{\FType} \\ \Tail &::= & \LP\key{Exit}\RP \end{array} } \begin{figure}[tp] \begin{tcolorbox}[colback=white] \small {\if\edition\racketEd \[ \begin{array}{l} \gray{\CvarASTRacket} \\ \hline \gray{\CifASTRacket} \\ \hline \gray{\CloopASTRacket} \\ \hline \gray{\CtupASTRacket} \\ \hline \gray{\CfunASTRacket} \\ \hline \gray{\ClambdaASTRacket} \\ \hline \CanyASTRacket \\ \begin{array}{lcl} \LangCAnyM{} & ::= & \PROGRAMDEFS{\itm{info}}{\LP\Def\ldots\RP} \end{array} \end{array} \] \fi} {\if\edition\pythonEd\pythonColor \[ \begin{array}{l} \gray{\CifASTPython} \\ \hline \gray{\CtupASTPython} \\ \hline \gray{\CfunASTPython} \\ \hline \gray{\ClambdaASTPython} \\ \hline \CanyASTPython \\ \begin{array}{lcl} \LangCAnyM{} & ::= & \CPROGRAMDEFS{\LS\Def\code{,}\ldots\RS} \end{array} \end{array} \] \fi} \end{tcolorbox} \caption{The abstract syntax of \LangCAny{}, extending \LangCLam{} (figure~\ref{fig:Clam-syntax}).} \label{fig:c5-syntax} \end{figure} \section{Select Instructions} \label{sec:select-Lany} \index{subject}{select instructions} In the \code{select\_instructions} pass, we translate the primitive operations on the \ANYTY{} type to x86 instructions that manipulate the three tag bits of the tagged value. In the following descriptions, given an atom $e$ we use a primed variable $e'$ to refer to the result of translating $e$ into an x86 argument: \paragraph{\racket{\code{make-any}}\python{\code{make\_any}}} We recommend compiling the \racket{\code{make-any}}\python{\code{make\_any}} operation as follows if the tag is for \INTTY{} or \BOOLTY{}. The \key{salq} instruction shifts the destination to the left by the number of bits specified by its source argument (in this case three, the length of the tag), and it preserves the sign of the integer. We use the \key{orq} instruction to combine the tag and the value to form the tagged value. {\if\edition\racketEd \begin{lstlisting} (Assign |\itm{lhs}| (Prim 'make-any (list |$e$| (Int |$\itm{tag}$|)))) |$\Rightarrow$| movq |$e'$|, |\itm{lhs'}| salq $3, |\itm{lhs'}| orq $|$\itm{tag}$|, |\itm{lhs'}| \end{lstlisting} \fi} % {\if\edition\pythonEd\pythonColor \begin{lstlisting} Assign([|\itm{lhs}|], Call(Name('make_any'), [|$e$|, Constant(|$\itm{tag}$|)])) |$\Rightarrow$| movq |$e'$|, |\itm{lhs'}| salq $3, |\itm{lhs'}| orq $|$\itm{tag}$|, |\itm{lhs'}| \end{lstlisting} \fi} % The instruction selection\index{subject}{instruction selection} for tuples and procedures is different because there is no need to shift them to the left. The rightmost 3 bits are already zeros, so we simply combine the value and the tag using \key{orq}. 
\\ % {\if\edition\racketEd \begin{center} \begin{minipage}{\textwidth} \begin{lstlisting} (Assign |\itm{lhs}| (Prim 'make-any (list |$e$| (Int |$\itm{tag}$|)))) |$\Rightarrow$| movq |$e'$|, |\itm{lhs'}| orq $|$\itm{tag}$|, |\itm{lhs'}| \end{lstlisting} \end{minipage} \end{center} \fi} % {\if\edition\pythonEd\pythonColor \begin{lstlisting} Assign([|\itm{lhs}|], Call(Name('make_any'), [|$e$|, Constant(|$\itm{tag}$|)])) |$\Rightarrow$| movq |$e'$|, |\itm{lhs'}| orq $|$\itm{tag}$|, |\itm{lhs'}| \end{lstlisting} \fi} \paragraph{\racket{\code{tag-of-any}}\python{\code{TagOf}}} Recall that the \racket{\code{tag-of-any}}\python{\code{TagOf}} operation extracts the type tag from a value of type \ANYTY{}. The type tag is the bottom $3$ bits, so we obtain the tag by taking the bitwise-and of the value with $111$ ($7$ decimal). % {\if\edition\racketEd \begin{lstlisting} (Assign |\itm{lhs}| (Prim 'tag-of-any (list |$e$|))) |$\Rightarrow$| movq |$e'$|, |\itm{lhs'}| andq $7, |\itm{lhs'}| \end{lstlisting} \fi} % {\if\edition\pythonEd\pythonColor \begin{lstlisting} Assign([|\itm{lhs}|], TagOf(|$e$|)) |$\Rightarrow$| movq |$e'$|, |\itm{lhs'}| andq $7, |\itm{lhs'}| \end{lstlisting} \fi} \paragraph{\code{ValueOf}} The instructions for \key{ValueOf} also differ, depending on whether the type $T$ is a pointer (tuple or function) or not (integer or Boolean). The following shows the instruction selection for integers and Booleans, in which we produce an untagged value by shifting it to the right by 3 bits: % {\if\edition\racketEd \begin{lstlisting} (Assign |\itm{lhs}| (ValueOf |$e$| |$T$|)) |$\Rightarrow$| movq |$e'$|, |\itm{lhs'}| sarq $3, |\itm{lhs'}| \end{lstlisting} \fi} % {\if\edition\pythonEd\pythonColor \begin{lstlisting} Assign([|\itm{lhs}|], ValueOf(|$e$|, |$T$|)) |$\Rightarrow$| movq |$e'$|, |\itm{lhs'}| sarq $3, |\itm{lhs'}| \end{lstlisting} \fi} % In the case for tuples and procedures, we zero out the rightmost 3 bits. We accomplish this by creating the bit pattern $\ldots 0111$ ($7$ decimal) and apply bitwise-not to obtain $\ldots 11111000$ (-8 decimal), which we \code{movq} into the destination $\itm{lhs'}$. Finally, we apply \code{andq} with the tagged value to get the desired result. % {\if\edition\racketEd \begin{lstlisting} (Assign |\itm{lhs}| (ValueOf |$e$| |$T$|)) |$\Rightarrow$| movq $|$-8$|, |\itm{lhs'}| andq |$e'$|, |\itm{lhs'}| \end{lstlisting} \fi} % {\if\edition\pythonEd\pythonColor \begin{lstlisting} Assign([|\itm{lhs}|], ValueOf(|$e$|, |$T$|)) |$\Rightarrow$| movq $|$-8$|, |\itm{lhs'}| andq |$e'$|, |\itm{lhs'}| \end{lstlisting} \fi} %% \paragraph{Type Predicates} We leave it to the reader to %% devise a sequence of instructions to implement the type predicates %% \key{boolean?}, \key{integer?}, \key{vector?}, and \key{procedure?}. \paragraph{\racket{\code{any-vector-length}}\python{\code{any\_len}}} The \racket{\code{any-vector-length}}\python{\code{any\_len}} operation combines the effect of \code{ValueOf} with accessing the length of a tuple from the tag stored at the zero index of the tuple. 
{\if\edition\racketEd \begin{lstlisting} (Assign |$\itm{lhs}$| (Prim 'any-vector-length (list |$e_1$|))) |$\Longrightarrow$| movq $|$-8$|, %r11 andq |$e_1'$|, %r11 movq 0(%r11), %r11 andq $126, %r11 sarq $1, %r11 movq %r11, |$\itm{lhs'}$| \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} Assign([|$\itm{lhs}$|], Call(Name('any_len'), [|$e_1$|])) |$\Longrightarrow$| movq $|$-8$|, %r11 andq |$e_1'$|, %r11 movq 0(%r11), %r11 andq $126, %r11 sarq $1, %r11 movq %r11, |$\itm{lhs'}$| \end{lstlisting} \fi} \paragraph{\racket{\code{any-vector-ref}}\python{\code{any\_tuple\_load\_unsafe}}} This operation combines the effect of \code{ValueOf} with reading an element of the tuple (see section~\ref{sec:select-instructions-gc}). However, the index may be an arbitrary atom, so instead of computing the offset at compile time, we must generate instructions to compute the offset at runtime as follows. Note the use of the new instruction \code{imulq}. \begin{center} \begin{minipage}{0.96\textwidth} {\if\edition\racketEd \begin{lstlisting} (Assign |$\itm{lhs}$| (Prim 'any-vector-ref (list |$e_1$| |$e_2$|))) |$\Longrightarrow$| movq $|$-8$|, %r11 andq |$e_1'$|, %r11 movq |$e_2'$|, %rax addq $1, %rax imulq $8, %rax addq %rax, %r11 movq 0(%r11), |$\itm{lhs'}$| \end{lstlisting} \fi} % {\if\edition\pythonEd\pythonColor \begin{lstlisting} Assign([|$\itm{lhs}$|], Call(Name('any_tuple_load_unsafe'), [|$e_1$|,|$e_2$|])) |$\Longrightarrow$| movq $|$-8$|, %r11 andq |$e_1'$|, %r11 movq |$e_2'$|, %rax addq $1, %rax imulq $8, %rax addq %rax, %r11 movq 0(%r11), |$\itm{lhs'}$| \end{lstlisting} \fi} \end{minipage} \end{center} % $ pacify font lock %% \paragraph{\racket{\code{any-vector-set!}}\python{\code{any\_tuple\_store}}} %% The code generation for %% \racket{\code{any-vector-set!}}\python{\code{any\_tuple\_store}} is %% analogous to the above translation for reading from a tuple. \section{Register Allocation for \LangAny{} } \label{sec:register-allocation-Lany} \index{subject}{register allocation} There is an interesting interaction between tagged values and garbage collection that has an impact on register allocation. A variable of type \ANYTY{} might refer to a tuple, and therefore it might be a root that needs to be inspected and copied during garbage collection. Thus, we need to treat variables of type \ANYTY{} in a similar way to variables of tuple type for purposes of register allocation, with particular attention to the following: \begin{itemize} \item If a variable of type \ANYTY{} is live during a function call, then it must be spilled. This can be accomplished by changing \code{build\_interference} to mark all variables of type \ANYTY{} that are live after a \code{callq} to be interfering with all the registers. \item If a variable of type \ANYTY{} is spilled, it must be spilled to the root stack instead of the normal procedure call stack. \end{itemize} Another concern regarding the root stack is that the garbage collector needs to differentiate among (1) plain old pointers to tuples, (2) a tagged value that points to a tuple, and (3) a tagged value that is not a tuple. We enable this differentiation by choosing not to use the tag $000$ in the $\itm{tagof}$ function. Instead, that bit pattern is reserved for identifying plain old pointers to tuples. That way, if one of the rightmost 3 bits is set, then we have a tagged value and inspecting the tag can differentiate between tuples ($010$) and the other kinds of values.
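
{\if\edition\pythonEd\pythonColor
%
The following Python sketch summarizes the tagging conventions of this
chapter by modeling the arithmetic performed by the generated instructions.
It is an illustration only: the helper names are not part of the compiler or
its runtime, and the tag codes are those of the $\itm{tagof}$ function, with
$000$ reserved for plain tuple pointers.
\begin{lstlisting}
TAG_INT, TAG_TUPLE, TAG_FUNCTION, TAG_BOOL, TAG_NONE = 1, 2, 3, 4, 5

def make_any_flat(value, tag):
    # salq $3 then orq tag: shift left by 3 and put the tag in the low
    # bits (addition suffices because those bits are zero after the shift).
    return (value << 3) + tag

def make_any_pointer(address, tag):
    # orq tag only: 8-byte-aligned addresses already end in 000.
    return address + tag

def tag_of_any(tagged):
    # andq $7: the tag is the low 3 bits.
    return tagged & 7

def value_of_flat(tagged):
    # sarq $3: an arithmetic shift right recovers the (signed) integer.
    return tagged >> 3

def value_of_pointer(tagged):
    # andq with -8 (...11111000): zero out the tag bits.
    return tagged & -8

assert tag_of_any(make_any_flat(-3, TAG_INT)) == TAG_INT
assert value_of_flat(make_any_flat(-3, TAG_INT)) == -3
assert value_of_pointer(make_any_pointer(0x1000, TAG_TUPLE)) == 0x1000
\end{lstlisting}
%
\fi}
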
%% \begin{exercise}\normalfont %% Expand your compiler to handle \LangAny{} as discussed in the last few %% sections. Create 5 new programs that use the \ANYTY{} type and the %% new operations (\code{Inject}, \code{Project}, etc.). Test your %% compiler on these new programs and all of your previously created test %% programs. %% \end{exercise} \begin{exercise}\normalfont\normalsize Expand your compiler to handle \LangDyn{} as outlined in this chapter. Create tests for \LangDyn{} by adapting ten of your previous test programs by removing type annotations. Add five more test programs that specifically rely on the language being dynamically typed. That is, they should not be legal programs in a statically typed language, but nevertheless they should be valid \LangDyn{} programs that run to completion without error. \end{exercise} Figure~\ref{fig:Ldyn-passes} provides an overview of the passes needed for the compilation of \LangDyn{}. \begin{figure}[bthp] \begin{tcolorbox}[colback=white] {\if\edition\racketEd \begin{tikzpicture}[baseline=(current bounding box.center),scale=0.85] \node (Lfun) at (0,4) {\large \LangDyn{}}; \node (Lfun-2) at (4,4) {\large \LangDyn{}}; \node (Lfun-3) at (8,4) {\large \LangDyn{}}; \node (Lfun-4) at (12,4) {\large \LangDynFunRef{}}; \node (Lfun-5) at (12,2) {\large \LangAnyFunRef{}}; \node (Lfun-6) at (8,2) {\large \LangAnyFunRef{}}; \node (Lfun-7) at (4,2) {\large \LangAnyFunRef{}}; \node (F1-2) at (0,2) {\large \LangAnyFunRef{}}; \node (F1-3) at (0,0) {\large \LangAnyFunRef{}}; \node (F1-4) at (4,0) {\large \LangAnyAlloc{}}; \node (F1-5) at (8,0) {\large \LangAnyAlloc{}}; \node (F1-6) at (12,0) {\large \LangAnyAlloc{}}; \node (C3-2) at (0,-2) {\large \LangCAny{}}; \node (x86-2) at (0,-4) {\large \LangXIndCallVar{}}; \node (x86-2-1) at (0,-6) {\large \LangXIndCallVar{}}; \node (x86-2-2) at (4,-6) {\large \LangXIndCallVar{}}; \node (x86-3) at (4,-4) {\large \LangXIndCallVar{}}; \node (x86-4) at (8,-4) {\large \LangXIndCall{}}; \node (x86-5) at (8,-6) {\large \LangXIndCall{}}; \path[->,bend left=15] (Lfun) edge [above] node {\ttfamily\footnotesize shrink} (Lfun-2); \path[->,bend left=15] (Lfun-2) edge [above] node {\ttfamily\footnotesize uniquify} (Lfun-3); \path[->,bend left=15] (Lfun-3) edge [above] node {\ttfamily\footnotesize reveal\_functions} (Lfun-4); \path[->,bend left=15] (Lfun-4) edge [left] node {\ttfamily\footnotesize cast\_insert} (Lfun-5); \path[->,bend left=15] (Lfun-5) edge [below] node {\ttfamily\footnotesize reveal\_casts} (Lfun-6); \path[->,bend left=15] (Lfun-6) edge [below] node {\ttfamily\footnotesize convert\_assignments} (Lfun-7); \path[->,bend right=15] (Lfun-7) edge [above] node {\ttfamily\footnotesize convert\_to\_closures} (F1-2); \path[->,bend right=15] (F1-2) edge [right] node {\ttfamily\footnotesize limit\_functions} (F1-3); \path[->,bend right=15] (F1-3) edge [below] node {\ttfamily\footnotesize expose\_allocation} (F1-4); \path[->,bend right=15] (F1-4) edge [below] node {\ttfamily\footnotesize uncover\_get!} (F1-5); \path[->,bend left=15] (F1-5) edge [above] node {\ttfamily\footnotesize remove\_complex\_operands} (F1-6); \path[->,bend left=10] (F1-6) edge [below] node {\ttfamily\footnotesize \ \ \ \ \ explicate\_control} (C3-2); \path[->,bend left=15] (C3-2) edge [right] node {\ttfamily\footnotesize select\_instructions} (x86-2); \path[->,bend right=15] (x86-2) edge [right] node {\ttfamily\footnotesize uncover\_live} (x86-2-1); \path[->,bend right=15] (x86-2-1) edge [below] node {\ttfamily\footnotesize build\_interference} (x86-2-2); 
\path[->,bend right=15] (x86-2-2) edge [right] node {\ttfamily\footnotesize allocate\_registers} (x86-3); \path[->,bend left=15] (x86-3) edge [above] node {\ttfamily\footnotesize patch\_instructions} (x86-4); \path[->,bend left=15] (x86-4) edge [right] node {\ttfamily\footnotesize prelude\_and\_conclusion} (x86-5); \end{tikzpicture} \fi} {\if\edition\pythonEd\pythonColor \begin{tikzpicture}[baseline=(current bounding box.center),scale=0.85] \node (Lfun) at (0,4) {\large \LangDyn{}}; \node (Lfun-2) at (4,4) {\large \LangDyn{}}; \node (Lfun-3) at (8,4) {\large \LangDyn{}}; \node (Lfun-4) at (12,4) {\large \LangDynFunRef{}}; \node (Lfun-5) at (12,2) {\large \LangAnyFunRef{}}; \node (Lfun-6) at (8,2) {\large \LangAnyFunRef{}}; \node (Lfun-7) at (4,2) {\large \LangAnyFunRef{}}; \node (F1-2) at (0,2) {\large \LangAnyFunRef{}}; \node (F1-3) at (0,0) {\large \LangAnyFunRef{}}; \node (F1-5) at (4,0) {\large \LangAnyAlloc{}}; \node (F1-6) at (8,0) {\large \LangAnyAlloc{}}; \node (C3-2) at (0,-2) {\large \LangCAny{}}; \node (x86-2) at (0,-4) {\large \LangXIndCallVar{}}; \node (x86-3) at (4,-4) {\large \LangXIndCallVar{}}; \node (x86-4) at (8,-4) {\large \LangXIndCall{}}; \node (x86-5) at (12,-4) {\large \LangXIndCall{}}; \path[->,bend left=15] (Lfun) edge [above] node {\ttfamily\footnotesize shrink} (Lfun-2); \path[->,bend left=15] (Lfun-2) edge [above] node {\ttfamily\footnotesize uniquify} (Lfun-3); \path[->,bend left=15] (Lfun-3) edge [above] node {\ttfamily\footnotesize reveal\_functions} (Lfun-4); \path[->,bend left=15] (Lfun-4) edge [left] node {\ttfamily\footnotesize cast\_insert} (Lfun-5); \path[->,bend left=15] (Lfun-5) edge [below] node {\ttfamily\footnotesize reveal\_casts} (Lfun-6); \path[->,bend right=15] (Lfun-6) edge [above] node {\ttfamily\footnotesize convert\_assignments} (Lfun-7); \path[->,bend right=15] (Lfun-7) edge [above] node {\ttfamily\footnotesize convert\_to\_closures} (F1-2); \path[->,bend right=15] (F1-2) edge [right] node {\ttfamily\footnotesize limit\_functions} (F1-3); \path[->,bend right=15] (F1-3) edge [below] node {\ttfamily\footnotesize expose\_allocation} (F1-5); \path[->,bend left=15] (F1-5) edge [above] node {\ttfamily\footnotesize remove\_complex\_operands} (F1-6); \path[->,bend left=10] (F1-6) edge [below] node {\ttfamily\footnotesize \ \ \ \ \ \ \ \ explicate\_control} (C3-2); \path[->,bend right=15] (C3-2) edge [right] node {\ttfamily\footnotesize select\_instructions} (x86-2); \path[->,bend right=15] (x86-2) edge [below] node {\ttfamily\footnotesize assign\_homes} (x86-3); \path[->,bend right=15] (x86-3) edge [below] node {\ttfamily\footnotesize patch\_instructions} (x86-4); \path[->,bend left=15] (x86-4) edge [above] node {\ttfamily\footnotesize prelude\_and\_conclusion} (x86-5); \end{tikzpicture} \fi} \end{tcolorbox} \caption{Diagram of the passes for \LangDyn{}, a dynamically typed language.} \label{fig:Ldyn-passes} \end{figure} % Further Reading %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %% {\if\edition\pythonEd\pythonColor %% \chapter{Objects} %% \label{ch:Lobject} %% \index{subject}{objects} %% \index{subject}{classes} %% \setcounter{footnote}{0} %% \fi} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \chapter{Gradual Typing} \label{ch:Lgrad} \index{subject}{gradual typing} \setcounter{footnote}{0} This chapter studies the language \LangGrad{}, in which the programmer can choose between static and dynamic type checking in different parts of a program, thereby mixing the statically 
typed \LangLam{} language with the dynamically typed \LangDyn{}. There are several approaches to mixing static and dynamic typing, including multilanguage integration~\citep{Tobin-Hochstadt:2006fk,Matthews:2007zr} and hybrid type checking~\citep{Flanagan:2006mn,Gronski:2006uq}. In this chapter we focus on \emph{gradual typing}\index{subject}{gradual typing}, in which the programmer controls the amount of static versus dynamic checking by adding or removing type annotations on parameters and variables~\citep{Anderson:2002kd,Siek:2006bh}. The definition of the concrete syntax of \LangGrad{} is shown in figure~\ref{fig:Lgrad-concrete-syntax}, and the definition of its abstract syntax is shown in figure~\ref{fig:Lgrad-syntax}. The main syntactic difference between \LangLam{} and \LangGrad{} is that type annotations are optional, which is specified in the grammar using the \Param{} and \itm{ret} nonterminals. In the abstract syntax, type annotations are not optional, but we use the \CANYTY{} type when a type annotation is absent. % Both the type checker and the interpreter for \LangGrad{} require some interesting changes to enable gradual typing, which we discuss in the next two sections. \newcommand{\LgradGrammarRacket}{ \begin{array}{lcl} \Type &::=& \LP\Type \ldots \; \key{->}\; \Type\RP \\ \Param &::=& \Var \MID \LS\Var \key{:} \Type\RS \\ \itm{ret} &::=& \epsilon \MID \key{:} \Type \\ \Exp &::=& \LP\Exp \; \Exp \ldots\RP \MID \CGLAMBDA{\LP\Param\ldots\RP}{\itm{ret}}{\Exp} \\ &\MID& \LP \key{procedure-arity}~\Exp\RP \\ \Def &::=& \CGDEF{\Var}{\Param\ldots}{\itm{ret}}{\Exp} \end{array} } \newcommand{\LgradASTRacket}{ \begin{array}{lcl} \Type &::=& \LP\Type \ldots \; \key{->}\; \Type\RP \\ \Param &::=& \Var \MID \LS\Var \key{:} \Type\RS \\ \Exp &::=& \APPLY{\Exp}{\Exp\ldots} \MID \LAMBDA{\LP\Param\ldots\RP}{\Type}{\Exp} \\ \itm{op} &::=& \code{procedure-arity} \\ \Def &::=& \FUNDEF{\Var}{\LP\Param\ldots\RP}{\Type}{\code{'()}}{\Exp} \end{array} } \newcommand{\LgradGrammarPython}{ \begin{array}{lcl} \Type &::=& \key{Any} \MID \key{int} \MID \key{bool} \MID \key{tuple}\LS \Type \code{, } \ldots \RS \MID \key{Callable}\LS \LS \Type \key{,} \ldots \RS \key{, } \Type \RS \\ \Exp &::=& \CAPPLY{\Exp}{\Exp\code{,} \ldots} \MID \CLAMBDA{\Var\code{, }\ldots}{\Exp} \MID \CARITY{\Exp} \\ \Stmt &::=& \CANNASSIGN{\Var}{\Type}{\Exp} \MID \CRETURN{\Exp} \\ \Param &::=& \Var \MID \Var \key{:} \Type \\ \itm{ret} &::=& \epsilon \MID \key{->}~\Type \\ \Def &::=& \CGDEF{\Var}{\Param\key{, }\ldots}{\itm{ret}}{\Stmt^{+}} \end{array} } \newcommand{\LgradASTPython}{ \begin{array}{lcl} \Type &::=& \key{AnyType()} \MID \key{IntType()} \MID \key{BoolType()} \MID \key{VoidType()}\\ &\MID& \key{TupleType}\LP\Type^{*}\RP \MID \key{FunctionType}\LP \Type^{*} \key{, } \Type \RP \\ \Exp &::=& \CALL{\Exp}{\Exp^{*}} \MID \LAMBDA{\Var^{*}}{\Exp}\\ &\MID& \ARITY{\Exp} \\ \Stmt &::=& \ANNASSIGN{\Var}{\Type}{\Exp} \MID \RETURN{\Exp} \\ \Param &::=& \LP\Var\key{,}\Type\RP \\ \Def &::=& \FUNDEF{\Var}{\Param^{*}}{\Type}{}{\Stmt^{+}} \end{array} } \begin{figure}[tp] \centering \begin{tcolorbox}[colback=white] \small {\if\edition\racketEd \[ \begin{array}{l} \gray{\LintGrammarRacket{}} \\ \hline \gray{\LvarGrammarRacket{}} \\ \hline \gray{\LifGrammarRacket{}} \\ \hline \gray{\LwhileGrammarRacket} \\ \hline \gray{\LtupGrammarRacket} \\ \hline \LgradGrammarRacket \\ \begin{array}{lcl} \LangGradM{} &::=& \gray{\Def\ldots \; \Exp} \end{array} \end{array} \] \fi} {\if\edition\pythonEd\pythonColor \[ \begin{array}{l} 
\gray{\LintGrammarPython{}} \\ \hline \gray{\LvarGrammarPython{}} \\ \hline \gray{\LifGrammarPython{}} \\ \hline \gray{\LwhileGrammarPython} \\ \hline \gray{\LtupGrammarPython} \\ \hline \LgradGrammarPython \\ \begin{array}{lcl} \LangGradM{} &::=& \Def\ldots \Stmt\ldots \end{array} \end{array} \] \fi} \end{tcolorbox} \caption{The concrete syntax of \LangGrad{}, extending \LangVec{} (figure~\ref{fig:Lvec-concrete-syntax}).} \label{fig:Lgrad-concrete-syntax} \end{figure} \begin{figure}[tp] \centering \begin{tcolorbox}[colback=white] \small {\if\edition\racketEd \[ \begin{array}{l} \gray{\LintOpAST} \\ \hline \gray{\LvarASTRacket{}} \\ \hline \gray{\LifASTRacket{}} \\ \hline \gray{\LwhileASTRacket{}} \\ \hline \gray{\LtupASTRacket{}} \\ \hline \LgradASTRacket \\ \begin{array}{lcl} \LangGradM{} &::=& \PROGRAMDEFSEXP{\code{'()}}{\LP\Def\ldots\RP}{\Exp} \end{array} \end{array} \] \fi} {\if\edition\pythonEd\pythonColor \[ \begin{array}{l} \gray{\LintASTPython{}} \\ \hline \gray{\LvarASTPython{}} \\ \hline \gray{\LifASTPython{}} \\ \hline \gray{\LwhileASTPython} \\ \hline \gray{\LtupASTPython} \\ \hline \LgradASTPython \\ \begin{array}{lcl} \LangGradM{} &::=& \PROGRAM{}{\LS \Def \ldots \Stmt \ldots \RS} \end{array} \end{array} \] \fi} \end{tcolorbox} \caption{The abstract syntax of \LangGrad{}, extending \LangVec{} (figure~\ref{fig:Lvec-syntax}).} \label{fig:Lgrad-syntax} \end{figure} % TODO: more road map -Jeremy %\clearpage \section{Type Checking \LangGrad{} \vspace{-2pt}} \label{sec:gradual-type-check} We begin by discussing the type checking of a partially typed variant of the \code{map} example from chapter~\ref{ch:Lfun}, shown in figure~\ref{fig:gradual-map}. The \code{map} function itself is statically typed, so there is nothing special happening there with respect to type checking. On the other hand, the \code{inc} function does not have type annotations, so the type checker assigns the type \CANYTY{} to parameter \code{x} and the return type. Now consider the \code{+} operator inside \code{inc}. It expects both arguments to have type \INTTY{}, but its first argument \code{x} has type \CANYTY{}. In a gradually typed language, such differences are allowed so long as the types are \emph{consistent}; that is, they are equal except in places where there is an \CANYTY{} type. That is, the type \CANYTY{} is consistent with every other type. Figure~\ref{fig:consistent} shows the definition of the \racket{\code{consistent?}}\python{\code{consistent}} method. % So the type checker allows the \code{+} operator to be applied to \code{x} because \CANYTY{} is consistent with \INTTY{}. % Next consider the call to the \code{map} function shown in figure~\ref{fig:gradual-map} with the arguments \code{inc} and a tuple. The \code{inc} function has type \racket{\code{(Any -> Any)}}\python{\code{Callable[[Any],Any]}}, but parameter \code{f} of \code{map} has type \racket{\code{(Integer -> Integer)}}\python{\code{Callable[[int],int]}}. The type checker for \LangGrad{} accepts this call because the two types are consistent. 
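
{\if\edition\pythonEd\pythonColor
%
As a concrete illustration, the following self-contained sketch checks the
consistency judgment for the types in the \code{map} example. It uses
simplified stand-ins for the type classes and omits tuple types; it mirrors
the \code{consistent} method shown in figure~\ref{fig:consistent} but is not
the type checker's own code.
\begin{lstlisting}
from dataclasses import dataclass

@dataclass
class AnyType: pass
@dataclass
class IntType: pass
@dataclass
class FunctionType:
    param_types: list
    ret_type: object

def consistent(t1, t2):
    # AnyType is consistent with every type; otherwise compare structurally.
    match (t1, t2):
        case (AnyType(), _):
            return True
        case (_, AnyType()):
            return True
        case (FunctionType(ps1, rt1), FunctionType(ps2, rt2)):
            return (len(ps1) == len(ps2)
                    and all(consistent(p1, p2) for (p1, p2) in zip(ps1, ps2))
                    and consistent(rt1, rt2))
        case _:
            return t1 == t2

inc_ty = FunctionType([AnyType()], AnyType())   # type of inc
f_ty = FunctionType([IntType()], IntType())     # type of parameter f
assert consistent(inc_ty, f_ty)
assert not consistent(IntType(), FunctionType([], IntType()))
\end{lstlisting}
%
\fi}
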
\begin{figure}[btp] % gradual_test_9.rkt \begin{tcolorbox}[colback=white] {\if\edition\racketEd \begin{lstlisting} (define (map [f : (Integer -> Integer)] [v : (Vector Integer Integer)]) : (Vector Integer Integer) (vector (f (vector-ref v 0)) (f (vector-ref v 1)))) (define (inc x) (+ x 1)) (vector-ref (map inc (vector 0 41)) 1) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} def map(f : Callable[[int], int], v : tuple[int,int]) -> tuple[int,int]: return f(v[0]), f(v[1]) def inc(x): return x + 1 t = map(inc, (0, 41)) print(t[1]) \end{lstlisting} \fi} \end{tcolorbox} \caption{A partially typed version of the \code{map} example.} \label{fig:gradual-map} \end{figure} \begin{figure}[tbp] \begin{tcolorbox}[colback=white] {\if\edition\racketEd \begin{lstlisting} (define/public (consistent? t1 t2) (match* (t1 t2) [('Integer 'Integer) #t] [('Boolean 'Boolean) #t] [('Void 'Void) #t] [('Any t2) #t] [(t1 'Any) #t] [(`(Vector ,ts1 ...) `(Vector ,ts2 ...)) (for/and ([t1 ts1] [t2 ts2]) (consistent? t1 t2))] [(`(,ts1 ... -> ,rt1) `(,ts2 ... -> ,rt2)) (and (for/and ([t1 ts1] [t2 ts2]) (consistent? t1 t2)) (consistent? rt1 rt2))] [(other wise) #f])) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} def consistent(self, t1, t2): match (t1, t2): case (AnyType(), _): return True case (_, AnyType()): return True case (FunctionType(ps1, rt1), FunctionType(ps2, rt2)): return all(map(self.consistent, ps1, ps2)) and self.consistent(rt1, rt2) case (TupleType(ts1), TupleType(ts2)): return all(map(self.consistent, ts1, ts2)) case (_, _): return t1 == t2 \end{lstlisting} \fi} \vspace{-5pt} \end{tcolorbox} \caption{The consistency method on types.} \label{fig:consistent} \end{figure} It is also helpful to consider how gradual typing handles programs with an error, such as applying \code{map} to a function that sometimes returns a Boolean, as shown in figure~\ref{fig:map-maybe_inc}. The type checker for \LangGrad{} accepts this program because the type of \code{maybe\_inc} is consistent with the type of parameter \code{f} of \code{map}; that is, \racket{\code{(Any -> Any)}}\python{\code{Callable[[Any],Any]}} is consistent with \racket{\code{(Integer -> Integer)}}\python{\code{Callable[[int],int]}}. One might say that a gradual type checker is optimistic in that it accepts programs that might execute without a runtime type error. % The definition of the type checker for \LangGrad{} is shown in figures~\ref{fig:type-check-Lgradual-1}, \ref{fig:type-check-Lgradual-2}, and \ref{fig:type-check-Lgradual-3}. %% \begin{figure}[tp] %% \centering %% \fbox{ %% \begin{minipage}{0.96\textwidth} %% \small %% \[ %% \begin{array}{lcl} %% \Exp &::=& \ldots \MID \CAST{\Exp}{\Type}{\Type} \\ %% \LangCastM{} &::=& \gray{ \PROGRAMDEFSEXP{\code{'()}}{\LP\Def\ldots\RP}{\Exp} } %% \end{array} %% \] %% \end{minipage} %% } %% \caption{The abstract syntax of \LangCast{}, extending \LangLam{} (figure~\ref{fig:Lwhile-syntax}).} %% \label{fig:Lgrad-prime-syntax} %% \end{figure} \begin{figure}[tbp] \begin{tcolorbox}[colback=white] {\if\edition\racketEd \begin{lstlisting} (define (map [f : (Integer -> Integer)] [v : (Vector Integer Integer)]) : (Vector Integer Integer) (vector (f (vector-ref v 0)) (f (vector-ref v 1)))) (define (inc x) (+ x 1)) (define (true) #t) (define (maybe_inc x) (if (eq?
0 (read)) (inc x) (true))) (vector-ref (map maybe_inc (vector 0 41)) 0) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} def map(f : Callable[[int], int], v : tuple[int,int]) -> tuple[int,int]: return f(v[0]), f(v[1]) def inc(x): return x + 1 def true(): return True def maybe_inc(x): return inc(x) if input_int() == 0 else true() t = map(maybe_inc, (0, 41)) print(t[1]) \end{lstlisting} \fi} \vspace{-5pt} \end{tcolorbox} \caption{A variant of the \code{map} example with an error.} \label{fig:map-maybe_inc} \end{figure} Running this program with input \code{1} triggers an error when the \code{maybe\_inc} function returns \racket{\code{\#t}}\python{\code{True}}. The \LangGrad{} language performs checking at runtime to ensure the integrity of the static types, such as the \racket{\code{(Integer -> Integer)}}\python{\code{Callable[[int],int]}} annotation on parameter \code{f} of \code{map}. Here we give a preview of how the runtime checking is accomplished; the following sections provide the details. The runtime checking is carried out by a new \code{Cast} AST node that is generated in a new pass named \code{cast\_insert}. The output of \code{cast\_insert} is a program in the \LangCast{} language, which simply adds \code{Cast} and \CANYTY{} to \LangLam{}. % Figure~\ref{fig:map-cast} shows the output of \code{cast\_insert} for \code{map} and \code{maybe\_inc}. The idea is that \code{Cast} is inserted every time the type checker encounters two types that are consistent but not equal. In the \code{inc} function, \code{x} is cast to \INTTY{} and the result of the \code{+} is cast to \CANYTY{}. In the call to \code{map}, the \code{inc} argument is cast from \racket{\code{(Any -> Any)}} \python{\code{Callable[[Any], Any]}} to \racket{\code{(Integer -> Integer)}}\python{\code{Callable[[int],int]}}. % In the next section we see how to interpret the \code{Cast} node. \begin{figure}[btp] \begin{tcolorbox}[colback=white] {\if\edition\racketEd \begin{lstlisting} (define (map [f : (Integer -> Integer)] [v : (Vector Integer Integer)]) : (Vector Integer Integer) (vector (f (vector-ref v 0)) (f (vector-ref v 1)))) (define (inc [x : Any]) : Any (cast (+ (cast x Any Integer) 1) Integer Any)) (define (true) : Any (cast #t Boolean Any)) (define (maybe_inc [x : Any]) : Any (if (eq? 
0 (read)) (inc x) (true))) (vector-ref (map (cast maybe_inc (Any -> Any) (Integer -> Integer)) (vector 0 41)) 0) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting}[basicstyle=\ttfamily\footnotesize] def map(f : Callable[[int], int], v : tuple[int,int]) -> tuple[int,int]: return f(v[0]), f(v[1]) def inc(x : Any) -> Any: return Cast(Cast(x, Any, int) + 1, int, Any) def true() -> Any: return Cast(True, bool, Any) def maybe_inc(x : Any) -> Any: return inc(x) if input_int() == 0 else true() t = map(Cast(maybe_inc, Callable[[Any], Any], Callable[[int], int]), (0, 41)) print(t[1]) \end{lstlisting} \fi} \vspace{-5pt} \end{tcolorbox} \caption{Output of the \code{cast\_insert} pass for the \code{map} and \code{maybe\_inc} example.} \label{fig:map-cast} \end{figure} {\if\edition\pythonEd\pythonColor \begin{figure}[tbp] \begin{tcolorbox}[colback=white] \begin{lstlisting}[basicstyle=\ttfamily\footnotesize] class TypeCheckLgrad(TypeCheckLlambda): def type_check_exp(self, e, env) -> Type: match e: case Name(id): return env[id] case Constant(value) if isinstance(value, bool): return BoolType() case Constant(value) if isinstance(value, int): return IntType() case Call(Name('input_int'), []): return IntType() case BinOp(left, op, right): left_type = self.type_check_exp(left, env) self.check_consistent(left_type, IntType(), left) right_type = self.type_check_exp(right, env) self.check_consistent(right_type, IntType(), right) return IntType() case IfExp(test, body, orelse): test_t = self.type_check_exp(test, env) self.check_consistent(test_t, BoolType(), test) body_t = self.type_check_exp(body, env) orelse_t = self.type_check_exp(orelse, env) self.check_consistent(body_t, orelse_t, e) return self.join_types(body_t, orelse_t) case Call(func, args): func_t = self.type_check_exp(func, env) args_t = [self.type_check_exp(arg, env) for arg in args] match func_t: case FunctionType(params_t, return_t) if len(params_t) == len(args_t): for (arg_t, param_t) in zip(args_t, params_t): self.check_consistent(param_t, arg_t, e) return return_t case AnyType(): return AnyType() case _: raise Exception('type_check_exp: in call, unexpected ' + repr(func_t)) ... 
case _: raise Exception('type_check_exp: unexpected ' + repr(e)) \end{lstlisting} \end{tcolorbox} \caption{Type checking expressions in the \LangGrad{} language.} \label{fig:type-check-Lgradual-1} \end{figure} \begin{figure}[tbp] \begin{tcolorbox}[colback=white] \begin{lstlisting} def check_exp(self, e, expected_ty, env): match e: case Lambda(params, body): match expected_ty: case FunctionType(params_t, return_t): new_env = env.copy().update(zip(params, params_t)) e.has_type = expected_ty body_ty = self.type_check_exp(body, new_env) self.check_consistent(body_ty, return_t) case AnyType(): new_env = env.copy().update((p, AnyType()) for p in params) e.has_type = FunctionType([AnyType()for _ in params],AnyType()) body_ty = self.type_check_exp(body, new_env) case _: raise Exception('lambda is not of type ' + str(expected_ty)) case _: e_ty = self.type_check_exp(e, env) self.check_consistent(e_ty, expected_ty, e) \end{lstlisting} \end{tcolorbox} \caption{Checking expressions with respect to a type in the \LangGrad{} language.} \label{fig:type-check-Lgradual-2} \end{figure} \begin{figure}[tbp] \begin{tcolorbox}[colback=white] \begin{lstlisting} def type_check_stmt(self, s, env, return_type): match s: case Assign([Name(id)], value): value_ty = self.type_check_exp(value, env) if id in env: self.check_consistent(env[id], value_ty, value) else: env[id] = value_ty ... case _: raise Exception('type_check_stmts: unexpected ' + repr(ss)) def type_check_stmts(self, ss, env, return_type): for s in ss: self.type_check_stmt(s, env, return_type) \end{lstlisting} \end{tcolorbox} \caption{Type checking statements in the \LangGrad{} language.} \label{fig:type-check-Lgradual-3} \end{figure} \begin{figure}[tbp] \begin{tcolorbox}[colback=white] \begin{lstlisting} def join_types(self, t1, t2): match (t1, t2): case (AnyType(), _): return t2 case (_, AnyType()): return t1 case (FunctionType(ps1, rt1), FunctionType(ps2, rt2)): return FunctionType(list(map(self.join_types, ps1, ps2)), self.join_types(rt1,rt2)) case (TupleType(ts1), TupleType(ts2)): return TupleType(list(map(self.join_types, ts1, ts2))) case (_, _): return t1 def check_consistent(self, t1, t2, e): if not self.consistent(t1, t2): raise Exception('error: ' + repr(t1) + ' inconsistent with ' \ + repr(t2) + ' in ' + repr(e)) \end{lstlisting} \end{tcolorbox} \caption{Auxiliary methods for type checking \LangGrad{}.} \label{fig:type-check-Lgradual-aux} \end{figure} \fi} {\if\edition\racketEd \begin{figure}[tbp] \begin{tcolorbox}[colback=white] \begin{lstlisting}[basicstyle=\ttfamily\footnotesize] (define/override (type-check-exp env) (lambda (e) (define recur (type-check-exp env)) (match e [(Prim op es) #:when (not (set-member? explicit-prim-ops op)) (define-values (new-es ts) (for/lists (exprs types) ([e es]) (recur e))) (define t-ret (type-check-op op ts e)) (values (Prim op new-es) t-ret)] [(Prim 'eq? (list e1 e2)) (define-values (e1^ t1) (recur e1)) (define-values (e2^ t2) (recur e2)) (check-consistent? t1 t2 e) (define T (meet t1 t2)) (values (Prim 'eq? (list e1^ e2^)) 'Boolean)] [(Prim 'and (list e1 e2)) (recur (If e1 e2 (Bool #f)))] [(Prim 'or (list e1 e2)) (define tmp (gensym 'tmp)) (recur (Let tmp e1 (If (Var tmp) (Var tmp) e2)))] [(If e1 e2 e3) (define-values (e1^ T1) (recur e1)) (define-values (e2^ T2) (recur e2)) (define-values (e3^ T3) (recur e3)) (check-consistent? T1 'Boolean e) (check-consistent? 
T2 T3 e) (define Tif (meet T2 T3)) (values (If e1^ e2^ e3^) Tif)] [(SetBang x e1) (define-values (e1^ T1) (recur e1)) (define varT (dict-ref env x)) (check-consistent? T1 varT e) (values (SetBang x e1^) 'Void)] [(WhileLoop e1 e2) (define-values (e1^ T1) (recur e1)) (check-consistent? T1 'Boolean e) (define-values (e2^ T2) ((type-check-exp env) e2)) (values (WhileLoop e1^ e2^) 'Void)] [(Prim 'vector-length (list e1)) (define-values (e1^ t) (recur e1)) (match t [`(Vector ,ts ...) (values (Prim 'vector-length (list e1^)) 'Integer)] ['Any (values (Prim 'vector-length (list e1^)) 'Integer)])] \end{lstlisting} \end{tcolorbox} \caption{Type checker for the \LangGrad{} language, part 1.} \label{fig:type-check-Lgradual-1} \end{figure} \begin{figure}[tbp] \begin{tcolorbox}[colback=white] \begin{lstlisting}[basicstyle=\ttfamily\footnotesize] [(Prim 'vector-ref (list e1 e2)) (define-values (e1^ t1) (recur e1)) (define-values (e2^ t2) (recur e2)) (check-consistent? t2 'Integer e) (match t1 [`(Vector ,ts ...) (match e2^ [(Int i) (unless (and (0 . <= . i) (i . < . (length ts))) (error 'type-check "invalid index ~a in ~a" i e)) (values (Prim 'vector-ref (list e1^ (Int i))) (list-ref ts i))] [else (values (Prim 'vector-ref (list e1^ e2^)) 'Any)])] ['Any (values (Prim 'vector-ref (list e1^ e2^)) 'Any)] [else (error 'type-check "expected vector not ~a\nin ~v" t1 e)])] [(Prim 'vector-set! (list e1 e2 e3) ) (define-values (e1^ t1) (recur e1)) (define-values (e2^ t2) (recur e2)) (define-values (e3^ t3) (recur e3)) (check-consistent? t2 'Integer e) (match t1 [`(Vector ,ts ...) (match e2^ [(Int i) (unless (and (0 . <= . i) (i . < . (length ts))) (error 'type-check "invalid index ~a in ~a" i e)) (check-consistent? (list-ref ts i) t3 e) (values (Prim 'vector-set! (list e1^ (Int i) e3^)) 'Void)] [else (values (Prim 'vector-set! (list e1^ e2^ e3^)) 'Void)])] ['Any (values (Prim 'vector-set! (list e1^ e2^ e3^)) 'Void)] [else (error 'type-check "expected vector not ~a\nin ~v" t1 e)])] [(Apply e1 e2s) (define-values (e1^ T1) (recur e1)) (define-values (e2s^ T2s) (for/lists (e* ty*) ([e2 e2s]) (recur e2))) (match T1 [`(,T1ps ... -> ,T1rt) (for ([T2 T2s] [Tp T1ps]) (check-consistent? T2 Tp e)) (values (Apply e1^ e2s^) T1rt)] [`Any (values (Apply e1^ e2s^) 'Any)] [else (error 'type-check "expected function not ~a\nin ~v" T1 e)])] [(Lambda params Tr e1) (define-values (xs Ts) (for/lists (l1 l2) ([p params]) (match p [`[,x : ,T] (values x T)] [(? symbol? x) (values x 'Any)]))) (define-values (e1^ T1) ((type-check-exp (append (map cons xs Ts) env)) e1)) (check-consistent? Tr T1 e) (values (Lambda (for/list ([x xs] [T Ts]) `[,x : ,T]) Tr e1^) `(,@Ts -> ,Tr))] [else ((super type-check-exp env) e)] ))) \end{lstlisting} \end{tcolorbox} \caption{Type checker for the \LangGrad{} language, part 2.} \label{fig:type-check-Lgradual-2} \end{figure} \begin{figure}[tbp] \begin{tcolorbox}[colback=white] \begin{lstlisting} (define/override (type-check-def env) (lambda (e) (match e [(Def f params rt info body) (define-values (xs ps) (for/lists (l1 l2) ([p params]) (match p [`[,x : ,T] (values x T)] [(? symbol? x) (values x 'Any)]))) (define new-env (append (map cons xs ps) env)) (define-values (body^ ty^) ((type-check-exp new-env) body)) (check-consistent? ty^ rt e) (Def f (for/list ([x xs] [T ps]) `[,x : ,T]) rt info body^)] [else (error 'type-check "ill-formed function definition ~a" e)] ))) (define/override (type-check-program e) (match e [(Program info body) (define-values (body^ ty) ((type-check-exp '()) body)) (check-consistent? 
ty 'Integer e) (ProgramDefsExp info '() body^)] [(ProgramDefsExp info ds body) (define new-env (for/list ([d ds]) (cons (Def-name d) (fun-def-type d)))) (define ds^ (for/list ([d ds]) ((type-check-def new-env) d))) (define-values (body^ ty) ((type-check-exp new-env) body)) (check-consistent? ty 'Integer e) (ProgramDefsExp info ds^ body^)] [else (super type-check-program e)])) \end{lstlisting} \end{tcolorbox} \caption{Type checker for the \LangGrad{} language, part 3.} \label{fig:type-check-Lgradual-3} \end{figure} \begin{figure}[tbp] \begin{tcolorbox}[colback=white] \begin{lstlisting} (define/public (join t1 t2) (match* (t1 t2) [('Integer 'Integer) 'Integer] [('Boolean 'Boolean) 'Boolean] [('Void 'Void) 'Void] [('Any t2) t2] [(t1 'Any) t1] [(`(Vector ,ts1 ...) `(Vector ,ts2 ...)) `(Vector ,@(for/list ([t1 ts1] [t2 ts2]) (join t1 t2)))] [(`(,ts1 ... -> ,rt1) `(,ts2 ... -> ,rt2)) `(,@(for/list ([t1 ts1] [t2 ts2]) (join t1 t2)) -> ,(join rt1 rt2))])) (define/public (meet t1 t2) (match* (t1 t2) [('Integer 'Integer) 'Integer] [('Boolean 'Boolean) 'Boolean] [('Void 'Void) 'Void] [('Any t2) 'Any] [(t1 'Any) 'Any] [(`(Vector ,ts1 ...) `(Vector ,ts2 ...)) `(Vector ,@(for/list ([t1 ts1] [t2 ts2]) (meet t1 t2)))] [(`(,ts1 ... -> ,rt1) `(,ts2 ... -> ,rt2)) `(,@(for/list ([t1 ts1] [t2 ts2]) (meet t1 t2)) -> ,(meet rt1 rt2))])) (define/public (check-consistent? t1 t2 e) (unless (consistent? t1 t2) (error 'type-check "~a is inconsistent with ~a\nin ~v" t1 t2 e))) (define explicit-prim-ops (set-union (type-predicates) (set 'procedure-arity 'eq? 'not 'and 'or 'vector 'vector-length 'vector-ref 'vector-set! 'any-vector-length 'any-vector-ref 'any-vector-set!))) (define/override (fun-def-type d) (match d [(Def f params rt info body) (define ps (for/list ([p params]) (match p [`[,x : ,T] T] [(? symbol?) 'Any] [else (error 'fun-def-type "unmatched parameter ~a" p)]))) `(,@ps -> ,rt)] [else (error 'fun-def-type "ill-formed definition in ~a" d)])) \end{lstlisting} \end{tcolorbox} \caption{Auxiliary functions for type checking \LangGrad{}.} \label{fig:type-check-Lgradual-aux} \end{figure} \fi} \clearpage \section{Interpreting \LangCast{} \vspace{-2pt}} \label{sec:interp-casts} The runtime behavior of casts involving simple types such as \INTTY{} and \BOOLTY{} is straightforward. For example, a cast from \INTTY{} to \CANYTY{} can be accomplished with the \code{Inject} operator of \LangAny{}, which puts the integer into a tagged value (figure~\ref{fig:interp-Lany}). Similarly, a cast from \CANYTY{} to \INTTY{} is accomplished with the \code{Project} operator, by checking the value's tag and either retrieving the underlying integer or signaling an error if the tag is not the one for integers (figure~\ref{fig:interp-Lany-aux}). % Things get more interesting with casts involving \racket{function and tuple types}\python{function, tuple, and array types}. Consider the cast of the function \code{maybe\_inc} from \racket{\code{(Any -> Any)}}\python{\code{Callable[[Any], Any]}} to \racket{\code{(Integer -> Integer)}}\python{\code{Callable[[int], int]}} shown in figure~\ref{fig:map-maybe_inc}. When the \code{maybe\_inc} function flows through this cast at runtime, we don't know whether it will return an integer, because that depends on the input from the user. The \LangCast{} interpreter therefore delays the checking of the cast until the function is applied. 
To do so it wraps \code{maybe\_inc} in a new function that casts its parameter from \INTTY{} to \CANYTY{}, applies \code{maybe\_inc}, and then casts the return value from \CANYTY{} to \INTTY{}. {\if\edition\pythonEd\pythonColor % There are further complications regarding casts on mutable data, such as the \code{list} type introduced in the challenge assignment of section~\ref{sec:arrays}. % \fi} % Consider the example presented in figure~\ref{fig:map-bang} that defines a partially typed version of \code{map} whose parameter \code{v} has type \racket{\code{(Vector Any Any)}}\python{\code{list[Any]}} and that updates \code{v} in place instead of returning a new tuple. We name this function \code{map\_inplace}. We apply \code{map\_inplace} to \racket{a tuple}\python{an array} of integers, so the type checker inserts a cast from \racket{\code{(Vector Integer Integer)}}\python{\code{list[int]}} to \racket{\code{(Vector Any Any)}}\python{\code{list[Any]}}. A naive way for the \LangCast{} interpreter to cast between \racket{tuple}\python{array} types would be to build a new \racket{tuple}\python{array} whose elements are the result of casting each of the original elements to the target type. However, this approach is not valid for mutable data structures. In the example of figure~\ref{fig:map-bang}, if the cast created a new \racket{tuple}\python{array}, then the updates inside \code{map\_inplace} would happen to the new \racket{tuple}\python{array} and not the original one. Instead the interpreter needs to create a new kind of value, a \emph{proxy}, that intercepts every \racket{tuple}\python{array} operation. On a read, the proxy reads from the underlying \racket{tuple}\python{array} and then applies a cast to the resulting value. On a write, the proxy casts the argument value and then performs the write to the underlying \racket{tuple}\python{array}. \racket{ For the first \code{(vector-ref v 0)} in \code{map\_inplace}, the proxy casts \code{0} from \INTTY{} to \CANYTY{}. For the first \code{vector-set!}, the proxy casts a tagged \code{1} from \CANYTY{} to \INTTY{}. } \python{ For the subscript \code{v[i]} in \code{f(v[i])} of \code{map\_inplace}, the proxy casts the integer from \INTTY{} to \CANYTY{}. For the subscript on the left of the assignment, the proxy casts the tagged value from \CANYTY{} to \INTTY{}. } Finally we consider casts between the \CANYTY{} type and higher-order types such as functions and \racket{tuples}\python{lists}. Figure~\ref{fig:map-any} shows a variant of \code{map\_inplace} in which parameter \code{v} does not have a type annotation, so it is given type \CANYTY{}. In the call to \code{map\_inplace}, the \racket{tuple}\python{list} has type \racket{\code{(Vector Integer Integer)}}\python{\code{list[int]}}, so the type checker inserts a cast to \CANYTY{}. A first thought is to use \code{Inject}, but that doesn't work because \racket{\code{(Vector Integer Integer)}}\python{\code{list[int]}} is not a flat type. Instead, we must first cast to \racket{\code{(Vector Any Any)}}\python{\code{list[Any]}}, which is flat, and then inject to \CANYTY{}. \begin{figure}[tbp] \begin{tcolorbox}[colback=white] % gradual_test_11.rkt {\if\edition\racketEd \begin{lstlisting} (define (map_inplace [f : (Any -> Any)] [v : (Vector Any Any)]) : Void (begin (vector-set! v 0 (f (vector-ref v 0))) (vector-set! 
v 1 (f (vector-ref v 1))))) (define (inc x) (+ x 1)) (let ([v (vector 0 41)]) (begin (map_inplace inc v) (vector-ref v 1))) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} def map_inplace(f : Callable[[int], int], v : list[Any]) -> None: i = 0 while i != len(v): v[i] = f(v[i]) i = i + 1 def inc(x : int) -> int: return x + 1 v = [0, 41] map_inplace(inc, v) print(v[1]) \end{lstlisting} \fi} \end{tcolorbox} \caption{An example involving casts on arrays.} \label{fig:map-bang} \end{figure} \begin{figure}[btp] \begin{tcolorbox}[colback=white] {\if\edition\racketEd \begin{lstlisting} (define (map_inplace [f : (Any -> Any)] v) : Void (begin (vector-set! v 0 (f (vector-ref v 0))) (vector-set! v 1 (f (vector-ref v 1))))) (define (inc x) (+ x 1)) (let ([v (vector 0 41)]) (begin (map_inplace inc v) (vector-ref v 1))) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} def map_inplace(f : Callable[[Any], Any], v) -> None: i = 0 while i != len(v): v[i] = f(v[i]) i = i + 1 def inc(x): return x + 1 v = [0, 41] map_inplace(inc, v) print(v[1]) \end{lstlisting} \fi} \end{tcolorbox} \caption{Casting \racket{a tuple}\python{an array} to \CANYTY{}.} \label{fig:map-any} \end{figure} \begin{figure}[tbp] \begin{tcolorbox}[colback=white] {\if\edition\racketEd \begin{lstlisting}[basicstyle=\ttfamily\footnotesize] (define/public (apply_cast v s t) (match* (s t) [(t1 t2) #:when (equal? t1 t2) v] [('Any t2) (match t2 [`(,ts ... -> ,rt) (define any->any `(,@(for/list ([t ts]) 'Any) -> Any)) (define v^ (apply-project v any->any)) (apply_cast v^ any->any `(,@ts -> ,rt))] [`(Vector ,ts ...) (define vec-any `(Vector ,@(for/list ([t ts]) 'Any))) (define v^ (apply-project v vec-any)) (apply_cast v^ vec-any `(Vector ,@ts))] [else (apply-project v t2)])] [(t1 'Any) (match t1 [`(,ts ... -> ,rt) (define any->any `(,@(for/list ([t ts]) 'Any) -> Any)) (define v^ (apply_cast v `(,@ts -> ,rt) any->any)) (apply-inject v^ (any-tag any->any))] [`(Vector ,ts ...) (define vec-any `(Vector ,@(for/list ([t ts]) 'Any))) (define v^ (apply_cast v `(Vector ,@ts) vec-any)) (apply-inject v^ (any-tag vec-any))] [else (apply-inject v (any-tag t1))])] [(`(Vector ,ts1 ...) `(Vector ,ts2 ...)) (define x (gensym 'x)) (define cast-reads (for/list ([t1 ts1] [t2 ts2]) `(function (,x) ,(Cast (Var x) t1 t2) ()))) (define cast-writes (for/list ([t1 ts1] [t2 ts2]) `(function (,x) ,(Cast (Var x) t2 t1) ()))) `(vector-proxy ,(vector v (apply vector cast-reads) (apply vector cast-writes)))] [(`(,ts1 ... -> ,rt1) `(,ts2 ... 
-> ,rt2)) (define xs (for/list ([t2 ts2]) (gensym 'x))) `(function ,xs ,(Cast (Apply (Value v) (for/list ([x xs][t1 ts1][t2 ts2]) (Cast (Var x) t2 t1))) rt1 rt2) ())] )) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting}[basicstyle=\ttfamily\footnotesize] def apply_cast(self, value, src, tgt): match (src, tgt): case (AnyType(), FunctionType(ps2, rt2)): anyfun = FunctionType([AnyType() for p in ps2], AnyType()) return self.apply_cast(self.apply_project(value, anyfun), anyfun, tgt) case (AnyType(), TupleType(ts2)): anytup = TupleType([AnyType() for t1 in ts2]) return self.apply_cast(self.apply_project(value, anytup), anytup, tgt) case (AnyType(), ListType(t2)): anylist = ListType([AnyType() for t1 in ts2]) return self.apply_cast(self.apply_project(value, anylist), anylist, tgt) case (AnyType(), AnyType()): return value case (AnyType(), _): return self.apply_project(value, tgt) case (FunctionType(ps1,rt1), AnyType()): anyfun = FunctionType([AnyType() for p in ps1], AnyType()) return self.apply_inject(self.apply_cast(value, src, anyfun), anyfun) case (TupleType(ts1), AnyType()): anytup = TupleType([AnyType() for t1 in ts1]) return self.apply_inject(self.apply_cast(value, src, anytup), anytup) case (ListType(t1), AnyType()): anylist = ListType(AnyType()) return self.apply_inject(self.apply_cast(value,src,anylist), anylist) case (_, AnyType()): return self.apply_inject(value, src) case (FunctionType(ps1, rt1), FunctionType(ps2, rt2)): params = [generate_name('x') for p in ps2] args = [Cast(Name(x), t2, t1) for (x,t1,t2) in zip(params, ps1, ps2)] body = Cast(Call(ValueExp(value), args), rt1, rt2) return Function('cast', params, [Return(body)], {}) case (TupleType(ts1), TupleType(ts2)): x = generate_name('x') reads = [Function('cast', [x], [Return(Cast(Name(x), t1, t2))], {}) for (t1,t2) in zip(ts1,ts2)] return ProxiedTuple(value, reads) case (ListType(t1), ListType(t2)): x = generate_name('x') read = Function('cast', [x], [Return(Cast(Name(x), t1, t2))], {}) write = Function('cast', [x], [Return(Cast(Name(x), t2, t1))], {}) return ProxiedList(value, read, write) case (t1, t2) if t1 == t2: return value case (t1, t2): raise Exception('apply_cast unexpected ' + repr(src) + ' ' + repr(tgt)) def apply_inject(self, value, src): return Tagged(value, self.type_to_tag(src)) def apply_project(self, value, tgt): match value: case Tagged(val, tag) if self.type_to_tag(tgt) == tag: return val case _: raise Exception('apply_project, unexpected ' + repr(value)) \end{lstlisting} \fi} \end{tcolorbox} \caption{The \code{apply\_cast} auxiliary method.} \label{fig:apply_cast} \end{figure} The \LangCast{} interpreter uses an auxiliary function named \code{apply\_cast} to cast a value from a source type to a target type, shown in figure~\ref{fig:apply_cast}. You'll find that it handles all the kinds of casts that we've discussed in this section. % The definition of the interpreter for \LangCast{} is shown in figure~\ref{fig:interp-Lcast}, with the case for \code{Cast} dispatching to \code{apply\_cast}. \racket{To handle the addition of tuple proxies, we update the tuple primitives in \code{interp-op} using the functions given in figure~\ref{fig:guarded-tuple}.} Next we turn to the individual passes needed for compiling \LangGrad{}. 
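
{\if\edition\pythonEd\pythonColor
To see the behavior that \code{apply\_cast} implements without reference to the compiler's data structures, the following self-contained sketch acts out the two interesting cases in plain Python: a cast on a function wraps the function so that the check happens at each call, and a cast on an array produces a proxy that casts on every read and write. The \code{Tagged}, \code{inject}, \code{project}, \code{cast\_any\_fun\_to\_int\_fun}, and \code{ListProxy} definitions here are illustrative stand-ins, not the classes used by the interpreter or the runtime.
\begin{lstlisting}[basicstyle=\ttfamily\footnotesize]
class Tagged:                      # a tagged value plays the role of Any
    def __init__(self, value, tag):
        self.value, self.tag = value, tag

def inject(value, tag):            # e.g. int to Any
    return Tagged(value, tag)

def project(tagged, tag):          # e.g. Any to int, checking the tag
    assert tagged.tag == tag, 'cast error'
    return tagged.value

# Casting a function from Callable[[Any],Any] to Callable[[int],int]
# delays the check until the wrapper is applied.
def cast_any_fun_to_int_fun(f):
    return lambda x: project(f(inject(x, 'int')), 'int')

any_inc = lambda a: inject(project(a, 'int') + 1, 'int')  # has type Any -> Any
inc = cast_any_fun_to_int_fun(any_inc)
print(inc(41))            # prints 42; a non-integer result would fail the check

# Casting list[int] to list[Any] creates a proxy that casts on
# every read and write to the underlying list.
class ListProxy:
    def __init__(self, lst, cast_read, cast_write):
        self.lst, self.cast_read, self.cast_write = lst, cast_read, cast_write
    def __getitem__(self, i):
        return self.cast_read(self.lst[i])
    def __setitem__(self, i, v):
        self.lst[i] = self.cast_write(v)
    def __len__(self):
        return len(self.lst)

v = [0, 41]
p = ListProxy(v, lambda x: inject(x, 'int'), lambda x: project(x, 'int'))
x = p[0]                  # read: 0 comes back tagged as an Any value
p[1] = inject(42, 'int')  # write: the tagged 42 is projected back to 42
print(v[1])               # prints 42; the update went to the original list
\end{lstlisting}
\fi}
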
\begin{figure}[tbp] \begin{tcolorbox}[colback=white] {\if\edition\racketEd \begin{lstlisting}[basicstyle=\ttfamily\footnotesize] (define interp-Lcast-class (class interp-Llambda-class (super-new) (inherit apply-fun apply-inject apply-project) (define/override (interp-op op) (match op ['vector-length guarded-vector-length] ['vector-ref guarded-vector-ref] ['vector-set! guarded-vector-set!] ['any-vector-ref (lambda (v i) (match v [`(tagged ,v^ ,tg) (guarded-vector-ref v^ i)]))] ['any-vector-set! (lambda (v i a) (match v [`(tagged ,v^ ,tg) (guarded-vector-set! v^ i a)]))] ['any-vector-length (lambda (v) (match v [`(tagged ,v^ ,tg) (guarded-vector-length v^)]))] [else (super interp-op op)] )) (define/override ((interp-exp env) e) (define (recur e) ((interp-exp env) e)) (match e [(Value v) v] [(Cast e src tgt) (apply_cast (recur e) src tgt)] [else ((super interp-exp env) e)])) )) (define (interp-Lcast p) (send (new interp-Lcast-class) interp-program p)) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting}[basicstyle=\ttfamily\footnotesize] class InterpLcast(InterpLany): def interp_exp(self, e, env): match e: case Cast(value, src, tgt): v = self.interp_exp(value, env) return self.apply_cast(v, src, tgt) case ValueExp(value): return value ... case _: return super().interp_exp(e, env) \end{lstlisting} \fi} \end{tcolorbox} \caption{The interpreter for \LangCast{}.} \label{fig:interp-Lcast} \end{figure} {\if\edition\racketEd \begin{figure}[tbp] \begin{tcolorbox}[colback=white] \begin{lstlisting}[basicstyle=\ttfamily\footnotesize] (define (guarded-vector-ref vec i) (match vec [`(vector-proxy ,proxy) (define val (guarded-vector-ref (vector-ref proxy 0) i)) (define rd (vector-ref (vector-ref proxy 1) i)) (apply-fun rd (list val) 'guarded-vector-ref)] [else (vector-ref vec i)])) (define (guarded-vector-set! vec i arg) (match vec [`(vector-proxy ,proxy) (define wr (vector-ref (vector-ref proxy 2) i)) (define arg^ (apply-fun wr (list arg) 'guarded-vector-set!)) (guarded-vector-set! (vector-ref proxy 0) i arg^)] [else (vector-set! vec i arg)])) (define (guarded-vector-length vec) (match vec [`(vector-proxy ,proxy) (guarded-vector-length (vector-ref proxy 0))] [else (vector-length vec)])) \end{lstlisting} %% {\if\edition\pythonEd\pythonColor %% \begin{lstlisting}[basicstyle=\ttfamily\footnotesize] %% UNDER CONSTRUCTION %% \end{lstlisting} %% \fi} \end{tcolorbox} \caption{The \code{guarded-vector} auxiliary functions.} \label{fig:guarded-tuple} \end{figure} \fi} {\if\edition\pythonEd\pythonColor \section{Overload Resolution \vspace{-2pt}} \label{sec:gradual-resolution} Recall that when we added support for arrays in section~\ref{sec:arrays}, the syntax for the array operations were the same as for tuple operations (for example, accessing an element and getting the length). So we performed overload resolution, with a pass named \code{resolve}, to separate the array and tuple operations. In particular, we introduced the primitives \code{array\_load}, \code{array\_store}, and \code{array\_len}. For gradual typing, we further overload these operators to work on values of type \CANYTY{}. Thus, the \code{resolve} pass should be updated with new cases for the \CANYTY{} type, translating the element access and length operations to the primitives \code{any\_load}, \code{any\_store}, and \code{any\_len}. 
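
A sketch of those new cases is shown below. It assumes, as in section~\ref{sec:arrays}, that the type checker has recorded the type of each subexpression in a \code{has\_type} field; the method name \code{resolve\_exp} and the surrounding structure are only one possible shape for your \code{resolve} pass.
\begin{lstlisting}[basicstyle=\ttfamily\footnotesize]
    # in the method that resolves expressions (name assumed)
    case Subscript(tup, index, Load()) if isinstance(tup.has_type, AnyType):
        return Call(Name('any_load'),
                    [self.resolve_exp(tup), self.resolve_exp(index)])
    case Call(Name('len'), [tup]) if isinstance(tup.has_type, AnyType):
        return Call(Name('any_len'), [self.resolve_exp(tup)])
    # an assignment whose target is a subscript of an expression of
    # type AnyType becomes a call to any_store in the statement method
\end{lstlisting}
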
\fi} \section{Cast Insertion \vspace{-2pt}} \label{sec:gradual-insert-casts} In our discussion of type checking of \LangGrad{}, we mentioned how the runtime aspect of type checking is carried out by the \code{Cast} AST node, which is added to the program by a new pass named \code{cast\_insert}. The target of this pass is the \LangCast{} language. We now discuss the details of this pass. The \code{cast\_insert} pass is closely related to the type checker for \LangGrad{} (starting in figure~\ref{fig:type-check-Lgradual-1}). In particular, the type checker allows implicit casts between consistent types. The job of the \code{cast\_insert} pass is to make those casts explicit. It does so by inserting \code{Cast} nodes into the AST. % For the most part, the implicit casts occur in places where the type checker checks two types for consistency. Consider the case for binary operators in figure~\ref{fig:type-check-Lgradual-1}. The type checker requires that the type of the left operand is consistent with \INTTY{}. Thus, the \code{cast\_insert} pass should insert a \code{Cast} around the left operand, converting from its type to \INTTY{}. The story is similar for the right operand. It is not always necessary to insert a cast; for example, if the left operand already has type \INTTY{}, then there is no need for a \code{Cast}. Some of the implicit casts are not as straightforward. One such case arises with the conditional expression. In figure~\ref{fig:type-check-Lgradual-1} we see that the type checker requires that the two branches have consistent types and that the type of the conditional expression is the meet of the branches' types. In the target language \LangCast{}, both branches will need to have the same type, and that type will be the type of the conditional expression. Thus, each branch requires a \code{Cast} to convert from its type to the meet of the branches' types. The case for the function call exhibits another interesting situation. If the function expression is of type \CANYTY{}, then it needs to be cast to a function type so that it can be used in a function call in \LangCast{}. Which function type should it be cast to? The parameter and return types are unknown, so we can simply use \CANYTY{} for all of them. Furthermore, in \LangCast{} the argument types will need to exactly match the parameter types, so we must cast all the arguments to type \CANYTY{} (if they are not already of that type). {\if\edition\racketEd % Likewise, the cases for the tuple operators \code{vector-length}, \code{vector-ref}, and \code{vector-set!} need to handle the situation where the tuple expression is of type \CANYTY{}. Instead of handling these situations with casts, we recommend translating these tuple operators to the special-purpose variants that handle tuples of type \CANYTY{}: \code{any-vector-length}, \code{any-vector-ref}, and \code{any-vector-set!}. % \fi} \section{Lower Casts \vspace{-2pt}} \label{sec:lower_casts} The next step in the journey toward x86 is the \code{lower\_casts} pass that translates the casts in \LangCast{} to the lower-level \code{Inject} and \code{Project} operators and new operators for proxies, extending the \LangLam{} language to \LangProxy{}. The \LangProxy{} language can also be described as an extension of \LangAny{}, with the addition of proxies. We recommend creating an auxiliary function named \code{lower\_cast} that takes an expression (in \LangCast{}), a source type, and a target type and translates it to an expression in \LangProxy{}.
The \code{lower\_cast} function can follow a code structure similar to the \code{apply\_cast} function (figure~\ref{fig:apply_cast}) used in the interpreter for \LangCast{}, because it must handle the same cases as \code{apply\_cast} and it needs to mimic the behavior of \code{apply\_cast}. The most interesting cases concern the casts involving \racket{tuple and function types}\python{tuple, array, and function types}. {\if\edition\racketEd As mentioned in section~\ref{sec:interp-casts}, a cast from one tuple type to another tuple type is accomplished by creating a proxy that intercepts the operations on the underlying tuple. Here we make the creation of the proxy explicit with the \code{vector-proxy} AST node. It takes three arguments: the first is an expression for the tuple, the second is a tuple of functions for casting an element that is being read from the tuple, and the third is a tuple of functions for casting an element that is being written to the tuple. You can create the functions for reading and writing using lambda expressions. Also, as we show in the next section, we need to differentiate these tuples of functions from the user-created ones, so we recommend using a new AST node named \code{raw-vector} instead of \code{vector}. % Figure~\ref{fig:map-bang-lower-cast} shows the output of \code{lower\_casts} on the example given in figure~\ref{fig:map-bang} that involved casting a tuple of integers to a tuple of \CANYTY{}. \fi} {\if\edition\pythonEd\pythonColor As mentioned in section~\ref{sec:interp-casts}, a cast from one array type to another array type is accomplished by creating a proxy that intercepts the operations on the underlying array. Here we make the creation of the proxy explicit with the \code{ListProxy} AST node. It takes five arguments: the first is an expression for the array, the second is a function for casting an element that is being read from the array, the third is a function for casting an element that is being written to the array, the fourth is the type of the underlying array, and the fifth is the type of the proxied array. You can create the functions for reading and writing using lambda expressions. A cast between two tuple types can be handled in a similar manner. We create a proxy with the \code{TupleProxy} AST node. Tuples are immutable, so there is no need for a function to cast the value during a write. Because there is a separate element type for each slot in the tuple, we need more than one function for casting during a read: we need a tuple of functions. % Also, as we show in the next section, we need to differentiate these tuples from the user-created ones, so we recommend using a new AST node named \code{RawTuple} instead of \code{Tuple} to create the tuples of functions. % Figure~\ref{fig:map-bang-lower-cast} shows the output of \code{lower\_casts} on the example given in figure~\ref{fig:map-bang} that involves casting an array of integers to an array of \CANYTY{}. \fi} \begin{figure}[tbp] \begin{tcolorbox}[colback=white] {\if\edition\racketEd \begin{lstlisting} (define (map_inplace [f : (Any -> Any)] [v : (Vector Any Any)]) : Void (begin (vector-set! v 0 (f (vector-ref v 0))) (vector-set!
v 1 (f (vector-ref v 1))))) (define (inc [x : Any]) : Any (inject (+ (project x Integer) 1) Integer)) (let ([v (vector 0 41)]) (begin (map_inplace inc (vector-proxy v (raw-vector (lambda: ([x9 : Integer]) : Any (inject x9 Integer)) (lambda: ([x9 : Integer]) : Any (inject x9 Integer))) (raw-vector (lambda: ([x9 : Any]) : Integer (project x9 Integer)) (lambda: ([x9 : Any]) : Integer (project x9 Integer))))) (vector-ref v 1))) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting}[basicstyle=\ttfamily\footnotesize] def map_inplace(f : Callable[[int], int], v : list[Any]) -> void: i = 0 while i != array_len(v): array_store(v, i, inject(f(project(array_load(v, i), int)), int)) i = (i + 1) def inc(x : int) -> int: return (x + 1) def main() -> int: v = [0, 41] map_inplace(inc, array_proxy(v, list[int], list[Any])) print(array_load(v, 1)) return 0 \end{lstlisting} \fi} \end{tcolorbox} \caption{Output of \code{lower\_casts} on the example shown in figure~\ref{fig:map-bang}.} \label{fig:map-bang-lower-cast} \end{figure} A cast from one function type to another function type is accomplished by generating a \code{lambda} whose parameter and return types match the target function type. The body of the \code{lambda} should cast the parameters from the target type to the source type. (Yes, backward! Functions are contravariant\index{subject}{contravariant} in the parameters.) Afterward, call the underlying function and then cast the result from the source return type to the target return type. Figure~\ref{fig:map-lower-cast} shows the output of the \code{lower\_casts} pass on the \code{map} example give in figure~\ref{fig:gradual-map}. Note that the \code{inc} argument in the call to \code{map} is wrapped in a \code{lambda}. \begin{figure}[tbp] \begin{tcolorbox}[colback=white] {\if\edition\racketEd \begin{lstlisting} (define (map [f : (Integer -> Integer)] [v : (Vector Integer Integer)]) : (Vector Integer Integer) (vector (f (vector-ref v 0)) (f (vector-ref v 1)))) (define (inc [x : Any]) : Any (inject (+ (project x Integer) 1) Integer)) (vector-ref (map (lambda: ([x9 : Integer]) : Integer (project (inc (inject x9 Integer)) Integer)) (vector 0 41)) 1) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting}[basicstyle=\ttfamily\footnotesize] def map(f : Callable[[int], int], v : tuple[int,int]) -> tuple[int,int]: return (f(v[0]), f(v[1]),) def inc(x : any) -> any: return inject((project(x, int) + 1), int) def main() -> int: t = map(lambda x: project(inc(inject(x, int)), int), (0, 41,)) print(t[1]) return 0 \end{lstlisting} \fi} \end{tcolorbox} \caption{Output of \code{lower\_casts} on the example shown in figure~\ref{fig:gradual-map}.} \label{fig:map-lower-cast} \end{figure} \section{Differentiate Proxies \vspace{-2pt}} \label{sec:differentiate-proxies} So far, the responsibility of differentiating tuples and tuple proxies has been the job of the interpreter. % \racket{For example, the interpreter for \LangCast{} implements \code{vector-ref} using the \code{guarded-vector-ref} function shown in figure~\ref{fig:guarded-tuple}.} % In the \code{differentiate\_proxies} pass we shift this responsibility to the generated code. We begin by designing the output language \LangPVec{}. In \LangGrad{} we used the type \TUPLETYPENAME{} for both real tuples and tuple proxies. 
\python{Similarly, we use the type \code{list} for both arrays and array proxies.} In \LangPVec{} we return the \TUPLETYPENAME{} type to its original meaning, as the type of just tuples, and we introduce a new type, \PTUPLETYNAME{}, whose values can be either real tuples or tuple proxies. % {\if\edition\pythonEd\pythonColor Likewise, we return the \ARRAYTYPENAME{} type to its original meaning, as the type of arrays, and we introduce a new type, \PARRAYTYNAME{}, whose values can be either arrays or array proxies. These new types come with a suite of new primitive operations. \fi} {\if\edition\racketEd A tuple proxy is represented by a tuple containing three things: (1) the underlying tuple, (2) a tuple of functions for casting elements that are read from the tuple, and (3) a tuple of functions for casting values to be written to the tuple. So, we define the following abbreviation for the type of a tuple proxy: \[ \itm{TupleProxy} (T\ldots \Rightarrow T'\ldots) = (\ttm{Vector}~\PTUPLETY{T\ldots} ~R~ W) \] where $R = (\ttm{Vector}~(T\to T') \ldots)$ and $W = (\ttm{Vector}~(T'\to T) \ldots)$. % Next we describe each of the new primitive operations. \begin{description} \item[\code{inject-vector} : (\key{Vector} $T \ldots$) $\to$ (\key{PVector} $T \ldots$)]\ \\ % This operation brands a vector as a value of the \code{PVector} type. \item[\code{inject-proxy} : $\itm{TupleProxy}(T\ldots \Rightarrow T'\ldots)$ $\to$ (\key{PVector} $T' \ldots$)]\ \\ % This operation brands a vector proxy as value of the \code{PVector} type. \item[\code{proxy?} : (\key{PVector} $T \ldots$) $\to$ \BOOLTY{}] \ \\ % This returns true if the value is a tuple proxy and false if it is a real tuple. \item[\code{project-vector} : (\key{PVector} $T \ldots$) $\to$ (\key{Vector} $T \ldots$)]\ \\ % Assuming that the input is a tuple, this operation returns the tuple. \item[\code{proxy-vector-length} : (\key{PVector} $T \ldots$) $\to$ \INTTY{}]\ \\ % Given a tuple proxy, this operation returns the length of the tuple. \item[\code{proxy-vector-ref} : (\key{PVector} $T \ldots$) $\to$ ($i$ : \INTTY{}) $\to$ $T_i$]\ \\ % Given a tuple proxy, this operation returns the $i$th element of the tuple. \item[\code{proxy-vector-set!} : (\key{PVector} $T \ldots$) $\to$ ($i$ : \INTTY{}) $\to$ $T_i$ $\to$ \key{Void}]\ \\ Given a tuple proxy, this operation writes a value to the $i$th element of the tuple. \end{description} \fi} {\if\edition\pythonEd\pythonColor % A tuple proxy is represented by a tuple containing (1) the underlying tuple and (2) a tuple of functions for casting elements that are read from the tuple. The \LangPVec{} language includes the following AST classes and primitive functions. \begin{description} \item[\code{InjectTuple}] \ \\ % This AST node brands a tuple as a value of the \PTUPLETYNAME{} type. \item[\code{InjectTupleProxy}]\ \\ % This AST node brands a tuple proxy as value of the \PTUPLETYNAME{} type. \item[\code{is\_tuple\_proxy}]\ \\ % This primitive returns true if the value is a tuple proxy and false if it is a tuple. \item[\code{project\_tuple}]\ \\ % Converts a tuple that is branded as \PTUPLETYNAME{} back to a tuple. \item[\code{proxy\_tuple\_len}]\ \\ % Given a tuple proxy, returns the length of the underlying tuple. \item[\code{proxy\_tuple\_load}]\ \\ % Given a tuple proxy, returns the $i$th element of the underlying tuple. 
\end{description} An array proxy is represented by a tuple containing (1) the underlying array, (2) a function for casting elements that are read from the array, and (3) a function for casting elements that are written to the array. The \LangPVec{} language includes the following AST classes and primitive functions. \begin{description} \item[\code{InjectList}]\ \\ This AST node brands an array as a value of the \PARRAYTYNAME{} type. \item[\code{InjectListProxy}]\ \\ % This AST node brands an array proxy as a value of the \PARRAYTYNAME{} type. \item[\code{is\_array\_proxy}]\ \\ % Returns true if the value is an array proxy and false if it is an array. \item[\code{project\_array}]\ \\ % Converts an array that is branded as \PARRAYTYNAME{} back to an array. \item[\code{proxy\_array\_len}]\ \\ % Given an array proxy, returns the length of the underlying array. \item[\code{proxy\_array\_load}]\ \\ % Given an array proxy, returns the $i$th element of the underlying array. \item[\code{proxy\_array\_store}]\ \\ % Given an array proxy, writes a value to the $i$th element of the underlying array. \end{description} \fi} Now we discuss the translation that differentiates tuples and arrays from proxies. First, every type annotation in the program is translated (recursively) to replace \TUPLETYPENAME{} with \PTUPLETYNAME{}. Next, we insert uses of \PTUPLETYNAME{} operations in the appropriate places. For example, we wrap every tuple creation with an \racket{\code{inject-vector}}\python{\code{InjectTuple}}. % {\if\edition\racketEd \begin{minipage}{0.96\textwidth} \begin{lstlisting} (vector |$e_1 \ldots e_n$|) |$\Rightarrow$| (inject-vector (vector |$e'_1 \ldots e'_n$|)) \end{lstlisting} \end{minipage} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} Tuple(|$e_1, \ldots, e_n$|) |$\Rightarrow$| InjectTuple(Tuple(|$e'_1, \ldots, e'_n$|)) \end{lstlisting} \fi} The \racket{\code{raw-vector}}\python{\code{RawTuple}} AST node that we introduced in the previous section does not get injected. {\if\edition\racketEd \begin{lstlisting} (raw-vector |$e_1 \ldots e_n$|) |$\Rightarrow$| (vector |$e'_1 \ldots e'_n$|) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} RawTuple(|$e_1, \ldots, e_n$|) |$\Rightarrow$| Tuple(|$e'_1, \ldots, e'_n$|) \end{lstlisting} \fi} The \racket{\code{vector-proxy}}\python{\code{TupleProxy}} AST translates as follows: % {\if\edition\racketEd \begin{lstlisting} (vector-proxy |$e_1~e_2~e_3$|) |$\Rightarrow$| (inject-proxy (vector |$e'_1~e'_2~e'_3$|)) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} TupleProxy(|$e_1, e_2, T_1, T_2$|) |$\Rightarrow$| InjectTupleProxy(Tuple(|$e'_1,e'_2, T'_1, T'_2$|)) \end{lstlisting} \fi} We translate the element access operations into conditional expressions that check whether the value is a proxy and then dispatch to either the appropriate proxy tuple operation or the regular tuple operation. {\if\edition\racketEd \begin{lstlisting} (vector-ref |$e_1$| |$i$|) |$\Rightarrow$| (let ([|$v~e_1$|]) (if (proxy? |$v$|) (proxy-vector-ref |$v$| |$i$|) (vector-ref (project-vector |$v$|) |$i$|) \end{lstlisting} \fi} % Note that in the branch for a tuple, we must apply \racket{\code{project-vector}}\python{\code{project\_tuple}} before reading from the tuple. The translation of array operations is similar to the ones for tuples. 
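
{\if\edition\pythonEd\pythonColor
In the Python edition, the translation of a tuple read can be sketched as follows. The sketch binds the tuple expression to a temporary with a \code{Begin} expression (a statement sequence followed by a result expression) so that it is not duplicated; the exact AST classes you use for the temporary and the conditional may differ in your compiler.
\begin{lstlisting}
Subscript(|$e_1$|, |$e_2$|, Load())
|$\Rightarrow$|
Begin([Assign([|$\itm{tmp}$|], |$e'_1$|)],
      IfExp(Call(Name('is_tuple_proxy'), [|$\itm{tmp}$|]),
            Call(Name('proxy_tuple_load'), [|$\itm{tmp}$|, |$e'_2$|]),
            Subscript(Call(Name('project_tuple'), [|$\itm{tmp}$|]), |$e'_2$|, Load())))
\end{lstlisting}
Array reads and writes are handled analogously, using \code{is\_array\_proxy}, \code{proxy\_array\_load}, \code{proxy\_array\_store}, and \code{project\_array}.
\fi}
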
\section{Reveal Casts \vspace{-2pt}} \label{sec:reveal-casts-gradual} {\if\edition\racketEd Recall that the \code{reveal\_casts} pass (section~\ref{sec:reveal-casts-Lany}) is responsible for lowering \code{Inject} and \code{Project} into lower-level operations. % In particular, \code{Project} turns into a conditional expression that inspects the tag and retrieves the underlying value. Here we need to augment the translation of \code{Project} to handle the situation in which the target type is \code{PVector}. Instead of using \code{vector-length} we need to use \code{proxy-vector-length}. \begin{lstlisting} (project |$e$| (PVector Any|$_1$| |$\ldots$| Any|$_n$|)) |$\Rightarrow$| (let |$\itm{tmp}$| |$e'$| (if (eq? (tag-of-any |$\itm{tmp}$|) 2) (let |$\itm{tup}$| (value-of |$\itm{tmp}$| (PVector Any |$\ldots$| Any)) (if (eq? (proxy-vector-length |$\itm{tup}$|) |$n$|) |$\itm{tup}$| (exit))) (exit))) \end{lstlisting} \fi} % {\if\edition\pythonEd\pythonColor Recall that the $\itm{tagof}$ function determines the bits used to identify values of different types, and it is used in the \code{reveal\_casts} pass in the translation of \code{Project}. The \PTUPLETYNAME{} and \PARRAYTYNAME{} types can be mapped to $010$ in binary ($2$ in decimal), just like the tuple and array types. \fi} % Otherwise, the only other changes are adding cases that copy the new AST nodes. \section{Closure Conversion \vspace{-2pt}} \label{sec:closure-conversion-gradual} The auxiliary function that translates type annotations needs to be updated to handle the \PTUPLETYNAME{} \racket{type}\python{and \PARRAYTYNAME{} types}. % Otherwise, the only other changes are adding cases that copy the new AST nodes. \section{Select Instructions \vspace{-2pt}} \label{sec:select-instructions-gradual} \index{subject}{select instructions} Recall that the \code{select\_instructions} pass is responsible for lowering the primitive operations into x86 instructions. So, we need to translate the new operations on \PTUPLETYNAME{} \python{and \PARRAYTYNAME{}} to x86. To do so, the first question we need to answer is how to differentiate between tuples and tuple proxies\python{, and likewise for arrays and array proxies}. We need just one bit to accomplish this; we use the bit in position $63$ of the 64-bit tag at the front of every tuple (see figure~\ref{fig:tuple-rep})\python{ or array (section~\ref{sec:array-rep})}. So far, this bit has been set to $0$, so for \racket{\code{inject-vector}}\python{\code{InjectTuple}} we leave it that way. {\if\edition\racketEd \begin{lstlisting} (Assign |$\itm{lhs}$| (Prim 'inject-vector (list |$e_1$|))) |$\Rightarrow$| movq |$e'_1$|, |$\itm{lhs'}$| \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} Assign([|$\itm{lhs}$|], InjectTuple(|$e_1$|)) |$\Rightarrow$| movq |$e'_1$|, |$\itm{lhs'}$| \end{lstlisting} \fi} \python{The translation for \code{InjectList} is also a move instruction.} \noindent On the other hand, \racket{\code{inject-proxy}}\python{\code{InjectTupleProxy}} sets bit $63$ to $1$.
% {\if\edition\racketEd \begin{lstlisting} (Assign |$\itm{lhs}$| (Prim 'inject-proxy (list |$e_1$|))) |$\Rightarrow$| movq |$e'_1$|, %r11 movq |$(1 << 63)$|, %rax orq 0(%r11), %rax movq %rax, 0(%r11) movq %r11, |$\itm{lhs'}$| \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} Assign([|$\itm{lhs}$|], InjectTupleProxy(|$e_1$|)) |$\Rightarrow$| movq |$e'_1$|, %r11 movq |$(1 << 63)$|, %rax orq 0(%r11), %rax movq %rax, 0(%r11) movq %r11, |$\itm{lhs'}$| \end{lstlisting} \fi} \python{\noindent The translation for \code{InjectListProxy} should set bit $63$ of the tag and also bit $62$, to differentiate between arrays and tuples.} The \racket{\code{proxy?} operation consumes}% \python{\code{is\_tuple\_proxy} and \code{is\_array\_proxy} operations consume} the information so carefully stashed away by the injections. It isolates bit $63$ to tell whether the value is a proxy. % {\if\edition\racketEd \begin{lstlisting} (Assign |$\itm{lhs}$| (Prim 'proxy? (list |$e_1$|))) |$\Rightarrow$| movq |$e_1'$|, %r11 movq 0(%r11), %rax sarq $63, %rax andq $1, %rax movq %rax, |$\itm{lhs'}$| \end{lstlisting} \fi}% % {\if\edition\pythonEd\pythonColor \begin{lstlisting} Assign([|$\itm{lhs}$|], Call(Name('is_tuple_proxy'), [|$e_1$|])) |$\Rightarrow$| movq |$e_1'$|, %r11 movq 0(%r11), %rax sarq $63, %rax andq $1, %rax movq %rax, |$\itm{lhs'}$| \end{lstlisting} \fi}% % The \racket{\code{project-vector} operation is} \python{\code{project\_tuple} and \code{project\_array} operations are} straightforward to translate, so we leave that to the reader. Regarding the element access operations for tuples\python{ and arrays}, the runtime provides procedures that implement them (they are recursive functions!), so here we simply need to translate these tuple operations into the appropriate function call. For example, here is the translation for \racket{\code{proxy-vector-ref}}\python{\code{proxy\_tuple\_load}}. {\if\edition\racketEd \begin{minipage}{0.96\textwidth} \begin{lstlisting} (Assign |$\itm{lhs}$| (Prim 'proxy-vector-ref (list |$e_1$| |$e_2$|))) |$\Rightarrow$| movq |$e_1'$|, %rdi movq |$e_2'$|, %rsi callq proxy_vector_ref movq %rax, |$\itm{lhs'}$| \end{lstlisting} \end{minipage} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} Assign([|$\itm{lhs}$|], Call(Name('proxy_tuple_load'), [|$e_1$|, |$e_2$|])) |$\Rightarrow$| movq |$e_1'$|, %rdi movq |$e_2'$|, %rsi callq proxy_vector_ref movq %rax, |$\itm{lhs'}$| \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor % TODO: revisit the names vecof for python -Jeremy We translate \code{proxy\_array\_load} to \code{proxy\_vecof\_ref}, \code{proxy\_array\_store} to \code{proxy\_vecof\_set}, and \code{proxy\_array\_len} to \code{proxy\_vecof\_length}. \fi} We have another batch of operations to deal with: those for the \CANYTY{} type. Recall that we generate an \racket{\code{any-vector-ref}}\python{\code{any\_load\_unsafe}} when there is a element access on something of type \CANYTY{}, and similarly for \racket{\code{any-vector-set!}}\python{\code{any\_store\_unsafe}} and \racket{\code{any-vector-length}}\python{\code{any\_len}}. In section~\ref{sec:select-Lany} we selected instructions for these operations on the basis of the idea that the underlying value was a tuple or array. But in the current setting, the underlying value is of type \PTUPLETYNAME{}\python{ or \PARRAYTYNAME{}}. 
We have added three runtime functions to deal with this: \code{proxy\_vector\_ref}, \code{proxy\_vector\_set}, and \code{proxy\_vector\_length}, which inspect bit $62$ of the tag to determine whether the value is a proxy and then dispatch to the appropriate code. % So \racket{\code{any-vector-ref}}\python{\code{any\_load\_unsafe}} can be translated as follows. We begin by projecting the underlying value out of the tagged value and then call the \code{proxy\_vector\_ref} procedure in the runtime. {\if\edition\racketEd \begin{lstlisting} (Assign |$\itm{lhs}$| (Prim 'any-vector-ref (list |$e_1$| |$e_2$|))) |$\Rightarrow$| movq |$\neg 111$|, %rdi andq |$e_1'$|, %rdi movq |$e_2'$|, %rsi callq proxy_vector_ref movq %rax, |$\itm{lhs'}$| \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} Assign([|$\itm{lhs}$|], Call(Name('any_load_unsafe'), [|$e_1$|, |$e_2$|])) |$\Rightarrow$| movq |$\neg 111$|, %rdi andq |$e_1'$|, %rdi movq |$e_2'$|, %rsi callq proxy_vector_ref movq %rax, |$\itm{lhs'}$| \end{lstlisting} \fi} \noindent The \racket{\code{any-vector-set!}}\python{\code{any\_store\_unsafe}} and \racket{\code{any-vector-length}}\python{\code{any\_len}} operators are translated in a similar way. Alternatively, you could generate instructions to open-code the \code{proxy\_vector\_ref}, \code{proxy\_vector\_set}, and \code{proxy\_vector\_length} functions. \begin{exercise}\normalfont\normalsize Implement a compiler for the gradually typed \LangGrad{} language by extending and adapting your compiler for \LangLam{}. Create ten new partially typed test programs. In addition to testing with these new programs, test your compiler on all the tests for \LangLam{} and for \LangDyn{}. % \racket{Sometimes you may get a type-checking error on the \LangDyn{} programs, but you can adapt them by inserting a cast to the \CANYTY{} type around each subexpression that has caused a type error.
Although \LangDyn{} does not have explicit casts, you can induce one by wrapping the subexpression \code{e} with a call to an unannotated identity function, as follows: \code{((lambda (x) x) e)}.} % \python{Sometimes you may get a type-checking error on the \LangDyn{} programs, but you can adapt them by inserting a temporary variable of type \CANYTY{} that is initialized with the troublesome expression.} \end{exercise} \begin{figure}[t] \begin{tcolorbox}[colback=white] {\if\edition\racketEd \begin{tikzpicture}[baseline=(current bounding box.center),scale=0.85] \node (Lgradual) at (0,4) {\large \LangGrad{}}; \node (Lgradual2) at (4,4) {\large \LangCast{}}; \node (Lgradual3) at (8,4) {\large \LangProxy{}}; \node (Lgradual4) at (12,4) {\large \LangPVec{}}; \node (Lgradualr) at (12,2) {\large \LangPVec{}}; \node (Lgradualp) at (8,2) {\large \LangPVec{}}; \node (Llambdapp) at (4,2) {\large \LangPVecFunRef{}}; \node (Llambdaproxy-4) at (0,2) {\large \LangPVecFunRef{}}; \node (Llambdaproxy-5) at (0,0) {\large \LangPVecFunRef{}}; %\node (F1-1) at (4,0) {\large \LangPVecFunRef{}}; \node (F1-2) at (8,0) {\large \LangPVecFunRef{}}; \node (F1-3) at (12,0) {\large \LangPVecFunRef{}}; \node (F1-4) at (12,-2) {\large \LangPVecAlloc{}}; \node (F1-5) at (8,-2) {\large \LangPVecAlloc{}}; \node (F1-6) at (4,-2) {\large \LangPVecAlloc{}}; \node (C3-2) at (0,-2) {\large \LangCLoopPVec{}}; \node (x86-2) at (0,-4) {\large \LangXIndCallVar{}}; \node (x86-2-1) at (0,-6) {\large \LangXIndCallVar{}}; \node (x86-2-2) at (4,-6) {\large \LangXIndCallVar{}}; \node (x86-3) at (4,-4) {\large \LangXIndCallVar{}}; \node (x86-4) at (8,-4) {\large \LangXIndCall{}}; \node (x86-5) at (8,-6) {\large \LangXIndCall{}}; \path[->,bend left=15] (Lgradual) edge [above] node {\ttfamily\footnotesize cast\_insert} (Lgradual2); \path[->,bend left=15] (Lgradual2) edge [above] node {\ttfamily\footnotesize lower\_casts} (Lgradual3); \path[->,bend left=15] (Lgradual3) edge [above] node {\ttfamily\footnotesize differentiate\_proxies} (Lgradual4); \path[->,bend left=15] (Lgradual4) edge [left] node {\ttfamily\footnotesize shrink} (Lgradualr); \path[->,bend left=15] (Lgradualr) edge [above] node {\ttfamily\footnotesize uniquify} (Lgradualp); \path[->,bend right=15] (Lgradualp) edge [above] node {\ttfamily\footnotesize reveal\_functions} (Llambdapp); %% \path[->,bend left=15] (Llambdaproxy-4) edge [left] node %% {\ttfamily\footnotesize resolve} (Lgradualr); \path[->,bend right=15] (Llambdapp) edge [above] node {\ttfamily\footnotesize reveal\_casts} (Llambdaproxy-4); \path[->,bend right=15] (Llambdaproxy-4) edge [right] node {\ttfamily\footnotesize convert\_assignments} (Llambdaproxy-5); \path[->,bend right=10] (Llambdaproxy-5) edge [above] node {\ttfamily\footnotesize convert\_to\_closures} (F1-2); \path[->,bend left=15] (F1-2) edge [above] node {\ttfamily\footnotesize limit\_functions} (F1-3); \path[->,bend left=15] (F1-3) edge [left] node {\ttfamily\footnotesize expose\_allocation} (F1-4); \path[->,bend left=15] (F1-4) edge [below] node {\ttfamily\footnotesize uncover\_get!} (F1-5); \path[->,bend right=15] (F1-5) edge [above] node {\ttfamily\footnotesize remove\_complex\_operands} (F1-6); \path[->,bend right=15] (F1-6) edge [above] node {\ttfamily\footnotesize explicate\_control} (C3-2); \path[->,bend right=15] (C3-2) edge [right] node {\ttfamily\footnotesize select\_instructions} (x86-2); \path[->,bend right=15] (x86-2) edge [right] node {\ttfamily\footnotesize uncover\_live} (x86-2-1); \path[->,bend right=15] (x86-2-1) edge [below] node 
{\ttfamily\footnotesize build\_interference} (x86-2-2); \path[->,bend right=15] (x86-2-2) edge [right] node {\ttfamily\footnotesize allocate\_registers} (x86-3); \path[->,bend left=15] (x86-3) edge [above] node {\ttfamily\footnotesize patch\_instructions} (x86-4); \path[->,bend left=15] (x86-4) edge [right] node {\ttfamily\footnotesize prelude\_and\_conclusion} (x86-5); \end{tikzpicture} \fi} {\if\edition\pythonEd\pythonColor \begin{tikzpicture}[baseline=(current bounding box.center),scale=0.80] \node (Lgradual) at (0,4) {\large \LangGrad{}}; \node (Lgradual2) at (4,4) {\large \LangGrad{}}; \node (Lgradual3) at (8,4) {\large \LangCast{}}; \node (Lgradual4) at (12,4) {\large \LangProxy{}}; \node (Lgradualr) at (12,2) {\large \LangPVec{}}; \node (Lgradualp) at (8,2) {\large \LangPVec{}}; \node (Llambdapp) at (4,2) {\large \LangPVec{}}; \node (Llambdaproxy-4) at (0,2) {\large \LangPVecFunRef{}}; \node (Llambdaproxy-5) at (0,0) {\large \LangPVecFunRef{}}; \node (F1-1) at (4,0) {\large \LangPVecFunRef{}}; \node (F1-2) at (8,0) {\large \LangPVecFunRef{}}; \node (F1-3) at (12,0) {\large \LangPVecFunRef{}}; \node (F1-5) at (8,-2) {\large \LangPVecAlloc{}}; \node (F1-6) at (4,-2) {\large \LangPVecAlloc{}}; \node (C3-2) at (0,-2) {\large \LangCLoopPVec{}}; \node (x86-2) at (0,-4) {\large \LangXIndCallVar{}}; \node (x86-3) at (4,-4) {\large \LangXIndCallVar{}}; \node (x86-4) at (8,-4) {\large \LangXIndCall{}}; \node (x86-5) at (12,-4) {\large \LangXIndCall{}}; \path[->,bend left=15] (Lgradual) edge [above] node {\ttfamily\footnotesize shrink} (Lgradual2); \path[->,bend left=15] (Lgradual2) edge [above] node {\ttfamily\footnotesize uniquify} (Lgradual3); \path[->,bend left=15] (Lgradual3) edge [above] node {\ttfamily\footnotesize reveal\_functions} (Lgradual4); \path[->,bend left=15] (Lgradual4) edge [left] node {\ttfamily\footnotesize resolve} (Lgradualr); \path[->,bend left=15] (Lgradualr) edge [below] node {\ttfamily\footnotesize cast\_insert} (Lgradualp); \path[->,bend right=15] (Lgradualp) edge [above] node {\ttfamily\footnotesize lower\_casts} (Llambdapp); \path[->,bend right=15] (Llambdapp) edge [above] node {\ttfamily\footnotesize differentiate\_proxies} (Llambdaproxy-4); \path[->,bend right=15] (Llambdaproxy-4) edge [right] node {\ttfamily\footnotesize reveal\_casts} (Llambdaproxy-5); \path[->,bend right=15] (Llambdaproxy-5) edge [below] node {\ttfamily\footnotesize convert\_assignments} (F1-1); \path[->,bend left=15] (F1-1) edge [above] node {\ttfamily\footnotesize convert\_to\_closures} (F1-2); \path[->,bend left=15] (F1-2) edge [above] node {\ttfamily\footnotesize limit\_functions} (F1-3); \path[->,bend left=15] (F1-3) edge [right] node {\ttfamily\footnotesize expose\_allocation} (F1-5); \path[->,bend right=15] (F1-5) edge [above] node {\ttfamily\footnotesize remove\_complex\_operands} (F1-6); \path[->,bend right=15] (F1-6) edge [above] node {\ttfamily\footnotesize explicate\_control} (C3-2); \path[->,bend right=15] (C3-2) edge [right] node {\ttfamily\footnotesize select\_instructions} (x86-2); \path[->,bend right=15] (x86-2) edge [below] node {\ttfamily\footnotesize assign\_homes} (x86-3); \path[->,bend right=15] (x86-3) edge [below] node {\ttfamily\footnotesize patch\_instructions} (x86-4); \path[->,bend left=15] (x86-4) edge [above] node {\ttfamily\footnotesize prelude\_and\_conclusion} (x86-5); \end{tikzpicture} \fi} \end{tcolorbox} \caption{Diagram of the passes for \LangGrad{} (gradual typing).} \label{fig:Lgradual-passes} \end{figure} Figure~\ref{fig:Lgradual-passes} provides an 
overview of the passes needed for the compilation of \LangGrad{}. \section{Further Reading} This chapter just scratches the surface of gradual typing. The basic approach described here is missing two key ingredients that one would want in an implementation of gradual typing: blame tracking~\citep{Tobin-Hochstadt:2006fk,Wadler:2009qv} and space-efficient casts~\citep{Herman:2006uq,Herman:2010aa}. The problem addressed by blame tracking is that when a cast on a higher-order value fails, it often does so at a point in the program that is far removed from the original cast. Blame tracking is a technique for propagating extra information through casts and proxies so that when a cast fails, the error message can point back to the original location of the cast in the source program. The problem addressed by space-efficient casts also relates to higher-order casts. It turns out that in partially typed programs, a function or tuple can flow through a great many casts at runtime. With the approach described in this chapter, each cast adds another \code{lambda} wrapper or a tuple proxy. Not only does this take up considerable space, but it also makes the function calls and tuple operations slow. For example, a partially typed version of quicksort could, in the worst case, build a chain of proxies of length $O(n)$ around the tuple, changing the overall time complexity of the algorithm from $O(n^2)$ to $O(n^3)$! \citet{Herman:2006uq} suggested a solution to this problem by representing casts using the coercion calculus of \citet{Henglein:1994nz}, which prevents the creation of long chains of proxies by compressing them into a concise normal form. \citet{Siek:2015ab} give an algorithm for compressing coercions, and \citet{Kuhlenschmidt:2019aa} show how to implement these ideas in the Grift compiler: \begin{center} \url{https://github.com/Gradual-Typing/Grift} \end{center} There are also interesting interactions between gradual typing and other language features, such as generics, information-flow types, and type inference, to name a few. We recommend to the reader the online gradual typing bibliography for more material: \begin{center} \url{http://samth.github.io/gradual-typing-bib/} \end{center} % TODO: challenge problem: % type analysis and type specialization? % coercions? %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \chapter{Generics} \label{ch:Lpoly} \setcounter{footnote}{0} This chapter studies the compilation of generics\index{subject}{generics} (aka parametric polymorphism\index{subject}{parametric polymorphism}), compiling the \LangPoly{} subset of \racket{Typed Racket}\python{Python}. Generics enable programmers to make code more reusable by parameterizing functions and data structures with respect to the types on which they operate. For example, figure~\ref{fig:map-poly} revisits the \code{map} example and this time gives it a more fitting type. This \code{map} function is parameterized with respect to the element type of the tuple. The type of \code{map} is the following generic type specified by the \code{All} type with parameter \code{T}: {\if\edition\racketEd \begin{lstlisting} (All (T) ((T -> T) (Vector T T) -> (Vector T T))) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} All[[T], Callable[[Callable[[T],T], tuple[T,T]], tuple[T,T]]] \end{lstlisting} \fi} % The idea is that \code{map} can be used at \emph{all} choices of a type for parameter \code{T}.
In the example shown in figure~\ref{fig:map-poly} we apply \code{map} to a tuple of integers, implicitly choosing \racket{\code{Integer}}\python{\code{int}} for \code{T}, but we could have just as well applied \code{map} to a tuple of Booleans. % A \emph{monomorphic} function is simply one that is not generic. % We use the term \emph{instantiation} for the process (within the language implementation) of turning a generic function into a monomorphic one, where the type parameters have been replaced by types. {\if\edition\pythonEd\pythonColor % In Python, when writing a generic function such as \code{map}, one does not explicitly write its generic type (using \code{All}). Instead, that the function is generic is implied by the use of type variables (such as \code{T}) in the type annotations of its parameters. % \fi} \begin{figure}[tbp] % poly_test_2.rkt \begin{tcolorbox}[colback=white] {\if\edition\racketEd \begin{lstlisting} (: map (All (T) ((T -> T) (Vector T T) -> (Vector T T)))) (define (map f v) (vector (f (vector-ref v 0)) (f (vector-ref v 1)))) (define (inc [x : Integer]) : Integer (+ x 1)) (vector-ref (map inc (vector 0 41)) 1) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} def map(f : Callable[[T],T], tup : tuple[T,T]) -> tuple[T,T]: return (f(tup[0]), f(tup[1])) def add1(x : int) -> int: return x + 1 t = map(add1, (0, 41)) print(t[1]) \end{lstlisting} \fi} \end{tcolorbox} \caption{A generic version of the \code{map} function.} \label{fig:map-poly} \end{figure} Figure~\ref{fig:Lpoly-concrete-syntax} presents the definition of the concrete syntax of \LangPoly{}, and figure~\ref{fig:Lpoly-syntax} shows the definition of the abstract syntax. % {\if\edition\racketEd We add a second form for function definitions in which a type declaration comes before the \code{define}. In the abstract syntax, the return type in the \code{Def} is \CANYTY{}, but that should be ignored in favor of the return type in the type declaration. (The \CANYTY{} comes from using the same parser as discussed in chapter~\ref{ch:Ldyn}.) The presence of a type declaration enables the use of an \code{All} type for a function, thereby making it generic. \fi} % The grammar for types is extended to include the type of a generic (\code{All}) and type variables\python{\ (\code{GenericVar} in the abstract syntax)}. 
\newcommand{\LpolyGrammarRacket}{ \begin{array}{lcl} \Type &::=& \LP\key{All}~\LP\Var\ldots\RP~ \Type\RP \MID \Var \\ \Def &::=& \LP\key{:}~\Var~\Type\RP \\ && \LP\key{define}~ \LP\Var ~ \Var\ldots\RP ~ \Exp\RP \end{array} } \newcommand{\LpolyASTRacket}{ \begin{array}{lcl} \Type &::=& \LP\key{All}~\LP\Var\ldots\RP~ \Type\RP \MID \Var \\ \Def &::=& \DECL{\Var}{\Type} \\ && \DEF{\Var}{\LP\Var \ldots\RP}{\key{'Any}}{\code{'()}}{\Exp} \end{array} } \newcommand{\LpolyGrammarPython}{ \begin{array}{lcl} \Type &::=& \key{All}\LS \LS\Var\ldots\RS,\Type\RS \MID \Var \end{array} } \newcommand{\LpolyASTPython}{ \begin{array}{lcl} \Type &::=& \key{AllType}\LP\LS\Var\ldots\RS, \Type\RP \MID \key{GenericVar}\LP\Var\RP \end{array} } \begin{figure}[tp] \centering \begin{tcolorbox}[colback=white] \footnotesize {\if\edition\racketEd \[ \begin{array}{l} \gray{\LintGrammarRacket{}} \\ \hline \gray{\LvarGrammarRacket{}} \\ \hline \gray{\LifGrammarRacket{}} \\ \hline \gray{\LwhileGrammarRacket} \\ \hline \gray{\LtupGrammarRacket} \\ \hline \gray{\LfunGrammarRacket} \\ \hline \gray{\LlambdaGrammarRacket} \\ \hline \LpolyGrammarRacket \\ \begin{array}{lcl} \LangPoly{} &::=& \Def \ldots ~ \Exp \end{array} \end{array} \] \fi} {\if\edition\pythonEd\pythonColor \[ \begin{array}{l} \gray{\LintGrammarPython{}} \\ \hline \gray{\LvarGrammarPython{}} \\ \hline \gray{\LifGrammarPython{}} \\ \hline \gray{\LwhileGrammarPython} \\ \hline \gray{\LtupGrammarPython} \\ \hline \gray{\LfunGrammarPython} \\ \hline \gray{\LlambdaGrammarPython} \\\hline \LpolyGrammarPython \\ \begin{array}{lcl} \LangPoly{} &::=& \Def\ldots \Stmt\ldots \end{array} \end{array} \] \fi} \end{tcolorbox} \caption{The concrete syntax of \LangPoly{}, extending \LangLam{} (figure~\ref{fig:Llam-concrete-syntax}).} \label{fig:Lpoly-concrete-syntax} \end{figure} \begin{figure}[tp] \centering \begin{tcolorbox}[colback=white] \footnotesize {\if\edition\racketEd \[ \begin{array}{l} \gray{\LintOpAST} \\ \hline \gray{\LvarASTRacket{}} \\ \hline \gray{\LifASTRacket{}} \\ \hline \gray{\LwhileASTRacket{}} \\ \hline \gray{\LtupASTRacket{}} \\ \hline \gray{\LfunASTRacket} \\ \hline \gray{\LlambdaASTRacket} \\ \hline \LpolyASTRacket \\ \begin{array}{lcl} \LangPoly{} &::=& \PROGRAMDEFSEXP{\code{'()}}{\LP\Def\ldots\RP}{\Exp} \end{array} \end{array} \] \fi} {\if\edition\pythonEd\pythonColor \[ \begin{array}{l} \gray{\LintASTPython} \\ \hline \gray{\LvarASTPython{}} \\ \hline \gray{\LifASTPython{}} \\ \hline \gray{\LwhileASTPython{}} \\ \hline \gray{\LtupASTPython{}} \\ \hline \gray{\LfunASTPython} \\ \hline \gray{\LlambdaASTPython} \\ \hline \LpolyASTPython \\ \begin{array}{lcl} \LangPoly{} &::=& \PROGRAM{}{\LS \Def \ldots \Stmt \ldots \RS} \end{array} \end{array} \] \fi} \end{tcolorbox} \caption{The abstract syntax of \LangPoly{}, extending \LangLam{} (figure~\ref{fig:Llam-syntax}).} \label{fig:Lpoly-syntax} \end{figure} By including the \code{All} type in the $\Type$ nonterminal of the grammar we choose to make generics first class, which has interesting repercussions on the compiler.\footnote{The Python \code{typing} library does not include syntax for the \code{All} type. It is inferred for functions whose type annotations contain type variables.} Many languages with generics, such as C++~\citep{stroustrup88:_param_types} and Standard ML~\citep{Milner:1990fk}, support only second-class generics, so it may be helpful to see an example of first-class generics in action. 
In figure~\ref{fig:apply-twice} we define a function \code{apply\_twice} whose
parameter is a generic function. Indeed, because the grammar for $\Type$
includes the \code{All} type, a generic function may also be returned from a
function or stored inside a tuple. The body of \code{apply\_twice} applies the
generic function \code{f} to a Boolean and also to an integer, which would not
be possible if \code{f} were not generic.

\begin{figure}[tbp]
\begin{tcolorbox}[colback=white]
{\if\edition\racketEd
\begin{lstlisting}
(: apply_twice ((All (U) (U -> U)) -> Integer))
(define (apply_twice f)
  (if (f #t) (f 42) (f 777)))

(: id (All (T) (T -> T)))
(define (id x) x)

(apply_twice id)
\end{lstlisting}
\fi}
{\if\edition\pythonEd\pythonColor
\begin{lstlisting}
def apply_twice(f : All[[U], Callable[[U],U]]) -> int:
    if f(True):
        return f(42)
    else:
        return f(777)

def id(x: T) -> T:
    return x

print(apply_twice(id))
\end{lstlisting}
\fi}
\end{tcolorbox}
\caption{An example illustrating first-class generics.}
\label{fig:apply-twice}
\end{figure}

The type checker for \LangPoly{} shown in figure~\ref{fig:type-check-Lpoly}
has several new responsibilities (compared to \LangLam{}), which we discuss in
the following paragraphs.
{\if\edition\pythonEd\pythonColor
%
Regarding a function definition, if the type annotations on its parameters
contain generic variables, then the function is generic and therefore its type
is an \code{All} type wrapped around a function type. Otherwise the function
is monomorphic and its type is simply a function type.
%
\fi}

The type checking of a function application is extended to handle the case in
which the operator expression is a generic function. In that case the type
arguments are deduced by matching the types of the parameters with the types
of the arguments.
%
The \code{match\_types} auxiliary function (figure~\ref{fig:type-check-Lpoly-aux})
carries out this deduction by recursively descending through a parameter type
\code{param\_ty} and the corresponding argument type \code{arg\_ty}, making
sure that they are equal except when there is a type parameter in the
parameter type. Upon encountering a type parameter for the first time, the
algorithm deduces an association of the type parameter to the corresponding
part of the argument type. If it is not the first time that the type parameter
has been encountered, the algorithm looks up its deduced type and makes sure
that it is equal to the corresponding part of the argument type. The return
type of the application is the return type of the generic function with the
type parameters replaced by the deduced type arguments, using the
\code{substitute\_type} auxiliary function, which is also listed in
figure~\ref{fig:type-check-Lpoly-aux}.

The type checker extends type equality to handle the \code{All} type. This is
not quite as simple as for other types, such as function and tuple types,
because two \code{All} types can be syntactically different even though they
are equivalent. For example,
\begin{center}
\racket{\code{(All (T) (T -> T))}}\python{\code{All[[T], Callable[[T], T]]}}
\end{center}
is equivalent to
\begin{center}
\racket{\code{(All (U) (U -> U))}}\python{\code{All[[U], Callable[[U], U]]}}.
\end{center}
Two generic types are equal if they differ only in the choice of the names of
the type parameters. The definition of type equality shown in
figure~\ref{fig:type-check-Lpoly-aux} renames the type parameters in one type
to match the type parameters of the other type.
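{\if\edition\pythonEd\pythonColor
%
To make the renaming concrete, the following standalone sketch checks the
equality of the two \code{All} types shown above. It uses a simplified
tuple-based representation of types instead of the AST classes of the type
checker, so it merely illustrates the algorithm of
figure~\ref{fig:type-check-Lpoly-aux}.
\begin{lstlisting}
# Simplified types: 'int', a type variable ('var', x), a function
# ('fun', params, ret), or a generic ('all', params, body).
def substitute(ty, m):
    match ty:
        case ('var', x):
            return m.get(x, ty)
        case ('fun', ps, rt):
            return ('fun', [substitute(p, m) for p in ps], substitute(rt, m))
        case ('all', xs, t):     # drop bindings shadowed by xs
            m2 = {k: v for (k, v) in m.items() if k not in xs}
            return ('all', xs, substitute(t, m2))
        case _:
            return ty

def type_equal(t1, t2):
    match (t1, t2):
        case (('all', xs, b1), ('all', ys, b2)):
            # rename t2's parameters to t1's, then compare the bodies
            rename = {y: ('var', x) for (x, y) in zip(xs, ys)}
            return len(xs) == len(ys) and type_equal(b1, substitute(b2, rename))
        case (('fun', ps1, r1), ('fun', ps2, r2)):
            return (len(ps1) == len(ps2)
                    and all(type_equal(p, a) for (p, a) in zip(ps1, ps2))
                    and type_equal(r1, r2))
        case _:
            return t1 == t2

# All[[T], Callable[[T],T]] is equal to All[[U], Callable[[U],U]]
t1 = ('all', ['T'], ('fun', [('var', 'T')], ('var', 'T')))
t2 = ('all', ['U'], ('fun', [('var', 'U')], ('var', 'U')))
print(type_equal(t1, t2))    # prints True
\end{lstlisting}
%
\fi}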
{\if\edition\racketEd % The type checker also ensures that only defined type variables appear in type annotations. The \code{check\_well\_formed} function for which the definition is shown in figure~\ref{fig:well-formed-types} recursively inspects a type, making sure that each type variable has been defined. % \fi} \begin{figure}[tbp] \begin{tcolorbox}[colback=white] {\if\edition\racketEd \begin{lstlisting}[basicstyle=\ttfamily\scriptsize] (define type-check-poly-class (class type-check-Llambda-class (super-new) (inherit check-type-equal?) (define/override (type-check-apply env e1 es) (define-values (e^ ty) ((type-check-exp env) e1)) (define-values (es^ ty*) (for/lists (es^ ty*) ([e (in-list es)]) ((type-check-exp env) e))) (match ty [`(,ty^* ... -> ,rt) (for ([arg-ty ty*] [param-ty ty^*]) (check-type-equal? arg-ty param-ty (Apply e1 es))) (values e^ es^ rt)] [`(All ,xs (,tys ... -> ,rt)) (define env^ (append (for/list ([x xs]) (cons x 'Type)) env)) (define env^^ (for/fold ([env^^ env^]) ([arg-ty ty*] [param-ty tys]) (match_types env^^ param-ty arg-ty))) (define targs (for/list ([x xs]) (match (dict-ref env^^ x (lambda () #f)) [#f (error 'type-check "type variable ~a not deduced\nin ~v" x (Apply e1 es))] [ty ty]))) (values (Inst e^ ty targs) es^ (substitute_type env^^ rt))] [else (error 'type-check "expected a function, not ~a" ty)])) (define/override ((type-check-exp env) e) (match e [(Lambda `([,xs : ,Ts] ...) rT body) (for ([T Ts]) ((check_well_formed env) T)) ((check_well_formed env) rT) ((super type-check-exp env) e)] [(HasType e1 ty) ((check_well_formed env) ty) ((super type-check-exp env) e)] [else ((super type-check-exp env) e)])) (define/override ((type-check-def env) d) (verbose 'type-check "poly/def" d) (match d [(Generic ts (Def f (and p:t* (list `[,xs : ,ps] ...)) rt info body)) (define ts-env (for/list ([t ts]) (cons t 'Type))) (for ([p ps]) ((check_well_formed ts-env) p)) ((check_well_formed ts-env) rt) (define new-env (append ts-env (map cons xs ps) env)) (define-values (body^ ty^) ((type-check-exp new-env) body)) (check-type-equal? ty^ rt body) (Generic ts (Def f p:t* rt info body^))] [else ((super type-check-def env) d)])) (define/override (type-check-program p) (match p [(Program info body) (type-check-program (ProgramDefsExp info '() body))] [(ProgramDefsExp info ds body) (define ds^ (combine-decls-defs ds)) (define new-env (for/list ([d ds^]) (cons (def-name d) (fun-def-type d)))) (define ds^^ (for/list ([d ds^]) ((type-check-def new-env) d))) (define-values (body^ ty) ((type-check-exp new-env) body)) (check-type-equal? 
ty 'Integer body) (ProgramDefsExp info ds^^ body^)])) )) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting}[basicstyle=\ttfamily\small] def type_check_exp(self, e, env): match e: case Call(Name(f), args) if f in builtin_functions: return super().type_check_exp(e, env) case Call(func, args): func_t = self.type_check_exp(func, env) func.has_type = func_t match func_t: case AllType(ps, FunctionType(p_tys, rt)): for arg in args: arg.has_type = self.type_check_exp(arg, env) arg_tys = [arg.has_type for arg in args] deduced = {} for (p, a) in zip(p_tys, arg_tys): self.match_types(p, a, deduced, e) return self.substitute_type(rt, deduced) case _: return super().type_check_exp(e, env) case _: return super().type_check_exp(e, env) def type_check(self, p): match p: case Module(body): env = {} for s in body: match s: case FunctionDef(name, params, bod, dl, returns, comment): params_t = [t for (x,t) in params] ty_params = set() for t in params_t: ty_params |$\mid$|= self.generic_variables(t) ty = FunctionType(params_t, returns) if len(ty_params) > 0: ty = AllType(list(ty_params), ty) env[name] = ty self.check_stmts(body, IntType(), env) case _: raise Exception('type_check: unexpected ' + repr(p)) \end{lstlisting} \fi} \end{tcolorbox} \caption{Type checker for the \LangPoly{} language.} \label{fig:type-check-Lpoly} \end{figure} \begin{figure}[tbp] \begin{tcolorbox}[colback=white] {\if\edition\racketEd \begin{lstlisting}[basicstyle=\ttfamily\scriptsize] (define/override (type-equal? t1 t2) (match* (t1 t2) [(`(All ,xs ,T1) `(All ,ys ,T2)) (define env (map cons xs ys)) (type-equal? (substitute_type env T1) T2)] [(other wise) (super type-equal? t1 t2)])) (define/public (match_types env pt at) (match* (pt at) [('Integer 'Integer) env] [('Boolean 'Boolean) env] [('Void 'Void) env] [('Any 'Any) env] [(`(Vector ,pts ...) `(Vector ,ats ...)) (for/fold ([env^ env]) ([pt1 pts] [at1 ats]) (match_types env^ pt1 at1))] [(`(,pts ... -> ,prt) `(,ats ... -> ,art)) (define env^ (match_types env prt art)) (for/fold ([env^^ env^]) ([pt1 pts] [at1 ats]) (match_types env^^ pt1 at1))] [(`(All ,pxs ,pt1) `(All ,axs ,at1)) (define env^ (append (map cons pxs axs) env)) (match_types env^ pt1 at1)] [((? symbol? x) at) (match (dict-ref env x (lambda () #f)) [#f (error 'type-check "undefined type variable ~a" x)] ['Type (cons (cons x at) env)] [t^ (check-type-equal? at t^ 'matching) env])] [(other wise) (error 'type-check "mismatch ~a != a" pt at)])) (define/public (substitute_type env pt) (match pt ['Integer 'Integer] ['Boolean 'Boolean] ['Void 'Void] ['Any 'Any] [`(Vector ,ts ...) `(Vector ,@(for/list ([t ts]) (substitute_type env t)))] [`(,ts ... -> ,rt) `(,@(for/list ([t ts]) (substitute_type env t)) -> ,(substitute_type env rt))] [`(All ,xs ,t) `(All ,xs ,(substitute_type (append (map cons xs xs) env) t))] [(? symbol? x) (dict-ref env x)] [else (error 'type-check "expected a type not ~a" pt)])) (define/public (combine-decls-defs ds) (match ds ['() '()] [`(,(Decl name type) . (,(Def f params _ info body) . ,ds^)) (unless (equal? name f) (error 'type-check "name mismatch, ~a != ~a" name f)) (match type [`(All ,xs (,ps ... -> ,rt)) (define params^ (for/list ([x params] [T ps]) `[,x : ,T])) (cons (Generic xs (Def name params^ rt info body)) (combine-decls-defs ds^))] [`(,ps ... 
-> ,rt) (define params^ (for/list ([x params] [T ps]) `[,x : ,T])) (cons (Def name params^ rt info body) (combine-decls-defs ds^))] [else (error 'type-check "expected a function type, not ~a" type) ])] [`(,(Def f params rt info body) . ,ds^) (cons (Def f params rt info body) (combine-decls-defs ds^))])) \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting}[basicstyle=\ttfamily\scriptsize] def match_types(self, param_ty, arg_ty, deduced, e): match (param_ty, arg_ty): case (GenericVar(id), _): if id in deduced: self.check_type_equal(arg_ty, deduced[id], e) else: deduced[id] = arg_ty case (AllType(ps, ty), AllType(arg_ps, arg_ty)): rename = {ap:p for (ap,p) in zip(arg_ps, ps)} new_arg_ty = self.substitute_type(arg_ty, rename) self.match_types(ty, new_arg_ty, deduced, e) case (TupleType(ps), TupleType(ts)): for (p, a) in zip(ps, ts): self.match_types(p, a, deduced, e) case (ListType(p), ListType(a)): self.match_types(p, a, deduced, e) case (FunctionType(pps, prt), FunctionType(aps, art)): for (pp, ap) in zip(pps, aps): self.match_types(pp, ap, deduced, e) self.match_types(prt, art, deduced, e) case (IntType(), IntType()): pass case (BoolType(), BoolType()): pass case _: raise Exception('mismatch: ' + str(param_ty) + '\n!= ' + str(arg_ty)) def substitute_type(self, ty, var_map): match ty: case GenericVar(id): return var_map[id] case AllType(ps, ty): new_map = copy.deepcopy(var_map) for p in ps: new_map[p] = GenericVar(p) return AllType(ps, self.substitute_type(ty, new_map)) case TupleType(ts): return TupleType([self.substitute_type(t, var_map) for t in ts]) case ListType(ty): return ListType(self.substitute_type(ty, var_map)) case FunctionType(pts, rt): return FunctionType([self.substitute_type(p, var_map) for p in pts], self.substitute_type(rt, var_map)) case IntType(): return IntType() case BoolType(): return BoolType() case _: raise Exception('substitute_type: unexpected ' + repr(ty)) def check_type_equal(self, t1, t2, e): match (t1, t2): case (AllType(ps1, ty1), AllType(ps2, ty2)): rename = {p2: GenericVar(p1) for (p1,p2) in zip(ps1,ps2)} return self.check_type_equal(ty1, self.substitute_type(ty2, rename), e) case (_, _): return super().check_type_equal(t1, t2, e) \end{lstlisting} \fi} \end{tcolorbox} \caption{Auxiliary functions for type checking \LangPoly{}.} \label{fig:type-check-Lpoly-aux} \end{figure} {\if\edition\racketEd \begin{figure}[tbp] \begin{tcolorbox}[colback=white] \begin{lstlisting} (define/public ((check_well_formed env) ty) (match ty ['Integer (void)] ['Boolean (void)] ['Void (void)] [(? symbol? a) (match (dict-ref env a (lambda () #f)) ['Type (void)] [else (error 'type-check "undefined type variable ~a" a)])] [`(Vector ,ts ...) (for ([t ts]) ((check_well_formed env) t))] [`(,ts ... -> ,t) (for ([t ts]) ((check_well_formed env) t)) ((check_well_formed env) t)] [`(All ,xs ,t) (define env^ (append (for/list ([x xs]) (cons x 'Type)) env)) ((check_well_formed env^) t)] [else (error 'type-check "unrecognized type ~a" ty)])) \end{lstlisting} \end{tcolorbox} \caption{Well-formed types.} \label{fig:well-formed-types} \end{figure} \fi} % TODO: interpreter for R'_10 \clearpage \section{Compiling Generics} \label{sec:compiling-poly} Broadly speaking, there are four approaches to compiling generics, as follows: \begin{description} \item[Monomorphization] generates a different version of a generic function for each set of type arguments with which it is used, producing type-specialized code. 
This approach results in the most efficient code but requires whole-program
compilation (no separate compilation) and may increase code size.
Unfortunately, monomorphization is incompatible with first-class generics
because it is not always possible to determine which generic functions are
used with which type arguments during compilation. (It can be done at runtime
with just-in-time compilation.) Monomorphization is used to compile C++
templates~\citep{stroustrup88:_param_types} and generic functions in
NESL~\citep{Blelloch:1993aa} and ML~\citep{Weeks:2006aa}.

\item[Uniform representation] generates one version of each generic function
and requires all values to have a common \emph{boxed} format, such as the
tagged values of type \CANYTY{} in \LangAny{}. Both generic and monomorphic
code is compiled similarly to code in a dynamically typed language (like
\LangDyn{}), in which primitive operators require their arguments to be
projected from \CANYTY{} and their results to be injected into \CANYTY{}. (In
object-oriented languages, the projection is accomplished via virtual method
dispatch.) The uniform representation approach is compatible with separate
compilation and with first-class generics. However, it produces the least
efficient code because it introduces overhead in the entire program. This
approach is used in Java~\citep{Bracha:1998fk},
CLU~\citep{liskov79:_clu_ref,Liskov:1993dk}, and some implementations of
ML~\citep{Cardelli:1984aa,Appel:1987aa}.

\item[Mixed representation] generates one version of each generic function,
using a boxed representation for type variables. However, monomorphic code is
compiled as usual (as in \LangLam{}), and conversions are performed at the
boundaries between monomorphic code and polymorphic code (for example, when a
generic function is instantiated and called). This approach is compatible with
separate compilation and first-class generics and maintains efficiency in
monomorphic code. The trade-off is increased overhead at the boundary between
monomorphic and generic code. This approach is used in implementations of
ML~\citep{Leroy:1992qb} and Java, starting in Java 5 with the addition of
autoboxing.

\item[Type passing] uses the unboxed representation in both monomorphic and
generic code. Each generic function is compiled to a single function with
extra parameters that describe the type arguments. The type information is
used by the generated code to determine how to access the unboxed values at
runtime. This approach is used in implementations of
Napier88~\citep{Morrison:1991aa} and ML~\citep{Harper:1995um}. Type passing is
compatible with separate compilation and first-class generics and maintains
efficiency for monomorphic code. There is runtime overhead in polymorphic code
from dispatching on type information.
\end{description}

In this chapter we use the mixed representation approach, partly because of
its favorable attributes and partly because it is straightforward to implement
using the tools that we have already built to support gradual typing. The work
of compiling generic functions is performed in two passes, \code{resolve} and
\code{erase\_types}, which we discuss next. The output of \code{erase\_types}
is \LangCast{} (section~\ref{sec:gradual-insert-casts}), so the rest of the
compilation is handled by the compiler of chapter~\ref{ch:Lgrad}.

\section{Resolve Instantiation}
\label{sec:generic-resolve}

Recall that the type checker for \LangPoly{} deduces the type arguments at
call sites to a generic function.
The purpose of the \code{resolve} pass is to turn this implicit instantiation into an explicit one, by adding \code{inst} nodes to the syntax of the intermediate language. An \code{inst} node records the mapping of type parameters to type arguments. The semantics of the \code{inst} node is to instantiate the result of its first argument, a generic function, to produce a monomorphic function. However, because the interpreter never analyzes type annotations, instantiation can be a no-op and simply return the generic function. % The output language of the \code{resolve} pass is \LangInst{}, for which the definition is shown in figure~\ref{fig:Lpoly-prime-syntax}. {\if\edition\racketEd The \code{resolve} pass combines the type declaration and polymorphic function into a single definition, using the \code{Poly} form, to make polymorphic functions more convenient to process in the next pass of the compiler. \fi} \newcommand{\LinstASTRacket}{ \begin{array}{lcl} \Type &::=& \LP\key{All}~\LP\Var\ldots\RP~ \Type\RP \MID \Var \\ \Exp &::=& \INST{\Exp}{\Type}{\LP\Type\ldots\RP} \\ \Def &::=& \gray{ \DEF{\Var}{\LP\LS\Var \key{:} \Type\RS \ldots\RP}{\Type}{\code{'()}}{\Exp} } \\ &\MID& \LP\key{Poly}~\LP\Var\ldots\RP~ \DEF{\Var}{\LP\LS\Var \key{:} \Type\RS \ldots\RP}{\Type}{\code{'()}}{\Exp}\RP \end{array} } \newcommand{\LinstASTPython}{ \begin{array}{lcl} \Type &::=& \key{AllType}\LP\LS\Var\ldots\RS, \Type\RP \MID \Var \\ \Exp &::=& \INST{\Exp}{\LC\Var\key{:}\Type\ldots\RC} \end{array} } \begin{figure}[tp] \centering \begin{tcolorbox}[colback=white] \small {\if\edition\racketEd \[ \begin{array}{l} \gray{\LintOpAST} \\ \hline \gray{\LvarASTRacket{}} \\ \hline \gray{\LifASTRacket{}} \\ \hline \gray{\LwhileASTRacket{}} \\ \hline \gray{\LtupASTRacket{}} \\ \hline \gray{\LfunASTRacket} \\ \hline \gray{\LlambdaASTRacket} \\ \hline \LinstASTRacket \\ \begin{array}{lcl} \LangInst{} &::=& \PROGRAMDEFSEXP{\code{'()}}{\LP\Def\ldots\RP}{\Exp} \end{array} \end{array} \] \fi} {\if\edition\pythonEd\pythonColor \[ \begin{array}{l} \gray{\LintASTPython} \\ \hline \gray{\LvarASTPython{}} \\ \hline \gray{\LifASTPython{}} \\ \hline \gray{\LwhileASTPython{}} \\ \hline \gray{\LtupASTPython{}} \\ \hline \gray{\LfunASTPython} \\ \hline \gray{\LlambdaASTPython} \\ \hline \LinstASTPython \\ \begin{array}{lcl} \LangInst{} &::=& \PROGRAM{}{\LS \Def \ldots \Stmt \ldots \RS} \end{array} \end{array} \] \fi} \end{tcolorbox} \caption{The abstract syntax of \LangInst{}, extending \LangLam{} (figure~\ref{fig:Llam-syntax}).} \label{fig:Lpoly-prime-syntax} \end{figure} The output of the \code{resolve} pass on the generic \code{map} example is listed in figure~\ref{fig:map-resolve}. Note that the use of \code{map} is wrapped in an \code{inst} node, with the parameter \code{T} chosen to be \racket{\code{Integer}}\python{\code{int}}. 
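{\if\edition\pythonEd\pythonColor
%
The interesting case in \code{resolve} is a call whose operator has an
\code{All} type. The following is a minimal sketch of that case; it is only an
illustration (not a reference implementation) and assumes the AST classes of
the support code, the \code{has\_type} annotations that the type checker of
figure~\ref{fig:type-check-Lpoly} attaches to the operator and arguments of
such a call, and the \code{match\_types} function of
figure~\ref{fig:type-check-Lpoly-aux}.
\begin{lstlisting}
def resolve_exp(self, e):
    match e:
        case Call(Name(f), args) if f in builtin_functions:
            return Call(Name(f), [self.resolve_exp(arg) for arg in args])
        case Call(func, args):
            new_func = self.resolve_exp(func)
            new_args = [self.resolve_exp(arg) for arg in args]
            match func.has_type:
                case AllType(ps, FunctionType(p_tys, rt)):
                    # deduce the type arguments, as the type checker did
                    deduced = {}
                    for (p, a) in zip(p_tys, [arg.has_type for arg in args]):
                        self.match_types(p, a, deduced, e)
                    return Call(Inst(new_func, deduced), new_args)
                case _:
                    return Call(new_func, new_args)
        case _:
            ...  # resolve the remaining expression forms recursively
\end{lstlisting}
%
\fi}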
\begin{figure}[tbp]
% poly_test_2.rkt
\begin{tcolorbox}[colback=white]
{\if\edition\racketEd
\begin{lstlisting}
(poly (T)
  (define (map [f : (T -> T)] [v : (Vector T T)]) : (Vector T T)
    (vector (f (vector-ref v 0)) (f (vector-ref v 1)))))

(define (inc [x : Integer]) : Integer (+ x 1))

(vector-ref ((inst map
                   (All (T) ((T -> T) (Vector T T) -> (Vector T T)))
                   (Integer))
             inc (vector 0 41)) 1)
\end{lstlisting}
\fi}
{\if\edition\pythonEd\pythonColor
\begin{lstlisting}
def map(f : Callable[[T],T], tup : tuple[T,T]) -> tuple[T,T]:
    return (f(tup[0]), f(tup[1]))

def add1(x : int) -> int:
    return x + 1

t = inst(map, {T: int})(add1, (0, 41))
print(t[1])
\end{lstlisting}
\fi}
\end{tcolorbox}
\caption{Output of the \code{resolve} pass on the \code{map} example.}
\label{fig:map-resolve}
\end{figure}

\section{Erase Generic Types}
\label{sec:erase_types}

We use the \CANYTY{} type presented in chapter~\ref{ch:Ldyn} to represent type
variables. For example, figure~\ref{fig:map-erase} shows the output of the
\code{erase\_types} pass on the generic \code{map} (figure~\ref{fig:map-poly}).
The occurrences of type parameter \code{T} are replaced by \CANYTY{}, and the
generic \code{All} types are removed from the type of \code{map}.

\begin{figure}[tbp]
\begin{tcolorbox}[colback=white]
{\if\edition\racketEd
\begin{lstlisting}
(define (map [f : (Any -> Any)] [v : (Vector Any Any)]) : (Vector Any Any)
  (vector (f (vector-ref v 0)) (f (vector-ref v 1))))

(define (inc [x : Integer]) : Integer (+ x 1))

(vector-ref ((cast map
                   ((Any -> Any) (Vector Any Any) -> (Vector Any Any))
                   ((Integer -> Integer) (Vector Integer Integer)
                    -> (Vector Integer Integer)))
             inc (vector 0 41)) 1)
\end{lstlisting}
\fi}
{\if\edition\pythonEd\pythonColor
\begin{lstlisting}
def map(f : Callable[[Any],Any], tup : tuple[Any,Any]) -> tuple[Any,Any]:
    return (f(tup[0]), f(tup[1]))

def add1(x : int) -> int:
    return (x + 1)

def main() -> int:
    t = cast(map, |$T_1$|, |$T_2$|)(add1, (0, 41))
    print(t[1])
    return 0
\end{lstlisting}
{\small
where\\
$T_1 = $ \code{Callable[[Callable[[Any], Any],tuple[Any,Any]], tuple[Any,Any]]}\\
$T_2 = $ \code{Callable[[Callable[[int], int],tuple[int,int]], tuple[int,int]]}
}
\fi}
\end{tcolorbox}
\caption{The generic \code{map} example after type erasure.}
\label{fig:map-erase}
\end{figure}

This process of type erasure creates a challenge at points of instantiation.
For example, consider the instantiation of \code{map} shown in
figure~\ref{fig:map-resolve}. The type of \code{map} is
%
{\if\edition\racketEd
\begin{lstlisting}
(All (T) ((T -> T) (Vector T T) -> (Vector T T)))
\end{lstlisting}
\fi}
{\if\edition\pythonEd\pythonColor
\begin{lstlisting}
All[[T], Callable[[Callable[[T], T], tuple[T, T]], tuple[T, T]]]
\end{lstlisting}
\fi}
%
and it is instantiated to
%
{\if\edition\racketEd
\begin{lstlisting}
((Integer -> Integer) (Vector Integer Integer) -> (Vector Integer Integer))
\end{lstlisting}
\fi}
{\if\edition\pythonEd\pythonColor
\begin{lstlisting}
Callable[[Callable[[int], int], tuple[int, int]], tuple[int, int]]
\end{lstlisting}
\fi}
%
After erasure, the type of \code{map} is
%
{\if\edition\racketEd
\begin{lstlisting}
((Any -> Any) (Vector Any Any) -> (Vector Any Any))
\end{lstlisting}
\fi}
{\if\edition\pythonEd\pythonColor
\begin{lstlisting}
Callable[[Callable[[Any], Any], tuple[Any, Any]], tuple[Any, Any]]
\end{lstlisting}
\fi}
%
but we need to convert it to the instantiated type. This is easy to do in the
language \LangCast{} with a single \code{cast}.
In the example shown in figure~\ref{fig:map-erase}, the instantiation of \code{map} has been compiled to a \code{cast} from the type of \code{map} to the instantiated type. The source and the target type of a cast must be consistent (figure~\ref{fig:consistent}), which indeed is the case because both the source and target are obtained from the same generic type of \code{map}, replacing the type parameters with \CANYTY{} in the former and with the deduced type arguments in the latter. (Recall that the \CANYTY{} type is consistent with any type.) To implement the \code{erase\_types} pass, we first recommend defining a recursive function that translates types, named \code{erase\_type}. It replaces type variables with \CANYTY{} as follows. % {\if\edition\racketEd \begin{lstlisting} |$T$| |$\Rightarrow$| Any \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} GenericVar(|$T$|) |$\Rightarrow$| Any \end{lstlisting} \fi} % \noindent The \code{erase\_type} function also removes the generic \code{All} types. % {\if\edition\racketEd \begin{lstlisting} (All |$xs$| |$T_1$|) |$\Rightarrow$| |$T'_1$| \end{lstlisting} \fi} {\if\edition\pythonEd\pythonColor \begin{lstlisting} AllType(|$xs$|, |$T_1$|) |$\Rightarrow$| |$T'_1$| \end{lstlisting} \fi} where $T'_1$ is the result of applying \code{erase\_type} to $T_1$. % In this compiler pass, apply the \code{erase\_type} function to all the type annotations in the program. Regarding the translation of expressions, the case for \code{Inst} is the interesting one. We translate it into a \code{Cast}, as shown next. The type of the subexpression $e$ is a generic type of the form \racket{$\LP\key{All}~\itm{xs}~T\RP$}\python{$\key{AllType}\LP\itm{xs}, T\RP$}. The source type of the cast is the erasure of $T$, the type $T_s$. % {\if\edition\racketEd % The target type $T_t$ is the result of substituting the argument types $ts$ for the type parameters $xs$ in $T$ and then performing type erasure. % \begin{lstlisting} (Inst |$e$| (All |$xs$| |$T$|) |$ts$|) |$\Rightarrow$| (Cast |$e'$| |$T_s$| |$T_t$|) \end{lstlisting} % where $T_t = \LP\code{erase\_type}~\LP\code{substitute\_type}~s~T\RP\RP$, and $s = \LP\code{map}~\code{cons}~xs~ts\RP$. \fi} {\if\edition\pythonEd\pythonColor % The target type $T_t$ is the result of substituting the deduced argument types $d$ in $T$ and then performing type erasure. % \begin{lstlisting} Inst(|$e$|, |$d$|) |$\Rightarrow$| Cast(|$e'$|, |$T_s$|, |$T_t$|) \end{lstlisting} % where $T_t = \code{erase\_type}\LP\code{substitute\_type}\LP d, T\RP\RP$. \fi} Finally, each generic function is translated to a regular function in which type erasure has been applied to all the type annotations and the body. %% \begin{lstlisting} %% (Poly |$ts$| (Def |$f$| ([|$x_1$| : |$T_1$|] |$\ldots$|) |$T_r$| |$\itm{info}$| |$e$|)) %% |$\Rightarrow$| %% (Def |$f$| ([|$x_1$| : |$T'_1$|] |$\ldots$|) |$T'_r$| |$\itm{info}$| |$e'$|) %% \end{lstlisting} \begin{exercise}\normalfont\normalsize Implement a compiler for the polymorphic language \LangPoly{} by extending and adapting your compiler for \LangGrad{}. Create six new test programs that use polymorphic functions. Some of them should make use of first-class generics. 
\end{exercise} \begin{figure}[tbp] \begin{tcolorbox}[colback=white] {\if\edition\racketEd \begin{tikzpicture}[baseline=(current bounding box.center),scale=0.85] \node (Lpoly) at (0,4) {\large \LangPoly{}}; \node (Lpolyp) at (4,4) {\large \LangInst{}}; \node (Lgradualp) at (8,4) {\large \LangCast{}}; \node (Llambdapp) at (12,4) {\large \LangProxy{}}; \node (Llambdaproxy) at (12,2) {\large \LangPVec{}}; \node (Llambdaproxy-2) at (8,2) {\large \LangPVec{}}; \node (Llambdaproxy-3) at (4,2) {\large \LangPVec{}}; \node (Llambdaproxy-4) at (0,2) {\large \LangPVecFunRef{}}; \node (Llambdaproxy-5) at (0,0) {\large \LangPVecFunRef{}}; \node (F1-1) at (4,0) {\large \LangPVecFunRef{}}; \node (F1-2) at (8,0) {\large \LangPVecFunRef{}}; \node (F1-3) at (12,0) {\large \LangPVecFunRef{}}; \node (F1-4) at (12,-2) {\large \LangPVecAlloc{}}; \node (F1-5) at (8,-2) {\large \LangPVecAlloc{}}; \node (F1-6) at (4,-2) {\large \LangPVecAlloc{}}; \node (C3-2) at (0,-2) {\large \LangCLoopPVec{}}; \node (x86-2) at (0,-4) {\large \LangXIndCallVar{}}; \node (x86-2-1) at (0,-6) {\large \LangXIndCallVar{}}; \node (x86-2-2) at (4,-6) {\large \LangXIndCallVar{}}; \node (x86-3) at (4,-4) {\large \LangXIndCallVar{}}; \node (x86-4) at (8,-4) {\large \LangXIndCall{}}; \node (x86-5) at (8,-6) {\large \LangXIndCall{}}; \path[->,bend left=15] (Lpoly) edge [above] node {\ttfamily\footnotesize resolve} (Lpolyp); \path[->,bend left=15] (Lpolyp) edge [above] node {\ttfamily\footnotesize erase\_types} (Lgradualp); \path[->,bend left=15] (Lgradualp) edge [above] node {\ttfamily\footnotesize lower\_casts} (Llambdapp); \path[->,bend left=15] (Llambdapp) edge [left] node {\ttfamily\footnotesize differentiate\_proxies} (Llambdaproxy); \path[->,bend left=15] (Llambdaproxy) edge [below] node {\ttfamily\footnotesize shrink} (Llambdaproxy-2); \path[->,bend right=15] (Llambdaproxy-2) edge [above] node {\ttfamily\footnotesize uniquify} (Llambdaproxy-3); \path[->,bend right=15] (Llambdaproxy-3) edge [above] node {\ttfamily\footnotesize reveal\_functions} (Llambdaproxy-4); \path[->,bend right=15] (Llambdaproxy-4) edge [right] node {\ttfamily\footnotesize reveal\_casts} (Llambdaproxy-5); \path[->,bend right=15] (Llambdaproxy-5) edge [below] node {\ttfamily\footnotesize convert\_assignments} (F1-1); \path[->,bend left=15] (F1-1) edge [above] node {\ttfamily\footnotesize convert\_to\_closures} (F1-2); \path[->,bend left=15] (F1-2) edge [above] node {\ttfamily\footnotesize limit\_functions} (F1-3); \path[->,bend left=15] (F1-3) edge [left] node {\ttfamily\footnotesize expose\_allocation} (F1-4); \path[->,bend left=15] (F1-4) edge [below] node {\ttfamily\footnotesize uncover\_get!} (F1-5); \path[->,bend right=15] (F1-5) edge [above] node {\ttfamily\footnotesize remove\_complex\_operands} (F1-6); \path[->,bend right=15] (F1-6) edge [above] node {\ttfamily\footnotesize explicate\_control} (C3-2); \path[->,bend right=15] (C3-2) edge [right] node {\ttfamily\footnotesize select\_instructions} (x86-2); \path[->,bend right=15] (x86-2) edge [right] node {\ttfamily\footnotesize uncover\_live} (x86-2-1); \path[->,bend right=15] (x86-2-1) edge [below] node {\ttfamily\footnotesize build\_interference} (x86-2-2); \path[->,bend right=15] (x86-2-2) edge [right] node {\ttfamily\footnotesize allocate\_registers} (x86-3); \path[->,bend left=15] (x86-3) edge [above] node {\ttfamily\footnotesize patch\_instructions} (x86-4); \path[->,bend left=15] (x86-4) edge [right] node {\ttfamily\footnotesize prelude\_and\_conclusion} (x86-5); \end{tikzpicture} \fi} 
{\if\edition\pythonEd\pythonColor \begin{tikzpicture}[baseline=(current bounding box.center),scale=0.85] \node (Lgradual) at (0,4) {\large \LangPoly{}}; \node (Lgradual2) at (4,4) {\large \LangPoly{}}; \node (Lgradual3) at (8,4) {\large \LangPoly{}}; \node (Lgradual4) at (12,4) {\large \LangPoly{}}; \node (Lgradualr) at (12,2) {\large \LangInst{}}; \node (Llambdapp) at (8,2) {\large \LangCast{}}; \node (Llambdaproxy-4) at (4,2) {\large \LangPVec{}}; \node (Llambdaproxy-5) at (0,2) {\large \LangPVec{}}; \node (F1-1) at (0,0) {\large \LangPVec{}}; \node (F1-2) at (4,0) {\large \LangPVec{}}; \node (F1-3) at (8,0) {\large \LangPVec{}}; \node (F1-5) at (12,0) {\large \LangPVecAlloc{}}; \node (F1-6) at (12,-2) {\large \LangPVecAlloc{}}; \node (C3-2) at (0,-2) {\large \LangCLoopPVec{}}; \node (x86-2) at (0,-4) {\large \LangXIndCallVar{}}; \node (x86-3) at (4,-4) {\large \LangXIndCallVar{}}; \node (x86-4) at (8,-4) {\large \LangXIndCall{}}; \node (x86-5) at (12,-4) {\large \LangXIndCall{}}; \path[->,bend left=15] (Lgradual) edge [above] node {\ttfamily\footnotesize shrink} (Lgradual2); \path[->,bend left=15] (Lgradual2) edge [above] node {\ttfamily\footnotesize uniquify} (Lgradual3); \path[->,bend left=15] (Lgradual3) edge [above] node {\ttfamily\footnotesize reveal\_functions} (Lgradual4); \path[->,bend left=15] (Lgradual4) edge [left] node {\ttfamily\footnotesize resolve} (Lgradualr); \path[->,bend left=15] (Lgradualr) edge [below] node {\ttfamily\footnotesize erase\_types} (Llambdapp); \path[->,bend right=15] (Llambdapp) edge [above] node {\ttfamily\footnotesize differentiate\_proxies} (Llambdaproxy-4); \path[->,bend right=15] (Llambdaproxy-4) edge [above] node {\ttfamily\footnotesize reveal\_casts} (Llambdaproxy-5); \path[->,bend right=15] (Llambdaproxy-5) edge [right] node {\ttfamily\footnotesize convert\_assignments} (F1-1); \path[->,bend right=15] (F1-1) edge [below] node {\ttfamily\footnotesize convert\_to\_closures} (F1-2); \path[->,bend right=15] (F1-2) edge [below] node {\ttfamily\footnotesize limit\_functions} (F1-3); \path[->,bend left=15] (F1-3) edge [above] node {\ttfamily\footnotesize expose\_allocation} (F1-5); \path[->,bend left=15] (F1-5) edge [left] node {\ttfamily\footnotesize remove\_complex\_operands} (F1-6); \path[->,bend left=5] (F1-6) edge [below] node {\ttfamily\footnotesize explicate\_control} (C3-2); \path[->,bend right=15] (C3-2) edge [right] node {\ttfamily\footnotesize select\_instructions} (x86-2); \path[->,bend right=15] (x86-2) edge [below] node {\ttfamily\footnotesize assign\_homes} (x86-3); \path[->,bend right=15] (x86-3) edge [below] node {\ttfamily\footnotesize patch\_instructions} (x86-4); \path[->,bend left=15] (x86-4) edge [above] node {\ttfamily\footnotesize prelude\_and\_conclusion} (x86-5); \end{tikzpicture} \fi} \end{tcolorbox} \caption{Diagram of the passes for \LangPoly{} (generics).} \label{fig:Lpoly-passes} \end{figure} Figure~\ref{fig:Lpoly-passes} provides an overview of the passes needed to compile \LangPoly{}. % TODO: challenge problem: specialization of instantiations % Further Reading %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \clearpage \appendix \chapter{Appendix} \setcounter{footnote}{0} {\if\edition\racketEd \section{Interpreters} \label{appendix:interp} \index{subject}{interpreter} We provide interpreters for each of the source languages \LangInt{}, \LangVar{}, $\ldots$ in the files \code{interp-Lint.rkt}, \code{interp-Lvar.rkt}, and so on. 
The interpreters for the intermediate languages \LangCVar{} and \LangCIf{} are in \code{interp-Cvar.rkt} and \code{interp-C1.rkt}. The interpreters for \LangCVec{}, \LangCFun{}, pseudo-x86, and x86 are in the \key{interp.rkt} file. \section{Utility Functions} \label{appendix:utilities} The utility functions described in this section are in the \key{utilities.rkt} file of the support code. \paragraph{\code{interp-tests}} This function runs the compiler passes and the interpreters on each of the specified tests to check whether each pass is correct. The \key{interp-tests} function has the following parameters: \begin{description} \item[name (a string)] A name to identify the compiler. \item[typechecker] A function of exactly one argument that either raises an error using the \code{error} function when it encounters a type error, or returns \code{\#f} when it encounters a type error. If there is no type error, the type checker returns the program. \item[passes] A list with one entry per pass. An entry is a list consisting of four things: \begin{enumerate} \item a string giving the name of the pass; \item the function that implements the pass (a translator from AST to AST); \item a function that implements the interpreter (a function from AST to result value) for the output language; and, \item a type checker for the output language. Type checkers for all the $\Lang{}$ and $\CLang{}$ languages are provided in the support code. For example, the type checkers for \LangVar{} and \LangCVar{} are in \code{type-check-Lvar.rkt} and \code{type-check-Cvar.rkt}. The type checker entry is optional. The support code does not provide type checkers for the x86 languages. \end{enumerate} \item[source-interp] An interpreter for the source language. The interpreters from appendix~\ref{appendix:interp} make a good choice. \item[test-family (a string)] For example, \code{"var"} or \code{"cond"}. \item[tests] A list of test numbers that specifies which tests to run (explained next). \end{description} % The \key{interp-tests} function assumes that the subdirectory \key{tests} has a collection of Racket programs whose names all start with the family name, followed by an underscore and then the test number, and ending with the file extension \key{.rkt}. Also, for each test program that calls \code{read} one or more times, there is a file with the same name except that the file extension is \key{.in}, which provides the input for the Racket program. If the test program is expected to fail type checking, then there should be an empty file of the same name with extension \key{.tyerr}. \paragraph{\code{compiler-tests}} This function runs the compiler passes to generate x86 (a \key{.s} file) and then runs the GNU C compiler (gcc) to generate machine code. It runs the machine code and checks that the output is $42$. The parameters to the \code{compiler-tests} function are similar to those of the \code{interp-tests} function, and they consist of \begin{itemize} \item a compiler name (a string), \item a type checker, \item description of the passes, \item name of a test-family, and \item a list of test numbers. \end{itemize} \paragraph{\code{compile-file}} This function takes a description of the compiler passes (see the comment for \key{interp-tests}) and returns a function that, given a program file name (a string ending in \key{.rkt}), applies all the passes and writes the output to a file whose name is the same as the program file name with extension \key{.rkt} replaced by \key{.s}. 
\paragraph{\code{read-program}} This function takes a file path and parses that file (it must be a Racket program) into an abstract syntax tree. \paragraph{\code{parse-program}} This function takes an S-expression representation of an abstract syntax tree and converts it into the struct-based representation. \paragraph{\code{assert}} This function takes two parameters, a string (\code{msg}) and Boolean (\code{bool}), and displays the message \key{msg} if the Boolean \key{bool} is false. \paragraph{\code{lookup}} % remove discussion of lookup? -Jeremy This function takes a key and an alist and returns the first value that is associated with the given key, if there is one. If not, an error is triggered. The alist may contain both immutable pairs (built with \key{cons}) and mutable pairs (built with \key{mcons}). %The \key{map2} function ... \fi} %\racketEd \section{x86 Instruction Set Quick Reference} \label{sec:x86-quick-reference} \index{subject}{x86} Table~\ref{tab:x86-instr} lists some x86 instructions and what they do. We write $A \to B$ to mean that the value of $A$ is written into location $B$. Address offsets are given in bytes. The instruction arguments $A, B, C$ can be immediate constants (such as \code{\$4}), registers (such as \code{\%rax}), or memory references (such as \code{-4(\%ebp)}). Most x86 instructions allow at most one memory reference per instruction. Other operands must be immediates or registers. \begin{table}[tbp] \captionabove{Quick reference for the x86 instructions used in this book.} \label{tab:x86-instr} \centering \begin{tabular}{l|l} \textbf{Instruction} & \textbf{Operation} \\ \hline \texttt{addq} $A$, $B$ & $A + B \to B$\\ \texttt{negq} $A$ & $- A \to A$ \\ \texttt{subq} $A$, $B$ & $B - A \to B$\\ \texttt{imulq} $A$, $B$ & $A \times B \to B$ ($B$ must be a register).\\ \texttt{callq} $L$ & Pushes the return address and jumps to label $L$. \\ \texttt{callq} \texttt{*}$A$ & Calls the function at the address $A$. \\ \texttt{retq} & Pops the return address and jumps to it. \\ \texttt{popq} $A$ & $*\texttt{rsp} \to A;\, \texttt{rsp} + 8 \to \texttt{rsp}$ \\ \texttt{pushq} $A$ & $\texttt{rsp} - 8 \to \texttt{rsp};\, A \to *\texttt{rsp}$\\ \texttt{leaq} $A$, $B$ & $A \to B$ ($B$ must be a register.) \\ \texttt{cmpq} $A$, $B$ & Compare $A$ and $B$ and set the flag register ($B$ must not be an immediate). \\ \texttt{je} $L$ & \multirow{5}{3.7in}{Jump to label $L$ if the flag register matches the condition code of the instruction; otherwise go to the next instructions. The condition codes are \key{e} for \emph{equal}, \key{l} for \emph{less}, \key{le} for \emph{less or equal}, \key{g} for \emph{greater}, and \key{ge} for \emph{greater or equal}.} \\ \texttt{jl} $L$ & \\ \texttt{jle} $L$ & \\ \texttt{jg} $L$ & \\ \texttt{jge} $L$ & \\ \texttt{jmp} $L$ & Jump to label $L$. 
\\ \texttt{movq} $A$, $B$ & $A \to B$ \\ \texttt{movzbq} $A$, $B$ & \multirow{3}{3.7in}{$A \to B$, \text{where } $A$ is a single-byte register (e.g., \texttt{al} or \texttt{cl}), $B$ is an 8-byte register, and the extra bytes of $B$ are set to zero.} \\ & \\ & \\ \texttt{notq} $A$ & $\sim A \to A$ (bitwise complement)\\ \texttt{orq} $A$, $B$ & $A \mid B \to B$ (bitwise-or)\\ \texttt{andq} $A$, $B$ & $A \& B \to B$ (bitwise-and)\\ \texttt{salq} $A$, $B$ & $B$ \texttt{<<} $A \to B$ (arithmetic shift left, where $A$ is a constant)\\ \texttt{sarq} $A$, $B$ & $B$ \texttt{>>} $A \to B$ (arithmetic shift right, where $A$ is a constant)\\ \texttt{sete} $A$ & \multirow{5}{3.7in}{If the flag matches the condition code, then $1 \to A$; else $0 \to A$. Refer to \texttt{je} for the description of the condition codes. $A$ must be a single byte register (e.g., \texttt{al} or \texttt{cl}).} \\ \texttt{setl} $A$ & \\ \texttt{setle} $A$ & \\ \texttt{setg} $A$ & \\ \texttt{setge} $A$ & \end{tabular} \end{table} \backmatter \addtocontents{toc}{\vspace{11pt}} \cleardoublepage % needed for right page number in TOC for References %% \nocite{*} is a way to get all the entries in the .bib file to %% print in the bibliography: \nocite{*}\let\bibname\refname \addcontentsline{toc}{fmbm}{\refname} \printbibliography %\printindex{authors}{Author Index} \printindex{subject}{Index} \end{document} % LocalWords: Nano Siek CC NC ISBN wonks wizardry Backus nanopasses % LocalWords: dataflow nx generics autoboxing Hulman Ch CO Dybvig aa % LocalWords: Abelson uq Felleisen Flatt Lutz vp vj Sweigart vn Matz % LocalWords: Matthes github gcc MacOS Chez Friedman's Dipanwita fk % LocalWords: Sarkar Dybvig's Abdulaziz Ghuloum bh IU Factora Bor qf % LocalWords: Cameron Kuhlenschmidt Vollmer Vitousek Yuh Nystrom AST % LocalWords: Tolmach Wollowski ASTs Aho ast struct int backquote op % LocalWords: args neg def init UnaryOp USub func BinOp Naur BNF rkt % LocalWords: fixnum datatype structure's arith exp stmt Num Expr tr % LocalWords: plt PSF ref CPython cpython reynolds interp cond fx pe % LocalWords: arg Hitchhiker's TODO nullary Lvar Lif cnd thn var sam % LocalWords: IfExp Bool InterpLvar InterpLif InterpRVar alist jane % LocalWords: basicstyle kate dict alists env stmts ss len lhs globl % LocalWords: rsp rbp rax rbx rcx rdx rsi rdi movq retq callq jmp es % LocalWords: pushq subq popq negq addq arity uniquify Cvar instr cg % LocalWords: Seq CProgram gensym lib Fprivate Flist tmp ANF Danvy % LocalWords: rco Flists py rhs unhandled cont immediates lstlisting % LocalWords: numberstyle Cormen sudoku Balakrishnan ve aka DSATUR % LocalWords: Brelaz eu Gebremedhin Omari deletekeywords min JGS wb % LocalWords: morekeywords fullflexible goto allocator tuples Wailes % LocalWords: Kernighan runtime Freiburg Thiemann Bloomington unary % LocalWords: eq prog rcl definitional Evaluator os % LocalWords: subexpression evaluator InterpLint lcl quadwords concl % LocalWords: nanopass subexpressions decompositions Lawall Hatcliff % LocalWords: subdirectory monadic Moggi mon utils macosx unix repr % LocalWords: Uncomment undirected vertices callee Liveness liveness % LocalWords: frozenset unordered Appel Rosen pqueue cmp Fortran vl % LocalWords: Horwitz Kempe colorable subgraph kx iteratively Matula % LocalWords: ys ly Palsberg si JoeQ cardinality Poletto Booleans hj % LocalWords: subscriptable MyPy Lehtosalo Listof Pairof indexable % LocalWords: bool boolop NotEq LtE GtE refactor els orelse BoolOp % LocalWords: boolean initializer param exprs TypeCheckLvar 
msg Tt % LocalWords: isinstance TypeCheckLif tyerr xorq bytereg al dh dl ne % LocalWords: le ge cmpq movzbq EFLAGS jle inlined setl je jl Cif % LocalWords: lll pred IfStmt sete CFG tsort multigraph FunctionType % LocalWords: Wijngaarden Plotkin Logothetis PeytonJones SetBang Ph % LocalWords: WhileLoop unboxes Lwhile unbox InterpLwhile rhsT varT % LocalWords: Tbody TypeCheckLwhile acyclic mainstart mainconclusion % LocalWords: versa Kildall Kleene worklist enqueue dequeue deque tb % LocalWords: GetBang effectful SPERBER Lfun tuple implementer's tup % LocalWords: indices HasType Lvec InterpLtup tuple's vec ty Ungar % LocalWords: TypeCheckLtup Detlefs Tene FromSpace ToSpace Diwan ptr % LocalWords: Siebert TupleType endian salq sarq fromspace rootstack % LocalWords: uint th vecinit alloc GlobalValue andq bitwise ior elt % LocalWords: dereferencing StructDef Vectorof vectorof Lvecof Jacek % LocalWords: AllocateArray cheney tospace Dieckmann Shahriyar di xs % LocalWords: Shidal Osterlund Gamari lexically FunctionDef IntType % LocalWords: BoolType VoidType ProgramDefsExp vals params ps ds num % LocalWords: InterpLfun FunRef TypeCheckLfun leaq callee's mainDef % LocalWords: ProgramDefs TailCall tailjmp IndirectCallq TailJmp rT % LocalWords: prepending addstart addconclusion Cardelli Llambda typ % LocalWords: Llambda InterpLlambda AnnAssign Dunfield bodyT str fvs % LocalWords: TypeCheckLlambda annot dereference clos fvts closTy tg % LocalWords: Minamide AllocateClosure Gilray Milner morphos subtype % LocalWords: polymorphism untyped AnyType dataclass untag Ldyn conc % LocalWords: lookup InterpLdyn elif tagof Lany TypeCheckLany tv orq % LocalWords: AnnLambda InterpLany ClosureTuple ValueOf TagOf imulq % LocalWords: untagged multi Tobin Hochstadt zr mn Gronski kd ret Tp % LocalWords: Tif src tgt Lcast wr contravariant PVector un Lgradual % LocalWords: Lgradualp Llambdapp Llambdaproxy Wadler qv quicksort % LocalWords: Henglein nz coercions Grift parametetric parameterized % LocalWords: parameterizing stroustrup subst tys targs decls defs % LocalWords: pts ats prt pxs axs Decl Monomorphization NESL CLU qb % LocalWords: monomorphization Blelloch monomorphic Bracha unboxed % LocalWords: instantiation Lpoly Lpolyp typechecker mcons ebp jge % LocalWords: notq setle setg setge uncredited LT Std groundbreaking % LocalWords: colback GitHub inputint nonatomic ea tcolorbox bassed % LocalWords: tikzpicture Chaitin's Belady's Cocke Freiburghouse Lt % LocalWords: lessthan lessthaneq greaterthan greaterthaneq Gt pt Te % LocalWords: ts escapechar Tc bl ch cl cc foo lt metavariables vars % LocalWords: trans naively IR rep assoc ListType TypeCheckLarray dz % LocalWords: Mult InterpLarray lst array's generation's Collins inc % LocalWords: Cutler Kelsey val rt bod conflates reg inlining lam AF % LocalWords: ASTPython body's bot todo rs ls TypeCheckLgrad ops ab % LocalWords: value's inplace anyfun anytup anylist ValueExp proxied % LocalWords: ProxiedTuple ProxiedList InterpLcast ListProxy vectof % LocalWords: TupleProxy RawTuple InjectTuple InjectTupleProxy vecof % LocalWords: InjectList InjectListProxy unannotated Lgradualr poly % LocalWords: GenericVar AllType Inst builtin ap pps aps pp deepcopy % LocalWords: liskov clu Liskov dk Napier um inst popl jg seq ith qy % LocalWords: racketEd subparts subpart nonterminal nonterminals Dyn % LocalWords: pseudocode underapproximation underapproximations LALR % LocalWords: semilattices overapproximate incrementing Earley docs % LocalWords: multilanguage Prelim shinan DeRemer 
lexer Lesk LPAR cb % LocalWords: RPAR abcbab abc bzca usub paren expr lang WS Tomita qr % LocalWords: subparses LCCN ebook hardcover epub pdf LCSH LCC DDC % LocalWords: LC partialevaluation pythonEd TOC TrappedError