Compiler Research Projects
in Programming Languages and Compilers
- List of home pages of researchers in programming languages and compilers,
maintained by Mark Leone.
- ACM SIGPLAN
- SIGPLAN is a Special Interest Group
of ACM that focuses on Programming Languages.
In particular, SIGPLAN explores programming language concepts and tools,
focusing on design, implementation, and efficient use. Its members are
programming language users, developers, implementors, theoreticians, and researchers.
- Berkeley: Cool
- The Classroom Object-Oriented Language
- Cool is a small language designed for use in an undergraduate compiler
course project. While small enough for a one term project, Cool still has
many of the features of modern programming languages, including objects,
automatic memory management, and strong static typing.
- California, University
of - Irvine PS Project
- The long term goals of the PS Project are to develop program transformations
for automatically exploiting all of the useful parallelism in a given program,
to provide a tool for the study and development of parallelizing compilers,
to investigate the relative trade-offs between run-time and compile-time
parallelism exploitation, and finally to examine the relationship between
parallel languages and parallelizing compilers.
- California, University of
- Santa Barbara OOCSB Object Oriented Compilers
- Compiler optimizations for object-oriented languages, dynamic (run-time)
compilation, run-time system aspects (such as dispatch mechanisms or garbage
collection), and studies of the instruction-level behavior of object-oriented
programs.
- Compilers for Embedded Systems
- Our research efforts mainly aim at providing compiler technology and tools that permit the use of compilers in embedded
system design as well. We are developing novel code generation and optimization techniques, with emphasis on DSPs, which are capable of generating high-quality
machine code. In addition, we are working on methods for model-based retargetable compilation.
- CMU Fx Project
- Fx is a parallelizing Fortran compiler that runs on a number of parallel
systems, including the Intel Paragon, Alpha workstation clusters, the IBM
SP/2, the Cray T3D, and the Intel iWarp system.
- Colorado at Boulder, University of - Eli Project
- Eli combines a variety of standard tools implementing powerful compiler
construction strategies into a domain-specific programming environment
that automatically generates complete language implementations from application-oriented
specifications. The implementations might be interpretive, using the constructs
of the source language to invoke operations of an existing system, or might
involve translation to an arbitrary target language. Eli offers complete
solutions for common libraries of reusable specifications, making possible
the production of high-quality implementations from simple problem descriptions.
A joint project of the University
of Colorado, the Universität
Paderborn, and Macquarie University,
Eli has been in use worldwide since 1989. It generates
programs whose performance is comparable to that of good hand-coded implementations.
Development time for a processor using Eli is generally between one quarter
and one third of that for comparable hand code, and maintenance is
significantly easier because specifications rather than implementations
are being maintained.
- Computing Research Association
- The Computing Research Association (CRA) is an association of more
than 150 North American academic departments of computer science and computer
engineering, industrial laboratories engaging in basic computing research,
and affiliated professional societies.
- Demeter, Northeastern
- Adaptive Programming is viewed as a major advance in software technology.
AP allows you to make your software both simpler and more flexible by expressing
regularities which exist in most object-oriented programs as patterns.
AP reduces software development and maintenance costs significantly; the
more collaborating objects you use in a project, the larger the reduction.
Demeter translates software written at the "adaptive" level into
conventional object-oriented code.
- Gardens Point
Modula (GPM) Compilers
- The Gardens Point Modula compilers are an ongoing research focus of
the Programming Languages and Systems Group in the Faculty of Information
Technology at the Queensland University of Technology in Brisbane.
- Illinois, University of - Center
for Reliable and High-Performance Computing
- The Center for Reliable and High-Performance Computing focuses on integrating
research in the areas of reliable and high-performance computing, high-performance
architectures, fault tolerance, and testing. Compiler projects include
the IMPACT ILP Compiler
and the PARADIGM Parallelizing Compiler.
- Illinois, University of - Center
for Supercomputing Research and Development
- PROMIS is an advanced multilingual and retargetable parallelizing and
optimizing compiler under development at the University of
Illinois at Urbana-Champaign and the University of California, Irvine.
Both the basic research work and the development of the
prototype compiler are based on a radically different design methodology,
in contrast to the design approaches used by virtually
all commercial and experimental compilers.
- Illinois, University of - Center
for Supercomputing Research and Development
- The center focuses on creating new software and hardware approaches
to speed distributed computation. Specific research includes integrating
advances in optimizing and parallelizing compilers, new parallel architectures,
and parallel algorithms. The specific compiler projects are Polaris
and Parafrase2 parallelizing compilers.
State: The Teaching About Programming Languages Project
- Information about the teaching of the concepts of programming languages,
especially undergraduate survey courses and courses about programming language design.
- Leiden, University of
- The goal of the present research effort is to enhance the flexibility
of compilers in a number of ways. A restructuring compiler for full Fortran
77 is being developed that offers an interactive environment for the application
of program transformations. The transformations themselves can be specified
in a transformation definition language. This language is highly expressive,
accommodating all the usual transformations.
- Maryland, University of - Compiler Project
- Fortran-based compiler efforts targeting irregular problems. Using
the CHAOS library
and the Syracuse Fortran 90D compiler, the group has developed a prototype
distributed-memory compiler able to generate efficient code for templates
extracted from adaptive problems. Making use of the Rice D System, they have
developed loop slicing methods capable of dealing with unstructured routines
with multiple levels of distributed indirection. They have also applied CHAOS
directly to parallelize a number of full adaptive application codes.
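The inspector/executor strategy behind runtime systems like CHAOS can be sketched in a few lines; the function and variable names below are illustrative, not the CHAOS API:

```python
# Hypothetical sketch of the inspector/executor pattern used by runtime
# systems such as CHAOS for irregular loops; names are illustrative.

def inspector(index_array, owned):
    """One-time analysis: determine which off-process elements the
    indirection array will touch (the communication schedule)."""
    return sorted(set(index_array) - set(owned))

def executor(local_x, remote_x, index_array):
    """The irregular loop itself, run many times: gather x[index[j]]
    from local storage or the prefetched remote buffer."""
    return [local_x.get(i, remote_x.get(i)) for i in index_array]

# This process owns elements 0-3; the loop reads indices 1, 5, 2, 7.
local_x = {0: 1.0, 1: 2.0, 2: 3.0, 3: 4.0}
index_array = [1, 5, 2, 7]
schedule = inspector(index_array, local_x)       # run once per distribution
remote_x = {i: float(10 * i) for i in schedule}  # stands in for message passing
gathered = executor(local_x, remote_x, index_array)
```

The point of the split is that the (expensive) inspection is amortized: in an adaptive application the same schedule is reused across many executions of the loop until the indirection array changes.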
- McGill University - McCAT Compiler/Architecture Testbed
- The efficient exploitation of parallelism is a major challenge in the
design of next-generation high-performance compilers. McCAT, a compiler/architecture
testbed project, provides a unified approach to the development and performance
analysis of compilation techniques and high-performance architectural features.
The core of the project is a C compiler equipped with many analyses and
transformations at all levels of the compilation process. The McCAT compiler
can be used as a source-to-source translator, as a complete code-generating
compiler for several architectures (DLX and Sparc at the moment), or to produce
LAST, the low-level representation used in McCAT, which can be interpreted.
- Melbourne, University of
- Mercury Project
- The Mercury project is focused on the design and implementation of
a new declarative logic/functional programming language, Mercury. The project
has involved development of an optimizing compiler for Mercury; the compiler
takes advantage of the high-level declarative nature of Mercury to perform
some quite high-level transformations, as well as using a considerable
amount of static analysis to improve the low-level efficiency of the generated
code.
- A compiler which utilizes feedback and specialization to adapt the
compiled code to run-time requirements can dramatically increase the performance
of program execution over several runs for most ordinary programs. This
adaptation can be accomplished using a heuristic transform engine, automatic,
dynamic profiling and aggressive program specialization. The introduction
of cooperative computing, a framework in which compilers at different sites
share information about specialization and code transformations, can also
improve the quality and performance of compiled code.
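As a toy illustration of the idea (not any particular system's implementation), a profile gathered over earlier runs can drive the generation of a specialized fast path for the hottest case; every name below is hypothetical:

```python
# Toy sketch of feedback-directed specialization (hypothetical, not a
# real system): profile argument values across runs, then "compile" an
# unrolled fast path for the most frequent one.
from collections import Counter

profile = Counter()

def power(base, exp):
    profile[exp] += 1            # dynamic profiling hook
    result = 1
    for _ in range(exp):
        result *= base
    return result

def specialize(func):
    hot = profile.most_common(1)[0][0]
    # Aggressive specialization: unroll the loop for the hot exponent
    # into a straight-line product with no loop and no profiling.
    body = " * ".join(["base"] * hot) if hot > 0 else "1"
    fast = eval("lambda base: " + body)
    def specialized(base, exp):
        return fast(base) if exp == hot else func(base, exp)
    return specialized

# Earlier runs call power with exp == 3 most of the time ...
for b in range(100):
    power(b, 3)
power(2, 5)
# ... so later runs get a version specialized for exp == 3.
fast_power = specialize(power)
```

Cold cases still fall back to the general routine, which keeps profiling, so the specialization can be revisited if the workload shifts between runs.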
- Paderborn, Universität - Eli Project
- See description of Eli Project above.
- Passau, University of
- The Polyhedral Loop Parallelizer: LooPo
- LooPo is a project of the Chair for Programming at the Department of Mathematics and Computer Science of the University of Passau. Its purpose is to
develop a prototype implementation of loop parallelization methods based on the polyhedral model. LooPo is part of the DFG-funded projects RecuR (Regular
Concurrency in Recursions) and, recently, LooPo/HPF.
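The core idea of the polyhedral model can be shown on a toy loop nest (a sketch of the idea, not LooPo itself): iterations are integer points in a polyhedron, and an affine schedule such as t = i + j turns a nest whose statement depends on (i-1, j) and (i, j-1) into parallel wavefronts:

```python
# Wavefront parallelization under the affine schedule t = i + j
# (a toy illustration of the polyhedral model, not LooPo code).
N = 4

# Original sequential loop nest; each iteration reads its north and
# west neighbors, so neither loop is parallel as written.
a = [[1] * N for _ in range(N)]
for i in range(1, N):
    for j in range(1, N):
        a[i][j] = a[i - 1][j] + a[i][j - 1]

# Transformed order: both dependence sources of iteration (i, j) lie at
# time t - 1, so all iterations with the same t = i + j are mutually
# independent and each wavefront could run in parallel.
b = [[1] * N for _ in range(N)]
for t in range(2, 2 * N - 1):
    wavefront = [(i, t - i) for i in range(1, N) if 1 <= t - i <= N - 1]
    for i, j in wavefront:
        b[i][j] = b[i - 1][j] + b[i][j - 1]
```

The transformed nest computes exactly the same array; only the order of iteration (the schedule) has changed, which is what polyhedral tools derive automatically from the dependence polyhedra.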
- The relationship between compiler transformations and hardware architecture
is commonly viewed as consisting of a number of "engineering trade-offs",
but this is not the case. In order for a particular function of the computer
system to be implementable in either the compiler or the architecture,
the information vital to performing that function must be available to
both; however, the information available to static mechanisms (e.g., compilers)
is not the same as that which is available to dynamic mechanisms (e.g.,
the architected hardware). Static mechanisms can examine and transform
the entire program, yet only probabilistic information is available (e.g.,
branch probabilities); in contrast, dynamic mechanisms can transform only
a few instructions around the current program counter, but perfect information
within that range is common. Very few problems can be solved equally well
using either kind of information -- the focus of CARP is simply to solve
each problem in the right place.
- Purdue Compiler
Construction Tool Set
- PCCTS, the Purdue Compiler Construction Tool Set, is a set of public-domain
software tools designed to facilitate the construction of compilers and
other translation systems. Although originally developed primarily for
internal use within Purdue University, these tools are now used at over
1000 sites in at least 37 countries. Extensions and support work on PCCTS
now span Purdue, the University of Minnesota, the AHPCRC, the University
of Alabama in Huntsville, and Parr
Research Corporation, a company founded by T. Parr, the primary author
of PCCTS.
- Rice Compiler Group
- Rice Compiler Group projects focus on 1) the Massively Scalar Compiler
Project (MSCP) which concentrates on compilers for advanced microprocessors,
and 2) the Fortran Parallel Programming Systems and Fortran Tools projects,
which concentrate on compilers and tools to support machine-independent
parallel programming in Fortran.
University of - Compiling for Distributed Shared-Memory Machines
- Existing work in parallelizing compilers falls into two principal categories:
compiling for uniform shared memory machines and compiling for distributed
memory message passing machines. Little work has addressed compiler techniques
for distributed shared memory machines. The goal of this project is to
develop a parallelizing compiler for distributed shared memory systems.
- Saarlandes, Universität
- Compiler projects include implementation of functional programming
languages on parallel architectures, design and implementation of a programming
language for the PRAM, and generation of compilers for parallel machines.
- Stanford SUIF Compiler
- The SUIF (Stanford University Intermediate Format) compiler group at
Stanford consists of fifteen graduate students and one staff member under
the auspices of Professor Monica
Lam. The group does research in many fields of compiler technology,
including parallelization of numeric and non-numeric programs, interprocedural
analysis, dependence analysis, superscalar processors, and parallel languages.
The group also developed the SUIF compiler system, a flexible framework for
research on compiler techniques.
- Toronto, University of
- The goal of the Jasmine project is to investigate new code and data transformations for enhancing cache and memory locality while preserving
existing parallelism in programs. Today's parallelizing compilers are capable of detecting loop-level parallelism, but the performance of the parallel code they
produce is typically poor. This is particularly true for scalable shared-memory multiprocessors, where the physically-distributed shared memory and reliance
on high-speed caches dictate careful attention to memory and cache locality.
Today's parallelizing compilers typically abandon locality for the sake of
greater parallelism. We have developed a number of techniques to address this
problem.
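One classic transformation of this kind is loop tiling; the sketch below (illustrative only, not Jasmine's implementation) restructures a matrix transpose so that each small block of the source and destination is reused while it is still resident in cache:

```python
# Loop tiling for cache locality (illustrative sketch, not Jasmine code).

def transpose_naive(m):
    # Walks one operand row-wise and the other column-wise, so for large
    # matrices one of the two streams misses in cache on every access.
    n = len(m)
    return [[m[j][i] for j in range(n)] for i in range(n)]

def transpose_tiled(m, tile=2):
    n = len(m)
    out = [[0] * n for _ in range(n)]
    for ii in range(0, n, tile):               # visit the matrix tile by tile
        for jj in range(0, n, tile):
            for i in range(ii, min(ii + tile, n)):
                for j in range(jj, min(jj + tile, n)):
                    out[i][j] = m[j][i]        # same work, cache-friendly order
    return out

m = [[4 * r + c for c in range(4)] for r in range(4)]
```

In a real compiler the tile size is chosen from the cache parameters, and the legality of the reordering is established by dependence analysis; here the two versions trivially compute the same result.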
University of - Compiler Research
- Compiler research topics include benchmarking, debugging, interpreters
and simulators, language design, register allocation, and threads and multithreading.
- Programming languages and compilers.