The Role
As a member of our Compiler team, you will work with leaders from industry and academia to develop entirely new solutions for the toughest problems in AI compute.
As deep neural network architectures evolve, they are becoming enormously parallel and distributed. Compilers are needed to optimize the mapping of computation graphs onto compute nodes. In this position, you will build the tools that generate distributed-memory code from evolving intermediate representations.
You will:
- Develop and optimize the LLVM backend target for the Cerebras architecture.
- Design graph semantics, intermediate representations, and abstraction layers between high-level definitions (such as TensorFlow’s XLA) and low-level distributed code.
- Apply state-of-the-art parallelization and partitioning techniques to automate code generation, exploiting hand-written distributed kernels.
- Identify and implement novel program analysis and optimization techniques.
- Employ and extend state-of-the-art program analysis tools such as the Integer Set Library (isl).
- Provide guidance to the next-generation system architecture development team from a compiler tool-chain perspective.
Skills & Qualifications
Required:
- 5+ years of experience developing and optimizing compilers based on the LLVM tool chain.
- Master’s, PhD, or foreign equivalent in computer science, engineering, or a related field.
- Experience in code generation and optimization for distributed systems.
- Familiarity with high-level parallel program analysis and optimization.
Preferred:
- Familiarity with LLVM compiler internals.
- Familiarity with polyhedral models.
- Familiarity with HPC kernels and their optimization.