d-Matrix has fundamentally changed the physics of memory-compute integration with our digital in-memory compute (DIMC) engine. The "holy grail" of AI compute has been to break through the memory wall and minimize data movement, and we have achieved this with a first-of-its-kind DIMC engine. Having raised over $154M, including $110M in our Series B round, d-Matrix is poised to scale Generative AI inference for Large Language Models with our chiplet-based in-memory compute approach, and to meet the energy and performance demands these models impose. We are on track to deliver our first commercial product in 2024. The company has 100+ employees across Silicon Valley, Sydney and Bengaluru.
Our pedigree comes from companies like Microsoft, Broadcom, Inphi, Intel, Texas Instruments, Lucent, MIPS and Wave Computing. Our past successes include building chips for all the cloud hyperscalers globally - Amazon, Facebook, Google, Microsoft, Alibaba, Tencent - along with enterprise and mobile operators like China Mobile, Cisco, Nokia, Ciena, Reliance Jio, Verizon and AT&T. We are recognized leaders in the mixed-signal and DSP connectivity space, now applying our skills to next-generation AI.
Location:
Hybrid; working onsite at our Mysore/Bangalore, India office 3 days per week.
The role: ML Compiler Engineer, Staff
What you will do:
The d-Matrix compiler team is looking for exceptional candidates to help develop the compiler backend - specifically, the problem of assigning hardware resources in a spatial architecture to execute low-level instructions. The successful candidate will be motivated, capable of solving algorithmic compiler problems and interested in learning the intricate details of the underlying hardware and software architectures. They will join a team of experienced compiler developers who will guide them through a quick ramp-up on the compiler infrastructure so they can attack the important problem of mapping low-level instructions to hardware resources. We have opportunities specifically in the following areas:
Model partitioning (pipelined, tensor, model and data parallelism), tiling, resource allocation, memory management, scheduling and optimization (for latency, bandwidth and throughput).
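To give a flavor of the problem areas above, the toy sketch below (purely illustrative, not d-Matrix's actual compiler code) tiles a matrix multiply into fixed-size blocks - the kind of loop transformation a backend performs so that each tile's working set fits a compute unit's local memory when mapping work onto a spatial architecture.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Toy tiled matrix multiply: C = A * B, with A and B square (n x n),
// stored row-major in flat vectors. The loop nest is blocked by `tile`
// so each block's working set could live in a compute unit's local
// memory -- a simplified stand-in for the tiling / resource-allocation
// decisions a spatial-architecture compiler backend must make.
std::vector<float> tiled_matmul(const std::vector<float>& A,
                                const std::vector<float>& B,
                                std::size_t n, std::size_t tile) {
    std::vector<float> C(n * n, 0.0f);
    for (std::size_t ii = 0; ii < n; ii += tile)
        for (std::size_t kk = 0; kk < n; kk += tile)
            for (std::size_t jj = 0; jj < n; jj += tile)
                // Inner loops stay within one tile of each operand.
                for (std::size_t i = ii; i < std::min(ii + tile, n); ++i)
                    for (std::size_t k = kk; k < std::min(kk + tile, n); ++k)
                        for (std::size_t j = jj; j < std::min(jj + tile, n); ++j)
                            C[i * n + j] += A[i * n + k] * B[k * n + j];
    return C;
}
```

Choosing the tile size, the loop order, and which tiles run on which units - subject to latency, bandwidth and throughput constraints - is exactly the scheduling and optimization territory described above.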
What you will bring:
Minimum:
Bachelor's degree in Computer Science with 7+ years of relevant industry experience; Master's in Computer Science preferred, with 5+ years of relevant industry experience.
Ability to deliver production quality code in modern C++.
Experience in modern compiler infrastructures, for example: LLVM, MLIR.
Experience in machine learning frameworks and interfaces, for example: ONNX, TensorFlow and PyTorch.
Experience in production compiler development.
Preferred:
Algorithm design ability, from high level conceptual design to actual implementation.
Experience with relevant Open Source ML projects like Torch-MLIR, ONNX-MLIR, Caffe, TVM.
Passionate about thriving in a fast-paced and dynamic startup culture.
Equal Opportunity Employment Policy
d-Matrix is proud to be an equal opportunity workplace and affirmative action employer. We’re committed to fostering an inclusive environment where everyone feels welcomed and empowered to do their best work. We hire the best talent for our teams, regardless of race, religion, color, age, disability, sex, gender identity, sexual orientation, ancestry, genetic information, marital status, national origin, political affiliation, or veteran status. Our focus is on hiring teammates with humble expertise, kindness, dedication and a willingness to embrace challenges and learn together every day.
d-Matrix does not accept resumes or candidate submissions from external agencies. We appreciate the interest and effort of recruitment firms, but we kindly request that individuals interested in opportunities with d-Matrix apply directly through our official channels. This approach allows us to streamline our hiring processes and maintain a consistent and fair evaluation of all applicants. Thank you for your understanding and cooperation.