ML Engineer

About ConfigAI

ConfigAI is building a compiler that turns ML models into FPGA hardware designs automatically. We are based in Saarbrücken, backed by the Max Planck Institute for Informatics and Google for Startups, and working on one of the hardest problems at the intersection of machine learning and silicon design.

About the Role

We are looking for an ML Engineer to bridge the gap between ML model design and the constraints of FPGA hardware. You will work on hardware-aware optimisation techniques that make models smaller, faster, and deployable on FPGAs without sacrificing accuracy. This is a 15 hrs/week, on-site role in Saarbrücken, open to students and professionals (m/w/d).

Key Responsibilities

  • Develop and apply quantisation techniques (INT8, INT4 and mixed-precision) to ML models targeting FPGA deployment.
  • Design operator fusion strategies that reduce memory bandwidth and improve throughput on hardware.
  • Evaluate model architectures for hardware friendliness and advise on changes that improve compile quality.
  • Collaborate with compiler engineers to ensure that optimised models map cleanly through the compiler pipeline.
  • Benchmark inference accuracy and latency on compiled FPGA hardware.
  • Maintain a library of hardware-aware model optimisation utilities for the compiler toolchain.
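To give a flavour of the quantisation work above: the core of INT8 quantisation is mapping floating-point values onto an 8-bit integer grid via a scale and zero-point. The sketch below is a minimal, framework-free illustration of affine (asymmetric) INT8 quantisation, not ConfigAI's actual toolchain; in practice you would use a framework such as PyTorch's quantisation APIs.

```python
def quantize_int8(values):
    """Affine INT8 quantisation: map floats onto the integer range [-128, 127].

    Returns the quantised integers plus the (scale, zero_point) needed to
    recover approximate float values later.
    """
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255.0 or 1.0          # guard against all-equal inputs
    zero_point = round(-128 - lo / scale)     # integer that represents 0.0's offset
    q = [max(-128, min(127, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point


def dequantize_int8(q, scale, zero_point):
    """Recover approximate float values; error is bounded by about one scale step."""
    return [(qi - zero_point) * scale for qi in q]
```

The round trip loses at most roughly one quantisation step per value, which is exactly the accuracy/size trade-off this role is about managing on real FPGA hardware.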

Required Skills and Experience

  • Strong understanding of ML model architectures: CNNs, Transformers and their computational patterns.
  • Hands-on experience with quantisation frameworks (PyTorch quantisation, ONNX or equivalent).
  • Familiarity with hardware constraints that affect ML inference: memory bandwidth, parallelism and dataflow.
  • Proficiency in Python and comfort with ML research codebases.
  • Ability to interpret hardware profiling data and translate it into model-level optimisation decisions.

Preferred Skills — Nice to Have

  • Experience with FPGA-targeted ML frameworks such as hls4ml or FINN.
  • Background in knowledge distillation or neural architecture search.
  • Understanding of compiler IRs and how ML ops are represented in formats such as ONNX or MLIR.
  • Familiarity with hardware simulation or emulation environments.

Why Join Us

  • Rare opportunity to work where ML research meets silicon: every optimisation you make ships to real hardware.
  • Small team: your contributions will have direct, visible impact on the product.
  • Research environment: direct access to expertise at the Max Planck Institute for Informatics.
  • Flexible 15 hrs/week commitment: ideal for students or researchers pursuing parallel work.
  • Claude Max (20x) provided — full AI tooling for your day-to-day work.

Submit your application

CV and cover letter required. Files stored securely.
