Why this role matters:
At Black Sesame Technologies, we're building next-generation compute platforms for intelligent driving. In modern AI silicon, performance is no longer just about compute — it's about how data moves. This role sits at the core of that challenge.
You'll define the interconnect fabric that links compute, memory, and accelerators — shaping how real-world ADAS workloads perform at scale under strict constraints on latency, bandwidth, and power.
What you'll be working on:
- Architect high-bandwidth interconnect / NoC solutions for ADAS NPU platforms
- Design scalable data movement across compute clusters, memory systems, and accelerators
- Drive trade-offs across bandwidth, latency, QoS, and PPA
- Analyze traffic patterns, bottlenecks, and system-level performance behavior
- Collaborate with architects across NPU, SoC, memory, and software stacks
- Contribute to the long-term architecture roadmap for future AI compute platforms
What we're looking for:
- 8+ years of experience in interconnect architecture, NoC design, SoC architecture, or high-performance data movement systems
- Strong understanding of NoC / fabric design, including topology, routing, arbitration, buffering, QoS, congestion, and latency/bandwidth trade-offs
- Solid foundation in computer architecture and memory systems, including compute-to-memory dataflow and system bottleneck analysis
- Experience designing or analyzing large-scale data movement across compute, memory, and accelerators
- Ability to reason about system-level performance, traffic behavior, and workload characteristics
- Experience translating architectural concepts into scalable, production-ready solutions
- Exposure to NPU / AI accelerators, memory subsystems (HBM / LPDDR), or performance modeling/simulation is a strong advantage
- Understanding of how software (compiler/runtime) impacts system-level data flow is a plus