VALSEA builds production-grade speech intelligence systems for Southeast Asia. Our systems operate in real-world conditions: mixed languages, regional accents, noisy audio, and live business workflows.
We focus on reliability in production, predictable system behavior, and outputs that teams can confidently use in their day-to-day operations.
We are an early-stage team building foundational infrastructure that we expect to live for a long time. Correctness, clarity, and long-term system health are core to how we design, ship, and operate our systems.
The Role
Speech systems often break at the intersection of models, infrastructure, and real usage. This role focuses on strengthening that intersection.
You will work end-to-end on GPU inference services, production APIs, and workflow outputs used in live environments.
This is a founding systems role with meaningful ownership and real impact. The scope is intentionally broad, and your work will directly shape how our systems scale, behave under stress, and support real users over time.
What You'll Own
GPU-based ASR inference services running in production
Stable output interfaces consumed by downstream business workflows
System behavior under partial data, degraded inputs, and operational constraints
Practical tradeoffs across latency, cost, quality, and reliability
What You'll Build
Inference Infrastructure
GPU-backed ASR services on AWS, with careful control over latency, batching, concurrency, and cost. Clear versioning, health checks, and safe rollout and rollback paths.
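To give a concrete flavor of the batching work described above, here is a minimal micro-batching sketch in Python. It is an illustration only, not VALSEA's actual service code: the class name, batch size, and wait window are assumptions. The idea is to trade a small, bounded amount of latency (max_wait_ms) for better GPU utilization by grouping concurrent requests into one inference call.

```python
import threading
import time
from queue import Queue, Empty

class MicroBatcher:
    """Collects concurrent requests into batches for GPU inference.

    A batch is dispatched when it reaches max_batch_size, or when
    max_wait_ms has elapsed since the first queued item, whichever
    comes first.
    """

    def __init__(self, infer_fn, max_batch_size=8, max_wait_ms=50):
        self.infer_fn = infer_fn              # runs inference on a list of inputs
        self.max_batch_size = max_batch_size
        self.max_wait_s = max_wait_ms / 1000.0
        self.queue = Queue()

    def submit(self, item):
        """Enqueue one request; returns an Event and a result holder dict."""
        done = threading.Event()
        holder = {}
        self.queue.put((item, done, holder))
        return done, holder

    def run_once(self):
        """Drain one batch from the queue, run inference, deliver results."""
        try:
            first = self.queue.get(timeout=1.0)
        except Empty:
            return 0
        batch = [first]
        deadline = time.monotonic() + self.max_wait_s
        while len(batch) < self.max_batch_size:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                break
            try:
                batch.append(self.queue.get(timeout=remaining))
            except Empty:
                break
        inputs = [item for item, _, _ in batch]
        outputs = self.infer_fn(inputs)       # one GPU call for the whole batch
        for (_, done, holder), out in zip(batch, outputs):
            holder["result"] = out
            done.set()
        return len(batch)
```

In a real service, run_once would loop on a dedicated worker thread per GPU, and the latency/cost tradeoff would be tuned by adjusting max_batch_size and max_wait_ms against observed traffic.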
Workflow Outputs
Durable JSON schemas with confidence signaling. Explicit handling of ambiguity and missing data, with graceful degradation treated as a first-class design concern.
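As one possible shape for what "confidence signaling with graceful degradation" can look like, here is a small Python sketch. Field names, the status values, and the 0.6 confidence threshold are illustrative assumptions, not VALSEA's actual schema: low-confidence or ambiguous segments are surfaced as warnings and a "partial" status rather than dropped or treated as hard failures.

```python
import json
from dataclasses import dataclass, field, asdict
from typing import Optional, List

LOW_CONFIDENCE = 0.6  # illustrative threshold, not a real product value

@dataclass
class Segment:
    text: str
    start_s: float
    end_s: float
    confidence: float               # 0.0-1.0 score from the ASR model
    language: Optional[str] = None  # None when language detection was ambiguous

@dataclass
class TranscriptResult:
    request_id: str
    status: str                            # "ok" | "partial" | "failed"
    segments: List[Segment] = field(default_factory=list)
    warnings: List[str] = field(default_factory=list)

def build_result(request_id, segments):
    """Assemble a result, degrading gracefully instead of failing hard."""
    warnings = []
    for seg in segments:
        if seg.confidence < LOW_CONFIDENCE:
            warnings.append(
                f"low confidence ({seg.confidence:.2f}) in segment "
                f"[{seg.start_s:.1f}s-{seg.end_s:.1f}s]"
            )
    if not segments:
        status = "failed"
    elif warnings:
        status = "partial"
    else:
        status = "ok"
    return TranscriptResult(request_id, status, segments, warnings)

def to_json(result):
    """Serialize to the stable JSON shape downstream workflows consume."""
    return json.dumps(asdict(result), ensure_ascii=False)
```

Because the status and warnings are explicit fields, a downstream workflow can route "partial" results to human review instead of silently consuming uncertain text.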
Operational Reliability
Strong observability, fast diagnosis of production issues, and steady improvement driven by real usage and operational feedback.
How You'll Work
You're comfortable operating in an early-stage environment where requirements evolve. You make reasonable assumptions, document decisions, and move systems forward thoughtfully. You care about clean interfaces, contained complexity, and calm, methodical debugging when systems don't behave as expected.
You enjoy working close to production systems and take pride in making them more reliable, understandable, and maintainable over time.
Experience
3+ years of experience in backend, systems, platform, or infrastructure engineering
Experience building or running production APIs or real-time services
Solid fundamentals in AWS, Python, and Linux
Familiarity with schema design, data validation, and structured outputs
Experience with Docker and CI/CD workflows
Strong debugging and problem-solving skills
Strong Signals
GPU inference or ML-serving experience
Speech, audio, or media processing pipelines
Infrastructure as code (Terraform, AWS CDK)
Queue-based systems (SQS, Kafka, Redis)
Production observability and on-call experience
Compensation & Growth
Cash plus ESOP aligned to stage and contribution.
As an early team member, you'll have the opportunity to grow in scope and responsibility as the company scales.
We value steady ownership, sound judgment, and systems that hold up over time.
Apply
If this sounds interesting, please send a brief one-minute introduction and relevant background to [Confidential Information].