Responsibilities
Build the core backend systems for prediction data infrastructure, including source ingestion, normalization, canonical event registry, truth engine, and serving API
Design and implement unified data models, event state machines, and audit trails for sports, politics, and other event types
Develop replayable, traceable, and explainable data processing architecture to support low-latency trading signals and high-confidence settlement decisions
Collaborate with data operations, market operations, research, and trading teams to define data quality standards, latency metrics, and exception handling workflows
Drive continuous improvement of the system toward high availability, low latency, and strong observability
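To make the "event state machine with audit trail" responsibility concrete, here is a minimal sketch of what a canonical event's lifecycle could look like. The state names, transition table, and `CanonicalEvent` type are illustrative assumptions, not the team's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class EventState(Enum):
    SCHEDULED = "scheduled"
    LIVE = "live"
    FINISHED = "finished"
    SETTLED = "settled"
    VOIDED = "voided"


# Allowed transitions (hypothetical); anything else is rejected outright,
# so an event can never skip straight from SCHEDULED to SETTLED.
TRANSITIONS = {
    EventState.SCHEDULED: {EventState.LIVE, EventState.VOIDED},
    EventState.LIVE: {EventState.FINISHED, EventState.VOIDED},
    EventState.FINISHED: {EventState.SETTLED, EventState.VOIDED},
    EventState.SETTLED: set(),
    EventState.VOIDED: set(),
}


@dataclass
class CanonicalEvent:
    event_id: str
    state: EventState = EventState.SCHEDULED
    audit_log: list = field(default_factory=list)

    def transition(self, new_state: EventState, source: str, reason: str) -> None:
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        # Every accepted transition is appended to an audit trail, which is
        # what makes downstream settlement decisions explainable.
        self.audit_log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "from": self.state.value,
            "to": new_state.value,
            "source": source,
            "reason": reason,
        })
        self.state = new_state
```

The same pattern generalizes across sports, politics, and other event types by swapping in a per-domain transition table while keeping the audit mechanics unified.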
Requirements
5+ years of backend or infrastructure development experience, with proven ability to design core systems from scratch
Proficiency in Python, Go, Java, or similar backend languages; capable of independently owning production-grade service development
Familiarity with Kafka/Redpanda, Postgres, Redis, and similar infrastructure components; understanding of event-driven architecture
Experience with real-time data processing, streaming systems, message queues, or event sourcing
Strong awareness of system correctness, observability, replay capabilities, and failure recovery
Ability to drive architectural design and engineering execution under ambiguous requirements
Bonus
Experience in trading, market data, risk control, sports data, financial data, or low-latency systems
Experience with multi-source data reconciliation, conflict resolution, entity alignment, or event adjudication systems
Familiarity with ClickHouse, Temporal, gRPC, and Prometheus/Grafana
Interest in prediction markets or market operations