
Staff Software Engineer

6-8 Years
SGD 10,000 - 13,000 per month
  • Posted 6 hours ago

Job Description

In our always-on world, we believe it's essential to have a genuine connection with the work you do.

We are looking for a Staff Software Engineer to join our growing team in Singapore. You will work with a dynamic and focused team to develop state-of-the-art big data applications in Analytics and Artificial Intelligence (AI). As our Staff Software Engineer, you will implement our core software components and contribute to the scalable design of our cloud software architecture. This is an exciting opportunity to join our talented team and be involved in the next technology trend: Analytics & AI in Networking.

How You'll Help Us Connect the World:

Key Responsibilities

  • Distributed Data Processing: Design, build, and optimize complex streaming and batch pipelines using Scala/Java/Python and Apache Spark (Structured Streaming and RDDs).

  • Event-Driven Orchestration: Lead the migration of our cron-based Spark pipelines to modern, event-driven orchestration frameworks (e.g., Argo Workflows/Events, Apache Airflow, or Temporal) running on Kubernetes.

  • Streaming Infrastructure: Work extensively with Apache Kafka and Google Protocol Buffers (GPB) to manage high-frequency data ingestion and schema evolution.

  • Lakehouse Architecture: Implement and optimize data storage using modern open table formats like Delta Lake and Apache Iceberg (including partitioning strategies and compaction).

  • Cloud-Native Deployment: Deploy, scale, and tune workloads on Google Kubernetes Engine (GKE) using Helm and the Spark K8s Operator.
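For a sense of what the last responsibility looks like in practice, here is a minimal sketch of a `SparkApplication` manifest for the Spark K8s Operator (the `v1beta2` CRD); the image, class, and resource figures are illustrative placeholders, not values from this posting:

```yaml
apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
  name: streaming-pipeline        # hypothetical job name
  namespace: default
spec:
  type: Scala
  mode: cluster
  image: "gcr.io/example-project/spark-app:latest"   # hypothetical image
  mainClass: com.example.StreamingJob                # hypothetical entry point
  mainApplicationFile: "local:///opt/spark/jars/app.jar"
  sparkVersion: "3.5.0"
  driver:
    cores: 1
    memory: "2g"
    serviceAccount: spark
  executor:
    instances: 4
    cores: 2
    memory: "4g"
```

The operator turns this declarative spec into driver and executor pods on GKE, which is where the tuning skills mentioned above (driver/executor memory, instance counts) come into play.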

Required Qualifications for Consideration:

  • Experience: 6+ years of software engineering experience focusing on distributed systems, big data, or backend engineering.

  • Languages: Experience in Scala (or any JVM language with a willingness to learn Scala). Python/Rust is a plus.

  • Big Data Frameworks: Experience with Apache Spark (tuning, partitioning, driver/executor memory management, and streaming).

  • Messaging: Solid understanding of Apache Kafka (topics, partitions, consumer groups) and serialization formats like Google Protobuf.

  • Infrastructure: Hands-on experience deploying and managing applications on Kubernetes (ideally GKE) and using Helm.

  • Orchestration: Experience with modern data orchestration tools (Airflow, Argo, Dagster, Prefect, or Temporal).
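The "topics, partitions, consumer groups" point above can be made concrete with a small pure-Python sketch of Kafka's default range assignment: sorted consumers in a group each take a contiguous block of a topic's partitions, and the first `num_partitions % num_consumers` consumers get one extra. (The function name is ours; in real deployments the group coordinator and the configured assignor do this for you.)

```python
def range_assign(num_partitions: int, consumers: list[str]) -> dict[str, list[int]]:
    """Mimic Kafka's RangeAssignor for a single topic.

    Consumers are sorted by member id; each receives a contiguous
    block of partitions, with the first (num_partitions % n) consumers
    receiving one extra partition.
    """
    members = sorted(consumers)
    base, extra = divmod(num_partitions, len(members))
    assignment = {}
    start = 0
    for i, member in enumerate(members):
        count = base + (1 if i < extra else 0)
        assignment[member] = list(range(start, start + count))
        start += count
    return assignment

# A 7-partition topic split across a 3-consumer group:
print(range_assign(7, ["c1", "c2", "c3"]))
# → {'c1': [0, 1, 2], 'c2': [3, 4], 'c3': [5, 6]}
```

This is why partition count caps a group's parallelism: with more consumers than partitions, the surplus consumers are assigned nothing and sit idle.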

You Will Excite Us If You Have:

  • Rust: Experience writing systems-level or high-performance code in Rust (or a strong desire to learn and adopt it for data tooling).

  • Modern Table Formats: Production experience with Delta Lake or Apache Iceberg (understanding of transaction logs, metadata, and partition pruning).

  • Next-Gen Streaming: Familiarity with emerging streaming engines like Arroyo or Apache Flink.

  • Testing at Scale: Experience building distributed load-testing frameworks or chaos engineering tools for data pipelines.

More Info


Job ID: 146290277
