
Software Engineer - Data Storage & Data Lake (ByteDance Singapore)

3-5 Years
SGD 11,250 - 22,500 per month
  • Posted 21 hours ago

Job Description

About Us

Founded in 2012, ByteDance's mission is to inspire creativity and enrich life. With a suite of more than a dozen products, including TikTok, Lemon8, CapCut and Pico as well as platforms specific to the China market, including Toutiao, Douyin, and Xigua, ByteDance has made it easier and more fun for people to connect with, consume, and create content.

Why Join ByteDance

Inspiring creativity is at the core of ByteDance's mission. Our innovative products are built to help people authentically express themselves, discover and connect - and our global, diverse teams make that possible. Together, we create value for our communities, inspire creativity and enrich life - a mission we work towards every day.

As ByteDancers, we strive to do great things with great people. We lead with curiosity, humility, and a desire to make an impact in a rapidly growing tech company. By constantly iterating and fostering an Always Day 1 mindset, we achieve meaningful breakthroughs for ourselves, our Company, and our users. When we create and grow together, the possibilities are limitless. Join us.

Diversity & Inclusion

ByteDance is committed to creating an inclusive space where employees are valued for their skills, experiences, and unique perspectives. Our platform connects people from across the globe and so does our workplace. At ByteDance, our mission is to inspire creativity and enrich life. To achieve that goal, we are committed to celebrating our diverse voices and to creating an environment that reflects the many communities we reach. We are passionate about this and hope you are too.

Job highlights

Career growth opportunity, Paid leave, Flat organization

Responsibilities

About the team

The Data Ecosystem Team has the vital role of crafting and implementing a storage solution for offline data in our recommendation system, which serves more than a billion users. The team's primary objectives are to guarantee system reliability, uninterrupted service, and seamless performance. It aims to build a storage and computing infrastructure that can adapt to the various data sources within the recommendation system and accommodate diverse storage needs. Its ultimate goal is to deliver efficient, affordable data storage with easy-to-use data management tools for the recommendation, search, and advertising functions.

What you will be doing:

1. Design and develop components for the distributed database HBase.

2. Design and develop components for the single-node LSM storage engine RocksDB.

3. Design and implement an offline/real-time data architecture for large-scale recommendation systems.

4. Design and implement a flexible, scalable, stable, and high-performance storage system and computation model.

5. Troubleshoot production systems, and design and implement necessary mechanisms and tools to ensure the overall stability of production systems.

6. Build industry-leading distributed systems such as offline and online storage, batch, and stream processing frameworks, providing reliable infrastructure for massive data and large-scale business systems.

Qualifications

Minimum Qualifications:

- Bachelor's Degree or above, majoring in Computer Science, or related fields, with 3+ years of experience building scalable systems

- Proficiency in common big data processing systems like Spark/Flink at the source code level is required, with a preference for experience in customizing or extending these systems

Preferred Qualifications:

- A deep understanding of the source code of at least one data lake technology, such as Hudi, Iceberg, or Delta Lake, is highly valuable and should be prominently showcased in your resume, especially if you have practical implementation or customization experience

- Knowledge of HDFS principles is expected, and familiarity with columnar storage formats like Parquet/ORC is an additional advantage

- Prior experience in data warehousing modeling

- Proficiency in programming languages such as Java, C++, and Scala is essential, along with strong coding skills and the ability to troubleshoot effectively

- Experience with other big data systems/frameworks like Hive, HBase, or Kudu is a plus

- A willingness to tackle challenging problems without clear solutions, a strong enthusiasm for learning new technologies, and prior experience in managing large-scale data (in the petabyte range) are all advantageous qualities.

Job ID: 143958609
