
Key Responsibilities
. Creating complex, enterprise-transforming applications on diverse, high-energy teams
. Working with the latest tools and techniques
. Hands-on coding, usually in a pair programming environment
. Working in highly collaborative teams and building quality code
. The candidate must exhibit a good understanding of model implementation, data structures,
data manipulation, distributed processing, application development, and automation.
. The candidate must have a good understanding of consumer financial products,
data systems and data environments, and processes that are necessary for the implementation of Risk and Finance models
Essential Skills & Prerequisites
. Degree in computer science or a numerate subject (e.g. engineering, sciences, or mathematics), or a Bachelor's/Master's degree with 6 years of experience
. Proven experience as a Data Engineer or in a similar role for a minimum of 3 years.
. Hands-on development experience with strong proficiency in ETL programming languages such as Spark SQL, Python, Pandas, PySpark, and Scala (Apache Spark), and in distributed computing on the Hadoop platform (Hive, HDFS, and Spark)
. 4-6 years of application development and implementation experience.
. Knowledge of data warehousing concepts and technologies.
. 3 to 5 years of experience with SQL-related programming.
. 3 to 5 years of experience designing and developing in Python.
. 2 to 3 years of experience with the Hadoop platform (Hive, HDFS, and Spark).
. 2 to 3 years of experience with Bash (Linux/Unix) shell scripting.
. 2 to 3 years of experience with Spark programming.
. Knowledge of microservices architecture and cloud is an added advantage.
. Knowledge of Java, Oracle, Scala, and other data programming (ETL) languages is an added advantage.
. Good experience with CI/CD pipelines and working in an Agile environment.
. Familiarity with cloud platforms such as AWS and Big Data infrastructure, Jira, Bitbucket, etc.
Mandatory Skills
. Hands-on development experience with strong proficiency in ETL programming languages such as Spark SQL, Python, Pandas, PySpark, and Scala (Apache Spark), and in distributed computing on the Hadoop platform (Hive, HDFS, and Spark)
. 4-6 years of application development and implementation experience. Knowledge of data warehousing concepts and technologies.
. 3 to 5 years of experience with SQL-related programming.
. 3 to 5 years of experience designing and developing in Python.
. 2 to 3 years of experience with Bash (Linux/Unix) shell scripting.
Desired
. A Bachelor's degree or higher preferably in Computer Science, Information Technology, or a related field.
. Additional experience in developing service-based applications.
. Excellent analytical skills; proficient in MS Office and able to produce board-level documentation.
. Fluency in written and spoken English; good communication and interpersonal skills.
. Ability to develop ETL processes for data extraction, transformation, and loading is an advantage.
. Self-starter who sets and meets challenging personal targets; detail-oriented, with a big-picture outlook.
. Understanding of current technologies employed by Tier 1 Investment Banking Institutions
. Must be a team player
. Experience working in the financial/banking industry is an advantage.
. Avaloq knowledge/skills are good to have.
Job ID: 146615341