Senior Data Engineer
Date: 27-Feb-2023
Location: Singapore, Singapore
Company: Singtel Group
Responsibilities:
- Design, develop and automate large-scale, high-performance distributed data processing systems (batch and/or real-time streaming) that meet both functional and non-functional requirements
- Deliver high-level and detailed designs to ensure that the solution meets business requirements and aligns with the data architecture principles and technology stacks
- Partner with business domain experts, data scientists, and solution designers to identify relevant data assets, domain data models and data solutions. Collaborate with product data engineers to coordinate backlog feature development of data pipeline patterns and capabilities
- Own and lead data engineering projects, delivering data pipelines as reliable, efficient, testable and maintainable artifacts; this involves ingesting and processing data from a large number and variety of data sources
- Build, optimize and contribute to shared Data Engineering frameworks, tooling, Data Products and standards to improve the productivity and quality of output for Data Engineers
- Design and build scalable Data APIs to host operational data and Data Lake assets in a Data Mesh / Data Fabric architecture
- Drive Modern Data Platform operations using DataOps, ensuring data quality and monitoring the data systems; also support the Data Science MLOps platform
We are committed to a safe and healthy environment for our employees & customers and will require all prospective employees to be fully vaccinated.
The ideal candidate should possess:
- Bachelor’s degree in IT, Computer Science, Software Engineering, Business Analytics or equivalent
- Minimum of 8 years of experience in Data Engineering, Data Lake infrastructure, Data Warehousing, Data Analytics tools or related fields, designing and developing end-to-end scalable data pipelines and data products
- Experience in building and operating large, robust distributed data lakes (multiple PBs) and deploying high-performance, reliable systems with monitoring and logging practices
- Experience in designing and building data products and pipelines using some of the most scalable and resilient open-source big data technologies: Spark, Delta Lake, Kafka, Flink, Airflow, Presto and related distributed data processing frameworks
- Strong experience using ANSI SQL with relational databases such as Postgres, MySQL and Oracle, and knowledge of advanced SQL on distributed analytics engines
- Experience working with Telco Data Warehouse and/or Data Lake engines such as Databricks SQL, Snowflake, etc.
- Proficiency in programming languages such as Scala, Python, Java, Go or Rust, or scripting languages such as Bash