Senior Data Engineer

Date: 03-Jun-2022

Location: Macquarie Park, Australia

Company: Singtel Group


Job Summary


In this role, you will design, build and maintain batch and real-time data pipelines on on-premises and cloud platforms to ingest data from various source systems. Post-ingestion, you will be responsible for integrating the data assets into the data model and making the data ready for consumption by end-user use cases and downstream applications. You will ensure that data is stored efficiently and adheres to the data governance and architecture principles that safeguard the quality, consistency and accessibility of our data assets.


Role and Responsibilities


  • Enhance, optimise and maintain existing data ingestion, transformation and extraction pipelines and assets built for reporting and analytics on Big Data and EDW platforms.
  • Work with the Product Owner to understand the quarter's priorities and OKRs, and gather detailed requirements from initiative owners or the program sponsor for the Epics planned for delivery that quarter.
  • Build new and optimised data pipelines and assets to meet end-user requirements. These pipelines must adhere to all architecture, design and engineering principles.
  • Design data pipelines and assets to meet non-functional requirements (security, reliability, performance, maintainability, scalability and usability). Most importantly, design them to keep compute costs low in the cloud.
  • Perform data wrangling, data profiling and data analysis on new datasets ingested from source systems and on datasets derived from existing ones, using on-premises and cloud-native tools.
  • Develop a functional understanding of data assets by working with various SMEs, and apply the transformation rules required to build the target data asset.
  • Coordinate with other teams on planning, design, governance, engineering and release management processes, and ensure timely and accurate delivery of data and services.
  • Build new data assets and pipelines to suit downstream requirements; for example, a dataset for Tableau will differ from one built for TM1 cubes.


Desired Experience


  • 7+ years’ experience working in Data Engineering and Warehousing.
  • Experience building fully automated, end-to-end data pipelines on on-premises or cloud-based data platforms.
  • Hands-on delivery of solutions for reporting and analytics use cases.
  • Hands-on with advanced SQL on data warehouse, Big Data and cloud-native technologies.
  • Experience in data profiling, source-to-target mappings, ETL development, SQL optimisation, testing and implementation.
  • Experience working with relational databases such as Teradata and cloud data warehouses.
  • Knowledgeable in Big Data tools such as Spark (Python/Scala), Hive, Impala and Hue, and storage such as HDFS and HBase.
  • Knowledgeable in CI/CD processes and tooling such as Bitbucket/GitHub, Jenkins and Nexus.
  • Knowledgeable in managing structured and semi-structured data formats such as JSON, XML and Avro.
  • Track record of implementing databases, data-access middleware, and high-volume batch and (near) real-time processing.


We understand that flexibility means different things to different people. We're proud to offer a variety of flexible work options, such as our Blended Ways of Working, job sharing and part-time arrangements. Our Blended Ways of Working lets our people work across home and our offices. Please talk to us about how we can make this role work for you.

Curious about our culture? Go behind the scenes with our people by searching #OptusLife on LinkedIn.