Senior Data Engineer

Date: 06-Nov-2022

Location: Macquarie Park, Australia

Company: Singtel Group


Job Summary


In this role, you will design, build and maintain batch and real-time data pipelines on on-premises and cloud platforms to ingest data from various source systems. After ingestion, you will be responsible for integrating each data asset into the data model and making the data ready for consumption by end-user use cases and downstream applications.


Key Responsibilities


  • Enhance, optimise and maintain existing data ingestion, transformation and extraction pipelines and assets built for reporting and analytics on cloud (Azure with Databricks) and the Cloudera big data platform.
  • Work with the Product Owner to understand the priorities and OKRs for the quarter, and gather detailed requirements from the initiative owners or program sponsor for the Epics planned for delivery in the quarter.
  • Build new, optimised data pipelines and assets to meet end-user requirements. These pipelines must adhere to all architecture, design and engineering principles.
  • Design data pipelines and assets to meet non-functional requirements (security, reliability, performance, maintainability, scalability and usability). Most importantly, they should be designed to keep compute costs low on cloud.
  • Perform data wrangling, profiling and analysis for new datasets ingested from source systems or derived from existing datasets, using on-premises and cloud-native tools.
  • Develop a functional understanding of the data assets by working with various SMEs, and apply the transformation rules required to build the target data assets.


Experience and Qualifications


  • Bachelor’s degree in mathematics, statistics, computer science, information management, finance or economics
  • 7+ years’ experience in data engineering and warehousing
  • Experience building fully automated, end-to-end data pipelines on on-premises or cloud-based data platforms
  • Cloud experience – Azure-based analytics and reporting pipelines
  • Hands-on delivery of solutions for reporting and analytics use cases
  • Hands-on with advanced SQL on data warehouse, big data and Databricks platforms
  • Experience in data profiling, source-to-target mappings, ETL development, SQL optimisation, testing and implementation
  • Experience working effectively with cloud data warehouses
  • Knowledge of big data tools such as Spark (Python/Scala), Hive, Impala and Hue, and of storage systems such as HDFS and HBase
  • Knowledge of CI/CD processes and tooling – Bitbucket/GitHub, Jenkins, Nexus etc.
  • Experience managing structured and semi-structured data formats such as JSON, XML and Avro
  • Track record of implementing databases, data access middleware, and high-volume batch and (near) real-time processing



We understand that flexibility means different things to different people. We're proud to offer a variety of options to work in different ways, such as our Blended Ways of Working, job share and part-time. Our Blended Ways of Working lets our people work across home and our offices. Please talk to us about how we can make this role work for you!

Curious about our culture? Go behind the scenes with our people by searching #OptusLife on LinkedIn.