We're working with a prestigious Australian organisation looking for a DW & ETL Engineer to join their Data and Integration team! You will own (design, implement and support) AWS/Azure big data technologies such as the data lake and data warehouse, and support the organisation in growing its data analytics capabilities. In addition, the data engineer will also administer the Power BI tool and support the implementation of the data mastering capability.
What you will be responsible for:
- Identify, design, and implement data pipelines to assemble large, complex data sets that meet functional / non-functional business requirements
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using AWS ‘big data’ technologies
- Identify and implement internal process improvements: automating manual processes, optimising data delivery, re-designing infrastructure for greater scalability, security etc.
- Build data models (dimensional / data vaults) as per the business needs to help provide actionable business insights and other key business performance metrics
- Perform unit testing (UT), system integration testing (SIT) and user acceptance testing (UAT), including efficient documentation and defect tracking
- Work with stakeholders including the Business and IT teams to assist with data-related technical issues and support their data needs
- Create data tools for analytics and ‘data science’ team members that assist them in building machine learning models
- Administer and govern the use of Power BI toolset
- Work with data and analytics experts to strive for greater functionality in our data systems.
Technical experience to be successful:
- Experience building and optimizing data pipelines, architectures and data sets
- Experience in dimensional modelling and building data vaults
- Advanced SQL knowledge and experience working with relational databases, including query authoring and working familiarity with a variety of database platforms
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
- Strong analytic skills related to working with structured, semi-structured and unstructured datasets
- Build processes supporting data transformation, data structures, metadata, dependency and workload management
- A successful history of manipulating, processing and extracting value from large disconnected datasets
- Experience in implementing master data management
- Experience supporting and working with cross-functional teams in a dynamic environment.
- 3+ years of experience in a Data Engineer role
- Graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field, along with experience using the following software/tools:
- AWS big data tools and technologies: Redshift, Glue, EMR, RDS etc.
- Open-source technologies: Hadoop, Spark, Kafka, Elasticsearch etc.
- Experience with relational SQL and NoSQL databases, including MS SQL Server, Postgres, Cassandra, DynamoDB etc.
- Experience with data pipeline and workflow management tools: Airflow, AWS/Azure Data Pipeline, Lambda etc.
- Experience with stream-processing systems: AWS Kinesis, Storm, Spark-Streaming, etc.
- Experience with object-oriented/object function scripting languages: Python, Java, Scala, etc.
- Experience with any commercial ETL tool: Informatica, Talend, Pentaho etc.
- Experience with any visualization tool: Power BI, Tableau, Qlik etc.
If you are interested in hearing more about this role, apply now! Reece.email@example.com