- Hadoop & Apache NiFi/Kafka/Spark
- 12 month contract + 12 month extension option
- Australian Citizenship essential | NV1 clearance ideal!
About the Client
Our client is a government-owned organisation responsible for providing services which are essential to the nation's safety and economic security. They are committed to the continual improvement of their services by becoming leaner, more efficient and more responsive to the changing needs of their customers.
About the Role
Our client is seeking a skilled Big Data Engineer specialising in Hadoop, who will focus on the development of new internal systems for the organisation. You will work with data streams to implement best practice and carry out ongoing maintenance and development of the system, including a range of administration activities and vendor engagement. You will bring to this role a strong understanding of Apache technologies and DevOps practices, as well as data development and management experience.
Skills and Attributes
- Demonstrated experience with data streaming and pipeline processing using Apache NiFi/Kafka/Spark
- Experience in data migration, modelling and analytics
- Demonstrated capability in administering support platforms, e.g. Hortonworks or an equivalent Hadoop platform
- Experience implementing role-based access in Hadoop platforms
- Expertise in RHEL (Red Hat Enterprise Linux), graph databases and database administration
- Proven DevOps experience
- Experience working in an Agile environment
How to Apply
To apply for this opportunity, please contact Wanya Eggler on 02 6285 3500 or email@example.com, or click the "APPLY NOW" button below.