Data Engineer / Python Developer

We are looking for a self-starter Python developer with a knack for problem-solving and a desire to work on exciting projects.

Full-Time
Australia
September 16, 2025
Role Summary

RMT's main product is ChemAlert, a chemical risk management platform. This role focuses on managing ChemAlert data: automating its collection from a wide range of external sources, processing it, and serving it via APIs. The successful candidate will join the RMT Data Science team and report to the CIO.

Key Responsibilities / Accountabilities
  • Design, develop, and maintain data pipelines and ETL processes to ensure the smooth flow of data within our systems
  • Utilize Python to create efficient and maintainable code for data processing and analysis
  • Continuously monitor and improve the performance, reliability, and scalability of data pipelines
  • Proactively identify and resolve technical challenges and bottlenecks in the data pipeline
  • Manage and optimize data storage solutions and databases
  • Collaborate with cross-functional teams to gather and understand data requirements and implement solutions that meet business needs
Key Skills
  • Proven experience as a Python developer with a strong understanding of software development principles
  • Experience writing REST and/or GraphQL APIs with Python tools
  • Proficiency working in Linux environments
  • Hands-on experience with Docker and other containerization technologies
  • Strong problem-solving skills and a passion for tackling complex technical challenges
  • Excellent communication skills and the ability to work effectively in a collaborative team environment
  • A willingness to learn new technologies and adapt to evolving project requirements
  • Self-motivated and able to work independently while also contributing to team objectives
Nice to have
  • Experience with job orchestration tools (e.g., Airflow, Mage.ai)
  • Knowledge of data stream processing concepts and experience with relevant tools (e.g., Apache Kafka, Apache Druid, Apache Flink)
  • Knowledge of big data technologies such as Apache Spark
  • Understanding of DevOps practices and CI/CD pipelines
  • Familiarity with container orchestration tools like Kubernetes
  • Exposure to AI technologies and processes

Opportunity Awaits

Please fill out the form below to complete your application.