
Design scalable systems using Hadoop and Apache Spark, develop database infrastructure, and automate workflows with Airflow.
As a Data Engineer, you will be at the heart of our large-scale data processing systems. Using technologies such as Hadoop and Apache Spark, you'll design and manage big data solutions that are both scalable and cost-effective. With expertise in RDBMS and SQL, you'll develop and maintain our database infrastructure, ensuring it operates efficiently and securely. You will also use AWS services, including S3 and Redshift, to build flexible storage and processing environments. With scheduling tools such as Airflow or Control-M, you'll automate data pipelines, making our workflows more efficient.
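To give a concrete feel for that last responsibility, here is a minimal sketch of an Airflow DAG wiring an extract-transform-load pipeline together, assuming Airflow 2.4 or later; the task bodies, schedule, and S3/Redshift locations are purely illustrative, not a description of our actual stack:

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract_from_s3(**context):
        # Hypothetical step: pull raw event files landed in S3
        print("extracting s3://example-bucket/raw/events/ ...")

    def transform_with_spark(**context):
        # Hypothetical step: submit a Spark job that cleans and aggregates
        print("submitting Spark transform job ...")

    def load_to_redshift(**context):
        # Hypothetical step: COPY curated output into a Redshift table
        print("loading analytics.events_daily ...")

    with DAG(
        dag_id="events_daily_pipeline",   # illustrative name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",                # Airflow 2.4+ keyword
        catchup=False,
    ) as dag:
        extract = PythonOperator(task_id="extract", python_callable=extract_from_s3)
        transform = PythonOperator(task_id="transform", python_callable=transform_with_spark)
        load = PythonOperator(task_id="load", python_callable=load_to_redshift)

        # Enforce strict ordering: extract, then transform, then load
        extract >> transform >> load

Each task here is a placeholder; in practice the transform step might submit a Spark job and the load step might run a Redshift COPY, but the dependency wiring shown above is the core of the scheduling work.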
In this role, you will write efficient Python and Scala code for data manipulation and processing tasks, with performance and cost in mind at every step; a short sketch of this kind of work follows below. Our team thrives in a collaborative environment where creative problem-solving meets technical excellence. If you are passionate about big data and want to work with the latest technologies in a dynamic setting, we want to hear from you.
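For a flavor of the day-to-day coding, here is a minimal PySpark sketch of a typical data-manipulation job; the dataset, column names, and S3 paths are hypothetical examples, not our actual data:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders_daily_rollup").getOrCreate()

    # Read raw orders from S3 (illustrative path) and keep completed ones
    orders = (
        spark.read.parquet("s3a://example-bucket/raw/orders/")
        .filter(F.col("status") == "completed")
    )

    # Aggregate revenue and order counts per customer per day
    daily = (
        orders
        .groupBy("customer_id", F.to_date("created_at").alias("order_date"))
        .agg(F.sum("amount").alias("revenue"), F.count("*").alias("orders"))
    )

    # Write date-partitioned output for a downstream Redshift load
    daily.write.mode("overwrite").partitionBy("order_date").parquet(
        "s3a://example-bucket/curated/orders_daily/"
    )

    spark.stop()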