Job Description: Data Engineer

Location: Bangalore, India
Experience: 4+ Years

About Quation Solution

In today’s data-driven world, information is power. But raw data alone can’t fuel success. At Quation, we bridge the gap. We’re a data analytics consultancy with a purpose: to empower brands to unlock the true potential of their data.

We translate complex data landscapes into clear, actionable insights that inform strategic decision-making across entire organizations. We leverage cutting-edge technology and a deep understanding of each client’s industry to ensure our solutions are tailored to their unique challenges and opportunities.

If you are a passionate and skilled Data Engineer looking for an exciting opportunity, we would love to hear from you. Apply now to join our team in Bangalore!

Job Overview

We are seeking a skilled Data Engineer with extensive experience in Python, PySpark, Spark SQL, and big data technologies to join our team in Bangalore. The ideal candidate will be responsible for designing, developing, and maintaining our data processing and analytics infrastructure.

Key Responsibilities

  • Design and implement scalable data pipelines using Python, PySpark, and Spark SQL.
  • Develop and maintain ETL processes to support data integration from various sources.
  • Optimize and troubleshoot data processing workflows to ensure performance and reliability.
  • Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions.
  • Manage and maintain big data infrastructure, ensuring data availability, integrity, and security.
  • Perform data quality checks and validation to ensure accuracy and consistency.
  • Monitor and tune data processing jobs to enhance performance and reduce costs.
  • Stay updated with the latest industry trends and best practices in big data engineering.

Required Skills and Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
  • 4+ years of experience in data engineering or a related role.
  • Proficiency in Python and PySpark for data processing and analysis.
  • Strong knowledge of Spark SQL and its optimization techniques.
  • Hands-on experience with big data technologies such as Hadoop, Hive, and HDFS.
  • Experience with cloud platforms such as AWS, Azure, or Google Cloud Platform (GCP).
  • Familiarity with data warehousing concepts and technologies like Redshift, BigQuery, or Snowflake.
  • Strong SQL skills and experience with relational databases.
  • Knowledge of data modeling, data structures, and algorithms.
  • Excellent problem-solving skills and attention to detail.
  • Strong communication and collaboration skills.

Preferred Qualifications

  • Experience with real-time streaming technologies such as Apache Kafka, Flink, or Storm.
  • Knowledge of containerization technologies such as Docker and Kubernetes.
  • Experience with CI/CD pipelines and version control systems like Git.
  • Understanding of data governance, privacy, and security practices.

Benefits

  • Competitive salary and performance bonuses.
  • Health insurance and wellness programs.
  • Flexible working hours and remote work options.
  • Opportunities for professional development and career growth.
  • Collaborative and inclusive work environment.