Data Engineer

Job Locations IN-Hyderabad
Posted Date 1 day ago (1/19/2026 7:35 AM)
ID
2026-4007
# of Openings
1
Category
Development

Overview

JAGGAER is looking for a Senior Data Engineer to join our Data Pipeline team and help build the technologies used to drive our advanced Analytics capabilities. You will be part of a team responsible for the vision and architecture of scalable data pipeline solutions utilizing various data movement and transformation technologies. You will join a strong team of data engineers who create high-performance, optimized, and robust data pipeline processes to move data from disparate sources around the globe.

Principal Responsibilities

  • Contribute, as part of a team, in designing, testing, and implementing sophisticated data pipeline technologies to extract and transform data from source systems.
  • Perform data modeling on target systems to store data and support analytic querying.
  • Communicate with other development teams to gather and document requirements.
  • Work in Agile methodology and participate in design and review meetings.
  • Follow best practices for the software development life-cycle including coding standards, reviews, source management, build and testing.
  • Collaborate with other engineers in the team to implement best practices around large-scale data processing.

Position Requirements

  • 5+ years of experience as a Data Engineer or in a similar role working with large data sets and ELT/ETL processes.
  • 7+ years of industry experience in software development.
  • Knowledge and practical use of a wide variety of RDBMS technologies such as MySQL, Postgres, SQL Server or Oracle.
  • Use of cloud-based data warehouse technologies such as Snowflake or Amazon Redshift.
  • Strong SQL experience with an emphasis on analytic queries and performance.
  • Experience with various “NoSQL” technologies such as MongoDB or Elasticsearch.
  • Familiarity with either native database or external change-data-capture technologies.
  • Practical use of various data formats such as CSV, XML, JSON, and Parquet.
  • Use of data flow and transformation tools such as Apache NiFi, Talend, or PDI.
  • Implementation of ELT processes in languages such as Java, Python, or Node.js.
  • Use of large, shared data stores such as Amazon S3 or the Hadoop File System.
  • Thorough and practical use of various data warehouse schemas (Snowflake, Star).

 

