Keywords / Skills: Talend, AWS, Amazon Web Services, CloudFront, S3, Web Services

Experience
2 - 12 years
Industry
IT/ Computers - Software
Function
IT
Role
Software Engineer/ Programmer
Posted On
31st Jul 2018
Job Description

We are looking for a Data Engineer who will work on collecting, storing, processing, and analysing large sets of data. The primary focus will be on choosing optimal solutions for these purposes, then implementing, maintaining, and monitoring them. You will also be responsible for integrating them with the architecture used across the company.

Responsibilities:

- Work closely with the BI and Data Analytics, Merchandising, and Product teams, and with Ecommerce developers, to understand data requirements.

- Design and build dimensional data models and schemas to improve the accessibility, efficiency, and quality of data.

- Develop ETL jobs for data ingestion and implement the application logic to meet the business requirements (see the sketch after this list).

- Monitor performance and advise on any necessary infrastructure changes.

- Define data retention policies.
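
For illustration, here is a minimal sketch in Python of the kind of ETL ingestion job described above. The feed path, column names, and fact-table schema are all hypothetical; a production job would typically be built in a tool such as Talend, with incremental loads and proper error handling.

# A minimal ETL ingestion sketch, for illustration only.
# The feed path, column names, and target schema are hypothetical.
import csv
import sqlite3

def run_etl(feed_path: str, db_path: str) -> None:
    conn = sqlite3.connect(db_path)
    # Target: a simple star-schema fact table keyed by date and product.
    conn.execute("""
        CREATE TABLE IF NOT EXISTS fact_sales (
            order_date TEXT,
            product_id TEXT,
            quantity   INTEGER,
            revenue    REAL
        )
    """)
    with open(feed_path, newline="") as f:
        for row in csv.DictReader(f):
            # Basic cleansing: skip rows missing keys, coerce types.
            if not row.get("order_date") or not row.get("product_id"):
                continue
            conn.execute(
                "INSERT INTO fact_sales VALUES (?, ?, ?, ?)",
                (row["order_date"], row["product_id"],
                 int(row.get("quantity") or 0),
                 float(row.get("revenue") or 0.0)),
            )
    conn.commit()
    conn.close()

if __name__ == "__main__":
    run_etl("orders_feed.csv", "warehouse.db")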

Skills:

- 4 years or more of experience with data warehousing technical components (e.g. data modeling, ETL, and reporting), infrastructure (cloud/AWS platforms), and their integration

- Deep understanding of the architecture of enterprise-level data warehouse solutions spanning multiple platforms (OLTP, OLAP, web services)

- Experience with design, creation, management, and business use of large datasets

- Enthusiasm for learning new technologies and the ability to implement solutions with them, whether to provide new functionality to users or to scale the existing platform.

- Good written and verbal communication skills, as the role involves working closely with diverse teams.

- Strong analytical skills are a plus.

- Solid experience with one of the following ETL tools: Talend, Informatica, DataStage, or Pentaho

- Experience with AWS cloud services: Amazon RDS, Amazon S3, Amazon Redshift (see the sketch after this list).

- Exposure to Big Data technologies (Hadoop, MapReduce, HDFS) or Big Data querying tools (Pig, Hive, Impala) is a plus.
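
As a small illustration of working with these services, here is a sketch of staging a file to S3 with boto3. The bucket, key, and IAM role are hypothetical, and AWS credentials are assumed to be configured in the environment.

# A minimal sketch of staging a file to S3 with boto3.
# Bucket and key names are hypothetical.
import boto3

def stage_to_s3(local_path: str, bucket: str, key: str) -> None:
    s3 = boto3.client("s3")
    s3.upload_file(local_path, bucket, key)

if __name__ == "__main__":
    stage_to_s3("orders_feed.csv", "example-data-lake", "staging/orders_feed.csv")
    # From here, a Redshift COPY command would typically load the staged
    # object into a warehouse table, e.g.:
    #   COPY fact_sales
    #   FROM 's3://example-data-lake/staging/orders_feed.csv'
    #   IAM_ROLE '<role-arn>' CSV;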

Additional Knowledge and Skills:

- Fluency in writing advanced SQL scripts

- Experience with NoSQL databases such as HBase, Cassandra, or MongoDB (preferable)

- Experience building stream-processing systems using solutions such as Storm or Spark Streaming (preferable); see the sketch after this list.
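
For candidates new to stream processing, here is a minimal sketch of a Spark Structured Streaming job in Python. The socket source and word-count aggregation are the standard toy example and stand in for a real event feed such as Kafka.

# A minimal Spark Structured Streaming sketch, for illustration only.
from pyspark.sql import SparkSession
from pyspark.sql.functions import explode, split

spark = SparkSession.builder.appName("stream-sketch").getOrCreate()

# Read lines from a local socket (e.g. `nc -lk 9999`) as an unbounded table.
lines = (spark.readStream
         .format("socket")
         .option("host", "localhost")
         .option("port", 9999)
         .load())

# Count occurrences of each word across the stream.
words = lines.select(explode(split(lines.value, " ")).alias("word"))
counts = words.groupBy("word").count()

# Emit updated counts to the console on each micro-batch.
query = (counts.writeStream
         .outputMode("complete")
         .format("console")
         .start())
query.awaitTermination()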

Education:

Graduate degree in Computer Science or a related field required

4+ years of experience in a related field


