Elliottmoss Consulting – Data Engineer

Company
Elliottmoss Consulting
elliottmoss.com
Designation
Data Engineer
Date Listed
03 Aug 2021
Job Type
Experienced / Senior Executive
Job Period
Sep 2021 - Aug 2022
Profession
IT / Information Technology
Industry
Computer and IT
Location Name
Singapore
Allowance / Remuneration
$6,300 - $7,100 monthly
Company Profile

Elliott Moss Consulting (EMC) is an IT & SAP consultancy company established in 2010. Headquartered in the United Kingdom, EMC takes pride in helping its clients automate, transform, and optimize their business processes.

Job Description

• 8 to 10 years of experience in data warehouse, data analytics projects, change management process, and/or any IM (Information Management) related works.
• Must possess Hadoop skills
• Preferably with experience in implementation best practices involving data management, data reconciliation, data de-duplication, scheduling, etc.
• Able to assess design considerations in the aspect of data management and integration
• Experience with Agile/SCRUM/Kanban software implementation methodology
• Should have good knowledge of DevOps engineering using Continuous Integration/Delivery tools such as Docker, Jenkins, Puppet, Chef, GitHub, Atlassian Jira, etc.
• Certification in any Hadoop/Big Data tool or technology, data integration, data management, or visualisation tool is an added advantage.
• Knowledge of Collibra Metadata is an added advantage.
• Knowledge of Apache Airflow is an added advantage.
• Knowledge of infrastructure paradigms such as OS and networking is an added advantage.

Big Data Developer

• Hands-on experience in implementing data integration processes, designing and developing data models, and building detailed ETL/ELT processes or programs.
• Contributed to at least 2 phases of the SDLC, with experience in Big Data, data warehouse, data analytics, data migration, change management, and/or any IM (Information Management) related work.
• Experience with Hadoop technologies such as HDFS/MapR-FS, MapReduce (v2), advanced HDFS ACLs, Hive, HBase, Cassandra, Impala, Spark, Sqoop, Kafka, NiFi, Flink, Druid, ZooKeeper, and the zkClient tool
• Good understanding of Cloudera or Hortonworks distributions
• Experience working with RDBMS technologies such as Oracle, Microsoft SQL Server, PostgreSQL, DB2, MySQL, MariaDB, etc.
• Hands-on experience with Spark, Spark SQL, HiveQL, Impala, Spark DataFrames, and Flink CEP as ETL frameworks
• Strong knowledge of Big Data stream ingestion and stream processing using Kafka, Spark Structured Streaming, and Flink
• Good understanding of Spark memory management, with and without YARN
• Should have experience designing and developing in one or more NoSQL database technologies such as Cassandra, MongoDB, HBase, CouchDB/Couchbase, Elasticsearch, etc.
• Should have good working knowledge of HCatalog and Hive metadata.
• Should have working knowledge of the Kerberos authentication protocol
• Good knowledge of data warehouse and data management implementation methodology.
• Knowledge and experience in data visualisation using tools such as Tableau, Microsoft Power BI, or QlikView will be an advantage.
• Ability to pick up new tools and work independently with minimal guidance from project leads/managers.
• Hands-on programming skills in Scala/Python using the Spark/Flink frameworks

This position is already closed and no longer available.
