This job posting is archived and no longer available.
Open positions can be found under Projects.
Bigdata-Hadoop - Zurich, Switzerland - Contract Opportunity
Posted by Infoplus Technologies UK Ltd
Skills sought: Python, Kerberos, Design
Project Description
Job Title: BIGDATA-HADOOP
Job Type: Contract 3 months (rolling)
Job location: Zurich, Switzerland
Start Date: ASAP
We are looking for someone who is genuinely interested in coming to work in Zurich, Switzerland.
JOB DESCRIPTION:
MANDATORY SKILLSET:
Strong technical knowledge and programming experience in real-time streaming with Spark and Kafka (see the sketch after this list)
Deep understanding of Hadoop ingestion frameworks and streaming methodologies (e.g. Sqoop, Flume, Kafka, Storm, Falcon, NiFi)
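For illustration only (this sketch is not part of the original posting): a minimal PySpark Structured Streaming job of the kind the mandatory skillset describes, consuming a Kafka topic with Spark. The broker address, topic name, and checkpoint path are placeholder assumptions, and running it requires the spark-sql-kafka connector package on the classpath.

from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = (SparkSession.builder
         .appName("kafka-stream-sketch")
         .getOrCreate())

# Subscribe to a Kafka topic; the broker and topic names are assumptions.
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "events")
          .load())

# Kafka delivers key/value as binary; cast the payload to a string.
payloads = events.select(col("value").cast("string").alias("payload"))

# Write to the console for demonstration; a production job would sink to
# HDFS, Hive, or a NoSQL store, with checkpointing for fault tolerance.
query = (payloads.writeStream
         .format("console")
         .option("checkpointLocation", "/tmp/checkpoints/events")
         .start())
query.awaitTermination()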
OTHER RELEVANT SKILLS:
4+ years' experience in Big Data/Hadoop technologies and in architecting a big data landscape for large organizations
Experience working with multiple Hadoop distributions (e.g. Hortonworks, Cloudera, MapR)
Hands-on experience working with Hadoop file systems (e.g. HDFS/MapRFS)
Experience with the Hadoop ecosystem components MapReduce, Hive, Pig, Spark, Oozie, ZooKeeper, and Ambari
Proficient in designing, developing, and deploying scalable big data analytics solutions based on Hadoop and NoSQL stores (e.g. HBase, Cassandra, MongoDB)
Experience with Hadoop performance tuning
Should have led full-lifecycle development of ETL and analytics solutions for high-volume (hundreds of TB) data (see the batch-ETL sketch after this list)
Hands-on experience in development, capacity planning, deployment, and troubleshooting
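Again for illustration only: a minimal Spark batch-ETL sketch in the spirit of the ecosystem and ETL items above, reading Parquet data from HDFS and publishing an aggregate to a Hive table. The HDFS path and table name are placeholder assumptions.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("etl-sketch")
         .enableHiveSupport()   # allow reading/writing Hive tables
         .getOrCreate())

# Read raw events from HDFS (the path is an assumption).
raw = spark.read.parquet("hdfs:///data/raw/events")

# Aggregate daily event counts per event type.
daily = (raw.groupBy("event_date", "event_type")
            .agg(F.count("*").alias("events")))

# Publish to a Hive table for downstream analytics (name is an assumption).
daily.write.mode("overwrite").saveAsTable("analytics.daily_event_counts")

spark.stop()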
ADDED ADVANTAGES:
Experienced in agile/Scrum methodologies; excellent analytical and problem-solving skills
Working knowledge of Hadoop security components (e.g. data encryption, Kerberos, Ranger, Knox)
Programming experience with Python and Scala
Experience in Hadoop Administration
KEY CONSIDERATIONS:
The candidate is expected to have hands-on experience with Hadoop programming and should be willing to do code reviews (and programming to an extent) alongside design and architecture activities
Preferably
Project Details
Required Qualifications
-
Category:
IT Development, Media/Design