This job posting has been archived and is no longer available.
Open job postings can be found under Projects.
Big Data Senior Architect Zurich, Switzerland
Posted by Silverlink Technologies
Required skills: Java
Project description
POSITION DETAILS
POSITION: BIG DATA SR. ARCHITECT
JOB LOCATION: ZURICH, SWITZERLAND
CONTRACT DURATION: 12 MONTHS +
JOB DESCRIPTION
MANDATORY SKILLS
Good understanding of the architectural concepts of Spark Streaming, Kafka, Flume, HBase and Solr
Proficient in defining technical architecture for real-time data processing
Proficient in designing, developing and deploying scalable big data analytics solutions based on Spark Streaming and HBase
Strong technical knowledge and programming experience in real-time streaming with Spark Streaming, Flume and Kafka
Prior experience working with Spark stateful transformations
Hands-on experience in development (Spark/HBase/Kafka/Flume), capacity planning, deployment and troubleshooting
Experience in Spark Streaming and HBase performance tuning
Should have led full life-cycle development of a Hadoop/Spark implementation for high-volume data
Experience working with the Cloudera Hadoop distribution
Strong experience in Core Java
ADDITIONAL SKILLS:
Experience in the finance domain
Exposure to WebSphere MQ
Experience installing and configuring Hadoop on AWS
Experience in data quality, exception management, reconciliation and data migration
Project details
Required qualifications
-
Category:
IT Development