This job posting is archived and no longer available.
Current job openings can be found under Projects.
Data Engineer
Posted by Darwin Recruitment
Required skills: Engineering, Engineer, Python, Java
Project description
DATA ENGINEER
Laakdal region (Belgium), €70 PER HOUR, 40 hours a week, freelance, Hadoop, Spark, Hive, Pig, Agile
SECTOR
Retail
Do you get excited about working with data?
Would you like the opportunity to gain more experience in data engineering?
I'M LOOKING FOR SOMEONE WHO:
MS/BS degree in computer science or a related discipline
2+ years' experience in large-scale software development
1+ years' experience with Hadoop or big data technologies
Strong Java, Python, shell scripting, and SQL skills
Strong development skills in Hadoop, Spark, Hive, and Pig
Good understanding of file formats including JSON, Parquet, Avro, and others
Experience with performance/scalability tuning, algorithms, and computational complexity
Ability to understand relational database schemas
Proven ability to work with cross-functional teams to deliver appropriate resolutions
Experience with AWS components and services, particularly EMR, S3, and Lambda
Automated testing and Continuous Integration/Continuous Delivery
WHO IS INTERESTED IN:
Working for a big company
Becoming more experienced in data engineering
Being part of an exciting environment with talented people
FOR A COMPANY THAT:
Provides a challenging environment with opportunities for growth
There are lots of projects to sink your teeth into, so I'm hiring fast. Don't delay: send your CV to (see below) or give me a call.
Project details
Required qualifications
-
Category:
IT Development, Engineering/Technology