Senior Data Engineer - Scala, Python, Spark, Kafka, GCP
One of the fastest-growing tech-driven eCommerce companies in the world is looking for 3x Data Engineers to develop its Spark and Scala data platform for advanced data science. You will join a strong team of 4, enabling real-time data ingestion for several different data science use cases. The team works in a fast-paced, highly agile way, and you will also contribute to new tech selection and architecture.
This is a great opportunity to join a very strong tech team at a fun, flexible and engaging household-name company.
YOUR ROLE AND RESPONSIBILITIES - Senior Data Engineer - Scala, Python, Spark, Kafka
As Senior Data Engineer, you will:
- Access and integrate unique data sources using Scala, Spark, Kafka, BigQuery, and other tools, in a Google Cloud environment
- Work in a highly Agile Software Development environment, including Continuous Integration (Docker), regular Sprints, TDD, etc.
- Introduce new tech to the business to solve complex data problems
- Work with Data Scientists to ensure that the data is appropriate for the NLP and AI work
YOUR SKILLS AND EXPERIENCE
To qualify for this role, you will need:
- Experience developing a production-level Big Data platform, preferably Cloudera or Hortonworks, and experience with a number of Hadoop/Spark ecosystem tools
- Cloud experience is preferred - both GCP and AWS are great!
- Solid Scala programming skills; other languages can also be considered (Clojure, Java, Python)
- Experience with both batch and stream processing is preferable
- Any experience with analytics/data science would be really useful but not necessary
HOW TO APPLY
To apply for this role, please do so via this site. For more information on this role or other data engineering roles, get in touch with Ross at Harnham.
KEYWORDS: Spark, Scala, Google Cloud, AWS, GCP, Python, Java, Kafka, Data Engineering