Are you the data guru we’re looking for?
Virgin Money’s quest to build a better bank is underway with the build of the Virgin Money Digital Bank (VMDB). We’re looking for dynamic people who have the ambition to shape the business they will one day be responsible for running.
This start-up business will need to be designed from the top down and built from the bottom up; as such, the Cloud Data Engineer will play a critical role in the design, development and implementation of VMDB’s cloud environment. The successful applicant will also work closely with the wider VMDB technology team to ensure alignment of requirements and the delivery of a world-class cloud environment for the benefit of Virgin Money.
The Cloud Data Engineer will have responsibility for the technical delivery and support of VMDB’s cloud platform, including security, storage, compliance, DevOps, data ingestion, data integration, big data, NoSQL, graph database deployment and overall platform optimisation. During the build phase, the successful applicant will be instrumental in the implementation of the platform and in providing assurance that the platform works as designed from an end-to-end perspective. Beyond launch, the key responsibilities will be to continually review and refine all aspects of VMDB’s cloud environment and DevOps automation processes.
We’ll look to you to implement and operate data ingestion and processing pipelines and to develop data models for various modern storage and analytics engines. You’ll also write code, and work with data scientists to capture and interpret requirements and build tools, frameworks and dashboards to support data governance initiatives. You’ll bring external DevOps best practice while keeping abreast of new tools and ways of working that will improve the efficiency of the VMDB DevOps function.
To be brilliant, you’ll ideally be experienced in: developing data integration jobs for multi-TB traditional relational data warehouses; Hadoop-based big data environments (specifically Hadoop, Hive, Pig, Spark, Oozie); designing and optimising ETL processes to transform large volumes of data from a variety of sources; and building streaming data ingestion and processing pipelines that leverage open source technologies such as Kafka, Storm and Flink.
What are you waiting for? Apply now!
Our aim is to nurture a skilled, diverse and committed workforce where every individual regardless of background can share our purpose, reach their potential and be rewarded appropriately for their contribution to our success. We’re also happy to talk flexible working with you.
Salary: From £36,500 to £50,000 per annum, plus amazing benefits
If you’re not ticking all the boxes for this role, don’t worry – apply anyway, as we’d love to hear from you! There are opportunities requiring varying levels of experience, so please share your details with us so we can find out more about you.
Please note, if successful you will be subject to a satisfactory credit and criminal records check as part of our recruitment process.