Responsibilities
– Create highly scalable and robust data solutions for use by our clients
– Design, build, and maintain multiple performant data pipelines and ETL/ELT flows against massive datasets
– Ensure data accuracy and reliability
Main skills
– Strong SQL
– Strong Java development
– Experience building large-scale streaming and batch data pipelines (e.g. Python, Java)
– Experience building out data warehouse and/or data lake infrastructure
– Experience with data modelling and physical database design preferred
– Experience with Big Data technologies, especially Kafka for messaging
– Experience with microservice design and deployment
– Exceptional business-level English (C1); your English level will be tested.
Wish list extras
– AWS data stack (e.g. Kinesis, Glue, RDS, Athena, Redshift)
– Full-stack software development
– Knowledge of data security best practices (e.g. data encryption, tokenization, masking)
If you are technically strong and business-astute, with excellent communication skills and the ambition to join a quality-focused company with like-minded clients, then please apply. We look forward to meeting you.
More Information
- Location: SAN DIEGO
- Salary: €30,000 - €45,000 / year
- Number of vacancies: 1
- Company name: Consulting, Company Investing