— About 3+ years in software development;
— Experience with Scala, SQL;
— Experience with Hadoop ecosystem, Spark;
— Experience working with Agile methodologies;
— Ability to work in a fast-paced, start-up like environment;
— Proactive self-starter who works well independently and as part of a team;
— Good communication skills;
— B.Sc. in Computer Science or related field;
— English — Upper-Intermediate (headquarters in London).
— Flexible working hours;
— Cozy office in the Pechersk district (new Creative States);
— High-level compensation and regular performance-based salary and career development reviews;
— Medical insurance (health), employee assistance program;
— Paid vacation, holidays, and sick leave;
— Team-building events and plenty of fun to help you take a break, relax, and think beyond the next line of code;
— You are not just a number; your work makes a difference.
— You will help architect and build solutions to business-critical problems;
— You will participate in interesting projects such as:
— Carrying out efficient integration with our data providers via various API endpoints and data representation formats;
— Building and deploying an in-house distributed ETL pipeline for processing petabytes of data per day;
— Enabling accurate, comprehensive, and reliable data storage in our distributed data warehouses based on the needs of other teams;
— Providing continuous improvements in how data is processed and stored, based on feedback and the needs of the business or other teams;
— Setting up monitoring of key performance metrics and overall system behaviour so that any anomaly is detected and addressed promptly;
— You will be responsible for optimising ETL pipelines, maintaining over 60 Spark jobs, and building a data lake for data scientists and analysts;
— Experimenting with new tools and technologies to produce cutting-edge solutions to business problems;
— Being part of a self-organising, results-oriented agile team that uses Kanban to deliver new product launches.