Lead Data Engineer

A Global Analytics business is hiring for a Lead Data Engineer based in the City. This is a permanent role with a salary between £60K and £85K. You will provide technical expertise in the analysis, design, development, rollout and maintenance of data integration initiatives.

This role will contribute to implementation methodologies and best practices, as well as work on project teams to analyse, design, develop and deploy business intelligence / data integration solutions to support a variety of customer needs. This position oversees a team of Data Integration Consultants at various levels, ensuring their success on projects, goals, training and initiatives through mentoring and coaching.

Duties include:
- Design, develop, test, and deploy data integration processes (batch or real-time) using tools such as Microsoft SSIS, Azure Data Factory, Databricks, Matillion, Airflow, Sqoop, etc.
- Create functional & technical documentation - e.g. ETL architecture documentation, unit testing plans and results, data integration specifications, data testing plans, etc.
- Provide a consultative approach with business users, asking questions to understand the business need and deriving the data flow, conceptual, logical, and physical data models based on those needs.
- Perform data analysis to validate data models and to confirm ability to meet business needs.
- Architect, design, develop and set direction for enterprise self-service analytic solutions, business intelligence reports, visualisations and best practice standards. Toolsets include, but are not limited to: SQL Server Analysis Services and Reporting Services, Microsoft Power BI, Tableau and Qlik.

Skills and Experience Required:
- 7+ years of industry implementation experience with data integration tools such as Microsoft SSIS, Azure Data Factory, Databricks, Glue, Step Functions, Airflow, Apache Flume/Sqoop/Pig, etc.
- Minimum of 5 years of data architecture, data modelling or similar experience
- Strong knowledge of data warehousing, OLTP systems, data integration and the SDLC
- Strong experience with big data frameworks and working experience in Spark, Hadoop or Hive (including derivatives such as PySpark (preferred), Spark Scala or Spark SQL) or similar, along with experience in libraries/frameworks that accelerate code development
- Experience using major data modelling tools (examples: ERwin, ER/Studio, PowerDesigner, etc.)
- Experience with major database platforms (e.g. SQL Server, Oracle, Azure Data Lake, Hadoop, Azure Synapse/SQL Data Warehouse, Snowflake, Redshift etc.)
- Strong experience in orchestration and working experience in Data Factory, HDInsight, Data Pipeline, Cloud Composer or similar
- Understanding of on-premises and cloud infrastructure architectures (e.g. Azure, AWS, GCP)
- Strong experience with Agile processes (Scrum cadences, roles, deliverables) and working experience in Azure DevOps, JIRA or similar, with experience in CI/CD using one or more code management platforms

Please apply for an immediate interview!

The JM Group is operating and advertising as an Employment Agency for permanent positions and as an Employment Business for interim / contract / temporary positions. The JM Group is an Equal Opportunities employer and we encourage applicants from all backgrounds.