Python Data Engineer

Global Analytics Firm is hiring a Python Data Engineer / Data Modeler to be based in Leeds. This is a permanent role but can also be offered on an FTC basis. The salary ranges from £55K to £65K.

The Data Engineer will provide technical expertise in analysis, design, development, rollout and maintenance of data integration initiatives. This role will contribute to implementation methodologies and best practices, as well as work on project teams to analyse, design, develop and deploy business intelligence / data integration solutions to support a variety of customer needs.

Responsibilities include:
*Design, develop, test, and deploy data integration processes (batch or real-time)
using tools such as Cloud Pub/Sub, Dataprep, Dataflow, Dataproc, Databricks,
Matillion, Cloud Composer/Airflow, Sqoop, etc.
*Create functional & technical documentation - e.g. ETL architecture documentation,
unit testing plans and results, data integration specifications, and data testing plans
*Provide a consultative approach with business users, asking questions to understand
the business need and deriving the data flow, conceptual, logical, and physical data
models based on those needs. Perform data analysis to validate data models and to
confirm ability to meet business needs.
*Stay current with emerging and changing technologies to recommend and implement
the most beneficial technologies and approaches for data integration
*Design and develop enterprise self-service analytic solutions, business intelligence
reports, visualisations and best practice standards. Toolsets include but are not
limited to: BigQuery, BigQuery BI Engine, Bigtable, Google Data Studio, Looker,
Microsoft Power BI, Tableau and Qlik.
*Work with the reporting team to identify, design and implement a reporting user
experience that is consistent and intuitive across environments and report methods,
defines security, and meets usability and scalability best practices.

Education & Experience
*5 years' industry implementation experience with data integration tools such as
Microsoft SSIS, Azure Data Factory, Databricks, Glue, Step Functions, Airflow,
Flume/Sqoop/Pig, etc.
*1-3 years' consulting experience preferred
*Bachelor's degree or equivalent experience; Master's degree preferred
*Strong background in data warehousing, OLTP systems, data integration and SDLC
*Experience with big data frameworks and working experience in Spark, Hadoop or Hive
(incl. derivatives such as PySpark (preferred), Spark with Scala or Spark SQL) or
similar, along with experience in libraries / frameworks that accelerate code
development
*Experience using major data modelling tools (examples: ERwin, ER/Studio,
PowerDesigner, etc.)
*Experience with major database platforms (e.g. Cloud SQL, Cloud Spanner, SQL Server,
Oracle, Azure Data Lake, Hadoop, Google BigQuery, BigQuery BI Engine, Snowflake,
Redshift, etc.)
*Strong experience in orchestration and working experience with Data Factory,
HDInsight, Data Pipeline, Cloud Composer or similar
*Understanding and experience with major Data Architecture philosophies (Dimensional,
ODS, Data Vault, etc.)
*Understanding of modern data warehouse capabilities and technologies such as
real-time processing, cloud, and Big Data
*Understanding of on-premises and cloud infrastructure architectures (e.g. Azure)
*Experience in Agile processes (Scrum cadences, roles, deliverables) and working
experience with Google Cloud Source, JIRA or similar, with experience in CI/CD
using one or more code management platforms
*1-3 years' development experience in decision support / business intelligence
environments utilising tools such as BigQuery, BigQuery BI Engine, Bigtable,
Google Data Studio, Looker, etc.

Please apply for immediate interview!

The JM Group is operating and advertising as an Employment Agency for permanent positions and as an Employment Business for interim / contract / temporary positions. The JM Group is an Equal Opportunities employer and we encourage applicants from all backgrounds.