Data Specialist (Azure)

Job Title: Data Specialist (Azure)
Contract Type: Contract
Location: Remote (anywhere in EU except UK)
Salary: £60,000 - £80,000
REF: SANJ/EU/Dec
Contact Name: Sanjeev Mehta
Contact Email: sanjeev@sushtalent.co.uk
Job Published: 4 months ago

Job Description

Job title: Data Specialist (Azure)

Location: Remote (anywhere in Europe except UK)

Nationality: Any EU nationality

Client: Renowned IT consulting firm

Contract: 3-6 months (extendable)

Day rate: £250-£350 (depending on experience and location)

Job specs

  • Ability to work in ambiguous situations with unstructured problems and to anticipate potential issues/risks
  • Demonstrated experience in building data pipelines in data analytics implementations such as data lakes and data warehouses
  • At least two end-to-end implementations of a data processing pipeline
  • Experience configuring or developing custom code components for data ingestion, data processing and data provisioning, using big data and distributed computing platforms such as Hadoop/Spark, and cloud platforms such as AWS or Azure
  • Hands-on experience developing enterprise solutions, including designing and building frameworks, enterprise patterns, and database design and development, in two or more of the following areas:
    • End-to-end implementation of a cloud data engineering solution:
      • AWS (EC2, S3, EMR, Redshift Spectrum, DynamoDB, RDS, Redshift, Glue, Kinesis), or
      • Azure (Azure SQL DW, Azure Data Factory, HDInsight, Cosmos DB, PostgreSQL, SQL on Azure)
    • End-to-end implementation of a big data solution on the Cloudera/Hortonworks/MapR ecosystem:
      • Real-time solutions using Spark Streaming with Kafka/Apache Pulsar/Kinesis
      • Distributed compute solutions (Spark/Storm/Hive/Impala)
      • Distributed storage and NoSQL storage (Cassandra, MongoDB, DataStax)
    • Batch solutions and distributed computing using ETL/ELT (SSIS/Informatica/Talend/Spark SQL/Spark DataFrames/AWS Glue/ADF); a minimal illustrative sketch follows this list
    • DW-BI (MSBI, Oracle, Teradata), data modelling, performance tuning, memory optimization/DB partitioning
    • Frameworks, reusable components, accelerators, CI/CD automation
    • Languages (Python, Scala)
  • Proficiency in data modelling for both structured and unstructured data, across various layers of storage
  • Ability to collaborate closely with business analysts, architects and client stakeholders to create technical specifications
  • Ability to ensure the quality of delivered code components by employing unit testing and test automation techniques, including CI in DevOps environments
  • Ability to profile data, assess data quality in the context of business rules, and incorporate validation and certification mechanisms to ensure data quality
  • Ability to review technical deliverables and to mentor and drive technical teams to deliver quality work
  • Understanding of system architecture and ability to provide component-level design specifications, covering both high-level and low-level design
  • Experience in building data lake solutions from the ground up
  • Ability to provide support in building RFPs
  • Data governance using Apache Atlas, Falcon, Ranger, Erwin and Metadata Manager
  • Understanding of design patterns (Lambda architecture/data lake/microservices)
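
For illustration only, here is a minimal sketch of the kind of batch pipeline referenced in the specs above: a Spark DataFrame ETL job that ingests raw files from a data lake landing zone, applies simple data-quality rules, and provisions a curated, partitioned Parquet layer. The storage account, container, path and column names are hypothetical placeholders, not details of the client's environment.

```python
# Illustrative sketch only. All paths, containers and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-batch-etl").getOrCreate()

# Ingest raw CSV files from a hypothetical data lake landing zone (ADLS Gen2).
raw = (
    spark.read
    .option("header", True)
    .option("inferSchema", True)
    .csv("abfss://landing@examplelake.dfs.core.windows.net/orders/")
)

# Apply simple data-quality rules: required keys present, positive amounts.
clean = (
    raw.dropna(subset=["order_id", "order_ts"])
       .filter(F.col("amount") > 0)
)

# Provision a curated, date-partitioned Parquet layer for downstream consumers.
(
    clean.withColumn("order_date", F.to_date("order_ts"))
         .write.mode("overwrite")
         .partitionBy("order_date")
         .parquet("abfss://curated@examplelake.dfs.core.windows.net/orders/")
)

spark.stop()
```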