
Hamza G

@hamza_ghias

Senior Data Engineer

Pakistan
English
About me
Data engineer with 7+ years of experience building scalable, end-to-end data solutions on Microsoft Fabric, Azure, and Databricks. I specialize in designing robust pipelines across the medallion architecture — from raw ingestion to curated, analytics-ready layers using Delta Lake, Apache Spark, and Azure Data Factory. ...

Skills


My services

Data Engineering Consulting
I will design your pipeline as a senior data platform consultant

Portfolio

Professional experience

Senior Data Engineer

Systems Ltd • Full-time

Aug 2025 - Present · 9 mos

- Designed and developed Azure Data Factory (ADF) pipelines to extract data from multiple source systems, load it into Azure Data Lake containers, and orchestrate ingestion into Databricks Bronze layers for structured processing.
- Built scalable big-data processing pipelines using Apache Spark, HDFS, Hive, and the Parquet file format.
- Developed and orchestrated ETL/ELT workflows using Azure Data Factory (ADF) and Databricks on Microsoft Azure.
- Designed batch and real-time ingestion pipelines using Spark Streaming and Kafka to process high-volume data.
- Implemented optimized Spark (PySpark/Scala) transformations, leveraging distributed computing for performance and scalability.
- Designed and maintained Snowflake data models, optimized SQL queries, and analytical datasets for enterprise reporting.
- Built and supported Power BI datasets and reports using curated, analytics-ready data.
- Used ADLS Gen2 as a centralized data lake for raw and curated data storage.
- Managed source control and agile delivery using Git and Jira.
- Automated data processing and validation tasks using Python.
- Supported end-to-end pipeline reliability through monitoring, logging, and defect resolution.
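The Bronze-to-Silver promotion pattern behind the pipelines above can be sketched in plain Python. This is an illustrative stand-in for the PySpark/Delta code, with hypothetical record fields (`order_id`, `amount`), not the production implementation:

```python
# Minimal sketch of a Bronze -> Silver promotion step.
# Plain Python dicts stand in for Spark DataFrames; the field
# names are hypothetical, chosen only for illustration.

from datetime import datetime, timezone

def promote_to_silver(bronze_rows):
    """Cleanse raw bronze records into a typed, deduplicated silver set."""
    silver, seen = [], set()
    for row in bronze_rows:
        # Drop records missing a business key (basic data-quality gate).
        if not row.get("order_id"):
            continue
        # Deduplicate on the business key, keeping the first occurrence.
        if row["order_id"] in seen:
            continue
        seen.add(row["order_id"])
        # Cast and standardize types, as a Spark transformation would.
        silver.append({
            "order_id": str(row["order_id"]),
            "amount": round(float(row.get("amount", 0.0)), 2),
            "ingested_at": datetime.now(timezone.utc).isoformat(),
        })
    return silver

bronze = [
    {"order_id": 1, "amount": "19.99"},
    {"order_id": None, "amount": "5"},  # rejected: missing key
    {"order_id": 1, "amount": "19.99"}, # rejected: duplicate
]
print(promote_to_silver(bronze))
```

In Spark the same gate/dedupe/cast steps would typically be a `filter`, `dropDuplicates`, and `withColumn` chain writing to a Delta table.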

Data Engineer

Business Value Services • Full-time

Feb 2023 - Aug 2025 · 2 yrs 6 mos

- Built and optimized ETL (Extract, Transform, Load) processes in Azure Data Factory (ADF) and SSIS for efficient data migration.
- Built Microsoft Fabric notebooks and configured default environments to streamline ETL pipeline development and ensure reusable Python libraries are pre-installed for new notebooks.
- Developed, tested, and supported enterprise data engineering solutions within the Azure and Databricks ecosystems to enable analytics and reporting across business domains.
- Built and optimized batch and ELT data pipelines using Azure Data Factory, Databricks, and Azure Data Lake, ensuring secure, reliable, and scalable data processing.
- Designed and implemented Bronze, Silver, and Gold data layers using Spark and Delta Lake to deliver high-quality, analytics-ready datasets.
- Contributed to data modelling and warehousing activities, including fact/dimension design, schema optimization, and source-to-target mapping.
- Developed and optimized SQL transformations, views, and stored procedures to support analytical applications and BI consumption.
- Assisted in data analysis activities such as data profiling, metadata collection, and validation to ensure solutions met business requirements.
- Implemented unit testing, monitoring, and troubleshooting to resolve data pipeline issues of low to moderate complexity.
- Worked closely with senior and lead data engineers on technical design, architecture reviews, and performance optimization.
- Ensured adherence to data governance, privacy, and cybersecurity standards, maintaining metadata and data lineage documentation.
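The data-profiling step mentioned above (null counts and distinct counts per column, run before source-to-target mapping) can be illustrated with a small stdlib-only sketch. The column names are hypothetical; in practice this would be Spark or SQL:

```python
# Minimal data-profiling sketch: per-column null and distinct counts,
# the kind of validation check run before source-to-target mapping.
# Plain Python stands in for the Spark/SQL profiling used in practice.

from collections import defaultdict

def profile(rows):
    """Return {column: {'nulls': n, 'distinct': n}} for a list of dicts."""
    stats = defaultdict(lambda: {"nulls": 0, "values": set()})
    for row in rows:
        for col, val in row.items():
            if val is None:
                stats[col]["nulls"] += 1
            else:
                stats[col]["values"].add(val)
    return {c: {"nulls": s["nulls"], "distinct": len(s["values"])}
            for c, s in stats.items()}

sample = [
    {"customer_id": 1, "region": "EU"},
    {"customer_id": 2, "region": None},
    {"customer_id": 2, "region": "EU"},
]
print(profile(sample))
```

A profile like this flags candidate keys (high distinct count, zero nulls) and columns needing null handling before they reach the Gold layer.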

Database Developer

Paysys Labs • Full-time

Apr 2021 - Mar 2023 · 1 yr 11 mos

- Designed, developed, and optimized ETL/ELT pipelines using Azure Data Factory (ADF), SSIS, Talend Studio, and Snowflake to support scalable and reliable data integration and migration processes.
- Built advanced SQL and PL/SQL queries, stored procedures, functions, triggers, and packages across SQL Server, Oracle, and Snowflake to support reporting, analytics, and application needs.
- Performed query performance tuning and optimization, including index creation/modification, execution-plan analysis, and configuration of transaction isolation levels to improve data processing efficiency.
- Developed Snowflake data models, views, and tasks, and managed Snowflake stages, file formats, and warehouses for efficient ELT workflows.
- Designed and maintained Snowflake pipelines using Snowpipe, Streams, and Tasks for near real-time data ingestion and transformation.
- Provided end-to-end application and data support, handling daily Jira tickets, troubleshooting production issues, and performing routine system maintenance.
- Supported architectural design and enhancements of existing and new data pipelines, ensuring maintainability, scalability, reliability, and adherence to best practices.
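The Streams-and-Tasks pattern referenced above — processing only the rows that changed since the last run — can be sketched in plain Python as an analogy. This is not Snowflake code; the offset-tracking and the uppercase transformation are illustrative stand-ins:

```python
# Illustrative analogy to Snowflake Streams: keep a high-water mark
# (the offset of the last processed row) and transform only rows that
# arrived since. State and transformation here are hypothetical.

def consume_stream(source_rows, state):
    """Process rows beyond the stored offset; advance the offset."""
    offset = state.get("offset", 0)
    new_rows = source_rows[offset:]            # change set since last run
    processed = [r.upper() for r in new_rows]  # stand-in transformation
    state["offset"] = len(source_rows)         # advance the high-water mark
    return processed

state = {}
table = ["a", "b"]
first = consume_stream(table, state)   # processes both initial rows
table += ["c"]
second = consume_stream(table, state)  # processes only the new row
print(first, second)
```

In Snowflake, the stream object tracks the change set itself and a scheduled Task consumes it, so the offset bookkeeping shown here is handled by the platform.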