Hello, I’m

Leonel Parrales

Data Engineer


ABOUT ME

Hi there! I'm Leonel Parrales.

Data Engineer

A passionate, hard-working professional who is constantly looking for new challenges. Intrinsically motivated to be part of digitalization and to drive radical change in companies.

  • Birthday : Sep 22, 1997
  • Phone : +593 98-000-5123
  • Email : leonelparrales22@gmail.com
  • From : Quito, Ecuador
  • Languages : English, Spanish
  • Freelance : Available

SERVICES

Data Pipeline Engineer

Specialized in building end-to-end data pipelines orchestrated with Apache Airflow on AWS cloud architecture. Expert in ETL/ELT processes from raw to gold layer, implementing Lakehouse systems with S3 and Delta Lake, ensuring data quality and automated decision systems for business intelligence.
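The raw-to-gold layering mentioned above can be sketched in plain Python. This is a minimal illustration only: in practice each step would run as an orchestrated task over S3/Delta Lake, and the record fields and cleaning rules here are invented for the example.

```python
# Minimal sketch of a raw -> silver -> gold (medallion) flow.
# The record schema and cleaning rules are illustrative assumptions,
# not a production pipeline.

def to_silver(raw_records):
    """Clean and standardize raw records (silver layer)."""
    silver = []
    for rec in raw_records:
        if rec.get("amount") is None:  # drop incomplete rows
            continue
        silver.append({
            "customer": rec["customer"].strip().lower(),
            "amount": float(rec["amount"]),
        })
    return silver

def to_gold(silver_records):
    """Aggregate per customer (gold / analytics-ready layer)."""
    totals = {}
    for rec in silver_records:
        totals[rec["customer"]] = totals.get(rec["customer"], 0.0) + rec["amount"]
    return totals

raw = [
    {"customer": " Alice ", "amount": "10.5"},
    {"customer": "bob", "amount": None},   # filtered out in silver
    {"customer": "alice", "amount": "4.5"},
]
gold = to_gold(to_silver(raw))
print(gold)  # {'alice': 15.0}
```

The same shape scales up unchanged: swap the Python dicts for Spark DataFrames and the in-memory dict for Delta tables, and schedule each layer as its own Airflow task.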

Big Data Processing

Expert in large-scale data processing using Apache Spark, PySpark, and Apache Flink for real-time streaming. Proficient in Python and SQL for data transformations, implementing scalable workflows in Databricks and optimizing performance for enterprise-level data volumes and complex analytical workloads.

Data Architecture & Modeling

Architecting robust data ecosystems with Data Lakehouse, Data Warehouse, and Data Mesh patterns. Expert in SQL Server, PostgreSQL, MySQL, and MongoDB management, including geospatial data (PostGIS) and performance optimization for high-volume transactional and analytical workloads.

Cloud Data Solutions

Cloud-native data engineer with expertise in AWS (S3, EC2, VPC) and Azure (Data Lake Storage Gen2, Databricks, Data Factory). Specialized in designing secure, scalable cloud environments and implementing cost-effective data solutions with Azure Fundamentals certification.

Real-Time Data Streaming

Expert in real-time data streaming using Apache Kafka and Apache Flink for processing high-velocity data streams. Implementing event-driven architectures and building systems that deliver actionable insights with minimal latency for strategic business decisions.
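The consume-transform-produce pattern behind such a Kafka/Flink pipeline can be shown with in-memory queues standing in for topics. The topic names and the enrichment step are illustrative assumptions, not a real deployment.

```python
# Sketch of the consume -> transform -> produce streaming pattern,
# with in-memory queues as stand-ins for Kafka topics.
from queue import Queue

input_topic, output_topic = Queue(), Queue()

def transform(event):
    """Stateless enrichment, playing the role of a Flink operator."""
    return {**event, "amount_usd": round(event["amount_cents"] / 100, 2)}

def run_pipeline():
    """Drain the input topic, transform each event, emit downstream."""
    while not input_topic.empty():
        output_topic.put(transform(input_topic.get()))

for e in [{"id": 1, "amount_cents": 1999}, {"id": 2, "amount_cents": 500}]:
    input_topic.put(e)

run_pipeline()
results = [output_topic.get() for _ in range(output_topic.qsize())]
print(results)
```

In a real system the loop is replaced by a Kafka consumer group and a Flink job, which add exactly what this toy omits: partitioning, checkpointed state, and at-least-once delivery.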

DevOps & Automation

Proficient in DevOps practices with Apache Airflow orchestration, CI/CD pipelines, Git version control, and Docker containerization. Experience in monitoring with Grafana and deploying R Shiny applications for interactive Business Intelligence dashboards and automated workflows.

MY SKILLS

An overview of my data engineering skills.

As a Data Engineer, I specialize in building end-to-end data solutions using Python, SQL, PySpark, and Apache Spark for large-scale processing. My expertise includes orchestrating complex workflows with Apache Airflow and implementing real-time streaming with Kafka and Flink.

I architect cloud-native solutions on AWS and Azure, utilizing services like S3, Databricks, and Data Factory. My focus is on creating Lakehouse architectures with Delta Lake and ensuring data quality throughout the entire pipeline.

I'm passionate about transforming raw data into actionable insights through automated ETL/ELT processes, building robust data warehouses, and implementing monitoring solutions with Grafana. My recent certifications include Azure Data Engineer Specialization and Generative AI Data Processing.

Data Pipeline Engineering

95%

Big Data Processing

92%

Data Architecture & Modeling

90%

Cloud Data Solutions

88%

Real-Time Data Streaming

85%

DevOps & Automation

87%

RESUME

Experience

Data Engineer

July 2025 - Present

Banco Pichincha - Devsu

✅ End-to-End ETL Pipeline Development: Designed and implemented ETL processes from raw data layers to finalized data products used as critical inputs for business decision-making.
✅ Real-Time Data Processing: Built real-time ETL pipelines using Kafka topics for data consumption, Apache Flink for transformation, and Kafka for reprocessing and output.
✅ Data Workflow Orchestration: Managed and automated complex data workflows using Apache Airflow, ensuring reliable and efficient pipeline execution and monitoring.

Data Engineer

July 2022 - July 2025

Produbanco - Grupo Promerica

✅ Full Data Pipeline Ownership: Designed and implemented complete ETL/ELT processes, moving data from raw ingestion (raw) through transformation to a fully refined, analytics-ready state (gold/curated).
✅ Data Product Development: Exposed key strategic data assets through the creation of robust REST APIs for seamless internal and external consumption.
✅ Automated Decision Systems: Engineered and deployed critical business rules for automated credit approval processes, including scoring models and policy enforcement.
✅ Process Automation: Successfully reduced operational timelines and increased efficiency by developing and implementing targeted technological solutions.
✅ Requirements Management: Managed project lifecycles with a strong focus on quality, end-to-end traceability, and strict regulatory compliance.
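An automated credit-approval flow of the kind described above typically combines a scorecard with hard policy rules. The sketch below is purely illustrative: every threshold, weight, and field name is invented for the example, not an actual bank model.

```python
# Toy credit-approval flow: an additive scorecard plus hard policy rules.
# All thresholds, weights, and field names are made-up illustrations.

def score(applicant):
    """Simple additive scorecard."""
    s = 0
    s += 30 if applicant["monthly_income"] >= 1000 else 10
    s += 40 if applicant["on_time_payments_pct"] >= 0.9 else 15
    s += 20 if applicant["years_employed"] >= 2 else 5
    return s

def decide(applicant, threshold=70):
    """Policy enforcement: hard rejects first, then the score cut-off."""
    if applicant["has_active_default"]:  # hard policy rule
        return "reject"
    return "approve" if score(applicant) >= threshold else "review"

good = {"monthly_income": 1500, "on_time_payments_pct": 0.95,
        "years_employed": 3, "has_active_default": False}
print(decide(good))  # approve
```

Separating hard policy rules from the score makes regulatory requirements auditable independently of the model, which matters for the traceability and compliance focus noted above.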

Software Development Engineer

October 2021 - July 2022

Confiamed S.A.

✅ End-to-End ETL Pipeline Development: Designed and built robust ETL processes using C# .NET Core for data extraction, transformation, and loading, ensuring scalable and maintainable architectures.
✅ Systems Integration: Developed SOAP/REST services to enable seamless data integration between disparate systems, facilitating efficient data flow across the organization.
✅ Process Automation: Automated critical business processes involving data handling, reducing manual intervention and improving data accuracy and timeliness.
✅ End-to-End Data Solutions: Implemented full-stack data solutions, from backend data processing to functional user interfaces, ensuring data accessibility and usability for business stakeholders.

Data Engineering Intern

March 2021 - September 2021

Banco Pichincha

✅ Advanced Business Intelligence
✔ Developed interactive dashboards using R Shiny.
✔ Data transformation for strategic decision-making models.
✅ Large-Scale Data Processing
✔ Designed optimized ETL/ELT pipelines.
✔ Managed large data volumes in cloud and on-premise environments.
✅ Tech Stack
✔ Microsoft Stack: SQL Server, SSIS, Azure Data Lake, Azure Databricks.
✔ Cloud Data: Ingestion, transformation, and storage in Azure.
✔ Data flow automation with Databricks (PySpark/Scala).

Research Study Assistant

March 2021 - September 2021

Universidad Politécnica Salesiana del Ecuador

✅ Advanced Cloud Management:
✔ Implementation and administration of VPS servers on AWS (EC2, EBS, VPC).
✔ Configuration of secure and scalable environments.
✅ Databases & Storage:
✔ PostgreSQL/PostGIS (for geospatial data).
✔ MySQL and MongoDB for NoSQL solutions.
✔ Query optimization and index management.
✅ Scientific/Technical Documentation:
✔ Creation of professional LaTeX templates for technical papers.
✔ Documentation of architectures and data workflows.

Education

Computer Science Engineering

2017 - 2022

Universidad Politécnica Salesiana del Ecuador

Degree obtained after five years of study, covering foundational through advanced topics in computer science.

Datacamp Certifications

2021 - Present

Datacamp™

Certificates obtained in data engineering and related data analysis subjects.

Udemy Certifications

2018 - Present

Udemy™

Certificates obtained in programming, database management, and frameworks.

PORTFOLIOS

TEAMWORK

CONTACT ME

Contact Info

Reach out on my social networks or send me a message; I will get back to you as soon as possible.

Email

leonelparrales22@gmail.com

Phone

+593 98 000 5123

Address

Quito, Ecuador