I’m a data engineer experienced in Python, data architecture and modeling, distributed data processing, pipeline implementation, and AWS cloud data tools, with an analytical mindset. Datamorfose was born with the aim of sharing knowledge and motivating continuous learning. Drawing on my experience and the opportunities I’ve had, I will share content about the world of data. I currently work as a Data Engineer in the financial market and have participated in large projects involving infrastructure and data product migrations, as well as the development, automation, and maintenance of data pipelines. I invite you to visit the blog and embark on this journey of constant learning with me!
Itaú Unibanco is a leading financial institution in Brazil and Latin America, offering a wide range of banking, insurance, investment, and credit card services. Headquartered in São Paulo, Brazil, Itaú Unibanco is known for its market presence, innovation, and leadership in cloud-native development.
A sustainable coffee cultivation company located in Minas Gerais, Brazil, recognized among the top 10 coffees in the world by the Cup of Excellence.
The Getulio Vargas Foundation is a Brazilian private institution of higher education, founded on December 20, 1944, with the initial objective of preparing qualified personnel for the public and private administration of the country.
2023-2024 Data Science, Postgraduate Certificate
2023-2024 Full Stack Development, Postgraduate Certificate
2021-2022 Data Analysis, Information Technology, Postgraduate Certificate
2021-2022 MBA in Big Data and Competitive Intelligence, Information Technology, Postgraduate Degree
2018-2019 Postgraduate Specialization in Information Security
Degree in Archival Science
2023 Itaú Upwards Award - Responsible for organizing and planning the migration of over 4,000 workflows across the raw, transformed, and specialized data layers to Cloudera CDP - AWS.
I played an essential role in promoting and disseminating the tool, conducting presentations, sharing relevant content, and addressing users’ questions. My involvement significantly contributed to raising awareness about the tool and facilitating its adoption by users.
A product created for data democratization that uses automated mapping of nested JSON schemas for raw data ingestion. By mapping new fields daily, it supports schema evolution and generates the artifacts needed to make data available for consumption within a single day.
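As a rough illustration of the idea (not the product’s actual code — every name here is hypothetical), automated schema mapping can be sketched as a recursive walk over nested JSON that records each field’s dotted path and type, with newly discovered fields merged into the known schema each day:

```python
# Illustrative sketch only: infers a flat "path -> type" schema from nested
# JSON records and merges newly discovered fields, mimicking daily schema
# evolution. Function and variable names are hypothetical.

def infer_schema(record, prefix=""):
    """Map a nested JSON object to {dotted.path: type_name}."""
    schema = {}
    for key, value in record.items():
        path = f"{prefix}{key}"
        if isinstance(value, dict):
            # Recurse into nested objects, extending the dotted path.
            schema.update(infer_schema(value, prefix=f"{path}."))
        else:
            schema[path] = type(value).__name__
    return schema

def evolve_schema(current, record):
    """Return the schema extended with any new fields found in `record`."""
    discovered = infer_schema(record)
    new_fields = {p: t for p, t in discovered.items() if p not in current}
    return {**current, **new_fields}, new_fields

# Example: day 2 introduces a new nested field, "customer.segment".
day1 = {"id": 1, "customer": {"name": "Ana", "city": "SP"}}
day2 = {"id": 2, "customer": {"name": "Bia", "city": "RJ", "segment": "retail"}}

schema = infer_schema(day1)
schema, new_fields = evolve_schema(schema, day2)
print(new_fields)  # {'customer.segment': 'str'}
```

In a real pipeline the merged schema would then drive artifact generation (table DDL, catalog entries, and so on) rather than a simple print.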
This article details the experience of obtaining the AWS Certified Data Engineer — Associate (DEA-C01) certification through the beta exam, highlighting motivations, challenges, preparation strategies, and practical tips. It analyzes the complexity of the exam, emphasizing the importance of both practical and theoretical knowledge of AWS services and data engineering principles, and includes study recommendations for different experience levels, providing a comprehensive overview to help candidates prepare adequately and succeed in the certification and in their data engineering careers in the AWS cloud.
This certification helped me understand and validate the knowledge I’ve acquired. As it was a beta exam, lacking preparatory courses and with uncertainty about which topics to study, it provided an opportunity to assess my strengths and areas for improvement within the context of Data Engineering and data service integration in AWS. I was able to explore various data scenarios, focusing mainly on implementations, monitoring, and solutions for data pipelines, requiring an understanding of factors such as cost, operational effort, and performance.
Over 300 hours of training in data engineering, in which I developed practical and theoretical skills in topics such as: Data Warehouse design and implementation; Data Lake design and integration; data security and high availability; machine learning and AI in distributed environments; and analytics, visualization, and reporting for decision-making with Big Data. Within this training, I also had the opportunity to experiment with and practice a variety of data tools and services, including Docker, Postgres, Apache Hadoop, Apache NiFi, Apache Kafka, AWS Lake Formation, Amazon EMR, CDC, Dremio, Airbyte, Databricks, BigQuery, Apache Spark, Airflow, Snowflake, ElasticSearch, Scala, Cassandra, MongoDB, Azure Data Factory, and Power BI.
Other certifications and licenses acquired during my continuous learning.