Franko Ortiz

Web Developer

903 followers
Peru

  • Timeline

  • About me

    Senior Data Engineer

  • Education

    • El Cultural

      -
    • Universidad Nacional de Trujillo

      2008 - 2013
      Ingeniero Informático (Computer Engineering), Databases
  • Experience

    • Ministerio del Interior - Perú

      Feb 2014 - Mar 2014
      Web Developer

      * End-to-end creation of a web application built with the Django MTV (Django's take on MVC) framework to meet the organization's requirements. Includes CRUD modules, export to PDF, database backups, etc.
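
      A minimal sketch, assuming a hypothetical Django app with invented model and template names, of the CRUD pattern described above (an illustration, not the original project's code):

        # Hypothetical Django sketch: one model plus list/create views for the CRUD
        # pattern mentioned above (all names are invented).
        from django.db import models
        from django.shortcuts import render, redirect

        class Requirement(models.Model):
            title = models.CharField(max_length=200)
            description = models.TextField(blank=True)
            created_at = models.DateTimeField(auto_now_add=True)

        def requirement_list(request):
            # Read side of the CRUD: list all requirements, newest first.
            items = Requirement.objects.order_by("-created_at")
            return render(request, "requirements/list.html", {"requirements": items})

        def requirement_create(request):
            # Create side of the CRUD: naive POST handler (a ModelForm would be more idiomatic).
            if request.method == "POST":
                Requirement.objects.create(
                    title=request.POST["title"],
                    description=request.POST.get("description", ""),
                )
                return redirect("requirement_list")
            return render(request, "requirements/form.html")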

    • Claro Perú

      Sept 2014 - Jun 2015
      ETL Developer

      • Provided support as the person responsible for the PVU application (unique selling point), which covers sales operations, migrations, returns, renewals, and activations.
      • Wrote PL/SQL programs and shell scripts to solve problems in the production environment, taking the impact on other processes into account.
      • Provided support as the person responsible for the portability process, which covers portability sales from assessment through postpaid, prepaid, massive, corporate, fixed, and mobile sales and activations.
      • Solved problems in postpaid line activations using AIX (IBM) shell scripts containing PL/SQL programs, and prevented future errors.
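
      As a rough illustration of the shell-plus-PL/SQL pattern described above, a minimal Python sketch (the original work used AIX shell scripts; the connection details and procedure name are invented):

        # Hypothetical sketch of invoking a PL/SQL fix routine from a script,
        # analogous to the AIX shell + PL/SQL programs described above.
        import oracledb

        def fix_postpaid_activations(order_ids):
            conn = oracledb.connect(user="ops", password="***", dsn="prod-db/PVU")  # assumed DSN
            try:
                with conn.cursor() as cur:
                    for order_id in order_ids:
                        # pkg_activations.fix_line is an invented procedure name.
                        cur.callproc("pkg_activations.fix_line", [order_id])
                conn.commit()
            finally:
                conn.close()

        if __name__ == "__main__":
            fix_postpaid_activations([1001, 1002, 1003])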

    • Banco Ripley

      Jun 2015 - Jan 2016
      Senior ETL Developer

      • Helped streamline business processes by developing, installing, and configuring Hadoop ecosystem components (Hive, Flume, Mahout) that moved data from individual servers into the Hadoop Distributed File System (HDFS).
      • Managed and reviewed Hadoop log files (Flume).
      • Created ETLs for extracting, transforming, and loading the various databases the bank manages, with sources including Oracle tables, .csv and .txt files, and remote databases.
      • Applied tuning using hints (parallel, append, no_use_nl, etc.), load optimization (BULK COLLECT, FORALL), and partitioned tables and indexes.
      • Migrated processes from COBOL to Oracle: analyzed the impact, programmed Oracle objects (packages, procedures, functions, flat files, remote databases, etc.), applied ETL (tuned extractors, worked with external tables), processed large volumes of data, documented, and created AIX (IBM) shell scripts.
      • Reduced the response time of commission reporting processes using indexes, hints (parallel, ordered), and table partitions.
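
      A minimal sketch, assuming the pyhive client and an invented host and table, of querying Hadoop log data through Hive as described above:

        # Hypothetical sketch of reading Hadoop log data through Hive, similar to
        # the Hive/HDFS work described above (host, table, and columns are invented).
        from pyhive import hive

        conn = hive.Connection(host="hadoop-edge", port=10000, database="logs")
        cur = conn.cursor()
        cur.execute(
            "SELECT source_file, count(*) AS events "
            "FROM flume_events WHERE dt = '2015-09-01' "
            "GROUP BY source_file"
        )
        for source_file, events in cur.fetchall():
            print(source_file, events)
        cur.close()
        conn.close()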

    • Interseguro Compañía de Seguros

      Feb 2016 - Jan 2017
      Senior ETL Developer

      • Created data marts using Oracle PL/SQL ETLs (including tuning, analytic functions, and massive bulk loads) and Integration Services for some flows.
      • Created reports used in dashboards.
      • Created AIX (IBM) shell scripts to migrate tables of millions of records from different sources (SQL Server, .txt, .csv, etc.) into the data marts, plus shells to launch PL/SQL programs (blocks, procedures, etc.).
      • Improved the performance of the SAMP processes (claims payments and credits) through PL/SQL in order to reduce time and cost.
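
      A minimal Python sketch of a batched flat-file load into a data mart table, in the spirit of the bulk-load ETLs above (file layout, table, and connection details are invented; the original work used PL/SQL and shell scripts):

        # Hypothetical batched CSV -> Oracle data mart load, analogous to the
        # bulk loads described above (file, table, and columns are invented).
        import csv
        import oracledb

        BATCH = 10_000

        def load_claims(csv_path):
            conn = oracledb.connect(user="etl", password="***", dsn="dwh-db/DM")  # assumed DSN
            sql = "INSERT INTO dm_claims (claim_id, amount, paid_at) VALUES (:1, :2, :3)"
            with conn.cursor() as cur, open(csv_path, newline="") as fh:
                rows = []
                for rec in csv.reader(fh):
                    rows.append((int(rec[0]), float(rec[1]), rec[2]))
                    if len(rows) >= BATCH:
                        cur.executemany(sql, rows)  # array insert, similar in spirit to FORALL
                        rows.clear()
                if rows:
                    cur.executemany(sql, rows)
            conn.commit()
            conn.close()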

    • Verizon

      Jan 2017 - Jan 2018
      Big Data Engineer

      - Moved part of the business process from on-premise to the cloud, using AWS, CloudFormation/Ansible, TTL, and Python.
      - Built ML models using Spark ML and Python, picking data from an isolated data lake to forecast seasonal sales counts and to cluster customers so the right products can be offered to them.
      - Adapted new customer requirements that involved expanding the data lake (AWS EMR) and the DWH (AWS Redshift), using ETL in Python, NiFi, Spark SQL, PL/SQL, and shell scripting.
      - Built Kafka topics, Elasticsearch indexes, and Spark Streaming jobs as part of the analytics wave engine at Verizon.
      - Created and managed Docker containers for specific iteration/sprint purposes such as databases, stream engines, and notebooks.
      - Added new Airflow packages for monitoring the Verizon speed layer.
      - Cloned part of the business data to NoSQL technologies such as Cassandra and Elasticsearch, using Kafka and Sqoop to expose data for analytics activities.
      - Built a DWH using PL/SQL and Hive + Sqoop to move data from source databases to the HDFS data lake and an Oracle OLAP DB, to be consumed by dashboard teams.
      - Found insights using Spark SQL, Zeppelin, Apache Drill, and Apache Presto, creating cross-DML operations between different sources such as NoSQL databases, Oracle, and flat files.
      - Migrated existing ETL and MPP technologies such as Integration Services and DataStage to Oracle to save costs, and created shell scripts to schedule jobs from Linux.
      - Added new services and applications to the existing DWH and lakes using ETLs/pipelines, as part of current business models in federal government projects such as FISMA and EIS.
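
      A minimal PySpark sketch of the customer-clustering idea mentioned above (paths and feature columns are invented):

        # Hypothetical PySpark customer-clustering sketch (invented data lake paths
        # and feature columns), illustrating the Spark ML work described above.
        from pyspark.sql import SparkSession
        from pyspark.ml.feature import VectorAssembler
        from pyspark.ml.clustering import KMeans

        spark = SparkSession.builder.appName("customer-clustering").getOrCreate()

        # Pull customer features from the data lake (assumed parquet location).
        customers = spark.read.parquet("s3://datalake/customers/features/")

        assembler = VectorAssembler(
            inputCols=["monthly_spend", "lines_count", "tenure_months"],
            outputCol="features",
        )
        features = assembler.transform(customers)

        # Cluster customers into segments to target offers.
        model = KMeans(k=5, seed=42, featuresCol="features").fit(features)
        segments = model.transform(features).select("customer_id", "prediction")
        segments.write.mode("overwrite").parquet("s3://datalake/customers/segments/")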

    • Banco de Crédito BCP

      Jan 2019 - Oct 2019
      Senior Data Engineer

      - Created scripting pipelines (Python + Bash) for handling custom loads.
      - Created and ran Kafka (microservices) Python clients (faust, confluent-kafka, etc.) using Docker, Python, Java, Linux scripts, Azure Event Hubs, Confluent Kafka, and HDFS.
      - Built Azure Data Factory pipelines (ETL) for newly acquired business units.
      - Created Impala and Spark (PySpark) jobs in the bank's data lake (raw vault and unified vault) on a Cloudera cluster.
      - Created and managed Docker containers for specific iteration/sprint purposes such as databases, stream engines, and notebooks.
      - Created pipelines using Python dataframes and shell scripts to handle automations.
      - Modeled and designed solid ETLs and ELTs to populate high-concurrency databases and an object data lake for our customers.
      - Adapted new customer requirements that involved expanding the data lake and DWH, using ETL in Python, Spark SQL, PL/SQL, and shell scripting.
      - Built a hot layer for our customers that includes predictive analytics streams (Grafana, Metricbeat, Logstash, Kafka, Elasticsearch, Airflow).
      - Built a REST API with Python/Flask for external consumers.
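
      A minimal sketch of a confluent-kafka Python producer like the Kafka clients described above (broker, topic, and payload are invented):

        # Hypothetical confluent-kafka producer sketch (invented broker, topic,
        # and event payload), illustrating the Python Kafka clients above.
        import json
        from confluent_kafka import Producer

        producer = Producer({"bootstrap.servers": "broker1:9092"})

        def delivery_report(err, msg):
            # Called once per message to confirm delivery or surface errors.
            if err is not None:
                print(f"delivery failed: {err}")

        event = {"account_id": "123", "event": "card_payment", "amount": 150.0}
        producer.produce(
            "payments-events",
            key=event["account_id"],
            value=json.dumps(event).encode("utf-8"),
            callback=delivery_report,
        )
        producer.flush()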

    • Thrive Market

      Oct 2019 - Sept 2021
      Senior Data Engineer

      - Created DAGs using Airflow and Snowflake for all loads, on the Astronomer platform.
      - Built data models using dbt (Core) with a Snowflake backend, plus custom data pipelines in Airflow (all tasks in Python).
      - Handled multiple data loads (Snowflake, EMR, containers, Redshift, APIs, SFTP, S3) using Airflow.
      - Migrated Redshift (Aurora data source) to Snowflake, including the Airflow boxes.
      - Stored VM information in Elasticsearch via Beats or S3 (Logstash grok connector).
      - All of the above ran under CI/CD for Airflow on Astronomer.
      - Created dbt models and then scheduled them in Airflow.
      - Used Terraform to apply Snowflake permissions.
      - Used Liquibase for DDL deployments.
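
      A minimal Airflow 2 DAG sketch in the spirit of the Snowflake loads described above (credentials, stage, table, and schedule are invented):

        # Hypothetical Airflow DAG loading a Snowflake table (invented connection
        # details, stage, and table), illustrating the daily loads described above.
        from datetime import datetime
        import snowflake.connector
        from airflow import DAG
        from airflow.operators.python import PythonOperator

        def copy_orders_into_snowflake():
            conn = snowflake.connector.connect(
                account="xy12345", user="etl_user", password="***",  # assumed credentials
                warehouse="LOAD_WH", database="ANALYTICS", schema="RAW",
            )
            conn.cursor().execute(
                "COPY INTO raw.orders FROM @raw.orders_stage FILE_FORMAT = (TYPE = CSV)"
            )
            conn.close()

        with DAG(
            dag_id="orders_daily_load",
            start_date=datetime(2021, 1, 1),
            schedule_interval="@daily",
            catchup=False,
        ) as dag:
            PythonOperator(task_id="copy_orders", python_callable=copy_orders_into_snowflake)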

    • Roofstock

      Sept 2021 - Oct 2023
      Senior Data Engineer

      - Created scripting pipelines (Python + Bash) for handling custom loads.
      - Created DAGs using Airflow and Snowflake for batch requirements (APIs, dataframes, threads, etc.).
      - Created dbt models for the Snowflake DWH.
      - Collected information from multiple third-party data sources: MongoDB Atlas, Confluent.
      - Created Azure Data Factory pipelines: linked services, Key Vault, datasets, Azure Blob, containers.
      - Owned the migration from Airflow 1.0 to Astronomer 2.0.
      - Orchestrated custom pipelines with Azure resources when Airflow could not fit.
      - All of the above ran under CI/CD for Airflow, dbt (Docker deployments), and Data Factory.
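
      A minimal sketch of pulling third-party data from MongoDB Atlas for a batch pipeline, as mentioned above (URI, collection, and fields are invented):

        # Hypothetical MongoDB Atlas extraction to a flat file for downstream
        # loading (invented URI, collection, and fields).
        import csv
        from pymongo import MongoClient

        client = MongoClient("mongodb+srv://etl_user:***@cluster0.mongodb.net/")  # assumed URI
        listings = client["marketplace"]["listings"]

        with open("listings_extract.csv", "w", newline="") as fh:
            writer = csv.writer(fh)
            writer.writerow(["listing_id", "price", "status"])
            for doc in listings.find({"status": "active"}, {"listing_id": 1, "price": 1, "status": 1}):
                writer.writerow([doc.get("listing_id"), doc.get("price"), doc.get("status")])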

    • Beazley

      Oct 2023 - now
      Senior Data Engineer

      - Model new data entities for the MDA project with dbt Cloud.
      - Migrate legacy financial models from dbt Core to dbt Cloud.
      - Create dbt macros, tests, jobs, and lineage to support the new architecture.
      - Create new channels/locations via Fivetran HVR for SQL Server, Oracle, and SAP tables.
      - Migrate workflows from the StoneBranch orchestrator to a modern orchestrator, Airflow 2.
      - Work on switching reporting from Tableau to ThoughtSpot AI reports.
      - Work with Snowflake to handle procedures, stages, warehouses, and ETL used by Airflow for policies, quotes, and submissions.
      - Fix Kafka Streams applications for the single customer view product.
      - Work on new Airflow DAGs that handle API data, with all workflows driven by custom Python code.
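
      A minimal sketch of the kind of API-to-Snowflake task the Airflow DAGs above orchestrate (endpoint, staging table, and credentials are invented):

        # Hypothetical API-to-Snowflake staging load (invented endpoint, table,
        # and credentials), illustrating the custom Python work described above.
        import json
        import requests
        import snowflake.connector

        def load_policies():
            resp = requests.get("https://api.example.com/policies", timeout=30)  # assumed endpoint
            resp.raise_for_status()
            rows = [(p["policy_id"], json.dumps(p)) for p in resp.json()]

            conn = snowflake.connector.connect(
                account="xy12345", user="etl_user", password="***",  # assumed credentials
                warehouse="LOAD_WH", database="MDA", schema="RAW",
            )
            with conn.cursor() as cur:
                # Payload is kept as a JSON string in a hypothetical staging table.
                cur.executemany(
                    "INSERT INTO raw.policies_stage (policy_id, payload) VALUES (%s, %s)",
                    rows,
                )
            conn.close()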

  • Licenses & Certifications