Experience
Key member of the Data Platform team at Constantinople, leading client onboarding and delivering scalable data infrastructure and AI solutions for fintech operations.
Achievements
- Contributed to the revamp of tenant onboarding, streamlining integration for new clients on the data platform.
- Initiated and drove the design and implementation of an OpenMetadata deployment cataloging 400+ data assets enterprise-wide with automated lineage, enabling robust governance.
- Migrated AWS EKS cluster nodes to Bottlerocket OS, reducing reported vulnerabilities by 90%.
Day-to-Day Responsibilities
- Work with an AWS-based tech stack: EKS, Java, Scala, Python, Helm, RDS, OpenSearch, Terraform, Apache Spark, dbt, Snowflake, S3, Athena, Glue, and EMR.
- Collaborate with cross-functional teams to gather requirements, design solutions, and support client integration onto the data platform.
- Develop and maintain data cataloging, lineage, and governance systems using OpenMetadata and cloud-native tools.
- Lead migration initiatives for large-scale data platforms, ensuring security, compliance, and business continuity.
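The cataloging and lineage work above can be illustrated with a minimal sketch of downstream impact analysis over a lineage graph. The asset names and the adjacency-list representation here are hypothetical, not the actual OpenMetadata model:

```python
from collections import deque

# Hypothetical lineage graph: each asset maps to its downstream consumers.
# Asset names are illustrative, not real catalog entries.
LINEAGE = {
    "s3://raw/transactions": ["glue.staging.transactions"],
    "glue.staging.transactions": ["snowflake.core.fact_payments"],
    "snowflake.core.fact_payments": [
        "dbt.reporting.daily_revenue",
        "athena.risk.flagged_payments",
    ],
}

def downstream_impact(asset: str) -> list[str]:
    """Return every asset reachable downstream of `asset`, in BFS order."""
    seen, order, queue = set(), [], deque([asset])
    while queue:
        node = queue.popleft()
        for child in LINEAGE.get(node, []):
            if child not in seen:
                seen.add(child)
                order.append(child)
                queue.append(child)
    return order
```

This is the kind of traversal a catalog uses to answer "what breaks if this table changes?" during governance reviews.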
Led the Data Platform team within Grab’s DataTech org, managing 8+ engineers to build cloud-scale data and ML infrastructure powering analytics, experimentation, and business insights across Southeast Asia.
Achievements
- Architected and launched a Spark-based low-code ETL platform, enabling 600+ daily pipelines and empowering business teams to build workflows without engineering support.
- Developed a GenAI-powered analytics engine that generated actionable, user-friendly insights from billions of clickstream events, improving product team decision-making.
- Built a multi-tenant data platform serving 5 Grab subsidiaries, introducing a modular authorization framework that reduced access management overhead by 60%.
- Engineered a data quality validation system processing 5TB+ daily, increasing clickstream data accuracy by 30% and reducing incident rates.
- Implemented a real-time feedback loop for ML pipelines, improving model reliability and reducing experiment cycle time by 25%.
- Directed the migration of a 1PB+ cloud data lake from AWS to Azure with zero downtime for analytics consumers.
- Reduced compute costs by 40% for experiment sample prediction by replacing full data scans with HyperLogLog and regression-based estimations.
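The HyperLogLog cost reduction above rests on a standard trick: estimating distinct counts from a fixed-size register array instead of scanning full data. A minimal, illustrative implementation (not the production code; the precision parameter is arbitrary) might look like:

```python
import hashlib
import math

def _hash64(item: str) -> int:
    """Deterministic 64-bit hash of a string."""
    return int.from_bytes(hashlib.sha1(item.encode()).digest()[:8], "big")

class HyperLogLog:
    """Minimal HyperLogLog cardinality estimator (illustrative sketch)."""

    def __init__(self, p: int = 12):
        self.p = p                      # precision: 2**p registers
        self.m = 1 << p
        self.registers = [0] * self.m
        self.alpha = 0.7213 / (1 + 1.079 / self.m)

    def add(self, item: str) -> None:
        h = _hash64(item)
        idx = h >> (64 - self.p)                    # first p bits pick a register
        tail = h & ((1 << (64 - self.p)) - 1)       # remaining 64-p bits
        rho = (64 - self.p) - tail.bit_length() + 1 # leading-zero run + 1
        self.registers[idx] = max(self.registers[idx], rho)

    def estimate(self) -> float:
        raw = self.alpha * self.m ** 2 / sum(2.0 ** -r for r in self.registers)
        zeros = self.registers.count(0)
        if raw <= 2.5 * self.m and zeros:           # small-range correction
            return self.m * math.log(self.m / zeros)
        return raw
```

With p=12 the sketch uses 4,096 registers regardless of input size, which is what makes it so much cheaper than a full scan; the standard error is roughly 1.04/sqrt(m), about 1.6% here.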
Data/AI consultant in Accenture’s Applied Intelligence group, architecting secure data lakes and streaming platforms and leading cloud migrations for banking and enterprise clients.
Achievements
- Enforced cell-level security for 2,000+ datasets on DBS Bank's ADA platform, ensuring regulatory compliance via BlueTalon and Protegrity.
- Architected and optimized data pipelines handling 10TB+ daily, leveraging Airflow, Spark, and Kafka to improve processing efficiency by 35%.
- Integrated Kafka ecosystem with enterprise metadata and security, enabling secure, seamless streaming for 20+ teams.
- Authored best practice guides and code samples, accelerating platform adoption and reducing onboarding time for new application teams by 50%.
- Led the migration of a legacy data platform to Azure for Nielsen, reducing infrastructure costs by 20% and modernizing batch processing workflows.
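The idea behind cell-level security above can be sketched as policy-driven masking applied per role at read time. The roles, policy table, and masking rule below are hypothetical, not actual BlueTalon or Protegrity behavior:

```python
# Hypothetical role-based policies: per column, either expose or mask.
POLICIES = {
    "analyst": {"nric": "mask", "balance": "allow", "name": "mask"},
    "auditor": {"nric": "allow", "balance": "allow", "name": "allow"},
}

def mask_value(value: str) -> str:
    """Redact all but the last 4 characters."""
    return "*" * max(len(value) - 4, 0) + value[-4:]

def apply_policy(row: dict, role: str) -> dict:
    """Return a copy of `row` with masked cells per the role's policy."""
    policy = POLICIES[role]
    return {
        col: mask_value(str(val)) if policy.get(col) == "mask" else val
        for col, val in row.items()
    }
```

A real policy engine evaluates such rules centrally and pushes them down into query engines, so the same dataset yields different cells to different users without copies.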
Senior engineer on Model N’s Data Engineering team, focused on Spark-based analytics on AWS infrastructure and SQL migration for pharma and government pricing solutions.
Achievements
- Developed custom Hive UDFs to extend Spark SQL capabilities, enabling advanced analytics for business users.
- Set up and maintained Hadoop clusters for Dev/QA, improving environment stability and deployment speed.
- Rewrote Oracle SQL logic to ANSI SQL using ANTLR v4, streamlining migration and reducing technical debt.
- Improved Spark job performance by 30% via YARN and Spark History Server tuning.
- Delivered Spark-based pricing applications on EMR-YARN, reducing processing time for government clients by 40%.
- Automated S3 data transformation from CSV to Parquet, enhancing query performance and reducing storage costs.
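The Oracle-to-ANSI rewriting above was grammar-driven via ANTLR v4; a toy regex version can still show the kind of dialect mapping involved. This sketch is illustrative only; a real grammar-based rewriter handles nesting, quoting, and corner cases that regexes cannot:

```python
import re

# Illustrative Oracle -> ANSI function mappings (not the ANTLR grammar).
REWRITES = [
    (re.compile(r"\bNVL\s*\(", re.IGNORECASE), "COALESCE("),
    (re.compile(r"\bSYSDATE\b", re.IGNORECASE), "CURRENT_TIMESTAMP"),
    (re.compile(r"\bSUBSTR\s*\(", re.IGNORECASE), "SUBSTRING("),
]

def to_ansi(sql: str) -> str:
    """Apply each Oracle-to-ANSI rewrite rule to the input SQL text."""
    for pattern, replacement in REWRITES:
        sql = pattern.sub(replacement, sql)
    return sql
```

A parser-based approach rewrites the syntax tree instead of the text, which is why ANTLR was the right tool at scale.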
Senior engineer in Verizon’s Security Analytics group, specializing in big data pipelines, fraud detection, and telecom data processing at scale.
Achievements
- Built MapReduce and Spark pipelines to analyze authentication and usage for 10M+ users, driving product improvements.
- Boosted QR code authentication speed by 60% through binary optimization, enhancing user experience.
- Launched an API monitoring portal with Splunk, providing real-time visibility into system health for engineering teams.
- Automated complex job orchestration using Oozie, reducing manual interventions by 70%.
- Engineered a Spark-based engine for Telephone Denial of Service protection, reducing fraud incidents.
- Streamlined data ingestion from multiple sources, optimizing HDFS maintenance and automating ETL with Sqoop and Flume.
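The Telephone Denial of Service protection above boils down to per-caller rate thresholding over a sliding window. A minimal sketch follows; the window size and threshold are hypothetical, and the production engine ran as a Spark job rather than this in-memory loop:

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 60        # illustrative window, not the production value
MAX_CALLS_PER_WINDOW = 100 # illustrative threshold, not the production value

class TdosDetector:
    """Flag callers whose call rate exceeds a sliding-window threshold."""

    def __init__(self):
        self.calls = defaultdict(deque)  # caller -> timestamps in window

    def record_call(self, caller: str, ts: float) -> bool:
        """Record a call; return True if the caller exceeds the threshold."""
        window = self.calls[caller]
        window.append(ts)
        # Evict timestamps that have aged out of the window.
        while window and ts - window[0] > WINDOW_SECONDS:
            window.popleft()
        return len(window) > MAX_CALLS_PER_WINDOW
```

Flagged callers can then be throttled or blocked before they saturate call-center capacity.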
Core member of Tech4sys’s product engineering team, building job portals, payment integrations, and social features for B2B and B2C web applications.
Achievements
- Designed and launched the JobCoconut job portal, delivering a robust database architecture and scalable features.
- Integrated payment gateways for Maventus and Planlab, increasing transaction success rates.
- Enabled multimedia content sharing via Facebook and Twitter APIs, expanding user engagement.
- Implemented social authentication (Facebook, Google, Twitter, LinkedIn) for e-commerce and job platforms, improving sign-up conversion rates.
- Participated across the full SDLC, from design to deployment, ensuring on-time delivery.
- Developed reusable Java and PHP modules, promoting code quality and maintainability.