Intro
Hello, I'm Martin, an accomplished Senior Director of Data Operations with over 10 years of expertise in optimizing data processes, building high-performing teams, and implementing innovative data strategies. Residing in Denver, Colorado, I find inspiration in the stunning Rocky Mountains, where I often spend my weekends hiking, mountain biking, and exploring.
Throughout my career, I've had the privilege of working across various industries, where I've consistently driven transformative results through strategic data management and advanced analytics. My leadership style is both dynamic and collaborative, fostering environments where creativity and operational excellence thrive.
I am now eager to embark on new challenges and opportunities within dynamic startups that view data as a strategic asset. I'm particularly interested in organizations that are positioned for significant growth and innovation. With my extensive experience and passion for leveraging data to drive business success, I believe I can be a valuable asset to any forward-thinking startup.
If your organization is seeking a dedicated and visionary leader who combines deep technical expertise with a passion for innovation, I would love to explore how I can contribute to your company’s success.
While you're here, check out my resume.
Or feel free to read through some of my projects.
Resume
EXPERIENCE
Senior Director, Data Operations
Mar ’22–Present
- Transformed data team into profit center by leveraging data monetization – boosting FY company revenue by 25%
- Introduced self-hosted solutions and performed cloud cost optimizations – driving 60% reduction in cloud expenses
- Mentored a team of 5 data analysts, data engineers, and QA engineers to drive stronger business outcomes
- Engineered over 700 robust data pipelines ensuring seamless and compliant data flow
- Improved data literacy across the company by hosting workshops and creating industry-standard KPIs
Western Union LLC
Denver, CO
Senior Manager, Pricing Engineering
Aug ’18–Mar ’22
- Led team of 12 analysts and engineers to build next-gen pricing capabilities – boosting company revenue by $100M
- Implemented global pricing workflows with automation – reducing time to market from 24 days to 24 hours
- Created global pricing optimization and experimentation architecture to identify ideal corridor pricing
- Created web scraping pricing data pipelines from over 100 competitors’ websites to provide competitive insights
DISH Network LLC
Denver, CO
Manager, Customer Experience Platforms
Oct ’17–Aug ’18
- Developed a data mart in Snowflake to centralize reporting across 5 CX platforms, providing a customer 360 view
- Led team of 11 remote operations leads and 2 developers to implement data-backed omni-channel operations
- Leveraged Natural Language Processing to analyze customer feedback on surveys, reviews, and call transcriptions
Finance & Forecasting Lead
Jan ’16–Oct ’17
- Led team of 2 business analysts in creating department budget and tracking financial objectives
- Replaced Excel spreadsheets with SQL Server and SSIS to automate 120 hours of weekly work
- Created ML models in R to predict call center resource requirements, reducing workforce idle time by 5%
Eldorado Trading Group
Denver, CO
Investment Analyst
Oct ’14–Jan ’16
- Developed new tools and provided research and advisory services for data-driven investing
- Designed predictive ML models in R to reduce losses for investors
- Developed optimizer to provide real-time hedging recommendations using Python and APIs
- Leveraged Kafka and Python to create an arbitrage notification model – boosting profit by $1.6M in year 1
EDUCATION
Master of Science in Applied Quantitative Finance
Denver, CO
- Specialization in Financial Modeling and Corporate Finance
Bachelor of Science in Business Administration
Denver, CO
- Specialization in Investment and Valuation Strategy
CERTIFICATION
AWS Certified Cloud Practitioner
Passing the AWS Certified Cloud Practitioner exam gave me a solid understanding of core cloud concepts, AWS services, security practices, and cost-management strategies. It strengthened my ability to design and implement scalable, secure, and cost-effective cloud solutions that align with business objectives, and deepened my practical knowledge of AWS best practices for effective cloud adoption and management.
Projects
Enterprise Data Warehouse Migration
As the Senior Manager of Pricing Engineering at a large multinational corporation, I led the design, development, and deployment of an Enterprise Data Warehouse (EDW).
This project aimed to consolidate data from various business units, including sales, operations, compliance, and pricing, into a single, scalable repository.
The primary goal was to enhance data accessibility, improve reporting accuracy, and empower advanced analytics capabilities across the organization.
Initial Challenges
Before the implementation of the EDW, our organization faced several significant challenges:
- Data Silos: Each business unit maintained its own databases, leading to data silos that hindered cross-functional analysis.
- Inconsistent Data: Disparate systems resulted in data inconsistencies and inaccuracies, complicating reporting and decision-making.
- Manual Reporting: Generating reports required manual data extraction and consolidation, consuming valuable time and resources.
- Limited Analytics: The lack of a unified data source limited the ability to perform comprehensive and advanced analytics.
- Resource Constraints: Reliance on a single on-premises server caused significant resource constraints, leading to long wait times for data analysts as they had to queue for access to limited computational resources.
Strategic Planning
To address these challenges, I spearheaded a strategic planning phase that involved:
- Stakeholder Engagement: Conducted workshops and meetings with key stakeholders from each business unit to understand their data needs and pain points.
- Technology Evaluation: Evaluated various data warehousing technologies and selected a robust platform that could handle our data volume and complexity.
- Team Formation: Assembled a cross-functional team of data engineers, database administrators, business analysts, and project managers to drive the project.
Deployment and Transition
The deployment phase was carefully planned to minimize disruption and ensure a smooth transition:
- Phased Rollout: Deployed the EDW in phases, starting with non-critical business units to gather feedback and make necessary adjustments.
- Training and Support: Conducted training sessions for end-users and provided ongoing support to help them transition to the new system.
- Change Management: Implemented a change management strategy to address resistance and ensure user adoption. This included regular communication, feedback loops, and demonstrating the benefits of the EDW.
Results and Impact
The successful implementation of the EDW brought about significant improvements and benefits:
- Enhanced Data Accessibility: Consolidated data into a single repository, making it easily accessible to all business units through user-friendly dashboards and reporting tools.
- Improved Reporting Accuracy: Standardized data definitions and automated data integration processes reduced errors and inconsistencies in reports.
- Time Savings: Automated reporting and data consolidation processes freed up valuable time for data analysts and business users, allowing them to focus on more strategic tasks.
- Advanced Analytics: Enabled advanced analytics capabilities, such as predictive modeling and trend analysis, providing deeper insights and supporting data-driven decision-making.
- Cross-Functional Insights: Facilitated cross-functional analysis by breaking down data silos, leading to more comprehensive and informed strategic planning.
Fivetran to Custom Pipeline Migration
As the Senior Director of Data Operations at an innovative 30-person startup software firm, I led the strategic migration of our Extract, Transform, Load (ETL) processes, replacing a managed platform, Fivetran, with custom serverless pipelines orchestrated by self-hosted Apache Airflow.
This initiative aimed to significantly reduce operational costs, enhance the flexibility and control over our data management processes, and ultimately improve the overall efficiency of our data infrastructure.
Initial Challenges
Before the migration, our organization was facing several issues with the existing Fivetran-managed ETL solution:
- High Operational Costs: Fivetran's subscription fees were steep and climbed sharply as our data volume grew.
- Limited Customization: Fivetran's managed service, while user-friendly, offered limited flexibility for custom ETL requirements, restricting our ability to tailor processes to specific business needs.
- Dependence on External Service: Relying on an external vendor for critical data operations posed risks related to service reliability and vendor lock-in.
- Scalability Concerns: As our data needs expanded, scaling the managed service became increasingly costly and complex.
Strategic Planning
To address these challenges, I spearheaded a strategic planning phase that involved:
- Stakeholder Engagement: Conducted detailed discussions with stakeholders across IT, Operations, Data, and Development to understand their requirements and concerns with the current system.
- Technology Assessment: Evaluated several ETL tools and frameworks, eventually selecting Apache Airflow for its flexibility, open-source nature, and robust scheduling capabilities.
- Cost-Benefit Analysis: Performed a thorough cost-benefit analysis, demonstrating a reduction of 98% in our ETL expenditure and operational efficiencies by transitioning to self-hosted solutions.
Design and Development
The design phase focused on creating a flexible, scalable, and maintainable ETL architecture using Apache Airflow. Key components included:
- Custom Serverless ETL Pipelines: Designed custom ETL pipelines tailored to our specific data sources and business logic, providing greater control over data transformations and loading processes.
- Modular Architecture: Developed a modular architecture that allowed easy addition and modification of ETL tasks, improving maintainability and scalability.
- Data Quality and Monitoring: Implemented robust data quality checks and monitoring mechanisms to ensure the accuracy and reliability of data throughout the ETL process.
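To illustrate the data-quality idea, here is a minimal, framework-free sketch of post-load checks run against each table; the column name, rules, and thresholds are hypothetical, not the production code:

```python
# Minimal sketch of post-load data-quality checks.
# Column names, rules, and thresholds are illustrative assumptions.

def check_row_count(rows, minimum):
    """Fail if a load produced fewer rows than expected."""
    return len(rows) >= minimum

def check_null_rate(rows, column, max_rate):
    """Fail if too many values in a column are missing."""
    if not rows:
        return False
    nulls = sum(1 for r in rows if r.get(column) is None)
    return nulls / len(rows) <= max_rate

def run_checks(rows):
    """Run every check and collect the names of any failures."""
    checks = {
        "row_count": check_row_count(rows, minimum=1),
        "customer_id_nulls": check_null_rate(rows, "customer_id", max_rate=0.01),
    }
    return [name for name, passed in checks.items() if not passed]

rows = [{"customer_id": 1}, {"customer_id": 2}, {"customer_id": None}]
failures = run_checks(rows)  # null rate 1/3 exceeds 1% -> ["customer_id_nulls"]
```

In the real pipelines, failing checks would page the on-call engineer and block downstream loads rather than simply being returned as a list.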
The development phase involved:
- DAG Factory Development: Built a Directed Acyclic Graph (DAG) factory to generate Airflow orchestration instructions automatically
- ETL Pipeline Development: Built custom ETL pipelines using AWS Lambda, AWS ECS, Snowflake, and Apache Airflow, integrating data from various sources such as databases, APIs, and flat files.
- Scheduler Configuration: Configured Airflow’s scheduler to manage ETL jobs, ensuring timely and efficient execution of data workflows.
- Data Validation and Logging: Implemented comprehensive logging and validation mechanisms to monitor pipeline performance and data quality, enabling quick identification and resolution of issues.
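The DAG-factory pattern from the development steps above can be sketched without the Airflow dependency: a declarative source list drives the generation of per-source pipeline definitions. In production these would become `airflow.DAG` objects; the source names, schedules, and task names here are hypothetical:

```python
# Simplified DAG-factory sketch: generate pipeline definitions from config.
# Builds plain dicts so the pattern is visible without installing Airflow;
# source names and schedules are illustrative assumptions.

SOURCES = [
    {"name": "crm_accounts", "schedule": "@daily"},
    {"name": "billing_events", "schedule": "@hourly"},
]

def build_pipeline(source):
    """Produce an extract -> load -> validate task chain for one source."""
    name = source["name"]
    tasks = [f"extract_{name}", f"load_{name}", f"validate_{name}"]
    # Each task depends on the previous one, mirroring a linear Airflow DAG.
    deps = {tasks[i]: [tasks[i - 1]] for i in range(1, len(tasks))}
    deps[tasks[0]] = []
    return {
        "dag_id": f"etl_{name}",
        "schedule": source["schedule"],
        "tasks": tasks,
        "dependencies": deps,
    }

pipelines = [build_pipeline(s) for s in SOURCES]
```

Adding a new source then becomes a one-line config change rather than a new hand-written DAG file, which is what made the factory approach maintainable as the number of pipelines grew.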
Deployment and Transition
The deployment phase was meticulously planned to ensure a smooth transition with minimal disruption:
- Parallel Run: Ran the custom Airflow pipelines in parallel with the existing Fivetran setup for a period, ensuring data consistency and allowing for fine-tuning.
- User Training: Conducted training sessions for data engineers and analysts to familiarize them with the new system and best practices for managing Airflow pipelines.
- Change Management: Implemented a change management strategy to address user concerns and encourage adoption, including regular updates, feedback sessions, and dedicated support.
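During a parallel run like the one described above, consistency between the legacy and new pipelines is typically verified by comparing cheap per-table aggregates. A hedged sketch, with hypothetical table data and an order-independent signature:

```python
# Sketch of a parallel-run consistency check: compare row counts and a
# simple key checksum between the legacy (Fivetran) and new (Airflow) loads.
# Table contents and key column are illustrative assumptions.

def table_signature(rows, key):
    """Cheap, order-independent signature: row count plus a key checksum."""
    return (len(rows), sum(hash(r[key]) for r in rows))

def compare_loads(legacy, candidate, key):
    """Return True when both loads look identical under the signature."""
    return table_signature(legacy, key) == table_signature(candidate, key)

legacy_rows = [{"id": 1, "amount": 10}, {"id": 2, "amount": 20}]
airflow_rows = [{"id": 2, "amount": 20}, {"id": 1, "amount": 10}]  # order differs
consistent = compare_loads(legacy_rows, airflow_rows, key="id")  # True: same rows
```

A signature match does not prove the tables are byte-identical, but it catches the common failure modes (dropped rows, duplicate loads, missing keys) cheaply enough to run after every sync during the parallel period.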
Results and Impact
The successful migration to custom ETL pipelines using Apache Airflow delivered substantial improvements and benefits:
- Cost Reduction: Reduced operational costs by 98%, significantly lowering our dependency on expensive managed services and reallocating resources to other strategic initiatives.
- Enhanced Flexibility: Gained greater control and flexibility over ETL processes, allowing for custom transformations and more complex data workflows tailored to our specific business needs.
- Improved Scalability: Developed a scalable ETL infrastructure that could easily grow with our data needs, supporting higher data volumes without proportional increases in cost.
- Operational Efficiency: Enhanced the efficiency of our data operations, with more reliable and faster data processing capabilities, leading to timely insights and improved decision-making.
- Ownership and Control: Increased ownership and control over our data infrastructure, reducing risks associated with vendor lock-in and external service dependencies.