Collaborate with Lead Data Engineers and the domain Architect to design, build, and optimize the data architecture and extract, transform, load (ETL) pipelines, making them accessible to data users
Own and support data engineering platform tools using technologies like Snowflake, AWS, HVR, KNIME, etc.
Design robust ETL pipelines of medium complexity, adhering to existing patterns while maintaining performance, uptime, scalability, and extensibility and reducing technical debt
Develop and perform unit tests, and maintain up-to-date code in source control
Collaborate with other Data Engineers for code review and participate in pair programming when needed
Independently troubleshoot issues reported by users and errors from ETL jobs with minimal guidance; participate in on-call rotation and perform root cause analysis
Deliver quality code and follow best practices and standards, keeping performance and scalability in mind to keep costs in check in the cloud environment
Partner with the Platform Product Manager to prioritize and deliver high-quality data products, working in an agile team
Live the culture of sharing, re-use, design for scale and stability, and operational efficiency of data and analytical solutions. Demonstrate a passion for innovation and continuous improvement
Maintain awareness of advancements and changes in technologies relating to data engineering and cloud data platforms
Overall, bring positive Run Happy energy and work with the team to deliver the best possible solutions
Learn the business and the data that supports the business; be a partner, not just a technology implementer
Live Brooks' values
Other responsibilities as required
Skills Required
Machine Learning
Python