- Work with data scientists to productionize exploratory or proven machine learning use cases, and implement instrumentation to measure usage and model performance.
- Write extract, transform, and load (ETL) logic to automate data collection and to manage machine learning and reporting processes/pipelines, including data quality and monitoring.
- Collaborate with data scientists, analysts, support/system engineers, and business stakeholders to ensure our data infrastructure meets constantly evolving requirements.
- Contribute to the development of data and machine learning frameworks, tools, skills, and culture for the team and the wider Google Cloud Support organization.
- Write and review technical documents, including requirements and design documents for existing and future data systems, as well as data standards and policies.
- Bachelor's degree in Computer Science, Mathematics, a related technical field, or equivalent practical experience.
- Experience in the software development life cycle. Experience in one or more programming languages such as Python, Go, Java, or C++, as well as experience working with data sets using SQL.
- Experience with data processing software (e.g., Hadoop, Spark, Pig, Hive) and data processing frameworks and models (e.g., MapReduce, Flume).
- Experience with machine learning libraries (e.g., TensorFlow, scikit-learn, Keras) or with exploratory/statistical analysis using Python or R.
- Experience writing, maintaining, and monitoring streaming and batch ETL pipelines operating on a variety of structured and unstructured sources.
- Experience designing databases and defining and implementing system requirements for data collection.
- Experience with reporting/analytic tools, data visualization or experiment design.
- Excellent communication, organizational, and analytical skills. Proven ability to collaborate with internal stakeholders throughout complex projects, as well as to write and review technical documentation as needed.