Senior Associate (12-month contract), Investment Data Science (Data Engineer)
Singapore, SG, 238891
Temasek is a global investment company headquartered in Singapore, with a net portfolio value of S$389 billion (US$288b, €267b, £228b, RMB2.08t) as at 31 March 2024. Marking our unlisted assets to market would provide S$31 billion of value uplift and bring our mark to market net portfolio value to S$420 billion.
Our Purpose “So Every Generation Prospers” guides us to make a difference for today’s and future generations.
Operating on commercial principles, we seek to deliver sustainable returns over the long term.
We have 13 offices in 9 countries around the world: Beijing, Hanoi, Mumbai, Shanghai, Shenzhen, and Singapore in Asia; and Brussels, London, Mexico City, New York, Paris, San Francisco, and Washington, DC outside Asia.
For more information on Temasek, please visit www.temasek.com.sg
For Temasek Review 2024, please visit www.temasekreview.com.sg
For Sustainability Report 2024, please visit www.temasek.com.sg/SR2024
Introduction
Temasek is looking to add a Data Engineer to work closely with other members of the Investment Data Science team to build, deploy and manage the data and analytics workflows used in our data-driven investment analysis process.
Responsibilities
- Build tools and automation capabilities for data pipelines that improve the efficiency, quality and resiliency of our data analytics platform
- Partner with the investment professionals, quantitative researchers, and data scientists to design, develop and deploy solutions that answer fundamental questions about companies, sectors, countries and financial markets
- Explore new external data sources to understand availability and quality
- Develop solutions that enable investment professionals and data science teams to efficiently extract insights from data, including owning the ingestion and transformation of the underlying data
Requirements
- Bachelor’s Degree in Computer Science, Information Technology, Computer Engineering, or a related field
- Passion for working with data and developing software to solve data processing challenges
- Proficiency with building, tuning, and debugging ETL pipelines in Python, including common libraries (Pandas, NumPy) and testing frameworks (e.g., PyTest); an illustrative sketch follows this list
- Experience working with SQL and relational databases; Snowflake experience is strongly preferred
- Experience working with or managing cloud technologies (e.g., AWS, Kubernetes)
- Experience with Data Warehousing and Machine Learning workflows
- Experience with CI/CD workflows preferred
- Experience with containerized services
- Strong written and verbal communication skills
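For illustration only (not part of the role description), the sketch below shows the kind of Python ETL step and PyTest check referenced in the requirements above. The function name, column names, and sample data are hypothetical assumptions, not details taken from this posting.

```python
# Illustrative sketch: a minimal pandas transform with a PyTest test.
# normalize_prices, its columns, and the sample data are hypothetical.
import pandas as pd
import pytest


def normalize_prices(raw: pd.DataFrame) -> pd.DataFrame:
    """Clean a raw price extract: parse dates, drop rows with missing
    prices, and compute a daily return per ticker."""
    df = raw.copy()
    df["date"] = pd.to_datetime(df["date"])
    df = df.dropna(subset=["close"])
    df = df.sort_values(["ticker", "date"])
    df["return"] = df.groupby("ticker")["close"].pct_change()
    return df.reset_index(drop=True)


def test_normalize_prices_computes_returns():
    raw = pd.DataFrame(
        {
            "ticker": ["ABC", "ABC", "XYZ"],
            "date": ["2024-03-28", "2024-03-29", "2024-03-29"],
            "close": [100.0, 110.0, None],
        }
    )
    out = normalize_prices(raw)
    # The row with a missing close price is dropped.
    assert len(out) == 2
    # Daily return for ABC on 2024-03-29 is (110 - 100) / 100 = 0.10.
    assert out.loc[1, "return"] == pytest.approx(0.10)
```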