At a glance
The Data Engineering team at Chope is pivotal in building a solid data-driven culture and meaningful data products that impact millions of diners and thousands of restaurants. There could not be a better time to join the organization, as we plan to revamp most of our Data Platform System. The role offers a steep learning curve: you'd participate in everything from interactions with external and internal parties to carving out the architecture and doing the actual implementation. Not to mention the impact this brings to the whole organization!
You'll be based in our Singapore office, building the data pipelines for our enterprise tech solutions, including but not limited to ERP and CRM systems. Our ideal candidate has worked at small- to mid-sized internet companies and understands the functionality of key enterprise products such as ERP and CRM (experience building pipelines for these systems is a strong advantage). The role requires strong interpersonal, stakeholder-management, and communication skills, as well as attention to detail without losing sight of the bigger picture.
About the role
- Design the data warehouse schema in BigQuery and write code to process data from heterogeneous sources such as APIs, Pub/Sub, and transactional databases.
- Build efficient batch and near-real-time pipelines to feed data to downstream ERP/CRM applications.
- Pull data from these ERP/CRM platforms back into the data warehouse via file transfer, databases, APIs, etc.
- Design and develop solutions to store and process data on cloud platforms.
- Design and develop ETL/ELT processes to ingest data into the data warehouse, with a focus on data quality and profiling.
- Collaborate with external and internal stakeholders and establish SLAs.
- Take ownership of the Data Platform and build the practices that lead the way forward for the team.
- Translate business requirements into workable technical solutions.
- Participate in code reviews, with a focus on high-quality technical documentation.
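To give a flavor of the batch side of this work, here is a purely illustrative Python sketch of the kind of extract-transform-stage step described above. All names and fields (clean_order, load_orders, amount_sgd) are hypothetical and not Chope's actual pipeline code; in a real pipeline the result would be written to a warehouse staging table rather than printed.

```python
# Hypothetical batch ETL step: normalize raw order records before a
# warehouse load. Names and fields are illustrative only.
import json
from datetime import datetime, timezone

def clean_order(raw: dict) -> dict:
    """Normalize one raw order record (types, rounding, audit column)."""
    return {
        "order_id": int(raw["order_id"]),
        "restaurant_id": int(raw["restaurant_id"]),
        "amount_sgd": round(float(raw["amount"]), 2),
        "ingested_at": datetime.now(timezone.utc).isoformat(),
    }

def load_orders(raw_records: list[dict]) -> list[dict]:
    """Batch-transform a list of raw records; the caller would hand the
    cleaned rows to a warehouse load job."""
    return [clean_order(r) for r in raw_records]

if __name__ == "__main__":
    raw = [{"order_id": "101", "restaurant_id": "7", "amount": "42.5"}]
    print(json.dumps(load_orders(raw), indent=2))
```

The same transform logic could feed a near-real-time path by applying clean_order per message from a Pub/Sub subscription instead of per batch.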
About you
- Bachelor's/Master's degree in mathematics, computer science, or another related quantitative discipline.
- Ideally 5+ years of experience in data engineering roles.
- Familiarity with Hadoop, Hive, HBase, Spark, Flink, Airflow, Kafka, etc.
- Exposure to big data solutions on AWS, Google Cloud, Microsoft Azure, or Alibaba Cloud, such as Redshift, EMR, BigQuery, Dataflow, Cloud Composer, Bigtable, Pub/Sub, etc.
- Familiarity with CI/CD and automating deployments from development to staging to production.
- Strong understanding of data warehouse concepts: fact/dimension tables, CDC, SCD, and partitioning and bucketing strategies.
- Proficiency in writing complex SQL, and experience in at least one of Java, Python, or Go.
- Experience in metadata management, data discovery, and building data lineage.
- High personal code/development standards (peer testing, unit testing, documentation, etc.).
- Last but not least: passion for food and the F&B industry! :)
What we offer
- Meaningful work - you'll have ownership over, and the ability to make a direct impact on, the experience we're delivering to our customers (you're not here "just to execute")
- Autonomy - you’ll be in an environment that enables you to challenge yourself, challenge team members, learn new skills, and try out new things
- Collaborative and an open culture that values receiving and giving feedback
- A competitive salary
- Flexible work arrangements that provide you the balance you deserve