Data Engineer

  • HKD35000 - HKD50000 per month
  • Hong Kong
  • Permanent, Full time
  • Opus Talent Solutions
  • 23 Apr 19

Opus Talent Solutions have partnered with one of Hong Kong's shortlisted Virtual Banks to support them in building out their Technology team. We are currently seeking experienced Data Engineers to join the wider Data team and help build the Virtual Bank from scratch using Cloud technologies.

Reporting to the Data Lead, as the Data Engineer you will be responsible for the design, creation and maintenance of the analytics infrastructure. You will develop, maintain and test the architecture, including the data lake, data warehouse, databases, data pipelines and large-scale processing systems. As part of the data engineering team, you will also collectively create the data set processes used for modelling, mining, acquisition and verification.

Your key duties will include, though will not be limited to:

  • Work closely with the development and product teams in a fast-paced, agile delivery environment.
  • Build and manage data warehouses, databases and data pipelines.
  • Design, build and maintain modern, automated, cloud-native analytics infrastructure.
  • Translate business needs into data models that support long-term solutions.
  • Partner with the development team to implement data strategies, build data flows and develop conceptual, logical and physical data models that ensure high data quality and reduced redundancy.
To be considered for this exciting opportunity, you will be able to demonstrate the ability to work in a fast-paced, agile environment. You will have proven data engineering experience gained at a technology, FinTech or eCommerce firm. You will also possess the following skills and attributes:
  • Proven working knowledge of technology best practices for building a modern data lake, data warehouse and data pipelines.
  • Strong understanding of the technologies involved in building a highly scalable cloud data platform.
  • Ability to work with minimal direction and to work out solutions with project teams.
  • Proven experience in building and maintaining data warehousing/Big Data tools: Hadoop and MapReduce, Apache Spark and Spark SQL, Hive.
  • In-depth database knowledge of RDBMS (PostgreSQL and MySQL) and NoSQL (HBase) systems.
  • Strong experience in building and maintaining cloud Big Data and ETL tools: Google Bigtable, BigQuery and Airflow (Google Cloud Composer).
  • Ideally, strong knowledge of and experience with Apache Beam for implementing batch and streaming data processing jobs, along with a strong development background in Python or Java.
  • Strong knowledge of messaging systems such as Kafka, RabbitMQ and Google Pub/Sub.
  • Ideally, experience with Agile/Lean projects (Scrum, Kanban).

If this opportunity sounds of interest to you, please don’t delay your application and apply today! Please forward your CV to Sharon Koelewyn via the “Apply Now” button below. If you have any questions prior to applying, please contact Sharon directly on +852 3905 3342.