
Senior Data Engineer
- Beijing
- Permanent
- Full-time

Responsibilities
- Design, develop, and maintain scalable real-time data processing pipelines using Apache Flink (a brief illustrative sketch follows this list).
- Collaborate with cross-functional teams to understand data requirements and implement solutions.
- Optimize and monitor the performance of Flink jobs to ensure high availability and low latency.
- Build and manage ETL processes for data ingestion, transformation, and storage across distributed systems.
- Ensure data quality, consistency, and reliability by implementing validation and monitoring mechanisms.
- Troubleshoot and debug issues in data pipelines, ensuring seamless data flow across systems.
- Document technical solutions, processes, and workflows to ensure knowledge sharing within the team.
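
To give a concrete flavor of the work described above, here is a minimal sketch of a Flink DataStream job that consumes events from Kafka, applies a simple data-quality filter, and hands off to a sink. It is an illustration only, not an excerpt from our codebase: the broker address, topic name, consumer group, and class name are invented placeholders, and the print sink stands in for a real database or warehouse writer.

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class EventIngestionJob {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Source: consume raw events from a Kafka topic.
        // Broker address, topic, and group id are placeholders for illustration.
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("broker:9092")
                .setTopics("events-raw")
                .setGroupId("event-ingestion-job")
                .setStartingOffsets(OffsetsInitializer.latest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        DataStream<String> events =
                env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-events");

        events
                // Minimal data-quality guard: drop null or empty records before
                // they reach downstream transformations.
                .filter(line -> line != null && !line.isBlank())
                // In a real pipeline, parsing, enrichment, and a proper sink
                // (database, warehouse, or another topic) would replace this print sink.
                .print();

        env.execute("event-ingestion-job");
    }
}
```

In production, a job like this would also configure checkpointing, event-time watermarks, and monitoring, which is where the optimization, reliability, and data-quality responsibilities above come in.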

Requirements
- 3+ years of hands-on experience in data engineering with a focus on Apache Flink.
- Proficiency in Java or Scala for Flink development.
- Strong understanding of real-time data processing concepts and event-driven architectures.
- Experience with message brokers such as Apache Kafka and with a range of database technologies (NoSQL, columnar, and relational).
- Solid knowledge of distributed systems and big data technologies (e.g., Hadoop and Spark).
- Ability to debug and resolve complex issues in large-scale data systems.
- Strong verbal and written communication skills to collaborate with technical and non-technical teams.
- Bachelor's degree or higher in Computer Science, Engineering, or a related field.

Preferred qualifications
- Relevant certifications in Apache Flink or big data technologies.
- Contributions to open-source projects or active participation in the Apache Flink community.
- Basic understanding of ML pipelines and of how data engineering workflows integrate with ML models.
- A proactive mindset, adaptability to changing priorities, and eagerness to learn new technologies.