Job information

TWD 1,000,000 - 2,000,000 per annum

Job Summary

1. Develop, operate, and monitor ETL pipelines for data collection, cleansing, processing, storage, and analytics (a minimal sketch follows this list)
2. Build data visualizations to track data usage and server status and to present data insights
3. Automate data-processing tasks and support data analysis work
4. Apply existing algorithms and implement data-mining strategies to achieve business goals
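For context on the day-to-day ETL work, here is a minimal, hypothetical sketch using PySpark (Spark appears in the requirements below). The paths, column names, and table layout are illustrative assumptions, not details of the actual pipelines.

```python
# Minimal ETL sketch in PySpark (illustrative only; paths and column
# names are hypothetical, not part of the actual role).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: read raw event logs (hypothetical location).
raw = spark.read.json("s3a://example-bucket/raw/events/")

# Transform: cleanse records and derive a daily aggregate.
cleaned = (
    raw.dropna(subset=["user_id", "event_time"])
       .withColumn("event_date", F.to_date("event_time"))
)
daily_counts = cleaned.groupBy("event_date", "event_type").count()

# Load: write results for downstream analytics (hypothetical output path).
daily_counts.write.mode("overwrite").partitionBy("event_date") \
    .parquet("s3a://example-bucket/curated/daily_event_counts/")
```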

Requirements

1. At least 2 years of experience building and operating large-scale distributed systems or applications
2. At least 2 years of experience processing data with Python (scikit-learn), Scala, or R
3. Familiarity with the Hadoop ecosystem, e.g. Spark, Hive, Hue, Pig
4. Experience implementing highly scalable systems on cloud infrastructure such as AWS or GCP
5. Expertise in Linux/Unix environments and familiarity with shell scripting
6. Hands-on expertise in ETL job design and in developing data-processing components and modules
7. Strong communication skills and a can-do attitude
8. Desired skills: Python (scikit-learn), Scala, Hadoop, Hive, Spark, Linux/Unix, AWS, MySQL/MongoDB/PostgreSQL, R (see the sketch below)
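Likewise, a minimal sketch of the Python (scikit-learn) workflow referred to in items 2 and 8, using a bundled toy dataset; the dataset and model choice are assumptions for illustration only.

```python
# Minimal scikit-learn sketch (illustrative only; the dataset and model
# are assumptions used to show the kind of workflow, not the actual task).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Load a bundled toy dataset and split it for evaluation.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Fit an off-the-shelf model and report held-out accuracy.
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```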

*Salary is negotiable.

Or refer a friend and receive a referral reward of 50% of one month's salary.