Business Area:
Professional Services
Seniority Level:
Mid-Senior level
Job Description:
At Cloudera, we empower people to transform complex data into clear and actionable insights. With as much data under management as the hyperscalers, we're the preferred data partner for the top companies in almost every industry. Powered by the relentless innovation of the open source community, Cloudera advances digital transformation for the world’s largest enterprises.
Cloudera is seeking a Solutions Consultant to join its ANZ Professional Services team. You’ll have the opportunity to develop massively scalable solutions to solve complex data problems using Hadoop, NiFi, Spark and related Big Data technology. This role presents a client-facing opportunity that combines consulting skills with deep technical design and development in the Big Data space.
Please note, for this role we can only consider Australian citizens with NV1 security clearance or above.
As a Solutions Consultant you will:
- Work directly with customer business and technical teams to understand requirements and develop high quality solutions
- Design highly scalable and reliable data pipelines to consume, integrate, and analyze large amounts of data from various sources.
- Develop data pipelines with Cloudera’s suite of tools, such as Apache NiFi, Spark, Hive, Impala, and Kafka (Debezium)
- Support integration with other data tools (for example Alation, Neo4J, Power BI etc).
- Document and present complex architectures to the customer’s technical teams
- Work closely with Cloudera teams at all levels to ensure project and customer success
- Design effective data models for optimal storage and retrieval, and implement comprehensive data quality checks to ensure high-quality data
- Design, build, tune and maintain data pipelines using Cloudera, NiFi or related data integration technologies
- Install, deploy, augment, upgrade, manage and operate large Cloudera clusters
- Write and produce technical documentation, customer status reports and knowledgebase articles
- Keep up with current Cloudera, NiFi, and Big Data ecosystem technologies.
We’re excited about you if you have:
- 8+ years of overall IT experience, with at least 4 years of production data engineering experience with Cloudera, Hadoop and/or Apache NiFi, Spark, Hive, Impala, and Kafka.
- Hands-on experience with all aspects of developing, testing and implementing low-latency big data pipelines.
- Demonstrated production experience in data engineering, data management, cluster management and/or analytics domains.
- Experience designing data queries against data in the HDFS environment using tools such as Apache Hive
- Experience implementing MapReduce and Spark jobs
- Experience setting up multi-node Cloudera clusters
- Experience in systems administration or DevOps with one or more open-source operating systems (Big Data developers interested in administration and consulting are also welcome to apply)
- Experience with data warehouse design, ETL (Extraction, Transformation & Load), and architecting efficient software designs for DW platforms.
- Experience implementing operational best practices such as alerting, monitoring, and metadata management.
- Strong understanding of various enterprise security practices and solutions, such as LDAP and/or Kerberos
- Experience using configuration management tools such as Ansible, Puppet or Chef
- Familiarity with scripting tools such as bash shell scripts, Python and/or Perl
- Experience with Apache NiFi is desired
- Significant experience writing to network-based APIs, preferably REST/JSON or XML/SOAP
- Understanding of the Java ecosystem and enterprise offerings, including debugging and profiling tools (e.g. jstack, jmap, jconsole), logging and monitoring tools (log4j, JMX)
- Ability to understand and translate customer requirements into technical requirements
- Excellent verbal and written communication skills
You may also have:
- Knowledge of Site Reliability Engineering concepts and practices
- Knowledge of the data management ecosystem, including concepts of data warehousing, ETL, data integration, etc.
- Experience using a compiled programming language, preferably one that runs on the JVM (Java, Scala, etc.)
- Experience coding with streaming/micro-batch compute frameworks, preferably Kafka, Spark
What you can expect from us:
- Generous PTO Policy
- Support work life balance with Unplugged Days
- Flexible WFH Policy
- Mental & Physical Wellness programs
- Phone and Internet Reimbursement program
- Access to Continued Career Development
- Comprehensive Benefits and Competitive Packages
- Employee Resource Groups
Cloudera is an Equal Opportunity / Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, pregnancy, sexual orientation, gender identity, national origin, age, protected veteran status, or disability status.
#LI-SR1
#Hybrid