Expert Support Engineer / Product Specialist: Hadoop SME

Acceldata is revolutionizing how enterprises manage and observe data, offering comprehensive data observability solutions tailored to each organization's unique needs.
Data · Staff Software Engineer · In-Person · 10+ years of experience · AI · Enterprise SaaS

Description For Expert Support Engineer / Product Specialist: Hadoop SME

Acceldata is revolutionizing data observability for enterprises, offering tailored solutions for managing and observing data. As a Product Specialist, you'll play a crucial role in resolving complex customer challenges across various data integrations, including Data Catalogs, Databases, Data Lakes/Lakehouses, Orchestration, SQL Query and Analytics Engines, Security, Streaming, Data Warehouses, and Business Intelligence platforms. You'll focus on delivering faster resolutions and ensuring seamless data operations, enabling customers to monitor, predict, and optimize their data systems for impactful business outcomes.

The ideal candidate will have 10+ years of experience in customer-facing support, post-sales, technical architecture, or consulting roles, with a focus on cloud-native technologies and data lakes. Proficiency with cloud platforms (AWS, GCP, Azure), Kubernetes, Docker, and core Hadoop components is required. You'll need excellent communication skills and the ability to perform under pressure while demonstrating empathy for customer situations.

Working at Acceldata, you'll collaborate with innovative minds in data observability, solve complex technical problems, and gain invaluable experience in managing distributed, scalable environments. You'll be responsible for ensuring seamless support operations for Gold and Enterprise customers, providing proactive solutions that empower organizations to maximize their data infrastructure.

This role offers the opportunity to work with cutting-edge data solutions in a fast-paced, dynamic environment, making a lasting impact on customer success and Acceldata's evolution in data observability.


Responsibilities For Expert Support Engineer / Product Specialist: Hadoop SME

  • Resolve complex customer challenges across various data integrations
  • Deliver faster resolutions and ensure seamless data operations
  • Work with customers to understand use cases and identify pain points
  • Provide operations expertise leveraging knowledge of Data Quality, Cost Optimization, and Data Reliability
  • Manage support cases for Gold and Enterprise customers
  • Troubleshoot intricate technical issues
  • Provide proactive solutions to empower organizations

Requirements For Expert Support Engineer / Product Specialist: Hadoop SME

  • 10+ years of experience in customer-facing support, post-sales, technical architecture, or consulting roles
  • Bachelor's degree
  • Proficiency in cloud platforms (AWS, GCP, Azure)
  • Expertise in Kubernetes, Docker, and Hadoop components (HDFS, Yarn, Spark, Hive, HBase, Ranger, Kafka)
  • Knowledge of Linux distributions (RHEL, Ubuntu, SUSE)
  • Excellent communication skills (written and verbal)
  • Ability to handle pressure and demonstrate empathy
