Our Company
Changing the world through digital experiences is what Adobe's all about. We give everyone—from emerging artists to global brands—everything they need to design and deliver exceptional digital experiences! We're passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen.
We're on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!
What You'll Do
- Build the underlying data platform and maintain data processing pipelines using best-in-class technologies, with a special focus on R&D to challenge the status quo and build a next-generation big data platform that is efficient and cost-effective.
- Translate complex technical and functional requirements into detailed designs.
- Collaborate with Data Scientists and Product Managers to bring AI-based assistive experiences to life. Create the roadmap and socialize it among team members and stakeholders.
- Create and instill a team culture that focuses on sound scientific processes and encourages deep engagement with our customers.
- Handle project scope and risks with data, analytics, and creative problem-solving.
- Help define data governance policies and implement security and privacy guard rails.
What You'll Need:
- Hands-on engineer with extensive experience ingesting, storing, processing, and analyzing large datasets
- Understanding of data warehousing and data modeling techniques
- Experience deploying and operationalizing DNN frameworks such as TensorFlow or PyTorch on large-scale datasets is a strong plus.
- Proficient in some of the following tools and technologies:
  - Languages: Python, Java/Scala, SQL
  - DevOps pipelines
  - Container orchestration: Docker, Kubernetes
  - Hadoop ecosystem: Hadoop, Spark, Hive, etc.
  - Stream processing: Storm, Flink, etc.
  - Workflow: Airflow, Flyte, etc.
  - Data engineering ecosystem: Delta Lake, DataHub, etc.
- General understanding of data structures, algorithms, multi-threaded programming, and distributed computing concepts
- Self-starter with the ability to work closely with data scientists and software engineers to design, test, and build production-ready ML and optimization models and distributed algorithms running on large-scale datasets.
Ideal Candidate Profile:
- 10+ years of experience in technical roles involving big data engineering and the implementation of machine learning models.
- Master's or B.Tech in Engineering.
- Deep knowledge of Big Data Architectures, Platforms, Frameworks.
- Background in Real-Time Bidding / Ad Tech preferred.
- Comfort with ambiguity, adaptability to a dynamic environment, and the ability to lead a team while working autonomously.
- Demonstrated ability to influence technical and non-technical stakeholders.
- Track record of delivering cloud-scale, data-intensive applications and services that are widely adopted with large customer bases.
- An ability to think strategically, look around corners, and create a vision for the current quarter, the year, and five years down the road.
- A relentless pursuit of great customer experiences and continuous improvements to the product.
- Excellent verbal, written, and interpersonal communication skills.