
Location: Bangalore/Pune/Hyderabad/Nagpur
4-5 years of overall experience in software development.
- Experience with Hadoop (Apache/Cloudera/Hortonworks) and/or other MapReduce platforms
- Experience with Hive, Pig, Sqoop, Flume, and/or Mahout
- Experience with NoSQL stores such as HBase, Cassandra, and MongoDB
- Hands-on experience with Spark development; knowledge of Storm, Kafka, and Scala
- Good knowledge of Java
- Good background in configuration management and ticketing systems such as Maven, Ant, and JIRA
- Knowledge of any data integration and/or EDW tools is a plus
- Knowledge of Python, Perl, or shell scripting is good to have
Please note: HBase, Hive, and Spark are must-haves (see the sketch below).
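As a rough illustration of the must-have stack, here is a minimal sketch of a Spark job that reads a Hive table, assuming a Hive-enabled Spark deployment; the table and column names (sales.orders, region, amount) are hypothetical:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

// Minimal sketch: a Spark job that reads a Hive table and aggregates it.
// The table and column names (sales.orders, region, amount) are hypothetical.
public class HiveOrdersReport {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("HiveOrdersReport")
                .enableHiveSupport()   // lets Spark SQL read tables from the Hive metastore
                .getOrCreate();

        Dataset<Row> totals = spark.sql(
                "SELECT region, SUM(amount) AS total FROM sales.orders GROUP BY region");

        totals.show();
        spark.stop();
    }
}
```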

Similar jobs
A short description of the company:
It is an account engagement platform that helps B2B organizations achieve predictable revenue growth by putting the power of AI, big data, and machine learning behind every member of the revenue team.
Looking for a PYTHON DEVELOPER.
The Sr. Analytics Engineer provides technical expertise in needs identification, data modeling, data movement, and transformation mapping (source to target), along with automation and testing strategies, translating business needs into technical solutions while adhering to established data guidelines and approaches from a business unit or project perspective.
Understands and leverages best-fit technologies (e.g., traditional star schema structures, cloud, Hadoop, NoSQL, etc.) and approaches to address business and environmental challenges.
Provides data understanding and coordinates data-related activities with other data management groups such as master data management, data governance, and metadata management.
Actively participates with other consultants in problem-solving and approach development.
Responsibilities:
Provide a consultative approach with business users, asking questions to understand the business need and deriving the data flow, conceptual, logical, and physical data models based on those needs.
Perform data analysis to validate data models and to confirm the ability to meet business needs.
Assist with and support setting the data architecture direction: ensuring data architecture deliverables are developed, ensuring compliance with standards and guidelines, implementing the data architecture, and supporting technical developers at a project or business unit level.
Coordinate and consult with the Data Architect, project manager, client business staff, client technical staff, and project developers on data architecture best practices and anything else data-related at the project or business unit level.
Work closely with Business Analysts and Solution Architects to design the data model satisfying the business needs and adhering to Enterprise Architecture.
Coordinate with Data Architects, Program Managers and participate in recurring meetings.
Help and mentor team members to understand the data model and subject areas.
Ensure that the team adheres to best practices and guidelines.
Requirements:
- At least 3 years of strong working knowledge of Spark, Java/Scala/PySpark, Kafka, Git, Unix/Linux, and ETL pipeline design.
- Experience with Spark optimization, tuning, and resource allocation.
- Excellent understanding of in-memory distributed computing frameworks such as Spark, including parameter tuning and writing optimized workflow sequences (see the sketch after this list).
- Experience with relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., Redshift, BigQuery, Cassandra, etc.).
- Familiarity with Docker, Kubernetes, Azure Data Lake/Blob Storage, AWS S3, Google Cloud Storage, etc.
- A deep understanding of the various stacks and components of the Big Data ecosystem.
- Hands-on experience with Python is a huge plus.
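As one illustration of the tuning and resource-allocation knobs mentioned above, here is a minimal sketch of a Spark job with explicit executor and shuffle settings; the values are placeholders, the input path is hypothetical, and in practice these settings are often passed via spark-submit rather than hard-coded:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

// Minimal sketch of explicit resource allocation and tuning knobs.
// The values are placeholders; real settings depend on cluster size and data volume.
public class TunedJob {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("TunedJob")
                .config("spark.executor.instances", "8")        // parallelism across the cluster
                .config("spark.executor.memory", "4g")          // per-executor heap
                .config("spark.executor.cores", "4")            // concurrent tasks per executor
                .config("spark.sql.shuffle.partitions", "200")  // shuffle fan-out for joins/aggregations
                .config("spark.serializer",
                        "org.apache.spark.serializer.KryoSerializer") // faster serialization
                .getOrCreate();

        Dataset<Row> events = spark.read().parquet("/data/events"); // hypothetical input path
        events.cache(); // keep a dataset reused across actions in memory
        System.out.println(events.count());
        spark.stop();
    }
}
```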
• Participation in requirements analysis, design, development, and testing of applications.
• The candidate is expected to write code himself/herself.
• The candidate is expected to write high-quality code and to handle code reviews, unit testing, and deployment.
• Practical application of design principles with a focus on user experience, usability, template designs, cross-browser issues, and client-server concepts.
• Contributes to the development of project estimates, scheduling, and deliverables.
• Works closely with the QA team to determine testing requirements, ensuring full coverage and the best product quality.
• There is also the opportunity to mentor and guide junior team members, helping them excel in their jobs.
Job Specifications
• BE/B.Tech. in Computer Science or MCA from a reputed university.
• 6+ years of experience in software development, with an emphasis on Java/J2EE server-side programming.
• Hands-on experience with Core Java, multithreading, RMI, socket programming, JDBC, NIO, web services, and design patterns (see the sketch after this list).
• Should have knowledge of distributed systems, distributed caching, messaging frameworks, ESBs, etc.
• Knowledge of the Linux operating system and of PostgreSQL/MySQL/MongoDB/Cassandra databases is essential.
• Additionally, knowledge of HBase, Hadoop, and Hive is desirable.
• Familiarity with message queue systems such as AMQP and Kafka is desirable.
• Should have experience working in Agile methodologies.
• Should have excellent written and verbal communication skills as well as presentation skills.
• This is not a full-stack requirement; we are looking purely for backend resources.
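For a flavor of the multithreading fundamentals listed above, here is a minimal producer-consumer sketch using java.util.concurrent; the message count, queue capacity, and pool size are arbitrary:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Minimal sketch of the producer-consumer pattern with a bounded blocking queue.
public class ProducerConsumerDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(100);
        ExecutorService pool = Executors.newFixedThreadPool(2);

        pool.submit(() -> {            // producer: pushes messages onto the queue
            try {
                for (int i = 0; i < 10; i++) {
                    queue.put("message-" + i);  // blocks if the queue is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        pool.submit(() -> {            // consumer: drains the queue
            try {
                for (int i = 0; i < 10; i++) {
                    System.out.println("consumed " + queue.take()); // blocks if empty
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```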
About Vymo
Vymo is a San Francisco-based next-generation sales productivity SaaS company with offices in 7 locations. Vymo is funded by top-tier VC firms like Emergence Capital and Sequoia Capital. Vymo is a category creator: an intelligent Personal Sales Assistant that captures sales activities automatically, learns from top performers, and predicts ‘next best actions’ contextually. Vymo has 100,000 users in 60+ large enterprises such as AXA, Allianz, and Generali. Vymo has seen 3x annual growth over the last few years and aspires to do even better this year by building up the team globally.
What is the Personal Sales Assistant?
A game-changer! We thrive in the CRM space, where every company is struggling to deliver meaningful engagement to their sales teams and IT systems. Vymo was engineered with a mobile-first philosophy. Through AI/ML, the platform detects, predicts, and learns how to make sales representatives more productive via nudges and suggestions on a mobile device. Explore Vymo at https://getvymo.com/
What you will do at Vymo
From young open source enthusiasts to experienced Googlers, this team develops products like the Lead Management System, Intelligent Allocations & Route Mapping, and Intelligent Interventions, which help improve the effectiveness of sales teams manifold. These products power the "Personal Assistant" app, which automates sales force activities, leveraging our cutting-edge location-based technology and intelligent routing algorithms.
A Day in your Life
- Design, develop, and maintain robust data platforms on top of Kafka, Spark, Elasticsearch, etc. (see the sketch after this list).
- Provide leadership to a group of engineers in an innovative and fast-paced environment.
- Manage and drive complex technical projects from the planning stage through execution.
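As a small illustration of the Kafka side of such a platform, here is a minimal sketch of publishing an event with the standard Kafka Java client; the broker address and topic name (sales-activities) are hypothetical:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

// Minimal sketch: publish one event onto a Kafka topic.
// Broker address and topic name are hypothetical placeholders.
public class ActivityPublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("sales-activities", "rep-42", "meeting-logged"));
            producer.flush(); // ensure the record leaves the client buffer before closing
        }
    }
}
```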
What you would have done
- B.E. (or equivalent) in Computer Science
- 6-9 years of experience building enterprise-class products/platforms.
- Knowledge of big data systems and/or experience building data pipelines is preferred.
- 2-3 years of relevant experience as a technical lead or in technical management.
- Excellent coding skills in Core Java or NodeJS
- Demonstrated problem-solving skills in previous roles.
- Good communication skills.
Spark/Scala experience should be more than 2 years.
Scala/Spark Developer: a combination of Java and Scala is fine, or even a Big Data developer with strong Core Java concepts.
Strong proficiency in Scala on Spark (Hadoop); Scala plus Java is also preferred.
Complete SDLC process and Agile methodology (Scrum).
Version control/Git.
- You will be responsible for the design, development, and testing of products
- Contributing to all phases of the development lifecycle
- Writing well-designed, testable, efficient code
- Ensure designs are in compliance with specifications
- Prepare and produce releases of software components
- Support continuous improvement by investigating alternatives and technologies and presenting these for architectural review
- Some of the technologies you will be working on: Core Java, Solr, Hadoop, Spark, Elasticsearch, clustering, text mining, NLP, Mahout, Lucene, etc. (see the sketch below)
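For a taste of the search side of this stack, here is a minimal sketch that indexes one document in memory with Lucene and queries it back, written against the Lucene 8+ API; the field name and query text are hypothetical:

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.ByteBuffersDirectory;
import org.apache.lucene.store.Directory;

// Minimal sketch: index one document in memory with Lucene and search it.
// The field name ("body") and query are hypothetical.
public class LuceneDemo {
    public static void main(String[] args) throws Exception {
        Directory dir = new ByteBuffersDirectory(); // in-memory index
        try (IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
            Document doc = new Document();
            doc.add(new TextField("body", "distributed search with lucene", Field.Store.YES));
            writer.addDocument(doc);
        }

        try (DirectoryReader reader = DirectoryReader.open(dir)) {
            IndexSearcher searcher = new IndexSearcher(reader);
            TopDocs hits = searcher.search(
                    new QueryParser("body", new StandardAnalyzer()).parse("lucene"), 10);
            System.out.println("hits: " + hits.totalHits);
        }
    }
}
```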
