



Location: Bangalore/Pune/Hyderabad/Nagpur
4-5 years of overall experience in software development.
- Experience with Hadoop (Apache/Cloudera/Hortonworks) and/or other MapReduce platforms
- Experience with Hive, Pig, Sqoop, Flume, and/or Mahout
- Experience with NoSQL stores such as HBase, Cassandra, or MongoDB
- Hands-on experience with Spark development; knowledge of Storm, Kafka, and Scala
- Good knowledge of Java
- Good background in configuration management and ticketing systems such as Maven, Ant, and JIRA
- Knowledge of any data integration and/or EDW tools is a plus
- Good to have: working knowledge of Python, Perl, or shell scripting
Please note: HBase, Hive, and Spark are a must.
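For a sense of the Hive-and-Spark work this role centres on, here is a minimal Spark job sketch in Scala that reads a Hive table and writes an aggregate back; the database, table, column names, and partition value are hypothetical:

```scala
import org.apache.spark.sql.SparkSession

object DailyOrderSummary {
  def main(args: Array[String]): Unit = {
    // enableHiveSupport lets Spark read tables registered in the Hive metastore.
    val spark = SparkSession.builder()
      .appName("daily-order-summary")
      .enableHiveSupport()
      .getOrCreate()

    // Hypothetical Hive table with a date partition column `ds`.
    val orders = spark.sql(
      "SELECT customer_id, amount FROM sales.orders WHERE ds = '2024-01-01'")

    // Aggregate spend per customer.
    val summary = orders
      .groupBy("customer_id")
      .sum("amount")
      .withColumnRenamed("sum(amount)", "total_amount")

    // Persist the result back as a Hive table; writing to HBase instead would
    // normally go through a connector such as hbase-spark.
    summary.write.mode("overwrite").saveAsTable("sales.daily_order_summary")

    spark.stop()
  }
}
```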

Requirements
- Extensive, expert programming experience in at least one general-purpose programming language (e.g., Java, C, C++) and its tech stack, writing maintainable, scalable, unit-tested code.
- Experience with multi-threading and concurrent programming (see the sketch after this list).
- Extensive object-oriented design experience, knowledge of design patterns, and a passion for designing intuitive modules and class-level interfaces.
- Excellent coding skills; should be able to convert a design into code fluently.
- Knowledge of test-driven development. Good understanding of relational databases (e.g., MySQL) and NoSQL stores (e.g., HBase, Elasticsearch, Aerospike).
- Strong desire to solve complex and interesting real-world problems.
- Experience with full life-cycle development in any programming language on a Linux platform. A go-getter attitude reflected in the energy and intent behind assigned tasks.
- Experience working in a startup-like environment with high levels of ownership and commitment.
- BTech, MTech, or PhD in Computer Science or a related technical discipline (or equivalent).
- Experience building highly scalable business applications that involve implementing large, complex business flows and handling huge amounts of data.
- 3+ years of experience in the art of writing code and solving problems on a large scale.
- An open communicator who shares thoughts and opinions frequently, listens intently, and takes constructive feedback.
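To ground the multi-threading and concurrency point in the list above, a minimal Scala sketch using Futures; fetchRecord and its simulated latency are invented for illustration:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

object ConcurrentFetch {
  // Hypothetical I/O-bound task: pretend to fetch a record by id.
  def fetchRecord(id: Int): Future[String] = Future {
    Thread.sleep(50) // stand-in for network or database latency
    s"record-$id"
  }

  def main(args: Array[String]): Unit = {
    // Launch ten fetches concurrently and combine their results into one Future.
    val results: Future[Seq[String]] = Future.sequence((1 to 10).map(fetchRecord))
    println(Await.result(results, 5.seconds))
  }
}
```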
The Sr. Analytics Engineer provides technical expertise in needs identification, data modeling, data movement, source-to-target transformation mapping, and automation and testing strategies, translating business needs into technical solutions that adhere to established data guidelines and approaches, from a business-unit or project perspective.
Understands and leverages best-fit technologies (e.g., traditional star schema structures, cloud, Hadoop, NoSQL, etc.) and approaches to address business and environmental challenges.
Provides data understanding and coordinates data-related activities with other data management groups such as master data management, data governance, and metadata management.
Actively participates with other consultants in problem-solving and approach development.
Responsibilities:
Provide a consultative approach with business users, asking questions to understand the business need and deriving the data flow, conceptual, logical, and physical data models based on those needs.
Perform data analysis to validate data models and to confirm the ability to meet business needs.
Assist with and support setting the data architecture direction, ensuring data architecture deliverables are developed, ensuring compliance to standards and guidelines, implementing the data architecture, and supporting technical developers at a project or business unit level.
Coordinate and consult with the Data Architect, project manager, client business staff, client technical staff, and project developers on data architecture best practices and anything else data-related at the project or business-unit level.
Work closely with Business Analysts and Solution Architects to design the data model satisfying the business needs and adhering to Enterprise Architecture.
Coordinate with Data Architects, Program Managers and participate in recurring meetings.
Help and mentor team members to understand the data model and subject areas.
Ensure that the team adheres to best practices and guidelines.
Requirements:
- At least 3 years of strong working knowledge of Spark, Java/Scala/PySpark, Kafka, Git, Unix/Linux, and ETL pipeline design.
- Experience with Spark optimization, tuning, and resource allocation (see the sketch after this list).
- Excellent understanding of in-memory distributed computing frameworks such as Spark, including parameter tuning and writing optimized workflow sequences.
- Experience with relational databases (e.g., PostgreSQL, MySQL) and NoSQL or cloud-warehouse databases (e.g., Redshift, BigQuery, Cassandra).
- Familiarity with Docker, Kubernetes, Azure Data Lake/Blob Storage, AWS S3, Google Cloud Storage, etc.
- Deep understanding of the various stacks and components of the big data ecosystem.
- Hands-on experience with Python is a huge plus.
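A minimal sketch, in Scala, of the kind of Spark tuning the list above refers to; the configuration values, paths, and column names are illustrative assumptions rather than recommendations:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.broadcast

object TunedEtlJob {
  def main(args: Array[String]): Unit = {
    // Example resource allocation; real values depend on cluster and data volume.
    val spark = SparkSession.builder()
      .appName("tuned-etl")
      .config("spark.sql.shuffle.partitions", "200")
      .config("spark.executor.memory", "4g")
      .config("spark.executor.cores", "4")
      .getOrCreate()

    // Hypothetical input paths: a large fact table and a small dimension table.
    val events = spark.read.parquet("/data/events")
    val users  = spark.read.parquet("/data/users")

    // Broadcast the small side to avoid shuffling the large one, and cache the
    // join because two downstream writes reuse it.
    val enriched = events.join(broadcast(users), Seq("user_id")).cache()

    enriched.write.mode("overwrite").parquet("/data/enriched_events")
    enriched.groupBy("country").count()
      .write.mode("overwrite").parquet("/data/events_by_country")

    spark.stop()
  }
}
```

The broadcast hint and the cache are two of the most common optimizations this kind of role is expected to reason about.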
● Design and deliver scalable web services, APIs, and backend data modules.
● Understand requirements and develop reusable code using design patterns and component architecture, and write unit test cases.
● Collaborate with product management and engineering teams to elicit and understand their requirements and challenges, and develop potential solutions.
● Stay current with the latest tools, technology ideas, and methodologies; share knowledge by clearly articulating results and ideas to key decision-makers.
Requirements
● 3-6 years of strong experience developing highly scalable backend and middle-tier systems.
● BS/MS in Computer Science or equivalent from premier institutes.
● Strong in problem-solving, data structures, and algorithm design.
● Strong experience in system architecture, web services development, and highly scalable distributed applications.
● Good knowledge of large data systems such as Hadoop, MapReduce, and NoSQL stores like Cassandra.
● Fluency in Java, Spring, Hibernate, J2EE, and REST services.
● Ability to deliver code quickly from given scenarios in a fast-paced start-up environment.
● Attention to detail. Strong communication and collaboration skills.



- Bachelor's or Master’s degree in Computer Science or equivalent area
- 10 to 20 years of experience in software development
- Hands-on experience designing and building B2B or B2C products
- 3+ years architecting SaaS/web-based, customer-facing products and leading engineering teams as a software/technical architect
- Experience with engineering practices such as code refactoring, microservices, design and enterprise-integration patterns, test- and design-driven development, continuous integration, building highly scalable applications, and application and infrastructure security
- Strong cloud infrastructure experience with AWS and/or Azure
- Experience building event-driven systems and working with message queues/topics (see the Kafka sketch after this list)
- Broad working experience across multiple programming languages and frameworks, with in-depth experience in one or more of the following: .NET, Java, Scala, or Go
- Hands-on experience with relational databases like SQL Server, PostgreSQL and document stores like Elasticsearch or MongoDB
- Hands-on experience with Big Data processing technologies like Hadoop/Spark is a plus
- Hands-on experience with container technologies like Docker, Kubernetes
- Knowledge of Agile software development process
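To make the event-driven/message-queue item concrete, a minimal Kafka producer sketch in Scala; the broker address, topic, key, and payload are hypothetical:

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

object OrderEventPublisher {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("bootstrap.servers", "localhost:9092") // hypothetical broker
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

    val producer = new KafkaProducer[String, String](props)
    try {
      // Key by order id so every event for one order lands on the same partition.
      val record = new ProducerRecord[String, String](
        "order-events", "order-42", """{"status":"CREATED"}""")
      producer.send(record).get() // block for the broker acknowledgement in this sketch
    } finally {
      producer.close()
    }
  }
}
```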

Experience:
The candidate should have 2+ years of experience in design and development with Java/Scala. Experience with algorithms, data structures, databases, and distributed systems is mandatory.
Required Skills:
Mandatory:
- Core Java or Scala
- Experience in Big Data, Spark
- Extensive experience in developing Spark jobs. Should possess good OOP knowledge and be aware of enterprise application design patterns.
- Should have the ability to analyze, design, develop, and test Spark jobs of varying complexity.
- Working knowledge of Unix/Linux.
- Hands-on experience with Spark: creating RDDs and applying transformations and actions (see the sketch below, after the good-to-have list)
Good to have:
- Python
- Spark Streaming
- PySpark
- Azure/AWS cloud knowledge on both the data storage and compute side
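As an illustration of the RDD transformation/action item above, a minimal Scala word-count sketch; the input path is hypothetical:

```scala
import org.apache.spark.sql.SparkSession

object RddWordCount {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("rdd-word-count").getOrCreate()
    val sc = spark.sparkContext

    // Create an RDD from a hypothetical input path.
    val lines = sc.textFile("/data/input.txt")

    // Transformations are lazy; nothing executes until an action is called.
    val counts = lines
      .flatMap(_.split("\\s+"))  // transformation: split lines into words
      .map(word => (word, 1))    // transformation: pair each word with a count of 1
      .reduceByKey(_ + _)        // transformation: sum the counts per word

    // Action: triggers the job and returns results to the driver.
    counts.take(10).foreach(println)

    spark.stop()
  }
}
```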
Your Opportunity
- Own and drive business features into tech requirements
- Design and develop large-scale, real-time server-side systems
- Quickly create quality prototypes
- Stay updated on emerging technologies
- Ensure that all deliverables adhere to our world-class standards
- Promote coding best practices
- Mentor and develop junior developers in the team
Required Experience:
- 4+ years of relevant experience as described below
- Excellent grasp of core Java, multi-threading, and OO design patterns
- Experience with Scala, functional and reactive programming, and Akka/Play is a plus
- Excellent understanding of data structures and algorithms
- Solid grasp of large-scale, distributed, real-time systems
- Prior experience building scalable and resilient microservices
- Solid understanding of relational databases, NoSQL databases and Caching systems
- Good understanding of big data technologies such as Spark and Hadoop is a plus
- Experience with at least one of AWS, Azure, or GCP
Who you are:
- You have excellent and effective communication and collaboration skills
- You love problem solving
- You stay up to date with the latest technologies and then apply them in real life
- You love paying attention to detail
- You thrive in meeting tight deadlines and prioritising workloads
- You can collaborate across multiple functions
Education:
Bachelor’s degree in Engineering or equivalent experience within the field



