Location: Bangalore/Pune/Hyderabad/Nagpur
4-5 years of overall experience in software development.
- Experience with Hadoop (Apache/Cloudera/Hortonworks) and/or other MapReduce platforms
- Experience with Hive, Pig, Sqoop, Flume, and/or Mahout
- Experience with NoSQL – HBase, Cassandra, MongoDB
- Hands-on experience with Spark development; knowledge of Storm, Kafka, Scala
- Good knowledge of Java
- Good background in Configuration Management/ticketing systems such as Maven, Ant, JIRA, etc.
- Knowledge of any Data Integration and/or EDW tools is a plus
- Good to have: working knowledge of Python/Perl/Shell
Please note - HBase, Hive, and Spark are a must.
Requirements
- 3+ years of work experience with production-grade Python; contributions to open-source repos are preferred
- Experience writing concurrent and distributed programs; AWS Lambda, Kubernetes, Docker, and Spark are preferred
- Experience with one relational and one non-relational database is preferred
- Prior work in the ML domain is a big plus
What You’ll Do
- Help realize the product vision: Production-ready machine learning models with monitoring within moments, not months.
- Help companies deploy their machine learning models at scale across a wide range of use-cases and sectors.
- Build integrations with other platforms to make it easy for our customers to use our product without changing their workflow.
- Write maintainable, scalable, performant Python code
- Build gRPC and REST API servers
- Work with Thrift, Protocol Buffers, etc.
About the Role-
Think big and execute beyond what is expected. The challenges cut across algorithmic problem solving, systems engineering, machine learning, and infrastructure at massive scale.
Reason to Join-
An opportunity for innovators, problem solvers, and learners. The work is innovative, empowering, rewarding, and fun, with an amazing office, competitive pay, and an excellent benefits package.
Requirements and Responsibilities- (please read carefully before applying)
- Overall experience of 3-6 years with Java/Python frameworks and Machine Learning.
- Develop web services using REST, XSD, XML technologies, Java, Python, AWS, and APIs.
- Experience with Elasticsearch, Solr, or Lucene: search engines, text mining, indexing.
- Experience with highly scalable tools like Kafka, Spark, Aerospike, etc.
- Hands-on experience in design, architecture, implementation, performance & scalability, and distributed systems.
- Design, implement, and deploy highly scalable and reliable systems.
- Troubleshoot Solr indexing process and querying engine.
- Bachelor's or Master's in Computer Science from a Tier 1 institution
Hands-on experience with Spark and SQL
Good to have: Java knowledge
Your Opportunity
- Own business features and drive them into tech requirements
- Design and develop large-scale, real-time server-side systems
- Quickly create quality prototypes
- Stay updated on emerging technologies
- Ensure that all deliverables adhere to our world-class standards
- Promote coding best practices
- Mentor and develop junior developers in the team
Required Experience:
- 4+ years of relevant experience as described below
- Excellent grasp of Core Java, multithreading, and OO design patterns
- Experience with Scala, functional and reactive programming, and Akka/Play is a plus
- Excellent understanding of data structures and algorithms
- Solid grasp of large-scale, distributed, real-time systems
- Prior experience building scalable and resilient microservices
- Solid understanding of relational databases, NoSQL databases and Caching systems
- Good understanding of Big Data technologies such as Spark, Hadoop is a plus
- Experience with one of AWS, Azure, or GCP
Who you are :
- You have excellent communication and collaboration skills
- You love problem solving
- You stay up to date with the latest technologies and then apply them in real life
- You love paying attention to detail
- You thrive on meeting tight deadlines and prioritising workloads
- You collaborate well across multiple functions
Education:
Bachelor’s degree in Engineering or equivalent experience within the field
Spark/Scala experience should be more than 2 years.
A combination of Java and Scala is fine; we are also open to a Big Data Developer with strong Core Java concepts. - Scala/Spark Developer.
Strong proficiency in Scala on Spark (Hadoop); Scala plus Java is also preferred.
Complete SDLC process and Agile Methodology (Scrum)
Version control / Git