



Location: Bangalore/Pune/Hyderabad/Nagpur
4-5 years of overall experience in software development.
- Experience with Hadoop (Apache/Cloudera/Hortonworks) and/or other MapReduce platforms
- Experience with Hive, Pig, Sqoop, Flume, and/or Mahout
- Experience with NoSQL stores: HBase, Cassandra, MongoDB
- Hands-on experience with Spark development; knowledge of Storm, Kafka, and Scala
- Good knowledge of Java
- Good background in configuration management and ticketing systems like Maven/Ant/JIRA, etc.
- Knowledge of any data integration and/or EDW tools is a plus
- Good to have: knowledge of Python/Perl/shell scripting
Please note: HBase, Hive, and Spark are a must (a minimal Spark-on-Hive sketch follows).
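For illustration, a minimal Scala sketch of the Spark-on-Hive work implied above, assuming a reachable Hive metastore; the `events` table and its columns are hypothetical:

```scala
import org.apache.spark.sql.SparkSession

object HiveQueryExample {
  def main(args: Array[String]): Unit = {
    // Spark session with Hive support; assumes a Hive metastore is configured
    val spark = SparkSession.builder()
      .appName("hive-query-example")
      .enableHiveSupport()
      .getOrCreate()

    // "events" is a hypothetical Hive table used for illustration only
    val daily = spark.sql(
      "SELECT event_date, COUNT(*) AS n FROM events GROUP BY event_date")
    daily.show()

    spark.stop()
  }
}
```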



Level of skills and experience:
5 years of hands-on experience using Python, Spark, and SQL.
Experience with AWS cloud usage and management.
Experience with Databricks (Lakehouse, ML, Unity Catalog, MLflow).
Experience with various ML models and frameworks such as XGBoost, LightGBM, and Torch.
Experience with orchestrators such as Airflow and Kubeflow.
Familiarity with containerization and orchestration technologies (e.g., Docker, Kubernetes).
Fundamental understanding of Parquet, Delta Lake, and other data file formats (see the sketch after this list).
Proficiency with an IaC tool such as Terraform, CDK, or CloudFormation.
Strong written and verbal English communication skills; proficient in communicating with non-technical stakeholders.
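To illustrate the Parquet/Delta Lake bullet, a minimal sketch, written in Scala to match the Spark examples elsewhere on this page, assuming the delta-spark library is on the classpath; both paths are placeholders:

```scala
import org.apache.spark.sql.SparkSession

object ParquetToDeltaSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("parquet-to-delta")
      // Delta Lake needs its extensions registered; requires the
      // delta-spark artifact on the classpath
      .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
      .config("spark.sql.catalog.spark_catalog",
              "org.apache.spark.sql.delta.catalog.DeltaCatalog")
      .getOrCreate()

    // Placeholder paths; a real pipeline would point at object storage
    val df = spark.read.parquet("/tmp/raw/events")
    df.write.format("delta").mode("overwrite").save("/tmp/curated/events")

    spark.stop()
  }
}
```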
2. Design software and make technology choices across the stack (from data storage to application to front-end)
3. Understand a range of tier-1 systems/services that power our product to make scalable changes to critical path code
4. Own the design and delivery of an integral piece of a tier-1 system or application
5. Work closely with product managers, UX designers, and end users, and integrate software components into a fully functional system
6. Work on the management and execution of project plans and delivery commitments
7. Take ownership of the product/feature end-to-end, through all phases from development to production
8. Ensure the developed features are scalable and highly available with no quality concerns
9. Work closely with senior engineers for refining and implementation
10. Manage and execute project plans and delivery commitments
11. Create and execute appropriate quality plans, project plans, test strategies, and processes for development activities in concert with business and project management efforts


Requirements
- Extensive, expert programming experience in at least one general-purpose programming language (e.g., Java, C, C++) and tech stack, writing maintainable, scalable, unit-tested code.
- Experience with multi-threading and concurrent programming (see the sketch after this list).
- Extensive experience with object-oriented design, knowledge of design patterns, and a passion for and ability to design intuitive modules and class-level interfaces.
- Excellent coding skills; able to convert a design into code fluently.
- Knowledge of Test-Driven Development. Good understanding of relational databases (e.g., MySQL) and NoSQL stores (e.g., HBase, Elasticsearch, Aerospike).
- Strong desire to solve complex and interesting real-world problems.
- Experience with full life-cycle development in any programming language on a Linux platform.
- A go-getter attitude, reflected in the energy and intent behind assigned tasks.
- Experience working in a startup-like environment with high levels of ownership and commitment.
- BTech, MTech, or Ph.D. in Computer Science or a related technical discipline (or equivalent).
- Experience building highly scalable business applications that involve implementing large, complex business flows and dealing with huge amounts of data.
- 3+ years of experience in the art of writing code and solving problems at large scale.
- An open communicator who shares thoughts and opinions frequently, listens intently, and takes constructive feedback.
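A minimal sketch of bounded concurrency with Scala Futures, as referenced in the multi-threading bullet above; the squaring task is a stand-in for real I/O- or CPU-bound work:

```scala
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._
import java.util.concurrent.Executors

object ConcurrencySketch {
  def main(args: Array[String]): Unit = {
    // Fixed-size pool so the degree of concurrency is bounded and predictable
    val pool = Executors.newFixedThreadPool(4)
    implicit val ec: ExecutionContext = ExecutionContext.fromExecutorService(pool)

    // Fan out independent work items, then join the results
    val tasks = (1 to 8).map { i =>
      Future { i * i } // placeholder for real I/O- or CPU-bound work
    }
    val all = Future.sequence(tasks)
    println(Await.result(all, 10.seconds).sum)

    pool.shutdown()
  }
}
```

A fixed pool rather than the global execution context makes saturation behavior explicit, which is usually what an interviewer is probing for with this requirement.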



- Bachelor's or Master's degree in Computer Science or an equivalent area
- 10 to 20 years of experience in software development
- Hands-on experience designing and building B2B or B2C products
- 3+ years architecting SaaS/web-based customer-facing products, leading engineering teams as a software/technical architect
- Experience with engineering practices such as code refactoring, microservices, design and enterprise integration patterns, test- and design-driven development, continuous integration, building highly scalable applications, and application and infrastructure security
- Strong cloud infrastructure experience with AWS and/or Azure
- Experience building event-driven systems and working with message queues/topics (see the Kafka sketch after this list)
- Broad working experience across multiple programming languages and frameworks, with in-depth experience in one or more of the following: .NET, Java, Scala, or Go
- Hands-on experience with relational databases like SQL Server and PostgreSQL, and document stores like Elasticsearch or MongoDB
- Hands-on experience with big data processing technologies like Hadoop/Spark is a plus
- Hands-on experience with container technologies like Docker and Kubernetes
- Knowledge of the Agile software development process
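A minimal event-publishing sketch using the Kafka Java client from Scala, as referenced in the event-driven bullet above; the broker address, topic name, key, and payload are all placeholders:

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

object EventPublisherSketch {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    // Broker address is a placeholder; real deployments list several brokers
    props.put("bootstrap.servers", "localhost:9092")
    props.put("key.serializer",
      "org.apache.kafka.common.serialization.StringSerializer")
    props.put("value.serializer",
      "org.apache.kafka.common.serialization.StringSerializer")

    val producer = new KafkaProducer[String, String](props)
    // Publish a domain event to a topic; consumers react asynchronously
    producer.send(new ProducerRecord("order-events", "order-42",
      """{"status":"created"}"""))
    producer.flush()
    producer.close()
  }
}
```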
- Bachelor's or Master's degree in Computer Engineering, Computer Science, Computer Applications, Mathematics, Statistics, or a related technical field; at least 3 years of relevant experience in lieu of the above if from a different stream of education.
- Well-versed in, with 3+ years of hands-on, demonstrable experience in:
  ▪ Stream and batch big data pipeline processing using Apache Spark and/or Apache Flink
  ▪ Distributed cloud-native computing, including serverless functions
  ▪ Relational, object-store, document, graph, etc. database design and implementation
  ▪ Microservices architecture and API modeling, design, and programming
- 3+ years of hands-on development experience in Apache Spark using Scala and/or Java.
- Ability to write executable code for services using Spark RDD, Spark SQL, Structured Streaming, Spark MLlib, etc., with a deep technical understanding of the Spark processing framework (a streaming sketch follows this list).
- In-depth knowledge of standard programming languages such as Scala and/or Java.
- 3+ years of hands-on development experience with one or more libraries and frameworks such as Apache Kafka, Akka, Apache Storm, Apache NiFi, ZooKeeper, and the Hadoop ecosystem (HDFS, YARN, MapReduce, Oozie, and Hive); extra points if you can demonstrate your knowledge with working examples.
- 3+ years of hands-on development experience with one or more relational and NoSQL datastores such as PostgreSQL, Cassandra, HBase, MongoDB, DynamoDB, Elasticsearch, and Neo4j (a JDBC sketch follows this list).
- Practical knowledge of distributed systems, including partitioning, bucketing, the CAP theorem, replication, and horizontal scaling (a partitioning sketch follows this list).
- Passion for distilling large volumes of data and analyzing performance, scalability, and capacity issues in big data platforms.
- Ability to clearly distinguish between system and Spark job performance, and to perform Spark performance tuning and resource optimization.
- Perform benchmarking/stress tests and document best practices for different applications.
- Proactively work with tenants on improving overall performance and ensuring the system is resilient and scalable.
- Good understanding of virtualization and containerization; must demonstrate experience with technologies such as Kubernetes, Istio, Docker, OpenShift, Anthos, Oracle VirtualBox, and Vagrant.
- Well-versed, with demonstrable working experience, in API management, API gateways, service mesh, identity and access management, and data protection and encryption.
- Hands-on, demonstrable working experience with DevOps tools and platforms, viz. Jira, Git, Jenkins, code quality and security plugins, Maven, Artifactory, Terraform, Ansible/Chef/Puppet, and Spinnaker.
- Well-versed in AWS, Azure, and/or Google Cloud; must demonstrate experience with at least FIVE (5) services offered under AWS, Azure, and/or Google Cloud in any of the following categories: Compute, Storage, Database, Networking & Content Delivery, Management & Governance, Analytics, or Security, Identity, & Compliance (or equivalent demonstrable cloud platform experience).
- Good understanding of storage, networks, and storage-networking basics, which will enable you to work in a cloud environment.
- Good understanding of network, data, and application security basics, which will enable you to work in a cloud as well as a business applications/API services environment.
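The streaming sketch referenced above: a minimal Structured Streaming word count in Scala. The socket source is demo-only; a production job would read from Kafka or similar:

```scala
import org.apache.spark.sql.SparkSession

object StreamingWordCountSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("structured-streaming-wordcount")
      .getOrCreate()
    import spark.implicits._

    // Read lines from a local socket (demo source only)
    val lines = spark.readStream
      .format("socket")
      .option("host", "localhost")
      .option("port", "9999")
      .load()

    // Split lines into words and count occurrences per word
    val counts = lines.as[String]
      .flatMap(_.split("\\s+"))
      .groupBy("value")
      .count()

    counts.writeStream
      .outputMode("complete")
      .format("console")
      .start()
      .awaitTermination()
  }
}
```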
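The JDBC sketch referenced above: a minimal PostgreSQL query from Scala, assuming the PostgreSQL JDBC driver is on the classpath; the URL, credentials, and `orders` table are hypothetical:

```scala
import java.sql.DriverManager

object PostgresQuerySketch {
  def main(args: Array[String]): Unit = {
    // Connection string, credentials, and table are placeholders
    val url = "jdbc:postgresql://localhost:5432/appdb"
    val conn = DriverManager.getConnection(url, "app_user", "secret")
    try {
      // Parameterized query avoids SQL injection
      val stmt = conn.prepareStatement(
        "SELECT id, status FROM orders WHERE status = ?")
      stmt.setString(1, "created")
      val rs = stmt.executeQuery()
      while (rs.next()) {
        println(s"${rs.getLong("id")} -> ${rs.getString("status")}")
      }
    } finally {
      conn.close()
    }
  }
}
```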
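The partitioning sketch referenced above: the simplest form of hash partitioning, routing a key to one of N partitions, which is the building block behind the bucketing and horizontal-scaling topics listed:

```scala
object PartitioningSketch {
  // Route a key to one of n partitions; the same basic scheme Kafka
  // and many datastores use for horizontal scaling.
  // Math.floorMod keeps the result non-negative for negative hash codes.
  def partitionFor(key: String, numPartitions: Int): Int =
    Math.floorMod(key.hashCode, numPartitions)

  def main(args: Array[String]): Unit = {
    val keys = Seq("user-1", "user-2", "user-3", "user-4")
    keys.foreach { k =>
      println(s"$k -> partition ${partitionFor(k, numPartitions = 4)}")
    }
  }
}
```

Note that plain modulo reshuffles nearly every key when N changes, which is why systems that resize often use consistent hashing instead.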


Spark/Scala experience should be more than 2 years.
A combination of Java and Scala is fine, and we are also open to a Big Data Developer with strong core Java concepts (role: Scala/Spark Developer).
Strong proficiency in Scala on Spark (Hadoop); Scala + Java is also preferred (an RDD sketch follows below).
Complete SDLC process and Agile methodology (Scrum).
Version control / Git.
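A minimal Spark RDD aggregation in Scala consistent with this posting's requirements; the (user, amount) pairs are fabricated, and a real job would read from HDFS/Hive:

```scala
import org.apache.spark.sql.SparkSession

object RddAggregationSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("rdd-aggregation")
      .getOrCreate()
    val sc = spark.sparkContext

    // Hypothetical (userId, amount) pairs for illustration only
    val purchases = sc.parallelize(Seq(("u1", 20.0), ("u2", 5.0), ("u1", 7.5)))

    // Sum amounts per user; reduceByKey combines map-side before shuffling
    val totals = purchases.reduceByKey(_ + _)
    totals.collect().foreach { case (user, total) =>
      println(s"$user: $total")
    }

    spark.stop()
  }
}
```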


