Hadoop Jobs in Ahmedabad
Explore top Hadoop job opportunities in Ahmedabad from top companies and startups. All jobs are added by verified employees, who can be contacted directly.
Ahmedabad, Pune
3 - 10 yrs
₹15L - ₹30L / yr
NodeJS (Node.js)
React.js
AngularJS (1.x)
Amazon Web Services (AWS)
Python
Location: Ahmedabad / Pune
Team: Technology
Company Profile
InFoCusp is a company working in the broad field of computer science, software engineering, and artificial intelligence (AI). It is headquartered in Ahmedabad, India, with a branch office in Pune.
We have worked, and continue to work, on software engineering projects that build full-fledged products: UI/UX design, responsive and fast front-ends, platform-specific applications (Android, iOS, web, and desktop), very large-scale infrastructure, and cutting-edge machine learning and deep learning (AI in general). These projects and products have wide-ranging applications in the finance, healthcare, e-commerce, legal, HR/recruiting, pharmaceutical, leisure sports, and computer gaming domains. All of this builds on core concepts of computer science such as distributed systems, operating systems, computer networks, process parallelism, cloud computing, embedded systems, and the Internet of Things.
PRIMARY RESPONSIBILITIES:
● Own the design, development, evaluation and deployment of highly-scalable software products involving front-end and back-end development.
● Maintain quality, responsiveness and stability of the system.
● Design and develop memory-efficient, compute-optimized solutions for the software.
● Design and administer automated testing tools and continuous integration tools.
● Produce comprehensive and usable software documentation.
● Evaluate and make decisions on the use of new tools and technologies.
● Mentor other development engineers.
KNOWLEDGE AND SKILL REQUIREMENTS:
● Mastery of one or more back-end programming languages (e.g., Python, Java, C++).
● Proficiency in front-end programming paradigms and libraries (e.g., HTML, CSS, and advanced JavaScript libraries and frameworks such as Angular, Knockout, and React).
● Knowledge of automated testing and continuous integration tools (e.g., Jenkins, TeamCity, CircleCI).
● Proven experience of platform-level development for large-scale systems.
● Deep understanding of various database systems (MySQL, MongoDB, Cassandra).
● Ability to plan and design software system architecture.
● Development experience for mobile, browser, and desktop platforms is desired.
● Knowledge and experience of distributed systems (Hadoop, Spark) and cloud environments (Amazon EC2, Google Compute Engine, Microsoft Azure).
● Experience working in agile development; prior experience with tools such as Jira is desired.
● Experience with version control systems (Git, Subversion, or Mercurial).
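The distributed-systems item above names Hadoop and Spark; purely as an illustration, the map/reduce computation model they build on can be sketched in single-process Python (the documents and words here are made up for the example):

```python
from collections import Counter
from functools import reduce

# Toy corpus; in Hadoop or Spark each document would live on a cluster node.
docs = ["big data tools", "big data systems", "data pipelines"]

# Map phase: turn each document into partial word counts, independently.
mapped = [Counter(doc.split()) for doc in docs]

# Reduce phase: merge the partial counts (the shuffle/reduce step).
word_counts = reduce(lambda a, b: a + b, mapped, Counter())

print(word_counts["data"])  # → 3
```

Real frameworks distribute the map and reduce phases across machines and handle failures; the dataflow, however, is the same.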
EDUCATION:
- B.E. / B.Tech / B.S. / M.E. / M.S. / M.Tech / Ph.D. Applications from candidates with significant prior experience in the aforementioned fields will be considered.
Ahmedabad
4 - 7 yrs
₹12L - ₹20L / yr
Hadoop
Big Data
Data engineering
Spark
Apache Beam
Responsibilities:
1. Communicate with the clients and understand their business requirements.
2. Build, train, and manage your own team of junior data engineers.
3. Assemble large, complex data sets that meet the client’s business requirements.
4. Identify, design and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
5. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources, including the cloud.
6. Assist clients with data-related technical issues and support their data infrastructure requirements.
7. Work with data scientists and analytics experts to strive for greater functionality.
Skills required (experience with most of the following):
1. Experience with Big Data tools: Hadoop, Spark, Apache Beam, Kafka, etc.
2. Experience with object-oriented or functional programming languages: Python, Java, C++, Scala, etc.
3. Experience in ETL and data warehousing.
4. Experience with, and a firm understanding of, relational and non-relational databases such as MySQL, MS SQL Server, PostgreSQL, MongoDB, and Cassandra.
5. Experience with cloud platforms such as AWS, GCP, and Azure.
6. Experience with workflow management using tools like Apache Airflow.
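The ETL skill above can be illustrated with a minimal sketch of the extract-transform-load pattern, using only the Python standard library (the data and table schema are made up for illustration):

```python
import csv
import io
import sqlite3

# Extract: parse CSV records (a real pipeline would read from files,
# APIs, or message queues rather than an in-memory string).
raw = "name,amount\nalice,10\nbob,20\nalice,5\n"
rows = list(csv.DictReader(io.StringIO(raw)))

# Transform: aggregate amounts per name.
totals = {}
for row in rows:
    totals[row["name"]] = totals.get(row["name"], 0) + int(row["amount"])

# Load: write the aggregates into a relational store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE totals (name TEXT PRIMARY KEY, amount INTEGER)")
conn.executemany("INSERT INTO totals VALUES (?, ?)", totals.items())
conn.commit()

print(sorted(conn.execute("SELECT name, amount FROM totals")))
# → [('alice', 15), ('bob', 20)]
```

Engines such as Spark or Beam apply the same three phases at cluster scale, and orchestrators such as Airflow schedule and monitor them.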