Hadoop Jobs in Pune

Explore top Hadoop job opportunities in Pune from top companies and startups. All jobs are added by verified employees who can be contacted directly below.

Posted by Nelson Xavier
Location: Bengaluru (Bangalore), Pune, Hyderabad
Experience: 4 - 8 yrs
Salary: ₹10L - ₹25L / yr
Skills: Spark, Hadoop, Big Data, Data engineering, PySpark (+4 more)

Job responsibilities

- You will partner with teammates to create complex data processing pipelines that solve our clients' toughest challenges

- You will pair to write clean and iterative code based on TDD

- Leverage various continuous delivery practices to deploy, support and operate data pipelines

- Advise and educate clients on how to use different distributed storage and computing technologies from the plethora of options available

- Develop and operate modern data architecture approaches to meet key business objectives and provide end-to-end data solutions

- Create data models and speak to the tradeoffs of different modeling approaches

- Seamlessly incorporate data quality into your day-to-day work as well as into the delivery process

- Encourage open communication and advocate for shared outcomes

 

Technical skills

- You have a good understanding of data modelling and experience with data engineering tools and platforms such as Spark (Scala) and Hadoop

- You have built large-scale data pipelines and data-centric applications using any of the distributed storage platforms such as HDFS, S3, NoSQL databases (Hbase, Cassandra, etc.) and any of the distributed processing platforms like Hadoop, Spark, Hive, Oozie, and Airflow in a production setting

- Hands-on experience with Hadoop distributions such as MapR, Cloudera, Hortonworks and/or cloud-based offerings (AWS EMR, Azure HDInsight, Qubole, etc.)

- You are comfortable taking data-driven approaches and applying data security strategy to solve business problems

- Working with data excites you: you can build and operate data pipelines, and maintain data storage, all within distributed systems

- You're genuinely excited about data infrastructure and operations and are familiar with working in cloud environments (a minimal pipeline sketch follows this list)
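
For illustration, here is a minimal PySpark batch-pipeline sketch of the kind this role describes: reading raw events from distributed storage, aggregating them, and writing partitioned output. The bucket, paths and column names (event_date, status) are hypothetical placeholders, not taken from the posting.

```python
# A minimal PySpark batch pipeline sketch; paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-pipeline").getOrCreate()

# Read raw JSON events from distributed storage (S3 here; an hdfs:// path works the same way).
raw = spark.read.json("s3a://example-bucket/raw/events/")

# Clean and aggregate: drop malformed rows, count events per day and status.
daily = (
    raw.where(F.col("event_date").isNotNull())
       .groupBy("event_date", "status")
       .count()
)

# Write the result partitioned by date so downstream jobs can prune partitions.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3a://example-bucket/curated/daily_event_counts/"
)
```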

 



Professional skills

- You're resilient and flexible in ambiguous situations and enjoy solving problems from technical and business perspectives

- An interest in coaching and sharing your experience and knowledge with teammates

- You enjoy influencing others and always advocate for technical excellence while being open to change when needed

- Presence in the external tech community: you willingly share your expertise with others via speaking engagements, contributions to open source, blogs and more

Posted by Alfiya Khan
Location: Pune, Bengaluru (Bangalore)
Experience: 6 - 8 yrs
Salary: ₹15L - ₹25L / yr
Skills: Big Data, Data Warehouse (DWH), Data modeling, Apache Spark, Data integration (+10 more)
Company Profile
XpressBees – a logistics company started in 2015 – is among the fastest-growing companies in its sector. While we started off rather humbly in the space of ecommerce B2C logistics, the last five years have seen us steadily expand our presence. Our vision to evolve into a strong full-service logistics organization reflects itself in our new lines of business like 3PL, B2B Xpress and cross-border operations. Our strong domain expertise and constant focus on meaningful innovation have helped us rapidly evolve into one of India's most trusted logistics partners. We have progressively carved our way towards best-in-class technology platforms, an extensive network reach, and a seamless last-mile management system. While on this aggressive growth path, we seek to become the one-stop shop for end-to-end logistics solutions. Our big focus areas for the very near future include strengthening our presence as a service provider of choice and leveraging the power of technology to improve efficiencies for our clients.

Job Profile
As a Lead Data Engineer in the Data Platform Team at XpressBees, you will build the data platform and infrastructure to support high-quality and agile decision-making in our supply chain and logistics workflows. You will define the way we collect and operationalize data (structured/unstructured), and build production pipelines for our machine learning models and (RT, NRT, batch) reporting and dashboarding requirements. You will use your experience with modern cloud and data frameworks to build products (with storage and serving systems) that drive optimisation and resilience in the supply chain via data visibility, intelligent decision-making, insights, anomaly detection and prediction.

What You Will Do
• Design and develop data platform and data pipelines for reporting, dashboarding and
machine learning models. These pipelines would productionize machine learning models
and integrate with agent review tools.
• Meet data completeness, correctness and freshness requirements.
• Evaluate and identify the data store and data streaming technology choices.
• Lead the design of the logical model and implement the physical model to support
business needs. Produce logical and physical database designs across platforms (MPP,
MR, Hive/Pig) that are optimal for different use cases (structured/semi-structured).
Envision and implement the data modelling, physical design and performance
optimization techniques the problem requires.
• Support your colleagues by reviewing code and designs.
• Diagnose and solve issues in our existing data pipelines and envision and build their
successors.

Qualifications & Experience relevant for the role

• A bachelor's degree in Computer Science or related field with 6 to 9 years of technology
experience.
• Knowledge of Relational and NoSQL data stores, stream processing and micro-batching to
make technology & design choices.
• Strong experience in System Integration, Application Development, ETL, Data-Platform
projects. Talented across technologies used in the enterprise space.
• Software development experience using:
• Expertise in relational and dimensional modelling
• Exposure across all the SDLC process
• Experience in cloud architecture (AWS)
• Proven track record of keeping existing technical skills sharp and developing new ones, so that
you can make strong contributions to deep architecture discussions around systems and
applications in the cloud (AWS).

• Characteristics of a forward thinker and self-starter who flourishes with new challenges
and adapts quickly to new knowledge
• Ability to work with cross-functional teams of consulting professionals across multiple
projects.
• Knack for helping an organization understand application architectures and integration
approaches, architect advanced cloud-based solutions, and help launch the build-out
of those systems
• Passion for educating, training, designing, and building end-to-end systems.

Location: Pune
Experience: 5 - 8 yrs
Salary: ₹1L - ₹15L / yr
Skills: Informatica, Informatica PowerCenter, Spark, Hadoop, Big Data (+6 more)

Technical/Core skills

  1. Minimum 3 years of experience with Informatica Big Data Developer (BDM) in a Hadoop environment.
  2. Knowledge of Informatica PowerExchange (PWX).
  3. Minimum 3 years of experience with big data querying tools like Hive and Impala.
  4. Ability to design and develop complex mappings using Informatica Big Data Developer.
  5. Experience creating and managing Informatica PowerExchange and CDC real-time implementations.
  6. Strong Unix skills for writing shell scripts and troubleshooting existing scripts.
  7. Good knowledge of big data platforms and their frameworks.
  8. Good to have experience with Cloudera Data Platform (CDP).
  9. Experience building stream processing systems using Kafka and Spark (a minimal sketch follows this list).
  10. Excellent SQL knowledge.
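
As a rough illustration of item 9, here is a minimal Spark Structured Streaming sketch that consumes a Kafka topic and maintains running counts. The broker address, topic and checkpoint path are hypothetical, and the job assumes the spark-sql-kafka package is on the classpath.

```python
# Hedged sketch: consuming a Kafka topic with Spark Structured Streaming.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-stream").getOrCreate()

# Subscribe to the raw events topic; Kafka delivers key/value as binary.
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "events")
         .load()
)

# Decode the message payload and keep a running count per payload value.
counts = (
    events.select(F.col("value").cast("string").alias("payload"))
          .groupBy("payload")
          .count()
)

# Stream the running counts to the console; a real job would write to a sink
# such as Kafka, HDFS or a database, with checkpointing for fault tolerance.
query = (
    counts.writeStream.outputMode("complete")
          .format("console")
          .option("checkpointLocation", "/tmp/checkpoints/events")
          .start()
)
query.awaitTermination()
```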

 

Soft skills:

  1. Ability to work independently
  2. Strong analytical and problem-solving skills
  3. Eagerness to learn new technologies
  4. Regular interaction with vendors, partners and stakeholders

Posted by phani kalyan
Location: Pune
Experience: 9 - 14 yrs
Salary: ₹20L - ₹40L / yr
Skills: Spark, Hadoop, Big Data, Data engineering, PySpark (+3 more)
Job Id: SG0601

Hi,

Enterprise Minds is looking for a Data Architect for its Pune location.

Required skills:
Python, PySpark, Hadoop, Java, Scala

Tier 1 MNC (Agency job)
Location: Chennai, Pune, Bengaluru (Bangalore), Noida, Gurugram, Kochi (Cochin), Coimbatore, Hyderabad, Mumbai, Navi Mumbai
Experience: 3 - 12 yrs
Salary: ₹3L - ₹15L / yr
Skills: Spark, Hadoop, Big Data, Data engineering, PySpark (+1 more)
Greetings,
We are hiring software developers with good knowledge of Spark, Hadoop and Scala for a Tier 1 MNC.
Read more
icon
Jaipur, Noida, Gurugram, Delhi, Ghaziabad, Faridabad, Pune, Mumbai
icon
5 - 15 yrs
icon
₹7L - ₹25L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+4 more
Job Responsibilities:

• Project Planning and Management
o Take end-to-end ownership of multiple projects / project tracks
o Create and maintain project plans and other related documentation for project
objectives, scope, schedule and delivery milestones
o Lead and participate across all the phases of software engineering, right from
requirements gathering to go-live
o Lead internal team meetings on solution architecture, effort estimation, manpower
planning and resource (software/hardware/licensing) planning
o Manage RIDA (Risks, Impediments, Dependencies, Assumptions) for projects by
developing effective mitigation plans
• Team Management
o Act as the Scrum Master
o Conduct SCRUM ceremonies like Sprint Planning, Daily Standup, Sprint Retrospective
o Set clear objectives for the project and roles/responsibilities for each team member
o Train and mentor the team on their job responsibilities and SCRUM principles
o Make the team accountable for their tasks and help the team in achieving them
o Identify the requirements and come up with a plan for Skill Development for all team
members
• Communication
o Be the Single Point of Contact for the client in terms of day-to-day communication
o Periodically communicate project status to all the stakeholders (internal/external)
• Process Management and Improvement
o Create and document processes across all disciplines of software engineering
o Identify gaps and continuously improve processes within the team
o Encourage team members to contribute towards process improvement
o Develop a culture of quality and efficiency within the team

Must have:
• Minimum 8 years of experience (hands-on as well as leadership) in software / data engineering
across multiple job functions like Business Analysis, Development, Solutioning, QA, DevOps and
Project Management
• Hands-on as well as leadership experience in Big Data Engineering projects
• Experience developing or managing cloud solutions using Azure or another cloud provider
• Demonstrable knowledge of Hadoop, Hive, Spark, NoSQL DBs, SQL, Data Warehousing, ETL/ELT, and
DevOps tools
• Strong project management and communication skills
• Strong analytical and problem-solving skills
• Strong systems level critical thinking skills
• Strong collaboration and influencing skills

Good to have:
• Knowledge of PySpark, Azure Data Factory, Azure Data Lake Storage, Synapse Dedicated SQL
Pool, Databricks, PowerBI, Machine Learning, Cloud Infrastructure
• Background in BFSI with focus on core banking
• Willingness to travel

Work Environment
• Customer Office (Mumbai) / Remote Work

Education
• UG: B. Tech - Computers / B. E. – Computers / BCA / B.Sc. Computer Science

Posted by Shefali Mudliar
Location: Ahmedabad, Pune
Experience: 3 - 10 yrs
Salary: ₹15L - ₹30L / yr
Skills: NodeJS (Node.js), React.js, AngularJS (1.x), Amazon Web Services (AWS), Python (+7 more)
Location: Ahmedabad / Pune
Team: Technology

Company Profile
InFoCusp is a company working in the broad field of Computer Science, Software Engineering, and Artificial Intelligence (AI). It is headquartered in Ahmedabad, India, having a branch office in Pune.

We have worked on, and continue to work on, software engineering projects that build full-fledged products: UI/UX, responsive and blazing-fast front-ends, platform-specific applications (Android, iOS, web applications, desktop applications), very large scale infrastructure, and cutting-edge machine learning and deep learning (AI in general). These projects and products have wide-ranging applications in the finance, healthcare, e-commerce, legal, HR/recruiting, pharmaceutical, leisure sports and computer gaming domains. All of this builds on core concepts of computer science such as distributed systems, operating systems, computer networks, process parallelism, cloud computing, embedded systems and the Internet of Things.

PRIMARY RESPONSIBILITIES:
● Own the design, development, evaluation and deployment of highly-scalable software products involving front-end and back-end development.
● Maintain quality, responsiveness and stability of the system.
● Design and develop memory-efficient, compute-optimized solutions for the software.
● Design and administer automated testing tools and continuous integration tools.
● Produce comprehensive and usable software documentation.
● Evaluate and make decisions on the use of new tools and technologies.
● Mentor other development engineers.

KNOWLEDGE AND SKILL REQUIREMENTS:
● Mastery of one or more back-end programming languages (Python, Java, C++ etc.)
● Proficiency in front-end programming paradigms and libraries (for example : HTML, CSS and advanced JavaScript libraries and frameworks such as Angular, Knockout, React).
● Knowledge of automated and continuous integration testing tools (Jenkins, Team City, Circle CI etc.)
● Proven experience of platform-level development for large-scale systems.
● Deep understanding of various database systems (MySQL, Mongo, Cassandra).
● Ability to plan and design software system architecture.
● Development experience for mobile, browsers and desktop systems is desired.
● Knowledge and experience of using distributed systems (Hadoop, Spark) and cloud environments (Amazon EC2, Google Compute Engine, Microsoft Azure).
● Experience working in agile development. Knowledge and prior experience of tools like Jira is desired.
● Experience with version control systems (Git, Subversion or Mercurial).

EDUCATION:
- B.E./B.Tech/B.S./M.E./M.S./M.Tech/PhD candidates with significant prior experience in the aforementioned fields will be considered.

Location: Pune, Bengaluru (Bangalore), Coimbatore, Hyderabad, Gurugram
Experience: 3 - 10 yrs
Salary: ₹18L - ₹40L / yr
Skills: Apache Kafka, Spark, Hadoop, Apache Hive, Big Data (+5 more)

Data Engineers develop modern data architecture approaches to meet key business objectives and provide end-to-end data solutions. You might spend a few weeks with a new client on a deep technical review or a complete organizational review, helping them to understand the potential that data brings to solve their most pressing problems. On other projects, you might be acting as the architect, leading the design of technical solutions, or perhaps overseeing a program inception to build a new product. It could also be a software delivery project where you're equally happy coding and tech-leading the team to implement the solution.



You’ll spend time on the following:

  • You will partner with teammates to create complex data processing pipelines in order to solve our clients’ most ambitious challenges
  • You will collaborate with Data Scientists in order to design scalable implementations of their models
  • You will pair to write clean and iterative code based on TDD
  • Leverage various continuous delivery practices to deploy data pipelines
  • Advise and educate clients on how to use different distributed storage and computing technologies from the plethora of options available
  • Develop modern data architecture approaches to meet key business objectives and provide end-to-end data solutions
  • Create data models and speak to the tradeoffs of different modeling approaches

Here’s what we’re looking for:

 

  • You have a good understanding of data modelling and experience with data engineering tools and platforms such as Kafka, Spark, and Hadoop
  • You have built large-scale data pipelines and data-centric applications using any of the distributed storage platforms such as HDFS, S3, or NoSQL databases (HBase, Cassandra, etc.) and any of the distributed processing platforms like Hadoop, Spark, Hive, Oozie, and Airflow in a production setting (a minimal Airflow sketch follows this list)
  • Hands-on experience with Hadoop distributions such as MapR, Cloudera, Hortonworks and/or cloud-based offerings (AWS EMR, Azure HDInsight, Qubole, etc.)
  • You are comfortable taking data-driven approaches and applying data security strategy to solve business problems
  • Working with data excites you: you can build and operate data pipelines, and maintain data storage, all within distributed systems
  • Strong communication and client-facing skills with the ability to work in a consulting environment

Persistent System Ltd (Agency job via Milestone Hr Consultancy, posted by Haina khan)
Location: Bengaluru (Bangalore), Pune, Hyderabad
Experience: 4 - 6 yrs
Salary: ₹6L - ₹22L / yr
Skills: Apache HBase, Apache Hive, Apache Spark, Go Programming (Golang), Ruby on Rails (ROR) (+5 more)
Urgently required: Hadoop Developer for a reputed MNC.

Location: Bangalore/Pune/Hyderabad/Nagpur

4-5 years of overall experience in software development.
- Experience with Hadoop (Apache/Cloudera/Hortonworks) and/or other MapReduce platforms
- Experience with Hive, Pig, Sqoop, Flume and/or Mahout
- Experience with NoSQL stores – HBase, Cassandra, MongoDB
- Hands-on experience with Spark development; knowledge of Storm, Kafka, Scala
- Good knowledge of Java
- Good background in configuration management/ticketing systems like Maven/Ant/JIRA
- Knowledge of any data integration and/or EDW tools is a plus
- Good to have knowledge of Python/Perl/shell scripting

Please note: HBase, Hive and Spark are a must.

Location: Mumbai, Pune
Experience: 8 - 14 yrs
Salary: ₹10L - ₹15L / yr
Skills: PySpark, Data engineering, Big Data, Hadoop, Spark (+5 more)
Job Responsibilities
1. Understand the business problem and translate it into data services and
engineering outcomes.
2. Expertise in working on cloud application designs, cloud approval plans, and
systems required to manage cloud storage.
3. Explore new technologies and learn new techniques to solve business problems
creatively
4. Collaborate with different teams - engineering and business, to build better data
products
5. Regularly evaluate cloud applications, hardware, and software.
6. Respond to technical issues in a professional and timely manner.
7. Identify the top cloud architecture solutions to successfully meet the strategic
needs of the company.
8. Offer guidance in infrastructure movement techniques including bulk application
transfers into the cloud.
9. Manage team and handle delivery of 2-3 projects

Qualifications
Is education overrated? Yes, we believe so. But there is no other way to locate you,
so we look for at least a degree in computer science, computer engineering,
information technology, or a relevant field, along with:

1. 4-6 years of experience in data handling
2. Hands-on experience with at least one programming language (Python, Java, Scala)
3. Understanding of SQL is a must
4. Big data (Hadoop, Hive, YARN, Sqoop)
5. MPP platforms (Spark, Presto)
6. Data-pipeline & scheduler tools (Oozie, Airflow, NiFi)
7. Streaming engines (Kafka, Storm, Spark Streaming)
8. Any Relational database or DW experience
9. Any ETL tool experience
10. Hands-on experience in pipeline design, ETL and application development
11. Hands-on experience in cloud platforms like AWS, GCP etc.
12. Good communication skills and strong analytical skills
13. Experience in team handling and project delivery

Hiring for one of the MNCs for India location (Agency job via Natalie Consultants, posted by Rahul Kumar)
Location: Gurugram, Pune, Bengaluru (Bangalore), Delhi, Noida, Ghaziabad, Faridabad
Experience: 2 - 9 yrs
Salary: ₹8L - ₹20L / yr
Skills: Python, Hadoop, Big Data, Spark, Data engineering (+3 more)

Key Responsibilities (Data Developer – Python, Spark)

Experience: 2 to 9 yrs

Development of data platforms, integration frameworks, processes, and code.

Develop and deliver APIs in Python or Scala for Business Intelligence applications built using a range of web languages (a minimal REST sketch follows this section)

Develop comprehensive automated tests for features via end-to-end integration tests, performance tests, acceptance tests and unit tests.

Elaborate user stories in a collaborative agile environment (Scrum or Kanban)

Familiarity with cloud platforms like GCP, AWS or Azure.

Experience with large data volumes.

Familiarity with writing REST-based services.

Experience with distributed processing and systems

Experience with Hadoop / Spark toolsets

Experience with relational database management systems (RDBMS)

Experience with Data Flow development

Knowledge of Agile and associated development techniques including:
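
As a hedged illustration of the REST-based services mentioned above, here is a minimal Flask sketch serving a BI-style metric endpoint; the route name and metrics payload are hypothetical placeholders.

```python
# Minimal sketch of a REST-based service in Python (Flask).
from flask import Flask, jsonify

app = Flask(__name__)

# In a real BI service this would query a warehouse or cache.
METRICS = {"daily_active_users": 1024, "orders": 87}


@app.route("/api/metrics/<name>", methods=["GET"])
def get_metric(name):
    if name not in METRICS:
        return jsonify({"error": "unknown metric"}), 404
    return jsonify({"metric": name, "value": METRICS[name]})


if __name__ == "__main__":
    app.run(port=8080)
```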

Posted by Rutu Lakhani
Location: Mumbai, Bengaluru (Bangalore), Pune, Hyderabad, Noida
Experience: 2 - 5 yrs
Salary: ₹10L - ₹17L / yr
Skills: Data engineering, Hadoop, Big Data, Python, SQL (+2 more)
Position: Data Engineer
Employee strength: around 600 across India
Working days: 5
Working time: flexible
Salary: 30-40% hike on current CTC
Work from home for now.
 
Job description:
  • Design, implement and support an analytical data infrastructure, providing ad hoc access to large data sets and computing power.
  • Contribute to development of standards and the design and implementation of proactive processes to collect and report data and statistics on assigned systems.
  • Research opportunities for data acquisition and new uses for existing data.
  • Provide technical development expertise for designing, coding, testing, debugging, documenting and supporting data solutions.
  • Experience building data pipelines to connect analytics stacks, client data visualization tools and external data sources.
  • Experience with cloud and distributed systems principles
  • Experience with Azure/AWS/GCP cloud infrastructure
  • Experience with Databricks Clusters and Configuration
  • Experience with Python, R, sh/bash and JVM-based languages including Scala and Java.
  • Experience with Hadoop family languages including Pig and Hive.

Intergral Add Science (Agency job via VIPSA TALENT SOLUTIONS, posted by Prashma S R)
Location: Pune
Experience: 5 - 8 yrs
Salary: ₹9L - ₹25L / yr
Skills: Java, Hadoop, Apache Spark, Scala, Python (+3 more)
  • 6+ years of recent hands-on Java development
  • Developing data pipelines in AWS or Google Cloud
  • Java, Python, JavaScript programming languages
  • Great understanding of designing for performance, scalability, and reliability of data-intensive applications
  • Hadoop MapReduce, Spark, Pig; understanding of database fundamentals and advanced SQL knowledge (a minimal Hadoop Streaming sketch follows this list)
  • In-depth understanding of object-oriented programming concepts and design patterns
  • Ability to communicate clearly to technical and non-technical audiences, verbally and in writing
  • Understanding of the full software development life cycle, agile development and continuous integration
  • Experience in Agile methodologies including Scrum and Kanban
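
To illustrate the Hadoop MapReduce item, here is a classic word-count sketch written for Hadoop Streaming, where the mapper and reducer are plain Python scripts reading stdin; they would typically be launched with the hadoop-streaming JAR's -mapper/-reducer options. The two scripts live in separate files and are only shown together here.

```python
# mapper.py -- emits "word<TAB>1" per token read from stdin.
import sys

for line in sys.stdin:
    for word in line.split():
        print(word + "\t1")

# reducer.py (a separate script) -- sums counts per word; Hadoop Streaming
# delivers mapper output to the reducer sorted by key.
import sys

current, total = None, 0
for line in sys.stdin:
    word, count = line.rsplit("\t", 1)
    if word != current:
        if current is not None:
            print(current + "\t" + str(total))
        current, total = word, 0
    total += int(count)
if current is not None:
    print(current + "\t" + str(total))
```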

A large software MNC with over 20k employees in India (Agency job via RS Consultants, posted by Rahul Inamdar)
Location: Pune
Experience: 5 - 12 yrs
Salary: ₹15L - ₹22L / yr
Skills: Spark, Data engineering, Data Engineer, Apache Kafka, Apache Spark (+6 more)

As a Senior Engineer - Big Data Analytics, you will help drive the architectural design and development of Healthcare Platforms, Products, Services, and Tools to deliver the vision of the Company. You will contribute significantly to engineering, technology, and platform architecture, through innovation and collaboration with engineering teams and related business functions. This is a critical, highly visible role within the company that has the potential to drive significant business impact.


The scope of this role includes strong technical contribution to the development and delivery of the Big Data Analytics Cloud Platform, Products and Services in collaboration with execution and strategic partners.

 

Responsibilities:

  • Design & develop, operate, and drive scalable, resilient, and cloud native Big Data Analytics platform to address the business requirements
  • Help drive technology transformation to achieve business transformation, through the creation of the Healthcare Analytics Data Cloud that will help Change establish a leadership position in healthcare data & analytics in the industry
  • Help in successful implementation of Analytics as a Service 
  • Ensure Platforms and Services meet SLA requirements
  • Be a significant contributor and partner in the development and execution of the Enterprise Technology Strategy

 

Qualifications:

  • At least 5 years of experience in software development, including at least 2 years in software development for big data analytics and cloud
  • Experience working with High Performance Distributed Computing Systems in public and private cloud environments
  • Understands big data open-source ecosystems and their players. Contribution to open source is a strong plus
  • Experience with Spark, Spark Streaming, Hadoop, AWS/Azure, NoSQL Databases, In-Memory caches, distributed computing, Kafka, OLAP stores, etc.
  • A successful track record of creating working Big Data stacks that align with business needs and delivering enterprise-class products on time
  • Experience delivering and managing operating environments at scale
  • Experience with Big Data/Micro Service based Systems, SaaS, PaaS, and Architectures
  • Experience Developing Systems in Java, Python, Unix
  • BSCS, BSEE or equivalent, MSCS preferred

Consulting Leader (Agency job via Buaut Tech, posted by KAUSHANK nalin)
Location: Pune, Mumbai
Experience: 8 - 10 yrs
Salary: ₹8L - ₹16L / yr
Skills: Data integration, Talend, Hadoop, Integration, Java (+1 more)

 

Job Description

Role: Data/Integration Architect

Experience: 8-10 years

Notice Period: Under 30 days

Key Responsibilities: Designing and developing frameworks for batch and real-time jobs on Talend; leading the migration of these jobs from MuleSoft to Talend; maintaining best practices for the team; conducting code reviews and demos.

Core Skillsets:

Talend Data Fabric - application, API integration, data integration. Knowledge of Talend Management Cloud, and deployment and scheduling of jobs using TMC or Autosys.

Programming Languages - Python/Java
Databases: SQL Server, Other Databases, Hadoop

Should have worked in an Agile environment

Sound communication skills

Should be open to learning new technologies based on business needs on the job

Additional Skills:

Awareness of other data/integration platforms like Mulesoft, Camel

Awareness of Hadoop, Snowflake, S3

Location: Pune
Experience: 2 - 15 yrs
Salary: ₹10L - ₹30L / yr
Skills: Spark, Big Data, Apache Spark, Python, PySpark (+1 more)

We are looking for a skilled Senior/Lead Big Data Engineer to join our team. The role is part of the research and development team, where, with enthusiasm and knowledge, you will be our technical evangelist for the development of our inspection technology and products.

 

At Elop we are developing product lines for sustainable infrastructure management using our own patented ultrasound scanner technology, combined with other sources to build a holistic overview of concrete structures. Elop will provide you with world-class colleagues highly motivated to position the company as an international standard in structural health monitoring. With the right character you will be professionally challenged and developed.

This position requires travel to Norway.

 

Elop is a sister company of Simplifai, and the two are co-located in all geographic locations.

https://elop.no/

https://www.simplifai.ai/en/


Roles and Responsibilities

  • Define technical scope and objectives through research and participation in requirements gathering and definition of processes
  • Ingest and process data from data sources (the Elop Scanner) in raw format into the Big Data ecosystem
  • Real-time data feed processing using the Big Data ecosystem
  • Design, review, implement and optimize data transformation processes in Big Data ecosystem
  • Test and prototype new data integration/processing tools, techniques and methodologies
  • Conversion of MATLAB code into Python/C/C++.
  • Participate in overall test planning for the application integrations, functional areas and projects.
  • Work with cross functional teams in an Agile/Scrum environment to ensure a quality product is delivered.

Desired Candidate Profile

  • Bachelor's degree in Statistics, Computer Science or equivalent
  • 7+ years of experience in Big Data ecosystem, especially Spark, Kafka, Hadoop, HBase.
  • 7+ years of hands-on experience in Python/Scala is a must.
  • Experience architecting big data applications is needed.
  • Excellent analytical and problem solving skills
  • Strong understanding of data analytics and data visualization, and ability to help the development team with data visualization.
  • Experience with signal processing is a plus.
  • Experience working on client-server architecture is a plus.
  • Knowledge about database technologies like RDBMS, Graph DB, Document DB, Apache Cassandra, OpenTSDB
  • Good communication skills, written and oral, in English

We can Offer

  • An everyday life with exciting and challenging tasks, developing socially beneficial solutions
  • Being part of the company's research and development team, creating unique and innovative products
  • Colleagues with world-class expertise, in an organization that has ambitions and is highly motivated to position the company as an international player in maintenance support and monitoring of critical infrastructure
  • A good working environment with skilled and committed colleagues, in an organization with short decision paths
  • Professional challenges and development

Location: Pune
Experience: 1 - 5 yrs
Salary: ₹3L - ₹15L / yr
Skills: Machine Learning (ML), Artificial Intelligence (AI), Python, Data Structures, Algorithms (+17 more)
 
SD (ML and AI) job description:

  • Advanced degree in computer science, math, statistics or a related discipline (a master's degree is a must)
  • Extensive data modeling and data architecture skills
  • Programming experience in Python and R
  • Background in machine learning frameworks such as TensorFlow or Keras (a minimal sketch follows this list)
  • Knowledge of Hadoop or another distributed computing system
  • Experience working in an Agile environment
  • Advanced math skills (important): linear algebra, discrete math, differential equations (ODEs and numerical), theory of statistics, numerical analysis (numerical linear algebra and quadrature), abstract algebra, number theory, real analysis, complex analysis, intermediate analysis (point-set topology)
  • Strong written and verbal communication
  • Hands-on experience in NLP and NLG
  • Experience with advanced statistical techniques and concepts (GLM/regression, random forest, boosting, trees, text mining) and experience with their application
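
As a small, hedged illustration of the TensorFlow/Keras line above, here is a minimal binary classifier trained on synthetic data; the layer sizes, data and labels are illustrative only.

```python
# Hedged sketch of a small Keras binary classifier on synthetic data.
import numpy as np
from tensorflow import keras

# Synthetic tabular data: 1000 samples, 20 features, binary labels.
X = np.random.rand(1000, 20).astype("float32")
y = (X.sum(axis=1) > 10).astype("float32")

model = keras.Sequential([
    keras.layers.Dense(32, activation="relu", input_shape=(20,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),  # probability of the positive class
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [loss, accuracy] on the training data
```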
 
Location: Pune, Hyderabad
Experience: 7 - 12 yrs
Salary: ₹12L - ₹33L / yr
Skills: Big Data, Hadoop, Spark, Apache Spark, Apache Hive (+3 more)

Job description

Role: Lead Architecture (Spark, Scala, Big Data/Hadoop, Java)

Primary Location: India - Pune, Hyderabad

Experience: 7 - 12 years

Management Level: 7

Joining Time: Immediate joiners are preferred


  • Attend requirements gathering workshops, estimation discussions, design meetings and status review meetings
  • Experience of Solution Design and Solution Architecture for the data engineer model to build and implement Big Data Projects on-premises and on cloud.
  • Align architecture with business requirements and stabilizing the developed solution
  • Ability to build prototypes to demonstrate the technical feasibility of your vision
  • Professional experience facilitating and leading solution design, architecture and delivery planning activities for data intensive and high throughput platforms and applications
  • Ability to benchmark systems, analyse system bottlenecks and propose solutions to eliminate them
  • Able to help programmers and project managers in the design, planning and governance of implementing projects of any kind
  • Develop, construct, test and maintain architectures and run Sprints for development and rollout of functionalities
  • Data analysis and code development experience, ideally in Big Data Spark, Hive, Hadoop, Java, Python, and PySpark
  • Execute projects of various types, i.e. design, development, implementation and migration of functional analytics models/business logic across architecture approaches
  • Work closely with Business Analysts to understand the core business problems and deliver efficient IT solutions for the product
  • Deploy sophisticated analytics programs using any cloud application


Perks and Benefits we Provide!


  • Working with Highly Technical and Passionate, mission-driven people
  • Subsidized Meals & Snacks
  • Flexible Schedule
  • Approachable leadership
  • Access to various learning tools and programs
  • Pet Friendly
  • Certification Reimbursement Policy
  • Check out more about us on our website below!

www.datametica.com

Location: Pune, Hyderabad
Experience: 3 - 12 yrs
Salary: ₹5L - ₹25L / yr
Skills: Apache Kafka, Big Data, Hadoop, Apache Hive, Java (+1 more)

Summary
Our Kafka developer combines technical skills, communication skills and business knowledge, and should be able to work on multiple medium to large projects. The successful candidate will have excellent technical skills in Apache/Confluent Kafka and an Enterprise Data Warehouse (preferably GCP BigQuery or an equivalent cloud EDW), and will be able to take oral and written business requirements and develop efficient code to meet set deliverables.

 

Must Have Skills

  • Participate in the development, enhancement and maintenance of data applications both as an individual contributor and as a lead.
  • Leading in the identification, isolation, resolution and communication of problems within the production environment.
  • Lead developer applying technical skills in Apache/Confluent Kafka (preferred) or AWS Kinesis (optional), and a cloud Enterprise Data Warehouse such as Google BigQuery (preferred), AWS Redshift or Snowflake (optional)
  • Design and recommend the best approach for data movement from different sources to the cloud EDW using Apache/Confluent Kafka
  • Performs independent functional and technical analysis for major projects supporting several corporate initiatives.
  • Communicate and work with IT partners and the user community at various levels, from senior management to detailed developer to business SME, for project definition.
  • Works on multiple platforms and multiple projects concurrently.
  • Performs code and unit testing for complex scope modules, and projects
  • Provide expertise and hands on experience working on Kafka connect using schema registry in a very high volume environment (~900 Million messages)
  • Provide expertise in Kafka brokers, zookeepers, KSQL, KStream and Kafka Control center.
  • Provide expertise and hands on experience working on AvroConverters, JsonConverters, and StringConverters.
  • Provide expertise and hands on experience working on Kafka connectors such as MQ connectors, Elastic Search connectors, JDBC connectors, File stream connectors, JMS source connectors, Tasks, Workers, converters, and Transforms.
  • Provide expertise and hands on experience on custom connectors using the Kafka core concepts and API.
  • Working knowledge on Kafka Rest proxy.
  • Ensure optimum performance, high availability and stability of solutions.
  • Create topics, setup redundancy cluster, deploy monitoring tools, alerts and has good knowledge of best practices.
  • Create stubs for producers, consumers and consumer groups to help onboard applications from different languages/platforms (a minimal producer/consumer sketch follows this list). Leverage Hadoop ecosystem knowledge to design and develop capabilities to deliver our solutions using Spark, Scala, Python, Hive, Kafka and other tools in the Hadoop ecosystem.
  • Use automation tools like provisioning using Jenkins, Udeploy or relevant technologies
  • Ability to perform data related benchmarking, performance analysis and tuning.
  • Strong skills in In-memory applications, Database Design, Data Integration.
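
For illustration of the producer/consumer stubs mentioned above, here is a minimal sketch using the confluent-kafka Python client; the broker address, topic and group id are hypothetical placeholders.

```python
# Hedged sketch: a minimal Kafka producer and consumer (confluent-kafka client).
from confluent_kafka import Producer, Consumer

producer = Producer({"bootstrap.servers": "broker:9092"})
producer.produce("orders", key="order-1", value='{"amount": 42}')
producer.flush()  # block until the message is delivered

consumer = Consumer({
    "bootstrap.servers": "broker:9092",
    "group.id": "orders-readers",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["orders"])

msg = consumer.poll(timeout=5.0)  # fetch one message for illustration
if msg is not None and msg.error() is None:
    print(msg.key(), msg.value())
consumer.close()
```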

Location: Pune, Hyderabad
Experience: 7 - 12 yrs
Salary: ₹7L - ₹20L / yr
Skills: Apache Spark, Big Data, Spark, Scala, Hadoop (+3 more)
We at Datametica Solutions Private Limited are looking for a Big Data Spark Lead who has a passion for the cloud, with knowledge of different on-premises and cloud data implementations in the field of Big Data and Analytics, including but not limited to Teradata, Netezza, Exadata, Oracle, Cloudera, Hortonworks and the like.
Ideal candidates should have technical experience in migrations and the ability to help customers get value from Datametica's tools and accelerators.

Job Description
Experience: 7+ years
Location: Pune / Hyderabad
Skills:
  • Drive and participate in requirements gathering workshops, estimation discussions, design meetings and status review meetings
  • Participate and contribute to Solution Design and Solution Architecture for implementing Big Data projects on-premises and on cloud
  • Technical hands-on experience in the design, coding, development and management of large Hadoop implementations
  • Proficient in SQL, Hive, Pig, Spark SQL, shell scripting, Kafka, Flume and Sqoop on large Big Data and Data Warehousing projects, with a Java, Python or Scala based Hadoop programming background (a minimal Spark SQL sketch follows this list)
  • Proficient with various development methodologies like waterfall, agile/scrum and iterative
  • Good interpersonal skills and excellent communication skills for US and UK based clients
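
As a minimal illustration of the Spark SQL / Hive proficiency above, the sketch below runs an aggregate query against a Hive-managed table; the database and table names are hypothetical, and enableHiveSupport assumes a configured Hive metastore.

```python
# Hedged sketch: running a Hive query through Spark SQL.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("hive-report")
    .enableHiveSupport()  # assumes a configured Hive metastore
    .getOrCreate()
)

# Aggregate a Hive-managed table with plain SQL (names are hypothetical).
top_routes = spark.sql("""
    SELECT origin, destination, COUNT(*) AS shipments
    FROM logistics.shipments
    WHERE ship_date >= '2021-01-01'
    GROUP BY origin, destination
    ORDER BY shipments DESC
    LIMIT 10
""")
top_routes.show()
```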

About Us!
A global leader in Data Warehouse Migration and Modernization to the Cloud, we empower businesses by migrating their Data/Workload/ETL/Analytics to the Cloud, leveraging automation.

We have expertise in transforming legacy Teradata, Oracle, Hadoop, Netezza, Vertica and Greenplum, along with ETLs like Informatica, DataStage and Ab Initio, to cloud-based data warehousing, with further capabilities in data engineering, advanced analytics solutions, data management, data lakes and cloud optimization.

Datametica is a key partner of the major cloud service providers - Google, Microsoft, Amazon, Snowflake.


We have our own products!
Eagle – Data Warehouse Assessment & Migration Planning product
Raven – Automated Workload Conversion product
Pelican – Automated Data Validation product, which helps automate and accelerate data migration to the cloud.

Why join us!
Datametica is a place to innovate, bring new ideas to life and learn new things. We believe in building a culture of innovation, growth and belonging. Our people and their dedication over the years are the key factors in achieving our success.

Benefits we Provide!
Working with Highly Technical and Passionate, mission-driven people
Subsidized Meals & Snacks
Flexible Schedule
Approachable leadership
Access to various learning tools and programs
Pet Friendly
Certification Reimbursement Policy

Check out more about us on our website below!
www.datametica.com

Posted by Ankita Kale
Location: Pune
Experience: 1 - 5 yrs
Salary: ₹3L - ₹10L / yr
Skills: ETL, Hadoop, Apache Hive, Java, Spark (+2 more)
  • Core Java: advanced level competency, should have worked on projects with core Java development.

 

  • Linux shell : advanced level competency, work experience with Linux shell scripting, knowledge and experience to use important shell commands

 

  • RDBMS, SQL: advanced level competency; should have expertise in SQL query language syntax and be well versed with aggregations and joins.

 

  • Data structures and problem solving: should have the ability to use appropriate data structures.

 

  • AWS cloud: good to have experience with the AWS serverless toolset along with AWS infra.

 

  • Data Engineering ecosystem : Good to have experience and knowledge of data engineering, ETL, data warehouse (any toolset)

 

  • Hadoop, HDFS, YARN: should have an introduction to the internal workings of these toolsets.

 

  • Hive, MapReduce, Spark: good to have experience developing transformations using Hive queries, MapReduce job implementation and Spark job implementation. Spark implementation in Scala is a plus.

 

  • Airflow, Oozie, Sqoop, Zookeeper, Kafka: good to have knowledge of the purpose and working of these technology toolsets. Working experience is a plus.

 

Location: Pune
Experience: 2 - 6 yrs
Salary: ₹3L - ₹15L / yr
Skills: SQL, Linux/Unix, Shell Scripting, SQL server, PL/SQL (+3 more)

Datametica is looking for talented SQL engineers who would get training & the opportunity to work on Cloud and Big Data Analytics.

 

Mandatory Skills:

  • Strong in SQL development
  • Hands-on with at least one scripting language - preferably shell scripting
  • Development experience in Data warehouse projects

Opportunities:

  • Selected candidates will be provided training opportunities in one or more of the following: Google Cloud, AWS, DevOps tools, and Big Data technologies like Hadoop, Pig, Hive, Spark, Sqoop, Flume, and Kafka
  • Would get a chance to be part of enterprise-grade implementations of Cloud and Big Data systems
  • Will play an active role in setting up the Modern data platform based on Cloud and Big Data
  • Would be part of teams with rich experience in various aspects of distributed systems and computing

Posted by Mishita Juneja
Location: Pune
Experience: 8 - 10 yrs
Salary: ₹24L - ₹28L / yr
Skills: Data Science, Deep Learning, Artificial Intelligence (AI), Natural Language Processing (NLP), Python (+7 more)

Who we are?

Searce is a niche cloud consulting business with futuristic tech DNA. We do new-age tech to realise the “Next” in the “Now” for our clients. We specialise in Cloud Data Engineering, AI/Machine Learning and advanced cloud infra tech such as Anthos and Kubernetes. We are one of the top and fastest-growing partners for Google Cloud and AWS globally, with over 2,500 clients successfully moved to the cloud.


What do we believe?

  • Best practices are overrated
      • Implementing best practices can only make one average.
  • Honesty and Transparency
      • We believe in naked truth. We do what we tell and tell what we do.
  • Client Partnership
    • Client - Vendor relationship: No. We partner with clients instead. 
    • And our sales team comprises 100% of our clients.

How do we work ?

It’s all about being Happier first. And rest follows. Searce work culture is defined by HAPPIER.

  • Humble: Happy people don’t carry ego around. We listen to understand; not to respond.
  • Adaptable: We are comfortable with uncertainty. And we accept changes well. As that’s what life's about.
  • Positive: We are super positive about work & life in general. We love to forget and forgive. We don’t hold grudges. We don’t have time or adequate space for it.
  • Passionate: We are as passionate about the great street-food vendor across the street as about Tesla’s new model and so on. Passion is what drives us to work and makes us deliver the quality we deliver.
  • Innovative: Innovate or Die. We love to challenge the status quo.
  • Experimental: We encourage curiosity & making mistakes.
  • Responsible: Driven. Self motivated. Self governing teams. We own it.

So, what are we hunting for ?

As a Principal Data Scientist, you will help develop and enhance the algorithms and technology that power our unique system. This role covers a wide range of challenges, from developing new models using pre-existing components to enabling current systems to be more intelligent. You should be able to train models using existing data and use them in the most creative manner to deliver the smartest experience to customers. You will have to develop multiple AI applications that push the threshold of intelligence in machines.


Working on multiple projects at a time, you will have to maintain a consistently high level of attention to detail while finding creative ways to provide analytical insights. You will also have to thrive in a fast, high-energy environment and be able to balance multiple projects in real time. The thrill of the next big challenge should drive you, and when faced with an obstacle you should be able to find clever solutions. You must have the ability and interest to work on a range of different types of projects and business processes, and a background that demonstrates this ability.






Your bucket of Undertakings:

  1. Collaborate with team members to develop new models to be used for classification problems
  2. Work on software profiling, performance tuning and analysis, and other general software engineering tasks
  3. Use independent judgment to take existing data and build new models from it
  4. Collaborate, provide technical guidance, and come up with new ideas, rapid prototyping and converting prototypes into scalable products
  5. Conduct experiments to assess the accuracy and recall of language processing modules and to study the effect of such experiments
  6. Lead AI R&D initiatives to include prototypes and minimum viable products
  7. Work closely with multiple teams on projects like Visual quality inspection, ML Ops, Conversational banking, Demand forecasting, Anomaly detection etc. 
  8. Build reusable and scalable solutions for use across the customer base
  9. Prototype and demonstrate AI related products and solutions for customers
  10. Assist business development teams in the expansion and enhancement of a pipeline to support short- and long-range growth plans
  11. Identify new business opportunities and prioritize pursuits for AI 
  12. Participate in long range strategic planning activities designed to meet the Company’s objectives and to increase its enterprise value and revenue goals

Education & Experience:

  1. BE/B.Tech/Masters in a quantitative field such as CS, EE, Information Sciences, Statistics, Mathematics, Economics, Operations Research, or related, with a focus on applied and foundational Machine Learning, AI, NLP and/or data-driven statistical analysis & modelling
  2. 8+ years of experience, majorly in applying AI/ML/NLP/deep learning/data-driven statistical analysis & modelling solutions to multiple domains; experience in financial engineering and financial processes is a plus
  3. Strong, proven programming skills with machine learning, deep learning and Big Data frameworks including TensorFlow, Caffe, Spark and Hadoop, plus experience writing complex programs and implementing custom algorithms in these and other environments
  4. Experience beyond using open source tools as-is, and writing custom code on top of, or in addition to, existing open source frameworks
  5. Proven capability in demonstrating successful advanced technology solutions (either prototypes, POCs, well-cited research publications, and/or products) using ML/AI/NLP/data science in one or more domains
  6. Research and implement novel machine learning and statistical approaches
  7. Experience in data management, data analytics middleware, platforms and infrastructure, cloud and fog computing is a plus
  8. Excellent communication skills (oral and written) to explain complex algorithms, solutions to stakeholders across multiple disciplines, and ability to work in a diverse team

Accomplishment Set: 


  1. Extensive experience with Hadoop and machine learning algorithms
  2. Exposure to Deep Learning, Neural Networks, or related fields and a strong interest and desire to pursue them
  3. Experience in Natural Language Processing, Computer Vision, Machine Learning or Machine Intelligence (Artificial Intelligence)
  4. Passion for solving NLP problems (a minimal text classification sketch follows this list)
  5. Experience with specialized tools and projects for working with natural language processing
  6. Knowledge of machine learning frameworks like TensorFlow and PyTorch
  7. Experience with software version control systems like GitHub
  8. Fast learner, able to work independently as well as in a team environment, with good written and verbal communication skills
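
As a hedged illustration of the NLP items above, here is a minimal text-classification baseline using scikit-learn's TF-IDF vectorizer and logistic regression; the toy documents and labels are invented for the example.

```python
# Hedged sketch of a text classification baseline with scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = ["great product, fast delivery", "terrible support, never again",
        "smooth experience overall", "broken on arrival, very poor"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

# TF-IDF features (unigrams + bigrams) feeding a logistic regression classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(docs, labels)

print(clf.predict(["delivery was fast and smooth"]))  # expected: [1]
```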

Location: Pune, Bengaluru (Bangalore)
Experience: 5 - 8 yrs
Salary: ₹15L - ₹22L / yr
Skills: Data Science, Artificial Intelligence (AI), Deep Learning, Machine Learning (ML), Hadoop (+6 more)

Senior Data Scientist - Applied AI


Who we are?

Searce is a niche cloud consulting business with futuristic tech DNA. We do new-age tech to realise the “Next” in the “Now” for our clients. We specialise in Cloud Data Engineering, AI/Machine Learning and advanced cloud infra tech such as Anthos and Kubernetes. We are one of the top and fastest-growing partners for Google Cloud and AWS globally, with over 2,500 clients successfully moved to the cloud.


What do we believe?

  • Best practices are overrated
      • Implementing best practices can only make one average.
  • Honesty and Transparency
      • We believe in naked truth. We do what we tell and tell what we do.
  • Client Partnership
    • Client - Vendor relationship: No. We partner with clients instead. 
    • And our sales team comprises 100% of our clients.

How do we work ?

It’s all about being Happier first. And rest follows. Searce work culture is defined by HAPPIER.

  • Humble: Happy people don’t carry ego around. We listen to understand; not to respond.
  • Adaptable: We are comfortable with uncertainty. And we accept changes well. As that’s what life's about.
  • Positive: We are super positive about work & life in general. We love to forget and forgive. We don’t hold grudges. We don’t have time or adequate space for it.
  • Passionate: We are as passionate about the great street-food vendor across the street as about Tesla’s new model and so on. Passion is what drives us to work and makes us deliver the quality we deliver.
  • Innovative: Innovate or Die. We love to challenge the status quo.
  • Experimental: We encourage curiosity & making mistakes.
  • Responsible: Driven. Self motivated. Self governing teams. We own it.

So, what are we hunting for ?

As a Senior Data Scientist, you will help develop and enhance the algorithms and technology that power our unique system. This role covers a wide range of challenges, from developing new models using pre-existing components to enabling current systems to be more intelligent. You should be able to train models using existing data and use them in the most creative manner to deliver the smartest experience to customers. You will have to develop multiple AI applications that push the threshold of intelligence in machines.


Working on multiple projects at a time, you will have to maintain a consistently high level of attention to detail while finding creative ways to provide analytical insights. You will also have to thrive in a fast, high-energy environment and be able to balance multiple projects in real time. The thrill of the next big challenge should drive you, and when faced with an obstacle you should be able to find clever solutions. You must have the ability and interest to work on a range of different types of projects and business processes, and a background that demonstrates this ability.






Your bucket of Undertakings:

  1. Collaborate with team members to develop new models to be used for classification problems
  2. Work on software profiling, performance tuning and analysis, and other general software engineering tasks
  3. Use independent judgment to take existing data and build new models from it
  4. Collaborate, provide technical guidance, and come up with new ideas, rapid prototyping and converting prototypes into scalable products
  5. Conduct experiments to assess the accuracy and recall of language processing modules and to study the effect of such experiments
  6. Lead AI R&D initiatives to include prototypes and minimum viable products
  7. Work closely with multiple teams on projects like Visual quality inspection, ML Ops, Conversational banking, Demand forecasting, Anomaly detection etc. 
  8. Build reusable and scalable solutions for use across the customer base
  9. Prototype and demonstrate AI related products and solutions for customers
  10. Assist business development teams in the expansion and enhancement of a pipeline to support short- and long-range growth plans
  11. Identify new business opportunities and prioritize pursuits for AI 
  12. Participate in long range strategic planning activities designed to meet the Company’s objectives and to increase its enterprise value and revenue goals

Education & Experience:

  1. BE/B.Tech/Masters in a quantitative field such as CS, EE, Information Sciences, Statistics, Mathematics, Economics, Operations Research, or related, with a focus on applied and foundational Machine Learning, AI, NLP and/or data-driven statistical analysis & modelling
  2. 5+ years of experience, majorly in applying AI/ML/NLP/deep learning/data-driven statistical analysis & modelling solutions to multiple domains; experience in financial engineering and financial processes is a plus
  3. Strong, proven programming skills with machine learning, deep learning and Big Data frameworks including TensorFlow, Caffe, Spark and Hadoop, plus experience writing complex programs and implementing custom algorithms in these and other environments
  4. Experience beyond using open source tools as-is, and writing custom code on top of, or in addition to, existing open source frameworks
  5. Proven capability in demonstrating successful advanced technology solutions (either prototypes, POCs, well-cited research publications, and/or products) using ML/AI/NLP/data science in one or more domains
  6. Research and implement novel machine learning and statistical approaches
  7. Experience in data management, data analytics middleware, platforms and infrastructure, cloud and fog computing is a plus
  8. Excellent communication skills (oral and written) to explain complex algorithms, solutions to stakeholders across multiple disciplines, and ability to work in a diverse team

Accomplishment Set: 


  1. Extensive experience with Hadoop and machine learning algorithms
  2. Exposure to Deep Learning, Neural Networks, or related fields and a strong interest and desire to pursue them
  3. Experience in Natural Language Processing, Computer Vision, Machine Learning or Machine Intelligence (Artificial Intelligence)
  4. Passion for solving NLP problems
  5. Experience with specialized tools and projects for working with natural language processing
  6. Knowledge of machine learning frameworks like TensorFlow and PyTorch
  7. Experience with software version control systems like GitHub
  8. Fast learner, able to work independently as well as in a team environment, with good written and verbal communication skills

Posted by Mishita Juneja
Location: Pune, Bengaluru (Bangalore)
Experience: 3 - 5 yrs
Salary: ₹10L - ₹15L / yr
Skills: Data Science, R Programming, Python, Deep Learning, Hadoop (+7 more)

Data Scientist - Applied AI


Who we are?

Searce is a niche cloud consulting business with futuristic tech DNA. We do new-age tech to realise the “Next” in the “Now” for our clients. We specialise in Cloud Data Engineering, AI/Machine Learning and advanced cloud infra tech such as Anthos and Kubernetes. We are one of the top and fastest-growing partners for Google Cloud and AWS globally, with over 2,500 clients successfully moved to the cloud.


What do we believe?

  • Best practices are overrated
      • Implementing best practices can only make one average.
  • Honesty and Transparency
      • We believe in naked truth. We do what we tell and tell what we do.
  • Client Partnership
    • Client - Vendor relationship: No. We partner with clients instead. 
    • And our sales team comprises 100% of our clients.

How do we work ?

It’s all about being Happier first. And rest follows. Searce work culture is defined by HAPPIER.

  • Humble: Happy people don’t carry ego around. We listen to understand; not to respond.
  • Adaptable: We are comfortable with uncertainty. And we accept changes well. As that’s what life's about.
  • Positive: We are super positive about work & life in general. We love to forget and forgive. We don’t hold grudges. We don’t have time or adequate space for it.
  • Passionate: We are as passionate about the great street-food vendor across the street as about Tesla’s new model and so on. Passion is what drives us to work and makes us deliver the quality we deliver.
  • Innovative: Innovate or Die. We love to challenge the status quo.
  • Experimental: We encourage curiosity & making mistakes.
  • Responsible: Driven. Self motivated. Self governing teams. We own it.

So, what are we hunting for ?

As a Data Scientist, you will help develop and enhance the algorithms and technology that power our unique system. This role covers a wide range of challenges, from developing new models using pre-existing components to enabling current systems to be more intelligent. You should be able to train models using existing data and use them in the most creative manner to deliver the smartest experience to customers. You will have to develop multiple AI applications that push the threshold of intelligence in machines.


Working on multiple projects at a time, you will have to maintain a consistently high level of attention to detail while finding creative ways to provide analytical insights. You will also have to thrive in a fast, high-energy environment and should be able to balance multiple projects in real-time. The thrill of the next big challenge should drive you, and when faced with an obstacle, you should be able to find clever solutions.You must have the ability and interest to work on a range of different types of projects and business processes, and must have a background that demonstrates this ability.






Your bucket of Undertakings :

  1. Collaborate with team members to develop new models for classification problems (a small sketch follows this list)
  2. Work on software profiling, performance tuning and analysis, and other general software engineering tasks
  3. Use independent judgment to take existing data and build new models from it
  4. Collaborate, provide technical guidance and come up with new ideas, rapid prototypes, and convert prototypes into scalable products
  5. Conduct experiments to assess the accuracy and recall of language processing modules and to study the effect of such experiments
  6. Lead AI R&D initiatives, including prototypes and minimum viable products
  7. Work closely with multiple teams on projects like visual quality inspection, MLOps, conversational banking, demand forecasting, anomaly detection, etc.
  8. Build reusable and scalable solutions for use across the customer base
  9. Prototype and demonstrate AI-related products and solutions for customers
  10. Assist business development teams in the expansion and enhancement of a pipeline to support short- and long-range growth plans
  11. Identify new business opportunities and prioritize pursuits for AI
  12. Participate in long-range strategic planning activities designed to meet the Company’s objectives and to increase its enterprise value and revenue goals
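
To make the modelling item above concrete, here is a minimal, hypothetical sketch of a classification baseline using Spark MLlib in Scala (consistent with the Spark/Hadoop stack named in the requirements below); the input path and the column names f1, f2 and label are illustrative assumptions, not details from this posting:

```scala
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.sql.SparkSession

object ClassifierSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("classifier-sketch").getOrCreate()

    // Hypothetical training data: two numeric features and a binary label.
    val training = spark.read.parquet("/data/training")

    // Pack the raw feature columns into the single vector column MLlib expects.
    val assembled = new VectorAssembler()
      .setInputCols(Array("f1", "f2"))
      .setOutputCol("features")
      .transform(training)

    // Plain logistic regression as a first classification baseline.
    val model = new LogisticRegression()
      .setLabelCol("label")
      .setFeaturesCol("features")
      .fit(assembled)

    model.transform(assembled).select("label", "prediction").show(5)
    spark.stop()
  }
}
```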

Education & Experience:

  1. BE/B.Tech/Masters in a quantitative field such as CS, EE, Information Sciences, Statistics, Mathematics, Economics, Operations Research or related, with a focus on applied and foundational Machine Learning, AI, NLP and/or data-driven statistical analysis & modelling
  2. 3+ years of experience, primarily in applying AI/ML/NLP/deep learning/data-driven statistical analysis & modelling solutions to multiple domains; financial engineering and financial processes a plus
  3. Strong, proven programming skills with machine learning, deep learning and big data frameworks including TensorFlow, Caffe, Spark and Hadoop, including experience writing complex programs and implementing custom algorithms in these and other environments
  4. Experience beyond using open source tools as-is: writing custom code on top of, or in addition to, existing open source frameworks
  5. Proven capability in demonstrating successful advanced technology solutions (prototypes, POCs, well-cited research publications and/or products) using ML/AI/NLP/data science in one or more domains
  6. Ability to research and implement novel machine learning and statistical approaches
  7. Experience in data management, data analytics middleware, platforms and infrastructure, cloud and fog computing is a plus
  8. Excellent communication skills (oral and written) to explain complex algorithms and solutions to stakeholders across multiple disciplines, and ability to work in a diverse team

Accomplishment Set: 


  1. Extensive experience with Hadoop and Machine learning algorithms
  2. Exposure to Deep Learning, Neural Networks, or related fields and a strong interest and desire to pursue them
  3. Experience in Natural Language Processing, Computer Vision, Machine Learning or Machine Intelligence (Artificial Intelligence)
  4. Passion for solving NLP problems
  5. Experience with specialized tools and projects for working with natural language processing
  6. Knowledge of machine learning frameworks like TensorFlow and PyTorch
  7. Experience with software version control systems like GitHub
  8. Fast learner, able to work independently as well as in a team environment, with good written and verbal communication skills
Read more

Fast paced Startup

Agency job
via Kavayah People Consulting by Kavita Singh
icon
Pune
icon
3 - 6 yrs
icon
₹15L - ₹22L / yr
Big Data
Data engineering
Hadoop
Spark
Apache Hive
+6 more

Years of Exp: 3-6+ Years
Skills: Scala, Python, Hive, Airflow, Spark

Languages: Java, Python, Shell Scripting

GCP: BigTable, DataProc,  BigQuery, GCS, Pubsub

OR
AWS: Athena, Glue, EMR, S3, Redshift

MongoDB, MySQL, Kafka

Platforms: Cloudera / Hortonworks
AdTech domain experience is a plus.
Job Type - Full Time 

Read more
DP
Posted by Mishita Juneja
icon
Pune
icon
10 - 14 yrs
icon
₹30L - ₹35L / yr
Artificial Intelligence (AI)
Natural Language Processing (NLP)
Machine Learning (ML)
Deep Learning
Research and development
+10 more

Director - Applied AI


Who we are?

Searce is a niche Cloud Consulting business with a futuristic tech DNA. We do new-age tech to realise the “Next” in the “Now” for our clients. We specialise in Cloud Data Engineering, AI/Machine Learning and advanced cloud infra tech such as Anthos and Kubernetes. We are one of the top and fastest-growing partners for Google Cloud and AWS globally, with over 2,500 clients successfully moved to the cloud.


What do we believe?

  • Best practices are overrated
      • Implementing best practices can only make one average.
  • Honesty and Transparency
      • We believe in naked truth. We do what we tell and tell what we do.
  • Client Partnership
    • Client - Vendor relationship: No. We partner with clients instead. 
    • And our sales team comprises 100% of our clients.

How do we work?

It’s all about being Happier first. And rest follows. Searce work culture is defined by HAPPIER.

  • Humble: Happy people don’t carry ego around. We listen to understand; not to respond.
  • Adaptable: We are comfortable with uncertainty. And we accept changes well. As that’s what life's about.
  • Positive: We are super positive about work & life in general. We love to forget and forgive. We don’t hold grudges. We don’t have time or adequate space for it.
  • Passionate: We are as passionate about the great street-food vendor across the street as about Tesla’s new model and so on. Passion is what drives us to work and makes us deliver the quality we deliver.
  • Innovative: Innovate or Die. We love to challenge the status quo.
  • Experimental: We encourage curiosity & making mistakes.
  • Responsible: Driven. Self motivated. Self governing teams. We own it.

So, what are we hunting for?

  1. To devise strategy through the delivery of sustainable intelligent solutions, strategic customer engagements, and research and development
  2. To enable and lead our data and analytics team and develop machine learning and AI paths across strategic programs, solution implementation, and customer relationships
  3. To manage existing customers and realize new opportunities and capabilities for growth
  4. To collaborate with different stakeholders for delivering automated, high availability and secure solutions
  5. To develop talent and skills to create a high performance team that delivers superior products
  6. To communicate effectively across the organization to ensure that the team is completely aligned to business objectives
  7. To build strong interpersonal relationships with peers and other key stakeholders that will contribute to your team's success

Your bucket of Undertakings:

  1. Develop an AI roadmap aligned to client needs and vision
  2. Develop a go-to-market strategy for AI solutions for customers
  3. Build a diverse cross-functional team to identify and prioritize key areas of the business across AI, NLP and other cognitive solutions that will drive significant business benefit
  4. Lead AI R&D initiatives, including prototypes and minimum viable products
  5. Work closely with multiple teams on projects like visual quality inspection, MLOps, conversational banking, demand forecasting, anomaly detection, etc.
  6. Build reusable and scalable solutions for use across the customer base
  7. Create AI white papers and enable strategic partnerships with industry leaders
  8. Align, mentor, and manage team(s) around strategic initiatives
  9. Prototype and demonstrate AI-related products and solutions for customers
  10. Establish processes, operations, measurement, and controls for end-to-end life-cycle management of the digital workforce (intelligent systems)
  11. Lead AI tech challenges and proposals with team members
  12. Assist business development teams in the expansion and enhancement of a pipeline to support short- and long-range growth plans
  13. Identify new business opportunities and prioritize pursuits for AI

Education & Experience:

  1. Advanced or basic degree (PhD with a few years of experience, or MS/BS with many years of experience) in a quantitative field such as CS, EE, Information Sciences, Statistics, Mathematics, Economics, Operations Research or related, with a focus on applied and foundational Machine Learning, AI, NLP and/or data-driven statistical analysis & modelling
  2. 10+ years of experience, primarily in applying AI/ML/NLP/deep learning/data-driven statistical analysis & modelling solutions to multiple domains; financial engineering and financial processes a plus
  3. Strong, proven programming skills with machine learning, deep learning and big data frameworks including TensorFlow, Caffe, Spark and Hadoop, including experience writing complex programs and implementing custom algorithms in these and other environments
  4. Experience beyond using open source tools as-is: writing custom code on top of, or in addition to, existing open source frameworks
  5. Proven capability in demonstrating successful advanced technology solutions (prototypes, POCs, well-cited research publications and/or products) using ML/AI/NLP/data science in one or more domains
  6. Experience in data management, data analytics middleware, platforms and infrastructure, cloud and fog computing is a plus
  7. Excellent communication skills (oral and written) to explain complex algorithms and solutions to stakeholders across multiple disciplines, and ability to work in a diverse team
Read more
icon
Bengaluru (Bangalore), Hyderabad, Pune
icon
9 - 16 yrs
icon
₹7L - ₹32L / yr
Big Data
Scala
Spark
Hadoop
Python
+1 more
Greetings!

We have an urgent requirement for the post of Big Data Architect at a reputed MNC.

Location: Pune/Nagpur, Goa, Hyderabad/Bangalore

Job Requirements:

  • 9+ years of total experience, preferably in the big data space.
  • Experience creating Spark applications using Scala to process data (see the sketch after this list).
  • Experience in scheduling and troubleshooting/debugging Spark jobs in steps.
  • Experience in Spark job performance tuning and optimization.
  • Experience processing data using Kafka/Python.
  • Experience and understanding in configuring Kafka topics to optimize performance.
  • Proficiency in writing SQL queries to process data in a data warehouse.
  • Hands-on experience working with Linux commands to troubleshoot/debug issues and creating shell scripts to automate tasks.
  • Experience with AWS services like EMR.
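
As a rough illustration of the Spark-plus-SQL items above, a minimal Scala batch job sketch; the bucket, table and column names are hypothetical placeholders, not details from this posting:

```scala
import org.apache.spark.sql.SparkSession

object DailyOrdersJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("daily-orders").getOrCreate()

    // Hypothetical raw input landed on S3 (e.g. by an upstream ingestion job).
    val orders = spark.read.parquet("s3://example-bucket/raw/orders/")
    orders.createOrReplaceTempView("orders")

    // Warehouse-style aggregation expressed as plain SQL.
    val daily = spark.sql(
      """SELECT order_date, COUNT(*) AS order_count, SUM(amount) AS revenue
        |FROM orders
        |GROUP BY order_date""".stripMargin)

    daily.write.mode("overwrite").parquet("s3://example-bucket/curated/daily_orders/")
    spark.stop()
  }
}
```

On EMR, a job like this would typically be packaged as a JAR and submitted with spark-submit as a cluster step.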
Read more
Agency job
via Nu-Pie by Sanjay Biswakarma
icon
Pune, Gandhinagar, Hyderabad
icon
4 - 5 yrs
icon
₹6L - ₹18L / yr
Java
JIRA
Hibernate (Java)
Spring MVC
Mockito
+13 more
Work from home is applicable.
Candidate should have at least 4 years of experience and be well versed in full stack development.
Location: Bangalore and Pune.
Relevant skills: Java, Angular, Spring Boot, React.
Read more
icon
Pune
icon
2.5 - 6 yrs
icon
₹1L - ₹8L / yr
Big Data
Hadoop
Apache Hive
Spark
Data engineering
+3 more
Job Title/Designation: Big Data Engineers - Hadoop, Pig, Hive, Spark
Employment Type: Full Time, Permanent

Job Description:
 
Work Location - Pune
Work Experience - 2.5 to 6 Years
 
Note - Candidates with short notice periods will be given preference.
 
Mandatory Skills:
  • Working knowledge and hands-on experience of Big Data / Hadoop tools and technologies.
  • Experience working with Pig, Hive, Flume, Sqoop, Kafka, etc.
  • Database development experience with a solid understanding of core database concepts, relational database design, ODS & DWH.
  • Expert-level knowledge of SQL and scripting, preferably UNIX shell scripting and Perl scripting.
  • Working knowledge of data integration solutions; well versed with any ETL tool (Informatica/DataStage/Ab Initio/Pentaho, etc.).
  • Strong problem solving and logical reasoning ability.
  • Excellent understanding of all aspects of the Software Development Lifecycle.
  • Excellent written and verbal communication skills.
  • Experience in Java will be an added advantage
  • Knowledge of object-oriented programming concepts
  • Exposure to ISMS policies and procedures.
Read more
DP
Posted by Rashmi Poovaiah
icon
Bengaluru (Bangalore), Chennai, Pune
icon
4 - 10 yrs
icon
₹8L - ₹15L / yr
Big Data
Hadoop
Spark
Apache Kafka
HiveQL
+2 more

Role Summary/Purpose:

We are looking for Developers/Senior Developers to be a part of building an advanced analytical platform leveraging Big Data technologies and transforming the legacy systems. This is an exciting, fast-paced, constantly changing and challenging work environment, and the role will play an important part in resolving and influencing high-level decisions.

 

Requirements:

  • The candidate must be a self-starter who can work under general guidelines in a fast-paced environment.
  • Overall minimum of 4 to 8 years of software development experience, with 2 years of Data Warehousing domain knowledge
  • Must have 3 years of hands-on working knowledge of Big Data technologies such as Hadoop, Hive, HBase, Spark, Kafka, Spark Streaming, Scala, etc. (see the sketch after this list)
  • Excellent knowledge of SQL & Linux shell scripting
  • Bachelor’s/Master’s/Engineering degree from a well-reputed university.
  • Strong communication, interpersonal, learning and organizing skills, matched with the ability to manage stress, time and people effectively
  • Proven experience in coordinating many dependencies and multiple demanding stakeholders in a complex, large-scale deployment environment
  • Ability to manage a diverse and challenging stakeholder community
  • Diverse knowledge and experience of working on Agile deliveries and Scrum teams.
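
As a small illustration of the Spark-plus-Kafka combination above, a Structured Streaming sketch in Scala; the broker address, topic name and paths are placeholder assumptions, and the job presumes the spark-sql-kafka connector is on the classpath:

```scala
import org.apache.spark.sql.SparkSession

object EventStreamSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("event-stream").getOrCreate()

    // Read the hypothetical "events" topic; Kafka delivers key/value as bytes.
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "events")
      .load()
      .selectExpr("CAST(value AS STRING) AS payload")

    // Land the raw payloads as Parquet; the checkpoint makes restarts safe.
    val query = events.writeStream
      .format("parquet")
      .option("path", "/data/streams/events")
      .option("checkpointLocation", "/data/checkpoints/events")
      .start()

    query.awaitTermination()
  }
}
```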

 

Responsibilities

  • Should work as a senior developer/individual contributor depending on the situation
  • Should be part of SCRUM discussions and take requirements
  • Adhere to the SCRUM timeline and deliver accordingly
  • Participate in a team environment for design, development and implementation
  • Should take up L3 activities on a need basis
  • Prepare Unit/SIT/UAT test cases and log the results
  • Coordinate SIT and UAT testing. Take feedback and provide necessary remediation/recommendations in time.
  • Quality delivery and automation should be a top priority
  • Coordinate change and deployment in time
  • Should create healthy harmony within the team
  • Owns interaction points with members of the core team (e.g. BA team, testing and business teams) and any other relevant stakeholders
Read more
icon
Remote, Pune
icon
3 - 8 yrs
icon
₹4L - ₹15L / yr
Big Data
Hadoop
Java
Spark
Hibernate (Java)
+5 more
Job Title/Designation:
Mid / Senior Big Data Engineer
Job Description:
Role: Big Data Engineer
Number of open positions: 5
Location: Pune

At Clairvoyant, we're building a thriving big data practice to help enterprises enable and accelerate the adoption of big data and cloud services. In the big data space, we lead and serve as innovators, troubleshooters, and enablers. The big data practice at Clairvoyant focuses on solving our customers' business problems by delivering products designed with best-in-class engineering practices and a commitment to keep the total cost of ownership to a minimum.
Must Have:
  • 4-10 years of experience in software development.
  • At least 2 years of relevant work experience on large-scale data applications.
  • Strong coding experience in Java is mandatory.
  • Good aptitude, strong problem-solving abilities and analytical skills, and the ability to take ownership as appropriate.
  • Should be able to do coding, debugging, performance tuning and deployment of apps to production.
  • Should have good working experience with (see the sketch after this list):
    • Hadoop ecosystem (HDFS, Hive, Yarn, file formats like Avro/Parquet)
    • Kafka
    • J2EE frameworks (Spring/Hibernate/REST)
    • Spark Streaming or any other streaming technology
  • Ability to work on sprint stories to completion along with unit test case coverage.
  • Experience working in Agile methodology.
  • Excellent communication and coordination skills.
  • Knowledgeable in (and preferably hands-on with) UNIX environments and different continuous integration tools.
  • Must be able to integrate quickly into the team and work independently towards team goals.
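
A minimal sketch of the Hadoop-ecosystem item above: reading a Hive table through the metastore and rewriting it as columnar Parquet. The database, table and path names are invented for illustration:

```scala
import org.apache.spark.sql.SparkSession

object HiveToParquet {
  def main(args: Array[String]): Unit = {
    // enableHiveSupport wires the session to the Hive metastore.
    val spark = SparkSession.builder
      .appName("hive-to-parquet")
      .enableHiveSupport()
      .getOrCreate()

    // Hypothetical partitioned Hive table.
    val clicks = spark.sql(
      "SELECT user_id, url, ts FROM analytics.clicks WHERE dt = '2024-01-01'")

    // Columnar Parquet output, one partition folder per day.
    clicks.write.mode("overwrite").parquet("hdfs:///curated/clicks/dt=2024-01-01")
    spark.stop()
  }
}
```
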
Role & Responsibilities:
  • Take complete responsibility for the execution of sprint stories
  • Be accountable for the delivery of tasks in the defined timelines with good quality.
  • Follow the processes for project execution and delivery.
  • Follow agile methodology
  • Work closely with the team lead and contribute to the smooth delivery of the project.
  • Understand/define the architecture and discuss its pros and cons with the team
  • Take part in brainstorming sessions and suggest improvements in the architecture/design.
  • Work with other team leads to get the architecture/design reviewed.
  • Work with the clients and counterparts (in the US) on the project.
  • Keep all the stakeholders updated about the project/task status/risks/issues, if there are any.
Education: BE/B.Tech from a reputed institute.
Experience: 4 to 9 years
Keywords: java, scala, spark, software development, hadoop, hive
Locations: Pune
Read more
icon
Pune
icon
1 - 4 yrs
icon
₹4L - ₹7L / yr
Java
Elastic Search
Solr
Hadoop
Natural Language Processing (NLP)
  • You will be responsible for the design, development and testing of products
  • Contributing in all phases of the development lifecycle
  • Writing well-designed, testable, efficient code
  • Ensuring designs are in compliance with specifications
  • Preparing and producing releases of software components
  • Supporting continuous improvement by investigating alternatives and technologies and presenting these for architectural review
  • Some of the technologies you will be working on: Core Java, Solr, Hadoop, Spark, Elasticsearch, clustering, text mining, NLP, Mahout, Lucene, etc.
Read more
DP
Posted by Ankit Goenka
icon
Pune
icon
3 - 7 yrs
icon
₹12L - ₹18L / yr
Data Science
Python
R Programming
Big Data
Hadoop
The selected candidate will be part of the in-house Data Labs team. He/she will be responsible for creating an insights-driven decision structure.

This will include:

Scorecards
Strategies
MIS

The verticals included are:

Risk
Marketing
Product
Read more
icon
Pune
icon
7 - 14 yrs
icon
₹10L - ₹20L / yr
Amazon Web Services (AWS)
Hadoop
Java
Looking for a Java Tech Lead with AWS/Hadoop experience. Candidates from product-based firms preferred. Must have handled a team size of 10+ people.
Read more
DP
Posted by Sandeep Chaudhary
icon
Pune
icon
2 - 5 yrs
icon
₹1L - ₹18L / yr
Hadoop
Spark
Apache Hive
Apache Flume
Java
+5 more
Description:
  • Deep experience and understanding of Apache Hadoop and surrounding technologies required; experience with Spark, Impala, Hive, Flume, Parquet and MapReduce.
  • Strong understanding of development languages including Java, Python, Scala and shell scripting.
  • Expertise in Apache Spark 2.x framework principles and usage (see the tuning sketch after this list).
  • Should be proficient in developing Spark batch and streaming jobs in Python, Scala or Java.
  • Should have proven experience in performance tuning of Spark applications, from both the application code and configuration perspectives.
  • Should be proficient in Kafka and its integration with Spark.
  • Should be proficient in Spark SQL and data warehousing techniques using Hive.
  • Should be very proficient in Unix shell scripting and in operating on Linux.
  • Should have knowledge of cloud-based infrastructure.
  • Good experience in tuning Spark applications for performance improvements.
  • Strong understanding of data profiling concepts and the ability to operationalize analyses into design and development activities.
  • Experience with best practices of software development: version control systems, automated builds, etc.
  • Experienced in, and able to lead, the following phases of the Software Development Life Cycle on any project: feasibility planning, analysis, development, integration, test and implementation.
  • Capable of working within a team or as an individual.
  • Experience creating technical documentation.
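
To make the tuning items concrete, a sketch of the configuration side of a Spark job in Scala; the numbers are placeholder assumptions, since real values depend on cluster size and data volume:

```scala
import org.apache.spark.sql.SparkSession

object TunedSessionSketch {
  def main(args: Array[String]): Unit = {
    // Illustrative settings only; tune per workload after profiling.
    val spark = SparkSession.builder
      .appName("tuned-job")
      .config("spark.sql.shuffle.partitions", "400") // size shuffle width to the data
      .config("spark.executor.memory", "8g")
      .config("spark.executor.cores", "4")
      .config("spark.serializer",
        "org.apache.spark.serializer.KryoSerializer") // faster than Java serialization
      .enableHiveSupport() // for Spark SQL over Hive tables
      .getOrCreate()

    // ... job logic would go here ...

    spark.stop()
  }
}
```

The same settings can also be passed as --conf flags to spark-submit, which keeps the application code cluster-agnostic.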
Read more
icon
Pune
icon
4 - 9 yrs
icon
₹4L - ₹12L / yr
Java
Hadoop
Spark
Machine Learning (ML)
Artificial Intelligence (AI)
We are looking to hire passionate Java techies who will be comfortable learning and working on Java and any open source frameworks & technologies. She/he should be 100% hands-on with technology skills and interested in solving complex analytics use cases. We are working on a complete stack platform which has already been adopted by some very large enterprises across the world. Candidates with prior experience of working in a typical R&D environment and/or product-based companies with a dynamic work environment will have an additional edge. We currently work on some of the latest technologies like Cassandra, Hadoop, Apache Solr, Spark and Lucene, and some core Machine Learning and AI technologies. Even though prior knowledge of these skills is not mandatory for selection, you would be expected to learn new skills on the job.
Read more
DP
Posted by Aditya Bhelande
icon
Pune
icon
8 - 10 yrs
icon
₹25L - ₹25L / yr
Product Management
Relational Database (RDBMS)
NOSQL Databases
Hadoop
Angular (2+)
+3 more
Job Summary: In this position, you will manage and provide technical leadership of the product team. Leadership, communication, prioritization and a focus on excellence are essential characteristics for this role.

Responsibilities and Duties:
  • Manage (recruit, motivate, develop, strengthen) the product engineering team
  • Mentor and lead the engineering team as a subject matter expert for all technology and architecture related issues
  • Architect, design, develop and implement frameworks and application software components using Cloud and Enterprise/Open Source technologies
  • Be accountable for the overall technical excellence and quality of the platforms
  • Be proactive and enhance existing software architecture by analyzing and identifying areas for improvement
  • Create and manage a technology strategy that can serve the business strategy
  • Low-burn, highly iterative new product development and testing
  • 80/20 rule: effectively create low-resource, high-impact technology solutions
  • Future ready: always looking to disrupt and challenge the status quo

Required Experience, Skills and Qualifications:
  • B.E in Computer Science or equivalent, with demonstrated problem-solving and leadership skills to pursue the correct engineering process in adverse conditions. Ability to embrace and demonstrate leadership beyond ownership
  • Articulates a clear technology vision that is inspiring and aligned with business needs; experienced in articulating business goals and cascading them down the organization appropriately so that every person in the organization is appropriately stretched to achieve outcomes
  • Minimum of 8-10 years of progressive experience in software engineering, of which 70% should be in startup companies/environments, in a leadership capacity and across a variety of technology stacks (from conception to go-live). Ability to work efficiently in an entrepreneurial, startup environment
  • Strong experience in building and deploying enterprise-grade products
  • Excellent and robust understanding of scalable product system architecture(s), platforms and core technologies
  • Strong experience in RDBMS; preferably has also worked with NoSQL and Hadoop, and across multiple platforms (front-end, middleware)
  • Well versed in technologies like Angular, Core JS, Java, Python, etc.
Read more
DP
Posted by Ramakrishna Murthy
icon
Pune
icon
3 - 7 yrs
icon
₹10L - ₹15L / yr
HDFS
Apache Flume
Apache HBase
Hadoop
Impala
+3 more
Securonix is a Big Data Security Analytics product company. The only product which delivers real-time behavior analytics (UEBA) on Big Data.
Read more
DP
Posted by Shekhar Singh kshatri
icon
Pune
icon
5 - 10 yrs
icon
₹5L - ₹5L / yr
Hadoop
Scala
Spark
We at InfoVision Labs are passionate about technology and what our clients would like to get accomplished. We continuously strive to understand business challenges and the changing competitive landscape, and how cutting-edge technology can help position our clients at the forefront of the competition. We are a fun-loving team of usability experts and software engineers focused on mobile technology, responsive web solutions and cloud-based solutions.

Job Responsibilities:
  • Minimum 3 years of experience in Big Data skills required.
  • Complete life cycle experience with Big Data is highly preferred.
  • Skills: Hadoop, Spark, R, Hive, Pig, HBase and Scala.
  • Excellent communication skills.
  • Ability to work independently with no supervision.
Read more
DP
Posted by Aditya Bhelande
icon
Pune
icon
8 - 10 yrs
icon
₹17L - ₹20L / yr
Hibernate
Amazon RedShift
Java
MySQL
Amazon Web Services (AWS)
+5 more
Responsibilities: Responsible for all aspects of development and support for internally created or supported application software, including the development methodologies, technologies (languages, databases, support tools), development and testing hardware/software environments, and management of the application development staff and project workload for the agency. Your job is to manage a project and a set of engineers. You are responsible for making your team happy and productive and helping them manage their careers. You are responsible for delivering great product on time and with quality.

ESSENTIAL DUTIES AND RESPONSIBILITIES
  • Supervise the projects and responsibilities of the Web and Software Developers.
  • Responsible for the prioritization of projects assigned to the Application Development team.
  • Responsible for the complete development lifecycle of the agency software systems, including gathering requirements, database management, software development, testing, implementation, user follow-up, support and project management.
  • Responsible for the integrity, maintenance and changes to the Application Development servers and databases (DBA).
  • Responsible for developing and implementing change control processes for the development team to follow.
  • Provides ad-hoc reporting and decision support required for management decision processes.
  • Makes technology decisions that affect software development.
  • Works on special I.T. projects as needed.

Familiarity with Technologies:
  • Java, Spring, Hibernate, Laravel
  • MySQL, MongoDB, Amazon Redshift, Hadoop
  • Angular.js, Bootstrap
  • AWS cloud infrastructure

QUALIFICATIONS
  • Bachelor’s degree in Information Science or Computer Science required.
  • 8-10 years of application development experience required.
  • Five plus years of database design and analysis required.
  • Strong verbal communication skills required.
Read more
icon
Pune, Chicago, Hyderabad, New York
icon
1 - 15 yrs
icon
₹5L - ₹10L / yr
Abinitio
Cognos
Microstrategy
Business Analysts
Hadoop
+2 more
Exusia, Inc. (ex-OO-see-ah: translated from Greek to mean "Immensely Powerful and Agile") was founded with the objective of addressing a growing gap in the data innovation and engineering space as the next global leader in big data, analytics, data integration and cloud computing solutions. Exusia is a multinational, delivery-centric firm that provides consulting and software as a service (SaaS) solutions to leading financial, government, healthcare, telecommunications and high technology organizations facing the largest data volumes and the most complex information management requirements.

Exusia was founded in the United States in 2012 with headquarters in New York City and regional US offices in Chicago, Atlanta and Los Angeles. Exusia’s international presence continues to expand and is driven from Toronto (Canada), Sao Paulo (Brazil), Johannesburg (South Africa) and Pune (India).

Our mission is to empower clients to grow revenue, optimize costs and satisfy regulatory requirements through the innovative use of information and analytics. We leverage a unique blend of strategy, intellectual property, technical execution and outsourcing to enable our clients to achieve significant returns on investment for their business, data and technology initiatives. At the core of our philosophy is a quality-first, trust-building, delivery-focused client relationship. The foundation of this relationship is the talent of our team. By recruiting and retaining the best talent in the industry, we are able to deliver to clients, whose data volumes and requirements number among the largest in the world, a broad range of customized, cutting edge solutions.
Read more