
35+ Hadoop Jobs in Pune | Hadoop Job openings in Pune

Apply to 35+ Hadoop Jobs in Pune on CutShort.io. Explore the latest Hadoop Job opportunities across top companies like Google, Amazon & Adobe.

AdElement
Posted by Sachin Bhatevara
Pune
0 - 1 yrs
₹0.1L - ₹3L / yr
Java
Javascript
React.js
Angular (2+)
AngularJS (1.x)
+10 more

We are looking for computer science/engineering final-year students or fresh graduates who have a solid understanding of computer science fundamentals (algorithms, data structures, object-oriented programming) and strong Java programming skills. You will get to work on machine learning algorithms as applied to online advertising or do data analytics. You will learn how to collaborate in small, agile teams, do rapid development and testing, and get to taste the invigorating feel of a start-up company.


Experience

None required


Required Skills

-Solid foundation in computer science, with strong competencies in data structures, algorithms, and software design

-Java / Python programming

-UI/UX: HTML5, CSS3, JavaScript

-MySQL, relational databases

-MVC Framework, ReactJS


Optional Skills

-Familiarity with online advertising, web technologies

-Familiarity with Hadoop, Spark, Scala


Education

UG - B.Tech/B.E. - Computers; PG - M.Tech - Computers

iLink Systems
Posted by Ganesh Sooriyamoorthu
Chennai, Pune, Noida, Bengaluru (Bangalore)
5 - 15 yrs
₹10L - ₹15L / yr
Apache Kafka
Big Data
Java
Spark
Hadoop
+1 more
  • KSQL
  • Data Engineering spectrum (Java/Spark)
  • Spark Scala / Kafka Streaming
  • Confluent Kafka components
  • Basic understanding of Hadoop
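
For illustration only (not part of the posting): a minimal sketch of the kind of Confluent Kafka consumer work this role touches, using the confluent-kafka Python client. The broker address, topic and group id are assumptions.

# Minimal consumer sketch using the confluent-kafka Python client.
# Broker address, topic and group id are illustrative assumptions.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",   # assumed broker
    "group.id": "demo-consumer",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["events"])               # hypothetical topic

try:
    while True:
        msg = consumer.poll(1.0)             # wait up to 1s for a record
        if msg is None:
            continue
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        print(f"{msg.topic()}[{msg.partition()}] offset {msg.offset()}: {msg.value()}")
finally:
    consumer.close()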


Telstra
Posted by Mahesh Balappa
Bengaluru (Bangalore), Hyderabad, Pune
3 - 7 yrs
Best in industry
Spark
Hadoop
NOSQL Databases
Apache Kafka

About Telstra

 

Telstra is Australia’s leading telecommunications and technology company, with operations in more than 20 countries, including India, where we’re building a new Innovation and Capability Centre (ICC) in Bangalore.

 

We’re growing fast, and for you that means many exciting opportunities to develop your career at Telstra. Join us on this exciting journey, and together, we’ll reimagine the future.

 

Why Telstra?

 

  • We're an iconic Australian company with a rich heritage built over 100 years. Telstra is Australia's leading telecommunications and technology company, and we've been operating internationally for more than 70 years.
  • International presence spanning over 20 countries.
  • We are one of the 20 largest telecommunications providers globally.
  • At Telstra, the work is complex and stimulating, but with that comes a great sense of achievement. We are shaping tomorrow's modes of communication with our innovation-driven teams.

 

Telstra offers an opportunity to make a difference to the lives of millions of people by providing the choice of flexibility in work and a rewarding career that you will be proud of!

 

About the team

Being part of Networks & IT means you'll be part of a team that focuses on extending our network superiority to enable the continued execution of our digital strategy.

With us, you'll be working with world-leading technology and changing the way we do IT, ensuring business needs drive priorities and accelerating our digitisation programme.

 

Focus of the role

Any new engineer who joins the data chapter will mostly be developing reusable data processing and storage frameworks that can be used across the data platform.

 

About you

To be successful in the role, you'll bring skills and experience in:

 

Essential 

  • Hands-on experience in Spark Core, Spark SQL, SQL/Hive/Impala, Git/SVN/any other VCS, and data warehousing
  • Skilled in the Hadoop ecosystem (HDP/Cloudera/MapR/EMR, etc.)
  • Azure Data Factory/Airflow/Control-M/Luigi
  • PL/SQL
  • Exposure to NoSQL (HBase/Cassandra/GraphDB (Neo4j)/MongoDB)
  • File formats (Parquet/ORC/Avro/Delta/Hudi, etc.)
  • Kafka/Kinesis/Eventhub
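
Purely as an illustration of the essential skills above (not part of the job ad): a small PySpark sketch that reads Parquet, aggregates with Spark SQL and persists to Hive. The paths, view and table names are made up, and a configured Hive metastore is assumed.

# Illustrative PySpark job: read Parquet, aggregate with Spark SQL, persist to Hive.
# File path, view and table names are hypothetical.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("orders-daily-summary")
    .enableHiveSupport()          # assumes a Hive metastore is configured
    .getOrCreate()
)

orders = spark.read.parquet("/data/raw/orders")   # assumed input path
orders.createOrReplaceTempView("orders")

daily = spark.sql("""
    SELECT order_date, COUNT(*) AS order_count, SUM(amount) AS total_amount
    FROM orders
    GROUP BY order_date
""")

# Overwrite a managed Hive table with the aggregated result.
daily.write.mode("overwrite").saveAsTable("analytics.orders_daily")

spark.stop()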

 

Highly Desirable

Experience and knowledge of the following:

  • Spark Streaming
  • Cloud exposure (Azure/AWS/GCP)
  • Azure data offerings - ADF, ADLS2, Azure Databricks, Azure Synapse, Eventhubs, CosmosDB etc.
  • Presto/Athena
  • Azure DevOps
  • Jenkins/ Bamboo/Any similar build tools
  • Power BI
  • Prior experience building, or working in a team that builds, reusable frameworks
  • Data modelling.
  • Data Architecture and design principles. (Delta/Kappa/Lambda architecture)
  • Exposure to CI/CD
  • Code Quality - Static and Dynamic code scans
  • Agile SDLC      

 

If you've got a passion to innovate, want to succeed as part of a great team, and are looking for the next step in your career, we'd welcome you to apply!

___________________________

 

We’re committed to building a diverse and inclusive workforce in all its forms. We encourage applicants from diverse gender, cultural and linguistic backgrounds and applicants who may be living with a disability. We also offer flexibility in all our roles, to ensure everyone can participate.

To learn more about how we support our people, including accessibility adjustments we can provide you through the recruitment process, visit tel.st/thrive.

Pune
5 - 9 yrs
₹5L - ₹15L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+2 more
This role is for a developer with strong core application or system programming skills in Scala and Java, and good exposure to concepts and/or technology across the broader spectrum. Enterprise Risk Technology covers a variety of existing systems and green-field projects.
Full-stack Hadoop development experience with Scala, and full-stack Java development experience covering Core Java (including JDK 1.8) with a good understanding of design patterns.
Requirements:
• Strong hands-on development in Java technologies.
• Strong hands-on development in Hadoop technologies like Spark and Scala, and experience with Avro.
• Participation in product feature design and documentation.
• Requirement break-up, ownership and implementation.
• Product BAU deliveries and Level 3 production defect fixes.
Qualifications & Experience
• Degree holder in a numerate subject
• Hands on Experience on Hadoop, Spark, Scala, Impala, Avro and messaging like Kafka
• Experience across a core compiled language – Java
• Proficiency in Java-related frameworks like Spring, Hibernate, JPA
• Hands-on experience in JDK 1.8 and a strong skillset covering Collections and Multithreading, with experience working on distributed applications.
• Strong hands-on development track record with end-to-end development cycle involvement
• Good exposure to computational concepts
• Good communication and interpersonal skills
• Working knowledge of risk and derivatives pricing (optional)
• Proficiency in SQL (PL/SQL), data modelling.
• Understanding of Hadoop architecture and the Scala programming language is good to have.
XpressBees
Posted by Alfiya Khan
Pune, Bengaluru (Bangalore)
6 - 8 yrs
₹15L - ₹25L / yr
Big Data
Data Warehouse (DWH)
Data modeling
Apache Spark
Data integration
+10 more
Company Profile
XpressBees – a logistics company started in 2015 – is amongst the fastest growing
companies of its sector. While we started off rather humbly in the space of
ecommerce B2C logistics, the last 5 years have seen us steadily progress towards
expanding our presence. Our vision to evolve into a strong full-service logistics
organization reflects itself in our new lines of business like 3PL, B2B Xpress and cross
border operations. Our strong domain expertise and constant focus on meaningful
innovation have helped us rapidly evolve as the most trusted logistics partner of
India. We have progressively carved our way towards best-in-class technology
platforms, an extensive network reach, and a seamless last mile management
system. While on this aggressive growth path, we seek to become the one-stop-shop
for end-to-end logistics solutions. Our big focus areas for the very near future
include strengthening our presence as service providers of choice and leveraging the
power of technology to improve efficiencies for our clients.

Job Profile
As a Lead Data Engineer in the Data Platform Team at XpressBees, you will build the data platform
and infrastructure to support high quality and agile decision-making in our supply chain and logistics
workflows.
You will define the way we collect and operationalize data (structured/unstructured), and build production pipelines for our machine learning models and (RT, NRT, batch) reporting & dashboarding requirements. As a Senior Data Engineer in the XB Data Platform Team, you will use your experience with modern cloud and data frameworks to build products (with storage and serving systems) that drive optimisation and resilience in the supply chain via data visibility, intelligent decision making, insights, anomaly detection and prediction.

What You Will Do
• Design and develop data platform and data pipelines for reporting, dashboarding and
machine learning models. These pipelines would productionize machine learning models
and integrate with agent review tools.
• Meet data completeness, correctness and freshness requirements.
• Evaluate and identify the data store and data streaming technology choices.
• Lead the design of the logical model and implement the physical model to support
business needs. Come up with logical and physical database design across platforms (MPP,
MR, Hive/PIG) which are optimal physical designs for different use cases (structured/semi
structured). Envision & implement the optimal data modelling, physical design,
performance optimization technique/approach required for the problem.
• Support your colleagues by reviewing code and designs.
• Diagnose and solve issues in our existing data pipelines and envision and build their
successors.
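
As a hedged illustration of the physical-design and pipeline work described above (not taken from the posting): a PySpark sketch that writes a date-partitioned Parquet dataset so downstream reports and models can prune partitions. The source path, target path and column names are assumptions.

# Illustrative physical-design choice: partition a shipment fact table by event date.
# Paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("shipments-partitioned-write").getOrCreate()

shipments = spark.read.parquet("/data/raw/shipments")        # assumed source
shipments = shipments.withColumn("event_date", F.to_date("event_ts"))

(
    shipments
    .repartition("event_date")                               # one file group per partition
    .write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("/data/curated/shipments")                      # assumed target
)

spark.stop()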

Qualifications & Experience relevant for the role

• A bachelor's degree in Computer Science or related field with 6 to 9 years of technology
experience.
• Knowledge of Relational and NoSQL data stores, stream processing and micro-batching to
make technology & design choices.
• Strong experience in System Integration, Application Development, ETL, Data-Platform
projects. Talented across technologies used in the enterprise space.
• Software development experience using:
• Expertise in relational and dimensional modelling
• Exposure across all the SDLC process
• Experience in cloud architecture (AWS)
• Proven track record in keeping existing technical skills and developing new ones, so that
you can make strong contributions to deep architecture discussions around systems and
applications in the cloud ( AWS).

• Characteristics of a forward thinker and self-starter who flourishes with new challenges
and adapts quickly to learning new knowledge.
• Ability to work with cross-functional teams of consulting professionals across multiple
projects.
• Knack for helping an organization to understand application architectures and integration
approaches, to architect advanced cloud-based solutions, and to help launch the build-out
of those systems
• Passion for educating, training, designing, and building end-to-end systems.
Pune
5 - 8 yrs
₹1L - ₹15L / yr
Informatica
Informatica PowerCenter
Spark
Hadoop
Big Data
+6 more

Technical/Core skills

  1. Minimum 3 yrs of experience with Informatica Big Data Developer (BDM) in a Hadoop environment.
  2. Knowledge of Informatica PowerExchange (PWX).
  3. Minimum 3 yrs of experience with big data querying tools like Hive and Impala.
  4. Ability to design and develop complex mappings using Informatica Big Data Developer.
  5. Create and manage Informatica PowerExchange and CDC real-time implementations.
  6. Strong Unix skills for writing shell scripts and troubleshooting existing scripts.
  7. Good knowledge of big data platforms and their frameworks.
  8. Good to have experience with Cloudera Data Platform (CDP).
  9. Experience with building stream processing systems using Kafka and Spark (see the illustrative sketch after this list).
  10. Excellent SQL knowledge.
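
Illustrative only (not from the posting): one way to build the Kafka-plus-Spark stream processing mentioned in item 9, using Spark Structured Streaming. It assumes the spark-sql-kafka package is on the classpath; the broker, topic and paths are made up.

# Illustrative Spark Structured Streaming sketch: consume a Kafka topic and land
# micro-batches as Parquet. Broker, topic and paths are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-to-parquet-stream").getOrCreate()

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")   # assumed broker
    .option("subscribe", "clickstream")                     # hypothetical topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka keys/values arrive as bytes; cast to string before writing out.
parsed = events.select(
    F.col("key").cast("string"),
    F.col("value").cast("string"),
    "timestamp",
)

query = (
    parsed.writeStream
    .format("parquet")
    .option("path", "/data/streams/clickstream")            # assumed sink path
    .option("checkpointLocation", "/checkpoints/clickstream")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()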

 

Soft skills :

  1. Ability to work independently 
  2. Strong analytical and problem solving skills
  3. Willingness to learn new technologies
  4. Regular interaction with vendors, partners and stakeholders
EnterpriseMinds
Posted by Phani Kalyan
Pune
9 - 14 yrs
₹20L - ₹40L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+3 more
Job Id: SG0601

Hi,

Enterprise Minds is looking for a Data Architect for its Pune location.

Req Skills:
Python, PySpark, Hadoop, Java, Scala
Tier 1 MNC
Chennai, Pune, Bengaluru (Bangalore), Noida, Gurugram, Kochi (Cochin), Coimbatore, Hyderabad, Mumbai, Navi Mumbai
3 - 12 yrs
₹3L - ₹15L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+1 more
Greetings,
We are hiring for a Tier 1 MNC for a software developer role requiring good knowledge of Spark, Hadoop and Scala.
Celebal Technologies
Posted by Payal Hasnani
Jaipur, Noida, Gurugram, Delhi, Ghaziabad, Faridabad, Pune, Mumbai
5 - 15 yrs
₹7L - ₹25L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+4 more
Job Responsibilities:

• Project Planning and Management
o Take end-to-end ownership of multiple projects / project tracks
o Create and maintain project plans and other related documentation for project
objectives, scope, schedule and delivery milestones
o Lead and participate across all the phases of software engineering, right from
requirements gathering to GO LIVE
o Lead internal team meetings on solution architecture, effort estimation, manpower
planning and resource (software/hardware/licensing) planning
o Manage RIDA (Risks, Impediments, Dependencies, Assumptions) for projects by
developing effective mitigation plans
• Team Management
o Act as the Scrum Master
o Conduct SCRUM ceremonies like Sprint Planning, Daily Standup, Sprint Retrospective
o Set clear objectives for the project and roles/responsibilities for each team member
o Train and mentor the team on their job responsibilities and SCRUM principles
o Make the team accountable for their tasks and help the team in achieving them
o Identify the requirements and come up with a plan for Skill Development for all team
members
• Communication
o Be the Single Point of Contact for the client in terms of day-to-day communication
o Periodically communicate project status to all the stakeholders (internal/external)
• Process Management and Improvement
o Create and document processes across all disciplines of software engineering
o Identify gaps and continuously improve processes within the team
o Encourage team members to contribute towards process improvement
o Develop a culture of quality and efficiency within the team

Must have:
• Minimum 08 years of experience (hands-on as well as leadership) in software / data engineering
across multiple job functions like Business Analysis, Development, Solutioning, QA, DevOps and
Project Management
• Hands-on as well as leadership experience in Big Data Engineering projects
• Experience developing or managing cloud solutions using Azure or another cloud provider
• Demonstrable knowledge of Hadoop, Hive, Spark, NoSQL DBs, SQL, Data Warehousing, ETL/ELT,
DevOps tools
• Strong project management and communication skills
• Strong analytical and problem-solving skills
• Strong systems level critical thinking skills
• Strong collaboration and influencing skills

Good to have:
• Knowledge on PySpark, Azure Data Factory, Azure Data Lake Storage, Synapse Dedicated SQL
Pool, Databricks, PowerBI, Machine Learning, Cloud Infrastructure
• Background in BFSI with focus on core banking
• Willingness to travel

Work Environment
• Customer Office (Mumbai) / Remote Work

Education
• UG: B. Tech - Computers / B. E. – Computers / BCA / B.Sc. Computer Science
Bengaluru (Bangalore), Pune, Hyderabad
4 - 6 yrs
₹6L - ₹22L / yr
Apache HBase
Apache Hive
Apache Spark
Go Programming (Golang)
Ruby on Rails (ROR)
+5 more
Urgently required: Hadoop Developer for a reputed MNC.

Location: Bangalore/Pune/Hyderabad/Nagpur

4-5 years of overall experience in software development.
- Experience on Hadoop (Apache/Cloudera/Hortonworks) and/or other Map Reduce Platforms
- Experience on Hive, Pig, Sqoop, Flume and/or Mahout
- Experience on NoSQL – HBase, Cassandra, MongoDB
- Hands-on experience with Spark development; knowledge of Storm, Kafka, Scala
- Good knowledge of Java
- Good background in Configuration Management/Ticketing systems like Maven/Ant/JIRA etc.
- Knowledge of any Data Integration and/or EDW tools is a plus
- Good to have knowledge of using Python/Perl/Shell

 

Please note - HBase, Hive and Spark are a must.

Gurugram, Pune, Bengaluru (Bangalore), Delhi, Noida, Ghaziabad, Faridabad
2 - 9 yrs
₹8L - ₹20L / yr
Python
Hadoop
Big Data
Spark
Data engineering
+3 more

Key Responsibilities (Data Developer - Python, Spark)

Exp : 2 to 9 Yrs 

Development of data platforms, integration frameworks, processes, and code.

Develop and deliver APIs in Python or Scala for Business Intelligence applications built using a range of web languages

Develop comprehensive automated tests for features via end-to-end integration tests, performance tests, acceptance tests and unit tests.

Elaborate stories in a collaborative agile environment (SCRUM or Kanban)

Familiarity with cloud platforms like GCP, AWS or Azure.

Experience with large data volumes.

Familiarity with writing rest-based services.

Experience with distributed processing and systems

Experience with Hadoop / Spark toolsets

Experience with relational database management systems (RDBMS)

Experience with Data Flow development

Knowledge of Agile and associated development techniques including:

Integral Ad Science
Pune
5 - 8 yrs
₹9L - ₹25L / yr
Java
Hadoop
Apache Spark
Scala
Python
+3 more
  • 6+ years of recent hands-on Java development
  • Developing data pipelines in AWS or Google Cloud
  • Java, Python, JavaScript programming languages
  • Great understanding of designing for performance, scalability, and reliability of data-intensive applications
  • Hadoop MapReduce, Spark, Pig. Understanding of database fundamentals and advanced SQL knowledge.
  • In-depth understanding of object oriented programming concepts and design patterns
  • Ability to communicate clearly to technical and non-technical audiences, verbally and in writing
  • Understanding of full software development life cycle, agile development and continuous integration
  • Experience in Agile methodologies including Scrum and Kanban
Consulting Leader
Pune, Mumbai
8 - 10 yrs
₹8L - ₹16L / yr
Data integration
Talend
Hadoop
Integration
Java
+1 more

 

Job Description for :

Role: Data/Integration Architect

Experience – 8-10 Years

Notice Period: Under 30 days

Key Responsibilities: Designing and developing frameworks for batch and real-time jobs on Talend; leading migration of these jobs from Mulesoft to Talend; maintaining best practices for the team; conducting code reviews and demos.

Core Skillsets:

Talend Data Fabric - Application, API Integration, Data Integration. Knowledge of Talend Management Cloud, and deployment and scheduling of jobs using TMC or Autosys.

Programming Languages - Python/Java
Databases: SQL Server, Other Databases, Hadoop

Should have worked on Agile

Sound communication skills

Should be open to learning new technologies based on business needs on the job

Additional Skills:

Awareness of other data/integration platforms like Mulesoft, Camel

Awareness of Hadoop, Snowflake, S3

Simplifai Cognitive Solutions Pvt Ltd
Priyanka Malani
Posted by Priyanka Malani
Pune
2 - 15 yrs
₹10L - ₹30L / yr
Spark
Big Data
Apache Spark
Python
PySpark
+1 more

We are looking for a skilled Senior/Lead Big Data Engineer to join our team. The role is part of the research and development team, where, with enthusiasm and knowledge, you will be our technical evangelist for the development of our inspection technology and products.

 

At Elop we are developing product lines for sustainable infrastructure management using our own patented technology for ultrasound scanners, and we combine this with other sources to provide a holistic overview of the concrete structure. At Elop we will provide you with world-class colleagues who are highly motivated to position the company as an international standard for structural health monitoring. With the right character you will be professionally challenged and developed.

This position requires travel to Norway.

 

Elop is a sister company of Simplifai, and the two are co-located in all geographic locations.

https://elop.no/

https://www.simplifai.ai/en/


Roles and Responsibilities

  • Define technical scope and objectives through research and participation in requirements gathering and definition of processes
  • Ingest and Process data from data sources (Elop Scanner) in raw format into Big Data ecosystem
  • Realtime data feed processing using Big Data ecosystem
  • Design, review, implement and optimize data transformation processes in Big Data ecosystem
  • Test and prototype new data integration/processing tools, techniques and methodologies
  • Conversion of MATLAB code into Python/C/C++.
  • Participate in overall test planning for the application integrations, functional areas and projects.
  • Work with cross functional teams in an Agile/Scrum environment to ensure a quality product is delivered.

Desired Candidate Profile

  • Bachelor's degree in Statistics, Computer Science or equivalent
  • 7+ years of experience in Big Data ecosystem, especially Spark, Kafka, Hadoop, HBase.
  • 7+ years of hands-on experience in Python/Scala is a must.
  • Experience in architecting the big data application is needed.
  • Excellent analytical and problem solving skills
  • Strong understanding of data analytics and data visualization, and must be able to help development team with visualization of data.
  • Experience with signal processing is a plus.
  • Experience in working on client-server architecture is a plus.
  • Knowledge about database technologies like RDBMS, Graph DB, Document DB, Apache Cassandra, OpenTSDB
  • Good communication skills, written and oral, in English

We can Offer

  • An everyday life with exciting and challenging tasks with the development of socially beneficial solutions
  • Be a part of the company's Research and Development team to create unique and innovative products
  • Colleagues with world-class expertise, and an organization that has ambitions and is highly motivated to position the company as an international player in maintenance support and monitoring of critical infrastructure!
  • Good working environment with skilled and committed colleagues, and an organization with short decision paths.
  • Professional challenges and development
The other Fruit
Posted by Dipendra Singh
Pune
1 - 5 yrs
₹3L - ₹15L / yr
Machine Learning (ML)
Artificial Intelligence (AI)
Python
Data Structures
Algorithms
+17 more
 
SD (ML and AI) job description:

Advanced degree in computer science, math, statistics or a related discipline (must have a master's degree)
Extensive data modeling and data architecture skills
Programming experience in Python, R
Background in machine learning frameworks such as TensorFlow or Keras
Knowledge of Hadoop or other distributed computing systems
Experience working in an Agile environment
Advanced math skills (important): linear algebra, discrete math, differential equations (ODEs and numerical), theory of statistics, numerical analysis (numerical linear algebra, quadrature), abstract algebra, number theory, real analysis, complex analysis, intermediate analysis (point-set topology)
Strong written and verbal communications
Hands on experience on NLP and NLG
Experience in advanced statistical techniques and concepts (GLM/regression, random forest, boosting, trees, text mining) and experience applying them.
 
DataMetica
Posted by Nikita Aher
Pune, Hyderabad
7 - 12 yrs
₹12L - ₹33L / yr
Big Data
Hadoop
Spark
Apache Spark
Apache Hive
+3 more

Job description

Role : Lead Architecture (Spark, Scala, Big Data/Hadoop, Java)

Primary Location : India-Pune, Hyderabad

Experience : 7 - 12 Years

Management Level: 7

Joining Time: Immediate Joiners are preferred


  • Attend requirements gathering workshops, estimation discussions, design meetings and status review meetings
  • Experience of Solution Design and Solution Architecture for the data engineer model to build and implement Big Data Projects on-premises and on cloud.
  • Align architecture with business requirements and stabilize the developed solution
  • Ability to build prototypes to demonstrate the technical feasibility of your vision
  • Professional experience facilitating and leading solution design, architecture and delivery planning activities for data intensive and high throughput platforms and applications
  • Ability to benchmark systems, analyse system bottlenecks and propose solutions to eliminate them
  • Able to help programmers and project managers in the design, planning and governance of implementing projects of any kind.
  • Develop, construct, test and maintain architectures and run Sprints for development and rollout of functionalities
  • Data analysis and code development experience, ideally in Big Data Spark, Hive, Hadoop, Java, Python, PySpark
  • Execute projects of various types i.e. Design, development, Implementation and migration of functional analytics Models/Business logic across architecture approaches
  • Work closely with Business Analysts to understand the core business problems and deliver efficient IT solutions of the product
  • Deployment of sophisticated analytics programs/code using any cloud platform.


Perks and Benefits we Provide!


  • Working with Highly Technical and Passionate, mission-driven people
  • Subsidized Meals & Snacks
  • Flexible Schedule
  • Approachable leadership
  • Access to various learning tools and programs
  • Pet Friendly
  • Certification Reimbursement Policy
  • Check out more about us on our website below!

www.datametica.com

DataMetica
Posted by Nikita Aher
Pune, Hyderabad
3 - 12 yrs
₹5L - ₹25L / yr
Apache Kafka
Big Data
Hadoop
Apache Hive
Java
+1 more

Summary
Our Kafka developer has a combination of technical skills, communication skills and business knowledge. The developer should be able to work on multiple medium-to-large projects. The successful candidate will have excellent technical skills in Apache/Confluent Kafka and an Enterprise Data Warehouse, preferably GCP BigQuery or an equivalent cloud EDW, and will also be able to take oral and written business requirements and develop efficient code to meet set deliverables.

 

Must Have Skills

  • Participate in the development, enhancement and maintenance of data applications both as an individual contributor and as a lead.
  • Leading in the identification, isolation, resolution and communication of problems within the production environment.
  • Lead developer applying technical skills in Apache/Confluent Kafka (preferred) or AWS Kinesis (optional), and a Cloud Enterprise Data Warehouse: Google BigQuery (preferred), AWS Redshift or Snowflake (optional)
  • Design and recommend the best approach suited for data movement from different sources to the Cloud EDW using Apache/Confluent Kafka
  • Perform independent functional and technical analysis for major projects supporting several corporate initiatives.
  • Communicate and work with IT partners and the user community at various levels, from senior management to developers to business SMEs, for project definition.
  • Works on multiple platforms and multiple projects concurrently.
  • Performs code and unit testing for complex scope modules, and projects
  • Provide expertise and hands on experience working on Kafka connect using schema registry in a very high volume environment (~900 Million messages)
  • Provide expertise in Kafka brokers, zookeepers, KSQL, KStream and Kafka Control center.
  • Provide expertise and hands on experience working on AvroConverters, JsonConverters, and StringConverters.
  • Provide expertise and hands on experience working on Kafka connectors such as MQ connectors, Elastic Search connectors, JDBC connectors, File stream connector,  JMS source connectors, Tasks, Workers, converters, Transforms.
  • Provide expertise and hands on experience on custom connectors using the Kafka core concepts and API.
  • Working knowledge on Kafka Rest proxy.
  • Ensure optimum performance, high availability and stability of solutions.
  • Create topics, set up redundancy clusters, deploy monitoring tools and alerts, and have good knowledge of best practices.
  • Create stubs for producers, consumers and consumer groups to help onboard applications from different languages/platforms.
  • Leverage Hadoop ecosystem knowledge to design and develop capabilities to deliver our solutions using Spark, Scala, Python, Hive, Kafka and other tools in the Hadoop ecosystem.
  • Use automation/provisioning tools like Jenkins, UDeploy or relevant technologies
  • Ability to perform data related benchmarking, performance analysis and tuning.
  • Strong skills in In-memory applications, Database Design, Data Integration.
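
As an illustration of the Kafka Connect work described above (not part of the ad): registering a JDBC source connector through the Connect REST API. It assumes Confluent's JDBC source connector plugin is installed on the worker; URLs, credentials and table/topic names are made up.

# Illustrative sketch: register a JDBC source connector with a Kafka Connect worker.
# Assumes the Confluent JDBC source connector plugin is installed on the worker;
# the REST endpoint, credentials and table/topic names are hypothetical.
import json
import requests

CONNECT_URL = "http://localhost:8083/connectors"   # assumed Connect REST endpoint

connector = {
    "name": "orders-jdbc-source",
    "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
        "connection.url": "jdbc:postgresql://db-host:5432/shop",
        "connection.user": "connect",
        "connection.password": "secret",
        "table.whitelist": "orders",
        "mode": "incrementing",
        "incrementing.column.name": "order_id",
        "topic.prefix": "jdbc.",                    # records land on topic jdbc.orders
        "tasks.max": "1",
    },
}

resp = requests.post(CONNECT_URL, data=json.dumps(connector),
                     headers={"Content-Type": "application/json"})
resp.raise_for_status()
print("connector created:", resp.json()["name"])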
DataMetica
Posted by Sumangali Desai
Pune, Hyderabad
7 - 12 yrs
₹7L - ₹20L / yr
Apache Spark
Big Data
Spark
Scala
Hadoop
+3 more
We at Datametica Solutions Private Limited are looking for a Big Data Spark Lead who has a passion for cloud, with knowledge of different on-premise and cloud data implementations in the field of Big Data and Analytics, including but not limited to Teradata, Netezza, Exadata, Oracle, Cloudera, Hortonworks and the like.
Ideal candidates should have technical experience in migrations and the ability to help customers get value from Datametica's tools and accelerators.

Job Description
Experience : 7+ years
Location : Pune / Hyderabad
Skills :
  • Drive and participate in requirements gathering workshops, estimation discussions, design meetings and status review meetings
  • Participate and contribute in Solution Design and Solution Architecture for implementing Big Data Projects on-premise and on cloud
  • Technical Hands on experience in design, coding, development and managing Large Hadoop implementation
  • Proficient in SQL, Hive, Pig, Spark SQL, shell scripting, Kafka, Flume, Sqoop, with large Big Data and Data Warehousing projects and a Java, Python or Scala based Hadoop programming background
  • Proficient with various development methodologies like waterfall, agile/scrum and iterative
  • Good Interpersonal skills and excellent communication skills for US and UK based clients

About Us!
A global Leader in the Data Warehouse Migration and Modernization to the Cloud, we empower businesses by migrating their Data/Workload/ETL/Analytics to the Cloud by leveraging Automation.

We have expertise in transforming legacy Teradata, Oracle, Hadoop, Netezza, Vertica, Greenplum along with ETLs like Informatica, Datastage, AbInitio & others, to cloud-based data warehousing with other capabilities in data engineering, advanced analytics solutions, data management, data lake and cloud optimization.

Datametica is a key partner of the major cloud service providers - Google, Microsoft, Amazon, Snowflake.


We have our own products!
Eagle – Data Warehouse Assessment & Migration Planning product
Raven – Automated Workload Conversion product
Pelican – Automated Data Validation product, which helps automate and accelerate data migration to the cloud.

Why join us!
Datametica is a place to innovate, bring new ideas to life and learn new things. We believe in building a culture of innovation, growth and belonging. Our people and their dedication over these years are the key factors in achieving our success.

Benefits we Provide!
Working with Highly Technical and Passionate, mission-driven people
Subsidized Meals & Snacks
Flexible Schedule
Approachable leadership
Access to various learning tools and programs
Pet Friendly
Certification Reimbursement Policy

Check out more about us on our website below!
www.datametica.com
dataeaze systems
Posted by Ankita Kale
Pune
1 - 5 yrs
₹3L - ₹10L / yr
ETL
Hadoop
Apache Hive
Java
Spark
+2 more
  • Core Java: advanced level competency, should have worked on projects with core Java development.

 

  • Linux shell : advanced level competency, work experience with Linux shell scripting, knowledge and experience to use important shell commands

 

  • RDBMS, SQL: advanced level competency. Should have expertise in SQL query syntax and be well versed with aggregations and joins.

 

  • Data structures and problem solving: should have ability to use appropriate data structure.

 

  • AWS cloud: good to have experience with the AWS serverless toolset along with AWS infrastructure

 

  • Data Engineering ecosystem : Good to have experience and knowledge of data engineering, ETL, data warehouse (any toolset)

 

  • Hadoop, HDFS, YARN: should have an introduction to the internal workings of these toolsets

 

  • Hive, MapReduce, Spark: good to have experience developing transformations using Hive queries, MapReduce job implementation and Spark job implementation. Spark implementation in Scala will be a plus.

 

  • Airflow, Oozie, Sqoop, Zookeeper, Kafka: good to have knowledge of the purpose and working of these toolsets; working experience will be a plus (see the illustrative sketch below).
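
For illustration of the orchestration tools mentioned in the item above (not part of the listing): a minimal Airflow 2.x-style DAG that runs a daily Hive transformation through a shell command. The JDBC URL, database and query are placeholders.

# Minimal Airflow 2.x DAG sketch: a daily task that runs a Hive query through beeline.
# The JDBC URL, database and query are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_orders_summary",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    build_summary = BashOperator(
        task_id="build_orders_summary",
        bash_command=(
            "beeline -u jdbc:hive2://hive-server:10000/analytics "
            "-e 'INSERT OVERWRITE TABLE orders_daily "
            "SELECT order_date, COUNT(*) FROM orders GROUP BY order_date'"
        ),
    )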

 

DataMetica
Posted by Nikita Aher
Pune
2 - 6 yrs
₹3L - ₹15L / yr
SQL
Linux/Unix
Shell Scripting
SQL server
PL/SQL
+3 more

Datametica is looking for talented SQL engineers who would get training & the opportunity to work on Cloud and Big Data Analytics.

 

Mandatory Skills:

  • Strong in SQL development
  • Hands-on with at least one scripting language - preferably shell scripting
  • Development experience in Data warehouse projects

Opportunities:

  • Selected candidates will be provided training opportunities on one or more of the following: Google Cloud, AWS, DevOps tools, and Big Data technologies like Hadoop, Pig, Hive, Spark, Sqoop, Flume, and Kafka
  • Would get a chance to be part of the enterprise-grade implementation of Cloud and Big Data systems
  • Will play an active role in setting up the Modern data platform based on Cloud and Big Data
  • Would be part of teams with rich experience in various aspects of distributed systems and computing
Fast paced Startup
Pune
3 - 6 yrs
₹15L - ₹22L / yr
Big Data
Data engineering
Hadoop
Spark
Apache Hive
+6 more

Years of Exp: 3-6+ Years
Skills: Scala, Python, Hive, Airflow, Spark

Languages: Java, Python, Shell Scripting

GCP: BigTable, DataProc, BigQuery, GCS, Pub/Sub

OR
AWS: Athena, Glue, EMR, S3, Redshift

MongoDB, MySQL, Kafka

Platforms: Cloudera / Hortonworks
AdTech domain experience is a plus.
Job Type - Full Time 

Persistent Systems
Agency job via Milestone Hr Consultancy by Haina Khan
Bengaluru (Bangalore), Hyderabad, Pune
9 - 16 yrs
₹7L - ₹32L / yr
Big Data
Scala
Spark
Hadoop
Python
+1 more
Greetings,

We have an urgent requirement for the post of Big Data Architect at a reputed MNC.

Location: Pune/Nagpur, Goa, Hyderabad/Bangalore

Job Requirements:

  • 9+ years of total experience, preferably in the big data space.
  • Creating spark applications using Scala to process data.
  • Experience in scheduling and troubleshooting/debugging Spark jobs in steps.
  • Experience in spark job performance tuning and optimizations.
  • Should have experience in processing data using Kafka/Python.
  • Should have experience and understanding of configuring Kafka topics to optimize performance.
  • Should be proficient in writing SQL queries to process data in Data Warehouse.
  • Hands on experience in working with Linux commands to troubleshoot/debug issues and creating shell scripts to automate tasks.
  • Experience on AWS services like EMR.
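
Illustrative only (the requirement above concerns configuring Kafka topics for performance): creating a topic with explicit partition count and retention settings using the confluent-kafka admin client. The broker address, topic name and values are assumptions, not recommendations from the posting.

# Illustrative sketch: create a Kafka topic with partition/retention settings chosen
# for throughput. Broker address, topic name and values are hypothetical.
from confluent_kafka.admin import AdminClient, NewTopic

admin = AdminClient({"bootstrap.servers": "localhost:9092"})   # assumed broker

topic = NewTopic(
    "trade-events",                 # hypothetical topic
    num_partitions=12,              # more partitions -> more consumer parallelism
    replication_factor=3,
    config={
        "retention.ms": str(7 * 24 * 60 * 60 * 1000),   # keep 7 days of data
        "compression.type": "lz4",
    },
)

futures = admin.create_topics([topic])
for name, future in futures.items():
    try:
        future.result()             # raises if creation failed
        print(f"created topic {name}")
    except Exception as exc:
        print(f"failed to create {name}: {exc}")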
DataMetica
Posted by Nikita Aher
Pune
2.5 - 6 yrs
₹1L - ₹8L / yr
Big Data
Hadoop
Apache Hive
Spark
Data engineering
+3 more
Job Title/Designation: Big Data Engineers - Hadoop, Pig, Hive, Spark
Employment Type: Full Time, Permanent

Job Description:
 
Work Location - Pune
Work Experience - 2.5 to 6 Years
 
Note - Candidates with short notice periods will be given preference.
 
Mandatory Skills:
  • Working knowledge and hands-on experience of Big Data / Hadoop tools and technologies.
  • Experience of working in Pig, Hive, Flume, Sqoop, Kafka etc.
  • Database development experience with a solid understanding of core database concepts, relational database design, ODS & DWH.
  • Expert level knowledge of SQL and scripting preferably UNIX shell scripting, Perl scripting.
  • Working knowledge of Data integration solution and well-versed with any ETL tool (Informatica / Datastage / Abinitio/Pentaho etc).
  • Strong problem solving and logical reasoning ability.
  • Excellent understanding of all aspects of the Software Development Lifecycle.
  • Excellent written and verbal communication skills.
  • Experience in Java will be an added advantage
  • Knowledge of object oriented programming concepts
  • Exposure to ISMS policies and procedures.
Maveric Systems
Posted by Rashmi Poovaiah
Bengaluru (Bangalore), Chennai, Pune
4 - 10 yrs
₹8L - ₹15L / yr
Big Data
Hadoop
Spark
Apache Kafka
HiveQL
+2 more

Role Summary/Purpose:

We are looking for Developers/Senior Developers to be a part of building an advanced analytical platform leveraging Big Data technologies and transforming the legacy systems. This is an exciting, fast-paced, constantly changing and challenging work environment, and the role will play an important part in resolving and influencing high-level decisions.

 

Requirements:

  • The candidate must be a self-starter who can work under general guidelines in a fast-paced environment.
  • Overall minimum of 4 to 8 years of software development experience and 2 years of Data Warehousing domain knowledge
  • Must have 3 years of hands-on working knowledge on Big Data technologies such as Hadoop, Hive, Hbase, Spark, Kafka, Spark Streaming, SCALA etc…
  • Excellent knowledge in SQL & Linux Shell scripting
  • Bachelors/Master’s/Engineering Degree from a well-reputed university.
  • Strong communication, interpersonal, learning and organizing skills, matched with the ability to manage stress, time and people effectively
  • Proven experience in co-ordination of many dependencies and multiple demanding stakeholders in a complex, large-scale deployment environment
  • Ability to manage a diverse and challenging stakeholder community
  • Diverse knowledge and experience of working on Agile Deliveries and Scrum teams.

 

Responsibilities

  • Should work as a senior developer/individual contributor depending on the situation
  • Should be part of SCRUM discussions and take requirements
  • Adhere to SCRUM timeline and deliver accordingly
  • Participate in a team environment for the design, development and implementation
  • Should take L3 activities on need basis
  • Prepare Unit/SIT/UAT testcase and log the results
  • Co-ordinate SIT and UAT Testing. Take feedbacks and provide necessary remediation/recommendation in time.
  • Quality delivery and automation should be a top priority
  • Co-ordinate change and deployment in time
  • Should create healthy harmony within the team
  • Owns interaction points with members of core team (e.g.BA team, Testing and business team) and any other relevant stakeholders
Clairvoyant India Private Limited
Posted by Taruna Roy
Remote, Pune
3 - 8 yrs
₹4L - ₹15L / yr
Big Data
Hadoop
Java
Spark
Hibernate (Java)
+5 more
Job Title/Designation:
Mid / Senior Big Data Engineer
Job Description:
Role: Big Data Engineer
Number of open positions: 5
Location: Pune
At Clairvoyant, we're building a thriving big data practice to help enterprises enable and accelerate the adoption of Big data and cloud services. In the big data space, we lead and serve as innovators, troubleshooters, and enablers. Big data practice at Clairvoyant focuses on solving our customers' business problems by delivering products designed with best-in-class engineering practices and a commitment to keep the total cost of ownership to a minimum.
Must Have:
  • 4-10 years of experience in software development.
  • At least 2 years of relevant work experience on large scale Data applications.
  • Strong coding experience in Java is mandatory
  • Good aptitude, strong problem solving abilities, and analytical skills, ability to take ownership as appropriate
  • Should be able to do coding, debugging, performance tuning and deploying the apps to Prod.
  • Should have good working experience on:
    o Hadoop ecosystem (HDFS, Hive, Yarn, file formats like Avro/Parquet)
    o Kafka
    o J2EE frameworks (Spring/Hibernate/REST)
    o Spark Streaming or any other streaming technology
  • Ability to work on the sprint stories to completion along with Unit test case coverage.
  • Experience working in Agile Methodology
  • Excellent communication and coordination skills
  • Knowledgeable (and preferred hands on) - UNIX environments, different continuous integration tools.
  • Must be able to integrate quickly into the team and work independently towards team goals
Role & Responsibilities:
  • Take the complete responsibility of the sprint stories' execution
  • Be accountable for the delivery of the tasks in the defined timelines with good quality.
  • Follow the processes for project execution and delivery.
  • Follow agile methodology
  • Work with the team lead closely and contribute to the smooth delivery of the project.
  • Understand/define the architecture and discuss the pros-cons of the same with the team
  • Involve in the brainstorming sessions and suggest improvements in the architecture/design.
  • Work with other team leads to get the architecture/design reviewed.
  • Work with the clients and counter-parts (in US) of the project.
  • Keep all the stakeholders updated about the project/task status/risks/issues if there are any.
Education: BE/B.Tech from reputed institute.
Experience: 4 to 9 years
Keywords: java, scala, spark, software development, hadoop, hive
Locations: Pune
PODIUM SYSTEMS PRIVATE LIMITED
Posted by Saujanya Sathe
Pune
1 - 4 yrs
₹4L - ₹7L / yr
Java
Elastic Search
Solr
Hadoop
Natural Language Processing (NLP)
  • You will be responsible for design, development and testing of Products
  • Contributing in all phases of the development lifecycle

 

  • Writing well designed, testable, efficient code
  • Ensure designs are in compliance with specifications

  • Prepare and produce releases of software components

  • Support continuous improvement by investigating alternatives and technologies and presenting these for architectural review

  • Some of the technologies you will be working on: Core Java, Solr, Hadoop, Spark, Elastic search, Clustering, Text Mining, NLP, Mahout and Lucene etc.
first principle labs
Posted by Ankit Goenka
Pune
3 - 7 yrs
₹12L - ₹18L / yr
Data Science
Python
R Programming
Big Data
Hadoop
The selected candidate would be a part of the in-house Data Labs team. He/she would be responsible for creating an insights-driven decision structure.

This will include:

Scorecards
Strategies
MIS

The verticals included are:

Risk
Marketing
Product
Saama Technologies
Posted by Vartica Lal
Pune
7 - 14 yrs
₹10L - ₹20L / yr
Amazon Web Services (AWS)
Hadoop
Java
Looking for a Java Tech Lead experienced in AWS/Hadoop. Candidates from product-based firms preferred. Must have handled a team size of 10+ people.
Saama Technologies
Posted by Sandeep Chaudhary
Pune
2 - 5 yrs
₹1L - ₹18L / yr
Hadoop
Spark
Apache Hive
Apache Flume
Java
+5 more
Description

  • Deep experience and understanding of Apache Hadoop and surrounding technologies required; experience with Spark, Impala, Hive, Flume, Parquet and MapReduce.
  • Strong understanding of development languages including Java, Python, Scala and shell scripting.
  • Expertise in Apache Spark 2.x framework principles and usage.
  • Should be proficient in developing Spark batch and streaming jobs in Python, Scala or Java.
  • Should have proven experience in performance tuning of Spark applications, both from the application code and the configuration perspective.
  • Should be proficient in Kafka and its integration with Spark.
  • Should be proficient in Spark SQL and data warehousing techniques using Hive.
  • Should be very proficient in Unix shell scripting and in operating on Linux.
  • Should have knowledge of any cloud-based infrastructure.
  • Good experience in tuning Spark applications and performance improvements.
  • Strong understanding of data profiling concepts and the ability to operationalize analyses into design and development activities.
  • Experience with best practices of software development: version control systems, automated builds, etc.
  • Experienced in and able to lead the following phases of the Software Development Life Cycle on any project: feasibility planning, analysis, development, integration, test and implementation.
  • Capable of working within a team or as an individual.
  • Experience creating technical documentation.
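
A hedged illustration of the configuration-side Spark tuning mentioned in the description above (the specific values are arbitrary placeholders, not recommendations from the posting):

# Illustrative sketch: configuration-side tuning knobs for a PySpark batch job.
# The values are placeholders showing where tuning happens, not recommendations.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("tuned-batch-job")
    .config("spark.sql.shuffle.partitions", "400")        # match shuffle width to data volume
    .config("spark.executor.memory", "8g")
    .config("spark.executor.cores", "4")
    .config("spark.sql.adaptive.enabled", "true")         # let AQE coalesce small partitions
    .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    .getOrCreate()
)

df = spark.read.parquet("/data/input")                    # assumed input
df.groupBy("key").count().write.mode("overwrite").parquet("/data/output")
spark.stop()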
Pune
4 - 9 yrs
₹4L - ₹12L / yr
Java
Hadoop
Spark
Machine Learning (ML)
Artificial Intelligence (AI)
We are looking to hire passionate Java techies who will be comfortable learning and working on Java and any open-source frameworks & technologies. She/he should be 100% hands-on with technology skills and interested in solving complex analytics use cases. We are working on a complete stack platform which has already been adopted by some very large enterprises across the world. Candidates with prior experience of having worked in a typical R&D environment and/or product-based companies with a dynamic work environment will have an additional edge. We currently work on some of the latest technologies like Cassandra, Hadoop, Apache Solr, Spark and Lucene, and some core Machine Learning and AI technologies. Even though prior knowledge of these skills is not mandatory at all for selection, you would be expected to learn new skills on the job.
YuktaMedia
Posted by Aditya Bhelande
Pune
8 - 10 yrs
₹25L - ₹25L / yr
Product Management
Relational Database (RDBMS)
NOSQL Databases
Hadoop
Angular (2+)
+3 more
Job Summary: In this position, you will manage and provide technical leadership of the product team. Leadership, communication, prioritization and a focus on excellence are essential characteristics for this role.

Responsibilities and Duties:
- Manage (recruit, motivate, develop, strengthen) the product engineering team
- Mentor and lead the engineering team as a subject matter expert for all technology and architecture related issues
- Architect, design, develop & implement frameworks and application software components using Cloud and Enterprise/Open Source technologies
- Be accountable for the overall technical excellence and quality of the platforms
- Be proactive and enhance the existing software architecture by analyzing and identifying areas for improvement
- Create and manage a technology strategy that can serve the business strategy
- Low burn, highly iterative new product development and testing
- 80/20 rule: effectively create low-resource, high-impact technology solutions
- Future ready: always looking to disrupt and challenge the status quo

Required Experience, Skills and Qualifications:
- B.E. in Computer Science or equivalent with demonstrated problem-solving and leadership skills to pursue correct engineering process in adverse conditions; ability to embrace and demonstrate leadership beyond ownership
- Articulates a clear technology vision that is inspiring and aligned with business needs; experienced in articulating the business goals and cascading them down the organization appropriately so that every person in the organization is appropriately stretched to achieve outcomes
- Minimum of 8 – 10 years of progressive experience in software engineering, out of which 70% should be in startup companies/environments, in a leadership capacity and with experience across a variety of technology stacks (from conception to go-live); ability to work efficiently in an entrepreneurial and startup environment
- Strong experience in building and deploying enterprise-grade products
- Excellent and robust understanding of scalable product system architecture(s), platforms and core technologies
- Strong experience in RDBMS, preferably having worked with NoSQL & Hadoop as well, and having worked across multiple platforms (front-end, middleware)
- Well versed in technologies like Angular, Core JS, Java, Python, etc.
Securonix
Posted by Ramakrishna Murthy
Pune
3 - 7 yrs
₹10L - ₹15L / yr
HDFS
Apache Flume
Apache HBase
Hadoop
Impala
+3 more
Securonix is a Big Data Security Analytics product company, with the only product that delivers real-time behavior analytics (UEBA) on Big Data.
InfoVision Labs India Pvt. Ltd.
Posted by Shekhar Singh Kshatri
Pune
5 - 10 yrs
₹5L - ₹5L / yr
Hadoop
Scala
Spark
We at InfoVision Labs are passionate about technology and what our clients would like to get accomplished. We continuously strive to understand business challenges, the changing competitive landscape and how cutting-edge technology can help position our clients at the forefront of the competition. We are a fun-loving team of Usability Experts and Software Engineers, focused on Mobile Technology, Responsive Web Solutions and Cloud Based Solutions.

Job Responsibilities:
  • Minimum 3 years of experience in Big Data skills required.
  • Complete life cycle experience with Big Data is highly preferred.
  • Skills – Hadoop, Spark, “R”, Hive, Pig, HBase and Scala.
  • Excellent communication skills.
  • Ability to work independently with no supervision.
YuktaMedia
Posted by Aditya Bhelande
Pune
8 - 10 yrs
₹17L - ₹20L / yr
Hibernate
Amazon RedShift
Java
MySQL
Amazon Web Services (AWS)
+5 more
Responsibilities: Responsible for all aspects of development and support for internally created or supported application software, including: the development methodologies, technologies (languages, databases, support tools), development and testing hardware/software environments, and management of the application development staff and project workload for the agency. Your job is to manage a project and manage a set of engineers. You are responsible for making your team happy and productive and helping them manage their careers. You are responsible for delivering great product on time and with quality.

ESSENTIAL DUTIES AND RESPONSIBILITIES
• Supervise the projects and responsibilities of the Web and Software Developers.
• Responsible for the prioritization of projects assigned to the Application Development team.
• Responsible for the complete development lifecycle of the agency software systems, including gathering requirements, database management, software development, testing, implementation, user follow-up, support and project management.
• Responsible for the integrity of, maintenance of and changes to the Application Development servers and databases (DBA).
• Responsible for developing and implementing change control processes for the development team to follow.
• Provides ad-hoc reporting and decision support required for management decision processes.
• Makes technology decisions that affect Software Development.
• Works on special I.T. projects as needed.

Familiarity with Technologies:
• Java, Spring, Hibernate, Laravel
• MySQL, MongoDB, Amazon Redshift, Hadoop
• Angular.js, Bootstrap
• AWS cloud infrastructure

QUALIFICATIONS
• Bachelor’s degree in Information Science or Computer Science required.
• 8-10 years of Application Development experience required.
• Five plus years of Database Design and Analysis required.
• Strong verbal communication skills required.
Pune, Chicago, Hyderabad, New York
1 - 15 yrs
₹5L - ₹10L / yr
Abinitio
Cognos
Microstrategy
Business Analysts
Hadoop
+2 more
Exusia, Inc. (ex-OO-see-ah: translated from Greek to mean "Immensely Powerful and Agile") was founded with the objective of addressing a growing gap in the data innovation and engineering space as the next global leader in big data, analytics, data integration and cloud computing solutions.

Exusia is a multinational, delivery centric firm that provides consulting and software as a service (SaaS) solutions to leading financial, government, healthcare, telecommunications and high technology organizations facing the largest data volumes and the most complex information management requirements.

Exusia was founded in the United States in 2012 with headquarters in New York City and regional US offices in Chicago, Atlanta and Los Angeles. Exusia’s international presence continues to expand and is driven from Toronto (Canada), Sao Paulo (Brazil), Johannesburg (South Africa) and Pune (India).

Our mission is to empower clients to grow revenue, optimize costs and satisfy regulatory requirements through the innovative use of information and analytics. We leverage a unique blend of strategy, intellectual property, technical execution and outsourcing to enable our clients to achieve significant returns on investment for their business, data and technology initiatives.

At the core of our philosophy is a quality-first, trust-building, delivery-focused client relationship. The foundation of this relationship is the talent of our team. By recruiting and retaining the best talent in the industry, we are able to deliver to clients, whose data volumes and requirements number among the largest in the world, a broad range of customized, cutting edge solutions.