Senior Data Engineer
Clairvoyant India Private Limited
Posted by Taruna Roy
4 - 9 yrs
₹10L - ₹15L / yr
Remote only
Skills
Java
Apache Spark
SQL
HiveQL
Apache Hive
Must-Have:
  • 5+ years of experience in software development.
  • At least 2 years of relevant work experience on large-scale data applications.
  • Good attitude, strong problem-solving and analytical skills, and the ability to take ownership as appropriate.
  • Should be able to handle coding, debugging, performance tuning, and deploying applications to production.
  • Should have good working experience with the Hadoop ecosystem (HDFS, Hive, YARN, and file formats like Avro/Parquet).
  • Kafka.
  • J2EE frameworks (Spring/Hibernate/REST).
  • Spark Streaming or any other streaming technology.
  • Proficiency in the Java programming language is mandatory.
  • Ability to take sprint stories to completion, including unit test coverage.
  • Experience working in Agile methodology.
  • Excellent communication and coordination skills.
  • Knowledge of (and preferably hands-on experience with) UNIX environments and various continuous integration tools.
  • Must be able to integrate quickly into the team and work independently towards team goals.
Role & Responsibilities:
  • Take complete responsibility for the execution of sprint stories.
  • Be accountable for delivering tasks within the defined timelines and with good quality.
  • Follow the processes for project execution and delivery.
  • Follow Agile methodology.
  • Work closely with the team lead and contribute to the smooth delivery of the project.
  • Understand/define the architecture and discuss its pros and cons with the team.
  • Take part in brainstorming sessions and suggest improvements to the architecture/design.
  • Work with other team leads to get the architecture/design reviewed.
  • Work with the clients and counterparts (in the US) on the project.
  • Keep all stakeholders updated about the project/task status, risks, and issues, if there are any.
About Clairvoyant India Private Limited

Founded: 2014
Type:
Size: 100-1000
Stage: Profitable
About
Clairvoyant is a global technology consulting and services company. We help organizations build innovative products and solutions using big data, analytics, and the cloud. We provide the best-in-class solutions and services that leverage big data and continually exceed client expectations. Our deep vertical knowledge combined with expertise on multiple, enterprise-grade big data platforms helps support purpose-built solutions to meet our client’s business needs. Our global team consists of experienced professionals, with backgrounds in design, software engineering, analytics, and data science. Each member of our team is highly energetic and committed to helping our clients achieve their goals.
Connect with the team: Afreen Shaikh, Sandeep Bharate, Unnati Yadav, Taruna Roy, Chakravarthi Peram

Similar jobs

Carsome
Posted by Piyush Palkar
Kuala Lumpur
3 - 5 yrs
₹20L - ₹25L / yr
SQL
Python
Business Analysis
Statistical Modeling
MS-Office
+6 more

Your Day-to-Day

  1. Derive insights and drive major strategic projects to improve business metrics; take responsibility for cost efficiency and revenue management across the country.
  2. Perform market research and post-mortem analyses of competitor expansion and market penetration patterns.
  3. Provide in-depth business analysis and data insights for internal stakeholders to help improve the business; derive and launch projects to reduce the gaps between targeted and projected business metrics.
  4. Optimize Carsome's C2B and B2C customer acquisition and dealer retention funnel; work closely with the Marketing and Tech teams to create, produce, and implement creative digital marketing campaigns and drive CRM initiatives and strategies.
  5. Analyse revenue flows and process large datasets to gather process insights and propose process improvement ideas for Carsome across Southeast Asia.
  6. Lead commercial projects and process mapping, from conceptualization to completion, to build or re-engineer business models, tools, and processes.
  7. Experience with analyses and insights on unit economics, COGS, and P&L is preferred but not mandatory.
  8. Use Business Intelligence and Data Science tools to answer the appropriate business problems using SQL, Tableau, or Python.
  9. Coordinate with the HQ Data Insights Team and manage internal stakeholders across departments to ensure the smooth delivery of strategic projects.
  10. Work across different departments/functions (BI, DE, tech, pricing, finance, operations, marketing, CS, CX), on high-impact projects, and support business expansion initiatives.





Your Know-How


  • At least a Bachelor's degree in Accounting/Finance/Business or the equivalent.
  • 3-5 years of experience in strategy/consulting/analytical/project management roles; experience in e-commerce, start-ups, or unicorns (CARS24, OLA, SWIGGY, FLIPKART, OYO) or entrepreneurial experience preferred, plus at least 2 years of experience leading a team.
  • Top-notch academics from a Tier 1 college (IIM/IIT/NIT).
  • Must have SQL/PostgreSQL/Tableau experience.
  • Excellent market research, reporting, and analytical skills, including carrying out weekly and monthly reporting.
  • Experience working with a Data/Business Intelligence team.
  • Analytical mindset with the ability to present data in a structured and informative way.
  • Enjoys a fast-paced environment and can align business objectives with product priorities.
  • Good to have: financial modelling, developing financial forecasts, and developing a financial/strategic plan or framework.
Pion Global Solutions LTD
Posted by Sheela P
Mumbai
3 - 100 yrs
₹4L - ₹15L / yr
Spark
Big Data
Hadoop
HDFS
Apache Sqoop
+2 more
Looking for Big Data developers in Mumbai.
Panamax InfoTech Ltd.
Posted by Bhavani P
Remote only
3 - 12 yrs
₹3L - ₹8L / yr
Unix administration
PL/SQL
Java
SQL Query Analyzer
Shell Scripting
+1 more
DBMS & SQL
Concepts of RDBMS, Normalization techniques
Entity Relationship diagram/ ER-Model
Transaction, commit, rollback, ACID properties
Transaction log
Difference in behavior of the column if it is nullable
SQL Statements
Join Operations
DDL, DML, Data Modelling
Optimal query writing with aggregate functions, GROUP BY, HAVING clause, ORDER BY, etc.; should be hands-on with scenario-based query writing
Query optimization techniques, indexing in depth
Understanding query plan
Batching
Locking schemes
Isolation levels
Concept of stored procedure, Cursor, trigger, View
Beginner-level PL/SQL: procedure and function writing skills
Spring JPA and Spring Data basics
Hibernate mappings
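Several of the SQL topics above (aggregate functions, GROUP BY/HAVING, transactions with commit and rollback) can be tried in a few lines. Below is a minimal sketch using Python's built-in sqlite3 module; the orders table and its data are invented purely for illustration.

```python
import sqlite3

# In-memory database with a small, made-up sample table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders (customer, amount) VALUES (?, ?)",
    [("alice", 120.0), ("alice", 80.0), ("bob", 40.0), ("bob", 10.0), ("carol", 300.0)],
)
conn.commit()

# Aggregate query: total per customer, filtered with HAVING, ordered descending.
rows = conn.execute(
    """
    SELECT customer, SUM(amount) AS total
    FROM orders
    GROUP BY customer
    HAVING SUM(amount) > 50
    ORDER BY total DESC
    """
).fetchall()
print(rows)  # [('carol', 300.0), ('alice', 200.0)] -- bob's total (50) is filtered out

# Transaction semantics: an explicit rollback undoes uncommitted changes.
try:
    conn.execute("INSERT INTO orders (customer, amount) VALUES ('dave', 999.0)")
    raise RuntimeError("simulated failure before commit")
except RuntimeError:
    conn.rollback()

count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(count)  # 5 -- the rolled-back insert is gone
```

Note how HAVING filters on the aggregated value (something WHERE cannot do), and how the rollback restores the committed state.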

UNIX
Basic concepts of Unix
Commonly used Unix commands and their options
Combining Unix commands using pipes, filters, etc.
The vi editor and its different modes
Basic shell scripting and basic knowledge of how to execute jar files from a host
File and directory permissions
Application-based scenarios
Crisp Analytics
Posted by Seema Pahwa
Mumbai
2 - 6 yrs
₹6L - ₹15L / yr
Big Data
Spark
Scala
Amazon Web Services (AWS)
Apache Kafka

 

The Data Engineering team is one of the core technology teams of Lumiq.ai and is responsible for creating all the Data related products and platforms which scale for any amount of data, users, and processing. The team also interacts with our customers to work out solutions, create technical architectures and deliver the products and solutions.

If you are someone who is always pondering how to make things better, how technologies can interact, how various tools, technologies, and concepts can help a customer or how a customer can use our products, then Lumiq is the place of opportunities.

 

Who are you?

  • Enthusiast is your middle name. You know what’s new in Big Data technologies and how things are moving
  • Apache is your toolbox and you have been a contributor to open source projects or have discussed the problems with the community on several occasions
  • You use cloud for more than just provisioning a Virtual Machine
  • Vim is friendly to you and you know how to exit Nano
  • You check logs before screaming about an error
  • You are a solid engineer who writes modular code and commits to Git
  • You are a doer who doesn’t say “no” without first understanding
  • You understand the value of documentation of your work
  • You are familiar with the Machine Learning ecosystem and know how you can help your fellow Data Scientists explore data and create production-ready ML pipelines

 

Eligibility

Experience

  • At least 2 years of Data Engineering Experience
  • Have interacted with Customers


Must Have Skills

  • Amazon Web Services (AWS) - EMR, Glue, S3, RDS, EC2, Lambda, SQS, SES
  • Apache Spark
  • Python
  • Scala
  • PostgreSQL
  • Git
  • Linux


Good to have Skills

  • Apache NiFi
  • Apache Kafka
  • Apache Hive
  • Docker
  • Amazon Certification

 

 

Oneture Technologies
Posted by Ravi Mevcha
Mumbai, Navi Mumbai
2 - 4 yrs
₹8L - ₹12L / yr
Spark
Big Data
ETL
Data engineering
ADF
+4 more

Job Overview


We are looking for a Data Engineer to join our data team to solve data-driven critical business problems. The hire will be responsible for expanding and optimizing the existing end-to-end architecture, including the data pipeline architecture. The Data Engineer will collaborate with software developers, database architects, data analysts, data scientists, and the platform team on data initiatives, and will ensure that optimal data delivery architecture is consistent throughout ongoing projects. The right candidate should be hands-on in developing a hybrid set of data pipelines depending on the business requirements.

Responsibilities

  • Develop, construct, test, and maintain existing and new data-driven architectures.
  • Align architecture with business requirements and provide solutions that best fit the business problems.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and Azure 'big data' technologies.
  • Acquire data from multiple sources across the organization.
  • Use programming languages and tools efficiently to collate the data.
  • Identify ways to improve data reliability, efficiency, and quality.
  • Use data to discover tasks that can be automated.
  • Deliver updates to stakeholders based on analytics.
  • Set up practices for data reporting and continuous monitoring.
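The extract-transform-load pattern in the responsibilities above can be sketched end to end in a few lines. The sketch below is illustrative only: the raw CSV feed, its column names, and the revenue table are all made up, and sqlite3 stands in for whatever relational store a real pipeline would target.

```python
import csv
import io
import sqlite3

# Hypothetical raw feed, standing in for one of the "wide variety of data sources".
raw = io.StringIO(
    "event_date,country,revenue\n"
    "2023-01-01,IN,100.5\n"
    "2023-01-01,MY,\n"      # missing revenue -> dropped in the transform step
    "2023-01-02,IN,250.0\n"
)

# Extract: parse the feed into dict records.
records = list(csv.DictReader(raw))

# Transform: drop incomplete rows and cast types.
clean = [
    (r["event_date"], r["country"], float(r["revenue"]))
    for r in records
    if r["revenue"]
]

# Load into a relational store, then aggregate with SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE revenue (event_date TEXT, country TEXT, amount REAL)")
conn.executemany("INSERT INTO revenue VALUES (?, ?, ?)", clean)

totals = conn.execute(
    "SELECT country, SUM(amount) FROM revenue GROUP BY country ORDER BY country"
).fetchall()
print(totals)  # [('IN', 350.5)] -- the incomplete MY row never reaches the store
```

Real pipelines add scheduling, monitoring, and retries around these three steps, but the extract/transform/load boundaries stay the same.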

Required Technical Skills

  • Graduate in Computer Science or a similar quantitative area.
  • 1+ years of relevant work experience as a Data Engineer or in a similar role.
  • Advanced SQL knowledge, data modelling, and experience working with relational databases and query authoring (SQL), as well as working familiarity with a variety of databases.
  • Experience in developing and optimizing ETL pipelines, big data pipelines, and data-driven architectures.
  • Must have strong big data core knowledge and experience in programming using Spark (Python/Scala).
  • Experience with an orchestration tool like Airflow or similar.
  • Experience with Azure Data Factory is good to have.
  • Build processes supporting data transformation, data structures, metadata, dependency, and workload management.
  • Experience supporting and working with cross-functional teams in a dynamic environment.
  • Good understanding of Git workflow; test-case-driven development and CI/CD experience is good to have.
  • Some understanding of Delta tables is good to have.

It would be an advantage if the candidate also has experience using the following software/tools:
  • Big data tools: Hadoop, Spark, Hive, etc.
  • Relational SQL and NoSQL databases
  • Cloud data services
  • Object-oriented/object-function scripting languages: Python, Scala, etc.
Innovative Brand Design Studio
Agency job
via Unnati by Astha Bharadwaj
Mumbai
2 - 5 yrs
₹8L - ₹15L / yr
Data Science
Data Scientist
Python
Tableau
R Programming
+7 more
Come work with a growing consumer market research team that is currently serving one of the biggest FMCG companies in the world.
 
Our client works with global brands and creates projects that are user-centric. They build cost-effective and compelling product stories that help their clients gain a competitive edge and growth in their brand image. Their team of experts consists of academicians, designers, and startup specialists working for clients across 12 countries, targeting new markets and solutions with an excellent understanding of end users.
 
They work with global brands from the FMCG, Beauty, and Hospitality sectors, namely Unilever, Lipton, Lakme, L'Oreal, AXE, etc., who have chosen them for a long-term relationship based on their insights, consumer research, storytelling, and content experience. The founder is a design and product activation expert with over 10 years of impact and over 300 completed projects in India, the UK, South Asia, and the USA.
 
As a Data Scientist, you will help to deliver quantitative consumer primary market research through Survey.
 
What you will do:
  • Handling the survey scripting process using survey software platforms such as Toluna, QuestionPro, or Decipher.
  • Mining large and complex data sets using SQL, Hadoop, NoSQL, or Spark.
  • Delivering complex consumer data analysis using software such as R, Python, and Excel.
  • Working on basic statistical analysis such as t-tests and correlation.
  • Performing more complex data analysis through machine learning techniques such as:
  1. Classification
  2. Regression
  3. Clustering
  4. Text analysis
  5. Neural networking
  • Creating interactive dashboards using software like Tableau or any other software you are able to use.
  • Working on statistical and mathematical modelling, and the application of ML and AI algorithms.

 

What you need to have:
  • Bachelor's or Master's degree in a highly quantitative field (CS, machine learning, mathematics, statistics, economics) or equivalent experience.
  • Eagerness to prove your data analytics skills with one of the biggest FMCG market players.

 

Numerator
Posted by Ketaki Kambale
Remote, Pune
3 - 9 yrs
₹5L - ₹20L / yr
Data Warehouse (DWH)
Informatica
ETL
Python
SQL
+1 more

We’re hiring a talented Data Engineer and Big Data enthusiast to work in our platform to help ensure that our data quality is flawless.  As a company, we have millions of new data points every day that come into our system. You will be working with a passionate team of engineers to solve challenging problems and ensure that we can deliver the best data to our customers, on-time. You will be using the latest cloud data warehouse technology to build robust and reliable data pipelines.

Duties/Responsibilities Include:

  • Develop expertise in the different upstream data stores and systems across Numerator.
  • Design, develop, and maintain data integration pipelines for Numerator's growing data sets and product offerings.
  • Build testing and QA plans for data pipelines.
  • Build data validation testing frameworks to ensure high data quality and integrity.
  • Write and maintain documentation on data pipelines and schemas.
 

Requirements:

  • BS or MS in Computer Science or related field of study
  • 3 + years of experience in the data warehouse space
  • Expert in SQL, including advanced analytical queries
  • Proficiency in Python (data structures, algorithms, object-oriented programming, using APIs)
  • Experience working with a cloud data warehouse (Redshift, Snowflake, Vertica)
  • Experience with a data pipeline scheduling framework (Airflow)
  • Experience with schema design and data modeling

Exceptional candidates will have:

  • Amazon Web Services (EC2, DMS, RDS) experience
  • Terraform and/or Ansible (or similar) for infrastructure deployment
  • Airflow -- Experience building and monitoring DAGs, developing custom operators, using script templating solutions.
  • Experience supporting production systems in an on-call environment
Fragma Data Systems
Posted by Harpreet Kour
Bengaluru (Bangalore)
1 - 6 yrs
₹10L - ₹15L / yr
Data engineering
Big Data
PySpark
SQL
Python
Good experience in PySpark, including DataFrame core functions and Spark SQL
Good experience in SQL databases; able to write queries of fair complexity
Excellent experience in Big Data programming for data transformations and aggregations
Good at ELT architecture; business rules processing and data extraction from a Data Lake into data streams for business consumption
Good customer communication
Good analytical skills
Service based company
Agency job
via Tech - Soul Technologies by Rohini Shinde
Pune
6 - 12 yrs
₹6L - ₹28L / yr
Big Data
Apache Kafka
Data engineering
Cassandra
Java
+1 more

Primary responsibilities:

  • Architect, design, and build high-performance search systems for personalization, optimization, and targeting
  • Design systems with Solr, Akka, Cassandra, and Kafka
  • Algorithmic development with a primary focus on Machine Learning
  • Work with rapid and innovative development methodologies such as Kanban, Continuous Integration, and daily deployments
  • Participate in design and code reviews and recommend improvements
  • Unit testing with JUnit, performance testing and tuning
  • Coordinate with internal and external teams
  • Mentor junior engineers
  • Participate in product roadmap and prioritization discussions and decisions
  • Evangelize the solution with the Professional Services and Customer Success teams

 

Mobile Programming LLC
Posted by Vandana Chauhan
Remote, Chennai
3 - 7 yrs
₹12L - ₹18L / yr
Big Data
skill iconAmazon Web Services (AWS)
Hadoop
SQL
Python
+5 more
Position: Data Engineer  
Location: Chennai- Guindy Industrial Estate
Duration: Full time role
Company: Mobile Programming (https://www.mobileprogramming.com/)
Client Name: Samsung 


We are looking for a Data Engineer to join our growing team of analytics experts. The hire will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, database architects, data analysts, and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.

Responsibilities for Data Engineer
  • Create and maintain optimal data pipeline architecture.
  • Assemble large, complex data sets that meet functional / non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
  • Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs.
  • Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
  • Work with data and analytics experts to strive for greater functionality in our data systems.

Qualifications for Data Engineer
  • Experience building and optimizing big data ETL pipelines, architectures, and data sets.
  • Advanced working SQL knowledge and experience working with relational databases and query authoring (SQL), as well as working familiarity with a variety of databases.
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Strong analytic skills related to working with unstructured datasets.
  • Build processes supporting data transformation, data structures, metadata, dependency, and workload management.
  • A successful history of manipulating, processing, and extracting value from large disconnected datasets.
  • Working knowledge of message queuing, stream processing, and highly scalable 'big data' data stores.
  • Strong project management and organizational skills.
  • Experience supporting and working with cross-functional teams in a dynamic environment.

We are looking for a candidate with 3-6 years of experience in a Data Engineer role who has attained a Graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. They should also have experience using the following software/tools:
  • Big data tools: Spark, Kafka, HBase, Hive, etc.
  • Relational SQL and NoSQL databases
  • AWS cloud services: EC2, EMR, RDS, Redshift
  • Stream-processing systems: Storm, Spark Streaming, etc.
  • Object-oriented/object-function scripting languages: Python, Java, Scala, etc.

Skills: Big Data, AWS, Hive, Spark, Python, SQL
 