
Big Data Engineer

Posted by Taruna Roy
3 - 8 yrs
₹4L - ₹15L / yr
Remote, Pune
Skills
Big Data
Hadoop
Java
Spark
Hibernate (Java)
Apache Kafka
Real time media streaming
Apache Hive
SQL
Apache HBase
Job Title/Designation:
Mid / Senior Big Data Engineer
Job Description:
Role: Big Data Engineer
Number of open positions: 5
Location: Pune
At Clairvoyant, we're building a thriving big data practice to help enterprises enable and accelerate the adoption of big data and cloud services. In the big data space, we lead and serve as innovators, troubleshooters, and enablers. The big data practice at Clairvoyant focuses on solving our customers' business problems by delivering products designed with best-in-class engineering practices and a commitment to keeping the total cost of ownership to a minimum.
Must Have:
  • 4-10 years of experience in software development.
  • At least 2 years of relevant work experience on large-scale data applications.
  • Strong coding experience in Java is mandatory.
  • Good aptitude, strong problem-solving and analytical skills, and the ability to take ownership as appropriate.
  • Should be able to code, debug, performance-tune, and deploy applications to production.
  • Should have good working experience with:
    • the Hadoop ecosystem (HDFS, Hive, YARN, file formats like Avro/Parquet)
    • Kafka
    • J2EE frameworks (Spring/Hibernate/REST)
    • Spark Streaming or any other streaming technology (a minimal sketch follows this list)
  • Ability to work sprint stories to completion, with unit test coverage.
  • Experience working in Agile methodology.
  • Excellent communication and coordination skills.
  • Knowledge of (and preferably hands-on experience with) UNIX environments and various continuous integration tools.
  • Must be able to integrate quickly into the team and work independently towards team goals.
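To make the streaming requirement above concrete, here is a minimal PySpark Structured Streaming sketch of a Kafka-to-HDFS pipeline; this is an illustration, not Clairvoyant's actual stack. The topic name, broker address, and paths are hypothetical, and while the role emphasizes Java, Spark exposes the same API there.

```python
# Minimal sketch: consume a Kafka topic with Spark Structured Streaming and
# land it on HDFS as Parquet. Topic, broker, and paths are hypothetical;
# requires the spark-sql-kafka package on the classpath.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

# Read a stream of events from a hypothetical "events" topic.
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092")
          .option("subscribe", "events")
          .load())

# Kafka delivers the payload as binary; cast it to a string for processing.
parsed = events.select(col("value").cast("string").alias("payload"))

# Write to Parquet with a checkpoint directory for fault tolerance.
query = (parsed.writeStream
         .format("parquet")
         .option("path", "hdfs:///data/events")
         .option("checkpointLocation", "hdfs:///checkpoints/events")
         .start())
query.awaitTermination()
```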
Role & Responsibilities:
  • Take complete responsibility for the execution of sprint stories.
  • Be accountable for delivering tasks within the defined timelines and with good quality.
  • Follow the processes for project execution and delivery.
  • Follow Agile methodology.
  • Work closely with the team lead and contribute to the smooth delivery of the project.
  • Understand/define the architecture and discuss its pros and cons with the team.
  • Take part in brainstorming sessions and suggest improvements to the architecture/design.
  • Work with other team leads to get the architecture/design reviewed.
  • Work with the clients and counterparts (in the US) on the project.
  • Keep all stakeholders updated about the project/task status, risks, and issues, if any.
Education: BE/B.Tech from a reputed institute.
Experience: 4 to 9 years
Keywords: java, scala, spark, software development, hadoop, hive
Locations: Pune

About Clairvoyant India Private Limited

Founded: 2014
Size: 100-1000
Stage: Profitable
About
Clairvoyant is a global technology consulting and services company. We help organizations build innovative products and solutions using big data, analytics, and the cloud. We provide best-in-class solutions and services that leverage big data and continually exceed client expectations. Our deep vertical knowledge, combined with expertise on multiple enterprise-grade big data platforms, helps support purpose-built solutions to meet our clients' business needs. Our global team consists of experienced professionals with backgrounds in design, software engineering, analytics, and data science. Each member of our team is highly energetic and committed to helping our clients achieve their goals.
Connect with the team
Afreen Shaikh
Sandeep Bharate
Unnati Yadav
Taruna Roy
Chakravarthi Peram

Similar jobs

Affine Analytics
Posted by Santhosh M
Bengaluru (Bangalore)
4 - 8 yrs
₹10L - ₹30L / yr
Data Warehouse (DWH)
Informatica
ETL
Google Cloud Platform (GCP)
Airflow

Objective

The Data Engineer will be responsible for expanding and optimizing our data and database architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing and building data systems. The Data Engineer will support our software developers, database architects, data analysts, and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.


Roles and Responsibilities:

  • Should be comfortable building and optimizing performant data pipelines, covering data ingestion, cleansing, and curation into a data warehouse, database, or any other data platform, using Dask/Spark.
  • Experience with distributed computing environments and Spark/Dask architecture.
  • Optimize performance for data access requirements by choosing the appropriate file format (Avro, Parquet, ORC, etc.) and compression codec (a minimal sketch follows this list).
  • Experience writing production-ready code and tests in Python; participate in code reviews to maintain and improve code quality, stability, and supportability.
  • Experience designing data warehouses/data marts.
  • Experience with any RDBMS (preferably SQL Server); must be able to write complex SQL queries.
  • Expertise in requirement gathering, technical design, and functional documents.
  • Experience with Agile/Scrum practices.
  • Experience leading other developers and guiding them technically.
  • Experience deploying data pipelines using an automated CI/CD approach.
  • Ability to write modularized, reusable code components.
  • Proficient at identifying data issues and anomalies during analysis.
  • Strong analytical and logical skills.
  • Must be comfortable tackling new challenges and learning.
  • Must have strong verbal and written communication skills.
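As an illustration of the file-format point above, here is a minimal PySpark sketch of persisting the same DataFrame in different formats and codecs. The paths are hypothetical, and Avro output assumes the external spark-avro package is available.

```python
# Minimal sketch: pick the file format and compression codec that match the
# access pattern. Paths are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("format-sketch").getOrCreate()
df = spark.read.csv("hdfs:///raw/input.csv", header=True, inferSchema=True)

# Columnar Parquet + snappy suits analytical scans that touch few columns.
df.write.option("compression", "snappy").parquet("hdfs:///curated/events_parquet")

# ORC with zlib trades CPU for a smaller storage footprint.
df.write.option("compression", "zlib").orc("hdfs:///curated/events_orc")

# Row-oriented Avro suits record-at-a-time pipelines (needs spark-avro).
df.write.format("avro").save("hdfs:///curated/events_avro")
```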


Required skills:

  • Knowledge of GCP
  • Expertise in Google BigQuery
  • Expertise in Airflow (a minimal DAG sketch follows this list)
  • Good hands-on SQL skills
  • Data warehousing concepts
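A minimal Airflow DAG sketch for the GCP/BigQuery/Airflow combination listed above. The DAG id, bucket, and table names are hypothetical, and it assumes Airflow 2.x with the Google provider package installed.

```python
# Minimal sketch: a daily DAG that loads Parquet files from GCS into BigQuery.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import (
    GCSToBigQueryOperator,
)

with DAG(
    dag_id="gcs_to_bq_daily",            # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Load one day's files into a BigQuery table; names are placeholders.
    load_events = GCSToBigQueryOperator(
        task_id="load_events",
        bucket="example-bucket",
        source_objects=["events/{{ ds }}/*.parquet"],   # templated by run date
        source_format="PARQUET",
        destination_project_dataset_table="example-project.analytics.events",
        write_disposition="WRITE_APPEND",
    )
```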
Panamax InfoTech Ltd.
Posted by Bhavani P
Remote only
3 - 12 yrs
₹3L - ₹8L / yr
Unix administration
PL/SQL
Java
SQL Query Analyzer
Shell Scripting
DBMS & SQL
Concepts of RDBMS, Normalization techniques
Entity Relationship diagram/ ER-Model
Transaction, commit, rollback, ACID properties
Transaction log
How a column behaves differently when it is nullable
SQL Statements
Join Operations
DDL, DML, Data Modelling
Optimal query writing with aggregate functions, GROUP BY, HAVING clause, ORDER BY, etc.; should be hands-on with scenario-based query writing (see the sketch at the end of this list)
Query optimizing technique, Indexing in depth
Understanding query plan
Batching
Locking schemes
Isolation levels
Concept of stored procedure, Cursor, trigger, View
Beginner-level PL/SQL: procedure and function writing skills
Spring JPA and Spring Data basics
Hibernate mappings
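To make the aggregate/GROUP BY/HAVING item above concrete, here is a minimal sketch using Python's built-in sqlite3 module; the table and data are made up for illustration, and the same SQL pattern applies to any RDBMS.

```python
# Minimal sketch of scenario-based query writing: aggregate, GROUP BY,
# HAVING, and ORDER BY over a hypothetical orders table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer_id INTEGER, amount REAL);
    INSERT INTO orders VALUES (1, 120.0), (1, 80.0), (2, 40.0);
""")

# Customers whose total spend exceeds 100, highest total first.
rows = conn.execute("""
    SELECT customer_id, SUM(amount) AS total
    FROM orders
    GROUP BY customer_id
    HAVING SUM(amount) > 100
    ORDER BY total DESC
""").fetchall()
print(rows)  # [(1, 200.0)]
```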

UNIX
Basic concepts of Unix
Commonly used Unix commands and their options
Combining Unix commands using pipes, filters, etc. (a minimal sketch follows this list)
The vi editor and its different modes
Basic-level scripting and basic knowledge of how to execute jar files from the host
File and directory permissions
Application-based scenarios
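A minimal sketch of the pipe/filter and jar-execution items above, driven from Python for illustration; on a real host these would be plain shell one-liners, and the file and jar names are hypothetical.

```python
# Equivalent of: grep ERROR app.log | wc -l
import subprocess

grep = subprocess.run(["grep", "ERROR", "app.log"],
                      capture_output=True, text=True)
print(len(grep.stdout.splitlines()), "error lines")

# Execute a jar from the host and fail loudly on a non-zero exit code.
subprocess.run(["java", "-jar", "loader.jar"], check=True)
```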
EnterpriseMinds
Posted by Rani Galipalli
Remote only
4 - 8 yrs
₹8L - ₹25L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark

Job Description

 

  1. Solid technical skills, with a proven and successful history of working with data at scale and empowering organizations through data.
  2. Big data processing frameworks: Spark, Scala, Hadoop, Hive, Kafka, and EMR with Python (a minimal sketch follows this list).
  3. Advanced, hands-on architecture and administration experience with big data platforms.
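A minimal PySpark-with-Hive sketch of the kind of batch processing the list above names; the database, table, and column names are hypothetical.

```python
# Minimal sketch: a Hive-backed batch aggregation in PySpark.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("hive-agg-sketch")
         .enableHiveSupport()          # requires a configured Hive metastore
         .getOrCreate())

# Aggregate a hypothetical Hive table and persist the result back to Hive.
daily = spark.sql("""
    SELECT event_date, COUNT(*) AS events
    FROM analytics.clickstream
    GROUP BY event_date
""")
daily.write.mode("overwrite").saveAsTable("analytics.daily_event_counts")
```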

 

Agiletech Info Solutions pvt ltd
Posted by Kalaithendral Nagarajan
Chennai
4 - 9 yrs
₹4L - ₹12L / yr
Data Analytics
Data Visualization
PowerBI
Tableau
Qlikview

4-8 years of overall experience.

  • 1-2 years' experience with Azure Data Factory: scheduling jobs in flows and ADF pipelines, performance tuning, error logging, etc.
  • 1+ years of experience with Power BI: designing and developing reports, dashboards, metrics, and visualizations.
  • (Required) Participate in video-conferencing calls: daily stand-up meetings and all-day work with team members on cloud migration planning, development, and support.
  • Proficiency in relational database concepts and design using star schema, Azure Data Warehouse, and Data Vault.
  • Requires 2-3 years of experience with SQL scripting (MERGE, joins, and stored procedures) and best practices.
  • Knowledge of deploying and running SSIS packages in Azure.
  • Knowledge of Azure Databricks.
  • Ability to write and execute complex SQL queries and stored procedures.
A Product Company
Agency job
via wrackle by Lokesh M
Bengaluru (Bangalore)
3 - 6 yrs
₹15L - ₹26L / yr
Looker
Big Data
Hadoop
Spark
Apache Hive
Job Title: Senior Data Engineer/Analyst
Location: Bengaluru
Department: Engineering

Bidgely is looking for an extraordinary and dynamic Senior Data Analyst to be part of its core team in Bangalore. You must have delivered exceptionally high-quality, robust products dealing with large data. Be part of a highly energetic and innovative team that believes nothing is impossible with some creativity and hard work.

Responsibilities 
● Design and implement a high-volume data analytics pipeline in Looker for Bidgely's flagship product.
● Implement data pipelines in the Bidgely Data Lake.
● Collaborate with product management and engineering teams to elicit and understand their requirements and challenges, and develop potential solutions.
● Stay current with the latest tools, technology ideas, and methodologies; share knowledge by clearly articulating results and ideas to key decision makers.

Requirements 
● 3-5 years of strong experience in data analytics and in developing data pipelines.
● Very good expertise in Looker.
● Strong in data modeling, developing SQL queries, and optimizing queries.
● Good knowledge of data warehouses (Amazon Redshift, BigQuery, Snowflake, Hive).
● Good understanding of big data applications (Hadoop, Spark, Hive, Airflow, S3, Cloudera).
● Attention to detail. Strong communication and collaboration skills.
● BS/MS in Computer Science or equivalent from premier institutes.
DataMetica
Posted by Sumangali Desai
Pune
3 - 8 yrs
₹5L - ₹20L / yr
ETL
Data Warehouse (DWH)
IBM InfoSphere DataStage
DataStage
SQL

Datametica is hiring a DataStage Developer

  • Must have 3 to 8 years of experience in ETL design and development using IBM DataStage components.
  • Should have extensive knowledge of Unix shell scripting.
  • Understanding of DW principles (fact and dimension tables, dimensional modelling, and data warehousing concepts).
  • Research, develop, document, and modify ETL processes per data architecture and modeling requirements.
  • Ensure appropriate documentation for all new development and modifications of ETL processes and jobs.
  • Should be good at writing complex SQL queries.

About Us!

A global leader in data warehouse migration and modernization to the cloud, we empower businesses by migrating their data, workloads, ETL, and analytics to the cloud through automation.

 

We have expertise in transforming legacy Teradata, Oracle, Hadoop, Netezza, Vertica, and Greenplum systems, along with ETL tools like Informatica, DataStage, Ab Initio, and others, to cloud-based data warehousing, with further capabilities in data engineering, advanced analytics solutions, data management, data lakes, and cloud optimization.

 

Datametica is a key partner of the major cloud service providers - Google, Microsoft, Amazon, Snowflake.

 

We have our own products!

Eagle – Data warehouse Assessment & Migration Planning Product

Raven – Automated Workload Conversion Product

Pelican - Automated Data Validation Product, which helps automate and accelerate data migration to the cloud.

 

Why join us!

Datametica is a place to innovate, bring new ideas to life, and learn new things. We believe in building a culture of innovation, growth, and belonging. Our people and their dedication over the years are the key factors in achieving our success.

 

Benefits we Provide!

Working with highly technical, passionate, mission-driven people

Subsidized Meals & Snacks

Flexible Schedule

Approachable leadership

Access to various learning tools and programs

Pet Friendly

Certification Reimbursement Policy

 

Check out more about us on our website below!

www.datametica.com

 

A2Tech Consultants
Posted by Dhaval B
Pune
4 - 12 yrs
₹6L - ₹15L / yr
Data engineering
Data Engineer
ETL
Spark
Apache Kafka
We are looking for a smart candidate with:
  • Strong Python coding skills and OOP skills.
  • Should have worked on big data product architecture.
  • Should have worked with any one of the SQL-based databases (MySQL, PostgreSQL) and any one of the NoSQL-based databases (Cassandra, Elasticsearch, etc.).
  • Hands-on experience with Spark APIs: RDD, DataFrame, Dataset.
  • Experience developing ETL for data products.
  • Working knowledge of performance optimization, optimal resource utilization, parallelism, and tuning of Spark jobs (see the sketch below).
  • Working knowledge of file formats: CSV, JSON, XML, Parquet, ORC, Avro.
  • Good to have: working knowledge of one of the analytical databases, like Druid, MongoDB, or Apache Hive.
  • Experience handling real-time data feeds (working knowledge of Apache Kafka or a similar tool is good to have).
Key Skills:
  • Python and Scala (optional), Spark/PySpark, parallel programming
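A minimal sketch of the Spark tuning knobs the list above mentions (shuffle partitions, repartitioning, partitioned output); the paths, partition counts, and column names are hypothetical.

```python
# Minimal sketch: parallelism and resource-utilization knobs in a PySpark ETL.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = (SparkSession.builder
         .appName("etl-tuning-sketch")
         .config("spark.sql.shuffle.partitions", "200")  # size to the cluster
         .getOrCreate())

raw = spark.read.json("s3a://example-bucket/raw/")       # hypothetical path

# Filter early, then repartition by the write key to control parallelism
# and avoid producing many small files.
clean = (raw.filter(col("status") == "ok")
            .repartition(64, "event_date"))

(clean.write
      .partitionBy("event_date")
      .mode("append")
      .parquet("s3a://example-bucket/curated/"))
```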
Elucidata Corporation
Posted by Bhuvnesh Sharma
Remote, NCR (Delhi | Gurgaon | Noida)
4 - 6 yrs
₹15L - ₹20L / yr
Big Data
JavaScript
AngularJS (1.x)
React.js
About Elucidata:
Our mission is to make data-driven understanding of disease the default starting point in the drug discovery process. Our products & services further the understanding of the ways in which diseased cells are different from healthy ones. This understanding helps scientists discover new drugs more effectively and complements the move towards personalization. Biological big data will outpace data generated by YouTube and Twitter by 10x in the next 7 years. Our platform, Polly, will enable scientists to process different kinds of biological data and generate insights from them to accelerate drug discovery. Polly is already being used at premier biopharma companies like Pfizer and Agios, and at academic labs at Yale, MIT, and Washington University. We are looking for teammates who think out-of-the-box and are not satisfied with quick fixes or canned solutions to our industry's most challenging problems. If you seek an intellectually stimulating environment where you can have a major impact on a critically important industry, we'd like to talk to you.
About the Role:
We are looking for engineers who want to build data-rich applications and love the end-to-end product journey, from understanding customer needs to the final product.
Key Responsibilities:
- Developing web applications to visualize and process scientific data.
- Interacting with Product, Design, and Engineering teams to spec, build, test, and deploy new features.
- Understanding user needs and the science behind them.
- Mentoring junior developers.
Requirements:
- Minimum 3-4 years of experience working in web development.
- In-depth knowledge of JavaScript.
- Hands-on experience with modern frameworks (Angular, React).
- Sound programming and computer science fundamentals.
- Good understanding of web architecture and single-page applications.
You might be a great cultural fit for Elucidata if:
- You are passionate about science.
- You are a self-learner who wants to keep learning every day.
- You regard your code as your craft that you want to keep honing.
- You like to work hard to solve big challenges and enjoy the process of breaking down a problem one blow at a time.
- You love science and can't stop being the geek at a party. Of course, you party harder than everybody else there.
Nitor Infotech
Posted by Balakumar Mohan
Pune
9 - 100 yrs
₹13L - ₹25L / yr
Amazon Web Services (AWS)
Big Data
Business Intelligence (BI)
The hunt is for an AWS Big Data/DWH Architect with the ability to manage effective relationships with a wide range of stakeholders (customers and team members alike). The incumbent will demonstrate personal commitment and accountability to ensure standards are continuously sustained and improved, both within the internal teams and with partner organizations and suppliers.

We at Nitor Infotech, a product engineering services company, are always on the hunt for the best talent in the IT industry, in keeping with our trend of "What next in IT". We are scouting for result-oriented resources with a passion for products, technology services, and creating great customer experiences; someone who can take the current expertise and footprint of Nitor Infotech Inc. to an altogether different dimension and level, in tune with emerging market trends, and ensure Brilliance @ Work continues to prevail in whatever we do.

Nitor Infotech works with global ISVs to help them build and accelerate their product development. Nitor is able to do so because product development is its DNA. This DNA is enriched by 10 years of expertise, best practices, and frameworks & accelerators. Because of this ability, Nitor Infotech has been able to build business relationships with product companies having revenues from $50 million to $1 billion.

  • 7-12+ years of relevant experience in the database, BI, and analytics space, with 0-2 years of architecting and designing data warehouses, including 2 to 3 years in the big data ecosystem
  • Experience in data warehouse design in AWS
  • Strong architecting, programming, and design skills, and a proven track record of architecting and building large-scale, distributed big data solutions
  • Professional and technical advice on big data concepts and technologies, in particular highlighting the business potential through real-time analysis
  • Provides technical leadership in the big data space (Hadoop stack: M/R, HDFS, Pig, Hive, HBase, Flume, Sqoop, etc.; NoSQL stores: MongoDB, Cassandra, HBase, etc.)
  • Performance tuning of Hadoop clusters and Hadoop MapReduce routines
  • Evaluate and recommend the big data technology stack for the platform
  • Drive significant technology initiatives end to end and across multiple layers of architecture
  • Should have a breadth of BI knowledge, including MSBI, database design, and newer visualization tools like Tableau, QlikView, and Power BI
  • Understand the internals and intricacies of old and new DB platforms, including: strong RDBMS fundamentals in any of SQL Server/MySQL/Oracle; DB and DWH design; designing semantic models using OLAP and tabular models using MS and non-MS tools; NoSQL DBs, including document, graph, search, and columnar DBs
  • Excellent communication skills and a strong ability to build good rapport with prospective and existing customers
  • Be a mentor and go-to person for junior team members

Qualification & Experience: Educational qualification: BE/ME/B.Tech/M.Tech, BCA/MCA/BCS/MCS, or any other degree with a relevant IT qualification.
zeotap India Pvt Ltd
Posted by Projjol Banerjea
Bengaluru (Bangalore)
6 - 10 yrs
₹5L - ₹40L / yr
Python
Big Data
Hadoop
Scala
Spark