Data Engineer
Liquintel

Posted by Kamal Prithiani
0 - 1 yrs
₹2L - ₹3L / yr
Bengaluru (Bangalore)
Skills
Big Data
Apache Spark
MongoDB
Relational Database (RDBMS)
Apache Hive
SQL

We are looking for BE/BTech graduates (2018/2019 batch) who want to build their careers as Data Engineers, covering technologies such as Hadoop, NoSQL, RDBMS, Spark, Kafka, Hive, ETL, MDM and Data Quality. You should be willing to learn, explore, experiment and develop POCs/solutions using these technologies, with guidance and support from highly experienced industry leaders. You should be passionate about your work and willing to go the extra mile to achieve results.

We are looking for candidates who believe in commitment and in building strong relationships. We need people who are passionate about solving problems through software and are flexible.

Required Experience, Skills and Qualifications

Passionate to learn and explore new technologies

Any RDBMS experience (SQL Server/Oracle/MySQL)

Any ETL tool experience (Informatica/Talend/Kettle/SSIS)

Understanding of Big Data technologies

Good Communication Skills

Excellent Mathematical / Logical / Reasoning Skills

Users love Cutshort
Read about what our users have to say about finding their next opportunity on Cutshort.

Subodh Popalwar

Software Engineer, Memorres
For 2 years, I had trouble finding a company with good work culture and a role that will help me grow in my career. Soon after I started using Cutshort, I had access to information about the work culture, compensation and what each company was clearly offering.

About Liquintel

Founded: 2018
Size: 20-100
Stage: Bootstrapped
About: N/A
Company social profiles: N/A

Connect with the team: Kamal Prithiani

Similar jobs

Gipfel & Schnell Consultings Pvt Ltd
TanmayaKumar Pattanaik
Posted by TanmayaKumar Pattanaik
Bengaluru (Bangalore)
3 - 9 yrs
₹9L - ₹30L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+10 more

Qualifications & Experience:


▪ 2 - 4 years overall experience in ETL, data pipelines, Data Warehouse development and database design

▪ Software solution development using Hadoop technologies such as MapReduce, Hive, Spark, Kafka, YARN/Mesos, etc.

▪ Expert in SQL, with at least 2 years of work on advanced SQL

▪ Good development skills in Java, Python or other languages

▪ Experience with EMR, S3

▪ Knowledge and exposure to BI applications, e.g. Tableau, Qlikview

▪ Comfortable working in an agile environment

SmartHub Innovation Pvt Ltd
Sathya Venkatesh
Posted by Sathya Venkatesh
Bengaluru (Bangalore)
5 - 7 yrs
₹15L - ₹20L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+3 more

JD Code: SHI-LDE-01 

Version#: 1.0 

Date of JD Creation: 27-March-2023 

Position Title: Lead Data Engineer 

Reporting to: Technical Director 

Location: Bangalore Urban, India (on-site) 

 

SmartHub.ai (www.smarthub.ai) is a fast-growing startup headquartered in Palo Alto, CA, with offices in Seattle and Bangalore. We operate at the intersection of AI, IoT and Edge Computing. With strategic investments from leaders in infrastructure and data management, SmartHub.ai is redefining the Edge IoT space. Our “Software Defined Edge” products help enterprises rapidly accelerate their Edge infrastructure management and intelligence. We empower enterprises to leverage their Edge environment to increase revenue, improve operational efficiency, and manage safety and digital risks using Edge and AI technologies. 

 

SmartHub is an equal opportunity employer committed to nurturing a workplace culture that supports, inspires and respects all individuals, and encourages employees to bring their best selves to work, laugh and share. We seek builders from a variety of backgrounds, perspectives and skills to join our team. 

Summary 

This role requires the candidate to translate business and product requirements to build, maintain and optimize data systems, which can be relational or non-relational in nature. The candidate is expected to tune and analyse data, including for short- and long-term trend analysis, reporting, and AI/ML use cases. 

We are looking for a talented technical professional with at least 8 years of proven experience in owning, architecting, designing, operating and optimising databases that are used for large scale analytics and reports. 

Responsibilities 

  • Provide technical & architectural leadership for the next generation of product development. 
  • Innovate, Research & Evaluate new technologies and tools for a quality output. 
  • Architect, Design and Implement ensuring scalability, performance and security. 
  • Code and implement new algorithms to solve complex problems. 
  • Analyze complex data, develop, optimize and transform large data sets both structured and unstructured. 
  • Deploy and administer databases and continuously tune them for performance, especially on container orchestration stacks such as Kubernetes. 
  • Develop analytical models and solutions. 
  • Mentor junior members technically in architecture, design and robust coding. 
  • Work in an Agile development environment while continuously evaluating and improving engineering processes. 

Required 

  • At least 8 years of experience with significant depth in designing and building scalable distributed database systems for enterprise-class products; experience of working in product development companies. 
  • Should have been feature/component lead for several complex features involving large datasets. 
  • Strong background in relational and non-relational databases such as Postgres, MongoDB and Hadoop. 
  • Deep expertise in database optimization and tuning; SQL, time-series databases, Apache Drill, HDFS and Spark are good to have. 
  • Excellent analytical and problem-solving skill sets. 
  • Experience in tuning databases for high throughput is highly desirable. 
  • Exposure to database provisioning, configuration and tuning in a highly available mode, in Kubernetes and non-Kubernetes environments. 
  • Demonstrated ability to provide technical leadership and mentoring to the team. 


Persistent Systems
Agency job
via Milestone HR Consultancy by Haina Khan
Bengaluru (Bangalore), Hyderabad, Pune
9 - 16 yrs
₹7L - ₹32L / yr
Big Data
Scala
Spark
Hadoop
Python
+1 more
We have an urgent requirement for the post of Big Data Architect at a reputed MNC.

Location: Pune/Nagpur, Goa, Hyderabad/Bangalore

Job Requirements:

  • 9 years and above of total experience, preferably in the big data space.
  • Creating Spark applications using Scala to process data.
  • Experience in scheduling and troubleshooting/debugging Spark jobs in steps.
  • Experience in Spark job performance tuning and optimization.
  • Should have experience in processing data using Kafka/Python.
  • Should have experience and understanding of configuring Kafka topics to optimize performance.
  • Should be proficient in writing SQL queries to process data in a Data Warehouse.
  • Hands-on experience working with Linux commands to troubleshoot/debug issues, and creating shell scripts to automate tasks.
  • Experience with AWS services like EMR.
Think n Solutions
TnS HR
Posted by TnS HR
Remote only
1.5 - 6 yrs
Best in industry
Microservices
RESTful APIs
Microsoft SQL Server
SQL Server Integration Services (SSIS)
SQL Server Reporting Services (SSRS)
+10 more

*Apply only if you are serving your notice period.


HIRING SQL Developers
with a maximum notice period of 20 days


Job ID: TNS2023DB01

Who Should apply?

  • Only for serious job seekers who are ready to work the night shift
  • Technically strong candidates who are willing to take up challenging roles and want to raise their career graph
  • No DBAs & BI Developers, please

 

Why Think n Solutions Software?

  • Exposure to the latest technology
  • Opportunity to work on different platforms
  • Rapid Career Growth
  • Friendly Knowledge-Sharing Environment

 

Criteria:

  • BE/MTech/MCA/MSc
  • 2+ years of hands-on experience in MS SQL / NoSQL
  • Immediate joiners preferred/ Maximum notice period between 15 to 20 days
  • Candidates will be selected based on logical/technical and scenario-based testing
  • Work time - 10:00 pm to 6:00 am

 

Note: Candidates who have attended the interview process with TnS in the last 6 months will not be eligible.

 

 

Job Description:

 

  1. Technical Skills Desired:
    1. Experience in MS SQL Server and one of these relational DBs: PostgreSQL / AWS Aurora DB / MySQL, or any NoSQL DB (MongoDB / DynamoDB / DocumentDB), in an application development environment, and eagerness to switch
    2. Design database tables, views, indexes
    3. Write functions and procedures for Middle Tier Development Team
    4. Work with any front-end developers in completing the database modules end to end (hands-on experience in the parsing of JSON & XML in Stored Procedures would be an added advantage).
    5. Query Optimization for performance improvement
    6. Design & develop SSIS Packages or any other Transformation tools for ETL
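As a small illustration of item 4 above (parsing JSON inside the database layer), here is a hedged, self-contained sketch. It uses SQLite's `json_extract()` as a stand-in for SQL Server's `OPENJSON`/`JSON_VALUE`, so it runs anywhere Python is available; table and column names are invented for the example.

```python
import sqlite3

# Illustrative sketch only: pulling values out of a JSON column with SQL.
# SQLite's json_extract() stands in for SQL Server's OPENJSON / JSON_VALUE.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, payload TEXT)")
conn.execute(
    "INSERT INTO orders (payload) VALUES (?)",
    ('{"customer": "Asha", "items": [{"sku": "A1", "qty": 2}]}',),
)

# Extract scalar values from the JSON payload using SQL alone.
row = conn.execute(
    "SELECT json_extract(payload, '$.customer'),"
    "       json_extract(payload, '$.items[0].qty') FROM orders"
).fetchone()
print(row)  # -> ('Asha', 2)
```

In SQL Server itself the equivalent logic would live inside a stored procedure, which is the "added advantage" the listing refers to.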

 

  2. Functional Skills Desired:
    1. Experience in the Banking / Insurance / Retail domain would be an added advantage.
    2. Client interaction experience would be an added advantage.

 

  3. Good to Have Skills:

  1. Knowledge in a Cloud Platform (AWS / Azure)
  2. Knowledge on version control system (SVN / Git)
  3. Exposure to Quality and Process Management
  4. Knowledge in Agile Methodology

 

  4. Soft Skills (additional):
    1. Team building (attitude to train, work alongside, and mentor juniors)
    2. Communication skills (all kinds)
    3. Quality consciousness
    4. Analytical approach to business requirements
    5. Out-of-the-box thinking for business solutions
Games 24x7
Agency job
via zyoin by Shubha N
Bengaluru (Bangalore)
0 - 6 yrs
₹10L - ₹21L / yr
PowerBI
Big Data
Hadoop
Apache Hive
Business Intelligence (BI)
+5 more
Location: Bangalore
Work Timing: 5 Days A Week

Responsibilities include:

• Ensure the right stakeholders get the right information at the right time
• Requirements gathering with stakeholders to understand their data requirements
• Creating and deploying reports
• Participate actively in datamarts design discussions
• Work on both RDBMS as well as Big Data for designing BI Solutions
• Write code (queries/procedures) in SQL / Hive / Drill that is both functional and elegant,
following appropriate design patterns
• Design and plan BI solutions to automate regular reporting
• Debugging, monitoring and troubleshooting BI solutions
• Creating and deploying datamarts
• Writing relational and multidimensional database queries
• Integrate heterogeneous data sources into BI solutions
• Ensure Data Integrity of data flowing from heterogeneous data sources into BI solutions.

Minimum Job Qualifications:
• BE/B.Tech in Computer Science/IT from Top Colleges
• 1-5 years of experience in Data Warehousing and SQL
• Excellent Analytical Knowledge
• Excellent technical as well as communication skills
• Attention to even the smallest detail is mandatory
• Knowledge of SQL query writing and performance tuning
• Knowledge of Big Data technologies like Apache Hadoop, Apache Hive, Apache Drill
• Knowledge of fundamentals of Business Intelligence
• In-depth knowledge of RDBMS systems, Datawarehousing and Datamarts
• Smart, motivated and team oriented
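The "SQL query writing and performance tuning" qualification above can be sketched concretely. This is a minimal, hedged example using SQLite as a stand-in for any RDBMS: it adds an index and checks via `EXPLAIN QUERY PLAN` that the engine switches from a full table scan to an index search. Table and index names are invented for illustration.

```python
import sqlite3

# Hedged sketch of SQL performance tuning: compare the query plan for a
# filtered SELECT before and after adding an index on the filter column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, ts TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [(i % 100, "2023-01-01") for i in range(1000)],
)

def plan(sql):
    # EXPLAIN QUERY PLAN rows carry a human-readable detail column last.
    return " ".join(r[-1] for r in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM events WHERE user_id = 42"
print(plan(query))  # before: a full table scan ("SCAN ... events")

conn.execute("CREATE INDEX idx_events_user ON events (user_id)")
print(plan(query))  # after: "SEARCH ... USING INDEX idx_events_user"
```

The same before/after plan comparison is how tuning work is typically verified on SQL Server (`SET SHOWPLAN`) or Hive (`EXPLAIN`).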
Desirable Requirements
• Sound knowledge of software development in Programming (preferably Java )
• Knowledge of the software development lifecycle (SDLC) and models
Technology service company
Remote only
5 - 10 yrs
₹10L - ₹20L / yr
Relational Database (RDBMS)
NOSQL Databases
NOSQL
Performance tuning
SQL
+10 more

Preferred Education & Experience:

  • Bachelor’s or master’s degree in Computer Engineering, Computer Science, Computer Applications, Mathematics, Statistics or a related technical field, or equivalent practical experience. At least 3 years of relevant experience in lieu of the above if from a different stream of education.

  • 5+ years of demonstrable hands-on experience with:
    ▪ Data Analysis & Data Modeling
    ▪ Database Design & Implementation
    ▪ Database Performance Tuning & Optimization
    ▪ PL/pgSQL & SQL

  • 5+ years of hands-on development experience in Relational Database (PostgreSQL/SQL Server/Oracle).

  • 5+ years of hands-on development experience in SQL, PL/PgSQL, including stored procedures, functions, triggers, and views.

  • Hands-on, demonstrable working experience with Database Design Principles, SQL Query Optimization Techniques, Index Management, Integrity Checks, Statistics, and Isolation Levels

  • Hands-on, demonstrable working experience in Database Read & Write Performance Tuning & Optimization.

  • Knowledge of and working experience with Domain Driven Design (DDD) concepts, Object Oriented Programming (OOP) concepts, Cloud Architecture concepts, and NoSQL database concepts is an added value

  • Knowledge and working experience in Oil & Gas, Financial, & Automotive Domains is a plus

  • Hands-on development experience in one or more NoSQL data stores such as Cassandra, HBase, MongoDB, DynamoDB, Elasticsearch, Neo4j, etc. is a plus.

UAE Client
Remote, Bengaluru (Bangalore), Hyderabad
6 - 10 yrs
₹15L - ₹22L / yr
Informatica
Big Data
SQL
Hadoop
Apache Spark
+1 more

Skills- Informatica with Big Data Management

 

1. Minimum 6 to 8 years of experience in Informatica BDM development
2. Experience working with Spark/SQL
3. Develops Informatica mappings/SQL
4. Should have experience in Hadoop, Spark, etc.
Impetus
Rohit Agrawal
Posted by Rohit Agrawal
Remote, Bengaluru (Bangalore), Noida, Gurugram
3 - 6 yrs
₹10L - ₹25L / yr
Python
PySpark
Data engineering
Big Data
Hadoop
+2 more
  • Experience providing technical leadership in the Big Data space (Hadoop stack: Spark, M/R, HDFS, Pig, Hive, HBase, Flume, Sqoop, etc.). Should have contributed to open-source Big Data technologies.
  • Expert-level proficiency in Python
  • Experience in visualizing and evangelizing next-generation infrastructure in Big Data space (Batch, Near Real-time, Real-time technologies).
  • Passionate for continuous learning, experimenting, applying, and contributing towards cutting edge open source technologies and software paradigms
  • Strong understanding and experience in distributed computing frameworks, particularly Apache Hadoop 2.0 (YARN; MR & HDFS) and associated technologies.
  • Hands-on experience with Apache Spark and its components (Streaming, SQL, MLlib)
  • Operating knowledge of cloud computing platforms (AWS, especially EMR, EC2, S3, SWF services, and the AWS CLI)
  • Experience working within a Linux computing environment, and use of command-line tools including knowledge of shell/Python scripting for automating common tasks
DataMetica
Nikita Aher
Posted by Nikita Aher
Pune, Hyderabad
7 - 12 yrs
₹12L - ₹33L / yr
Big Data
Hadoop
Spark
Apache Spark
Apache Hive
+3 more

Job description

Role : Lead Architecture (Spark, Scala, Big Data/Hadoop, Java)

Primary Location : India-Pune, Hyderabad

Experience : 7 - 12 Years

Management Level: 7

Joining Time: Immediate Joiners are preferred


  • Attend requirements-gathering workshops, estimation discussions, design meetings and status review meetings.
  • Experience of solution design and solution architecture for the data engineering model, to build and implement Big Data projects on-premises and on the cloud.
  • Align architecture with business requirements and stabilize the developed solution.
  • Ability to build prototypes to demonstrate the technical feasibility of your vision.
  • Professional experience facilitating and leading solution design, architecture and delivery planning activities for data-intensive and high-throughput platforms and applications.
  • Able to benchmark systems, analyse system bottlenecks and propose solutions to eliminate them.
  • Able to help programmers and project managers in the design, planning and governance of implementing projects of any kind.
  • Develop, construct, test and maintain architectures, and run sprints for development and rollout of functionalities.
  • Data analysis and code development experience, ideally in Big Data: Spark, Hive, Hadoop, Java, Python, PySpark.
  • Execute projects of various types, i.e. design, development, implementation and migration of functional analytics models/business logic across architecture approaches.
  • Work closely with business analysts to understand the core business problems and deliver efficient IT solutions for the product.
  • Deploy sophisticated analytics programs using cloud applications.


Perks and Benefits we Provide!


  • Working with Highly Technical and Passionate, mission-driven people
  • Subsidized Meals & Snacks
  • Flexible Schedule
  • Approachable leadership
  • Access to various learning tools and programs
  • Pet Friendly
  • Certification Reimbursement Policy
  • Check out more about us on our website below!

www.datametica.com

SpringML
Sai Raj Sampath
Posted by Sai Raj Sampath
Remote, Hyderabad
4 - 9 yrs
₹12L - ₹20L / yr
Big Data
Data engineering
TensorFlow
Apache Spark
Java
+2 more
REQUIRED SKILLS:

• Total of 4+ years of experience in developing, architecting/designing and implementing software solutions for enterprises.

• Must have strong programming experience in either Python or Java/J2EE.

• Minimum of 4+ years’ experience working with various cloud platforms, preferably Google Cloud Platform.

• Experience in architecting and designing solutions leveraging Google Cloud products such as BigQuery, Cloud Dataflow, Cloud Pub/Sub, Cloud Bigtable and TensorFlow will be highly preferred.

• Presentation skills with a high degree of comfort speaking with management and developers

• The ability to work in a fast-paced work environment

• Excellent communication, listening, and influencing skills

RESPONSIBILITIES:

• Lead teams to implement and deliver software solutions for Enterprises by understanding their requirements.

• Communicate efficiently and document the Architectural/Design decisions to customer stakeholders/subject matter experts.

• Learn new products quickly, rapidly comprehend new technical areas (technical/functional), and apply detailed and critical thinking to customer solutions.

• Implementing and optimizing cloud solutions for customers.

• Migration of Workloads from on-prem/other public clouds to Google Cloud Platform.

• Provide solutions to team members for complex scenarios.

• Promote good design and programming practices with various teams and subject matter experts.

• Ability to work on any product on the Google cloud platform.

• Must be hands-on and be able to write code as required.

• Ability to lead junior engineers and conduct code reviews



QUALIFICATION:

• Minimum B.Tech/B.E Engineering graduate
Why apply to jobs via Cutshort

Personalized job matches
Stop wasting time. Get matched with jobs that meet your skills, aspirations and preferences.

Verified hiring teams
See actual hiring teams, find common social connections or connect with them directly. No 3rd-party agencies here.

Move faster with AI
We use AI to get you faster responses, recommendations and an unmatched user experience.

21,01,133 matches delivered · 37,12,187 network size · 15,000 companies hiring

Did not find a job you were looking for?
Search for relevant jobs from 10,000+ companies such as Google, Amazon & Uber actively hiring on Cutshort.