Data Engineer
Propellor.ai

Posted by Anila Nair
2 - 5 yrs
₹5L - ₹15L / yr
Remote only
Skills
SQL
API
Python
Spark

Job Description - Data Engineer

About us
Propellor brings Marketing Analytics and other business workflows to the cloud ecosystem. We work with international clients to make their analytics ambitions come true, deploying the latest tech stack and data science and engineering methods to make their business data insightful and actionable.

What is the role?
This team is responsible for building a Data Platform for many different units. The platform will be built on the cloud, so in this role you will organize and orchestrate different data sources and give recommendations on the services that fulfil goals based on the type of data.

Qualifications:

• Experience with Python, SQL, and Spark
• Working knowledge of JavaScript
• Knowledge of data processing, data modeling, and algorithms
• Strong grasp of data, software, and system design patterns and architecture
• Experience building and maintaining APIs
• Strong soft skills and communication
Nice to have:
• Experience with cloud: Google Cloud Platform, AWS, Azure
• Knowledge of Google Analytics 360 and/or GA4.

Key Responsibilities
• Design and develop the platform based on a microservices architecture.
• Work on the core backend and ensure it meets the performance benchmarks.
• Work on the front end with ReactJS.
• Design and develop APIs for the front end to consume (see the sketch after this list).
• Constantly improve the architecture of the application by clearing the technical backlog.
• Meet both technical and consumer needs.
• Stay abreast of developments in web applications and programming languages.
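
To make the API responsibility above concrete, here is a minimal sketch of a backend endpoint a front end could consume. FastAPI and every name in it (CampaignMetric, /metrics, the in-memory store) are illustrative assumptions, not part of Propellor's actual stack.

```python
# Minimal sketch of a backend API endpoint for a front end to consume.
# FastAPI and all names here are illustrative assumptions, not the
# actual Propellor stack.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class CampaignMetric(BaseModel):
    campaign_id: str
    impressions: int
    clicks: int

# A real service would query a database or warehouse instead.
FAKE_STORE = [
    CampaignMetric(campaign_id="c-001", impressions=1200, clicks=87),
    CampaignMetric(campaign_id="c-002", impressions=950, clicks=40),
]

@app.get("/metrics", response_model=list[CampaignMetric])
def list_metrics() -> list[CampaignMetric]:
    """Return all campaign metrics as JSON for the front end to render."""
    return FAKE_STORE
```

Run locally with `uvicorn main:app`, and a ReactJS front end can fetch JSON from `/metrics`.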

What are we looking for?
An enthusiastic individual with the following skills. Please do not hesitate to apply even if you do not match all of them. We are open to promising candidates who are passionate about their work and are team players.
• Education - BE/MCA or equivalent.
• Agnostic/polyglot across multiple tech stacks.
• Experience with open-source technologies - NodeJS, ReactJS, MySQL, NoSQL, MongoDB, DynamoDB.
• Good experience with front-end technologies like ReactJS.
• Backend exposure - good knowledge of building APIs.
• Experience with serverless technologies.
• Efficient at building microservices that combine server and front end.
• Knowledge of cloud architecture.
• Sound working experience with relational and columnar databases.
• Innovative and communicative in approach.
• Responsible for the functional/technical track of a project.

Whom will you work with?
You will closely work with the engineering team and support the Product Team.

The hiring process includes:

a. A written test on Python and SQL

b. 2 - 3 rounds of interviews

Immediate joiners will be preferred.


About Propellor.ai

Founded: 2016
Size: 20-100
Stage: Raised funding

Who we are


At Propellor, we are passionate about solving the data unification challenges our clients face. We build solutions using the latest tech stack, and we believe all solutions lie in the congruence of business, technology, and data science. Combining the three, our team of young data professionals solves real-world problems. Here's what we live by:


Skin in the game
We believe that individual and collective success orientations both propel us ahead.

Cross-fertility
Borrowing from and building on one another's varied perspectives means we are always viewing business problems with a fresh lens.

Sub-25s
A bunch of young turks who keep our explorer mindset alive and kicking.

Future-proofing
Keeping an eye ahead, we are constantly upskilling to stay relevant at any given point in time.

Tech agile
Tech changes quickly. Whatever your stack, we adapt speedily and easily.


If you are evaluating us to be your next employer, we urge you to read more about our team and culture here: https://bit.ly/3ExSNA2. We assure you, it's worth a read!

Tech Stack
Python
Apache Spark
Hadoop
Machine Learning (ML)

Connect with the team
Anila Nair
abhijat shukla
Kajal Jain

Similar jobs

It's a deep-tech and research company.
Bengaluru (Bangalore)
3 - 8 yrs
₹10L - ₹25L / yr
Data Science
Python
Natural Language Processing (NLP)
Deep Learning
Long short-term memory (LSTM)
+8 more
Job Description: 
 
We are seeking passionate engineers experienced in software development using Machine Learning (ML) and Natural Language Processing (NLP) techniques to join our development team in Bangalore, India. We're a fast-growing startup working on an enterprise product: an intelligent data extraction platform for various types of documents.
 
Your responsibilities: 
 
• Build, improve, and extend NLP capabilities
• Research and evaluate different approaches to NLP problems
• Write code that is well designed and produces deliverable results
• Write code that scales and can be deployed to production
 
You must have: 
 
• A solid grounding in statistical methods is a must
• Experience in named entity recognition, POS tagging, lemmatization, vector representations of textual data, and neural networks - RNNs, LSTMs (see the sketch after this list)
• A solid foundation in Python, data structures, algorithms, and general software development skills
• Ability to apply machine learning to problems that deal with language
• Engineering ability to build robustly scalable pipelines
• Ability to work in a multi-disciplinary team with a strong product focus
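
As a rough illustration of the NER, POS tagging, and lemmatization skills above, a minimal sketch follows. spaCy and its en_core_web_sm model are assumptions for illustration; the posting does not name a specific library.

```python
# Minimal sketch of NER, POS tagging, and lemmatization using spaCy.
# spaCy and the en_core_web_sm model are assumptions; the posting does
# not prescribe a specific NLP library.
import spacy

# Requires: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

doc = nlp("Apple is opening a new office in Bangalore next year.")

# Named entity recognition: spans with entity labels.
for ent in doc.ents:
    print(ent.text, ent.label_)        # e.g. Apple ORG, Bangalore GPE

# POS tags and lemmas for each token.
for token in doc:
    print(token.text, token.pos_, token.lemma_)
```
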
PiaKo Store
Posted by PiaKo Store
Kolkata
4 - 8 yrs
₹12L - ₹24L / yr
Python
Amazon Web Services (AWS)
ETL

We are a rapidly expanding global technology partner looking for a highly skilled Senior (Python) Data Engineer to join their exceptional Technology and Development team. The role is based in Kolkata. If you are passionate about demonstrating your expertise and thrive on collaborating with a group of talented engineers, then this role was made for you!

At the heart of technology innovation, our client specializes in delivering cutting-edge solutions to clients across a wide array of sectors. With a strategic focus on finance, banking, and corporate verticals, they have earned a stellar reputation for their commitment to excellence in every project they undertake.

We are searching for a senior engineer to strengthen their global projects team. They seek an experienced Senior Data Engineer with a strong background in building Extract, Transform, Load (ETL) processes and a deep understanding of AWS serverless cloud environments.

As a vital member of the data engineering team, you will play a critical role in designing, developing, and maintaining data pipelines that facilitate data ingestion, transformation, and storage for our organization.

Your expertise will contribute to the foundation of our data infrastructure, enabling data-driven decision-making and analytics.

Key Responsibilities:

  • ETL Pipeline Development: Design, develop, and maintain ETL processes using Python, AWS Glue, or other serverless technologies to ingest data from various sources (databases, APIs, files), transform it into a usable format, and load it into data warehouses or data lakes (see the sketch after this list).
  • AWS Serverless Expertise: Leverage AWS services such as AWS Lambda, AWS Step Functions, AWS Glue, AWS S3, and AWS Redshift to build serverless data pipelines that are scalable, reliable, and cost-effective.
  • Data Modeling: Collaborate with data scientists and analysts to understand data requirements and design appropriate data models, ensuring data is structured optimally for analytical purposes.
  • Data Quality Assurance: Implement data validation and quality checks within ETL pipelines to ensure data accuracy, completeness, and consistency.
  • Performance Optimization: Continuously optimize ETL processes for efficiency, performance, and scalability, monitoring and troubleshooting any bottlenecks or issues that may arise.
  • Documentation: Maintain comprehensive documentation of ETL processes, data lineage, and system architecture to ensure knowledge sharing and compliance with best practices.
  • Security and Compliance: Implement data security measures, encryption, and compliance standards (e.g., GDPR, HIPAA) as required for sensitive data handling.
  • Monitoring and Logging: Set up monitoring, alerting, and logging systems to proactively identify and resolve data pipeline issues.
  • Collaboration: Work closely with cross-functional teams, including data scientists, data analysts, software engineers, and business stakeholders, to understand data requirements and deliver solutions.
  • Continuous Learning: Stay current with industry trends, emerging technologies, and best practices in data engineering and cloud computing and apply them to enhance existing processes.
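
As a minimal sketch of the serverless ETL responsibilities above: an AWS Lambda handler that extracts a raw CSV from S3, applies a trivial transformation, and loads the result back. The event shape, bucket layout, and transformation are illustrative assumptions.

```python
# Minimal sketch of a serverless ETL step: a Lambda handler that reads
# a raw CSV from S3, normalizes one column, and writes the result to a
# curated prefix. Event shape, keys, and the transform are assumptions.
import csv
import io

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    bucket = event["bucket"]          # assumed event shape
    key = event["key"]                # e.g. "orders/2024-01-01.csv"

    # Extract: fetch the raw object from S3.
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
    rows = list(csv.DictReader(io.StringIO(body)))
    if not rows:
        return {"rows_processed": 0}

    # Transform: a trivial example, normalizing country codes to upper case.
    for row in rows:
        row["country"] = row.get("country", "").strip().upper()

    # Load: write the transformed rows under a curated prefix.
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
    s3.put_object(Bucket=bucket, Key=f"curated/{key}", Body=out.getvalue())

    return {"rows_processed": len(rows)}
```

The same extract-transform-load shape carries over to AWS Glue jobs, with Glue's own abstractions in place of the csv module.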

Qualifications:

  • Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
  • Proven experience as a Data Engineer with a focus on ETL pipeline development.
  • Strong proficiency in Python programming.
  • In-depth knowledge of AWS serverless technologies and services.
  • Familiarity with data warehousing concepts and tools (e.g., Redshift, Snowflake).
  • Experience with version control systems (e.g., Git).
  • Strong SQL skills for data extraction and transformation.
  • Excellent problem-solving and troubleshooting abilities.
  • Ability to work independently and collaboratively in a team environment.
  • Effective communication skills for articulating technical concepts to non-technical stakeholders.
  • Certifications such as AWS Certified Data Analytics - Specialty or AWS Certified DevOps Engineer are a plus.

Preferred Experience:

  • Knowledge of data orchestration and workflow management tools.
  • Familiarity with data visualization tools (e.g., Tableau, Power BI).
  • Previous experience in industries with strict data compliance requirements (e.g., insurance, finance) is beneficial.

What You Can Expect:

- Innovation Abounds: Join a company that constantly pushes the boundaries of technology and encourages creative thinking. Your ideas and expertise will be valued and put to work in pioneering solutions.

- Collaborative Excellence: Be part of a team of engineers who are as passionate and skilled as you are. Together, you'll tackle challenging projects, learn from each other, and achieve remarkable results.

- Global Impact: Contribute to projects with a global reach and make a tangible difference. Your work will shape the future of technology in finance, banking, and corporate sectors.

They offer an exciting and professional environment with great career and growth opportunities. The office is located in the heart of Salt Lake Sector V, a workspace that's both accessible and inspiring, and team members enjoy a professional work environment with regular team outings. Joining means becoming part of a vibrant and dynamic team where your skills will be valued, your creativity will be nurtured, and your contributions will make a difference, working alongside some of the brightest minds in the industry.

If you're ready to take your career to the next level and be part of a dynamic team that's driving innovation on a global scale, we want to hear from you.

Apply today for more information about this exciting opportunity.

Onsite Location: Kolkata, India (Salt Lake Sector V)


Kloud9 Technologies
Bengaluru (Bangalore)
4 - 7 yrs
₹10L - ₹30L / yr
Google Cloud Platform (GCP)
PySpark
Python
Scala

About Kloud9:

Kloud9 exists with the sole purpose of providing cloud expertise to the retail industry. Our team of cloud architects, engineers, and developers helps retailers launch a successful cloud initiative so they can quickly realise the benefits of cloud technology. Our standardised, proven cloud adoption methodologies reduce adoption time and effort, translating directly into lower migration costs.

Kloud9 was founded with the vision of bridging the gap between e-commerce and the cloud. E-commerce in any industry is constrained by the heavy finances spent on physical data infrastructure.

At Kloud9, we know migrating to the cloud is the single most significant technology shift your company faces today. We are your trusted advisors in transformation and are determined to build a deep partnership along the way. Our cloud and retail experts will ease your transition to the cloud.

Our sole focus is to provide cloud expertise to the retail industry, empowering our clients to take their business to the next level. Our architects, engineers, and developers have been designing, building, and implementing solutions for retailers for an average of more than 20 years.

We are a cloud vendor that is both platform and technology independent. Our vendor independence not only gives us a unique perspective on the cloud market but also ensures that we deliver the cloud solutions that best meet our clients' requirements.


●    Overall 8+ years of experience in web application development.

●    5+ years of development experience with Java 8, Spring Boot, microservices, and middleware.

●    3+ years designing middleware on the NodeJS platform.

●    Good to have: 2+ years of experience using NodeJS with the AWS serverless platform.

●    Good experience with JavaScript/TypeScript, event loops, ExpressJS, GraphQL, SQL databases (MySQL), NoSQL databases (MongoDB), and YAML templates.

●    Good experience with Test-Driven Development (TDD) and automated unit testing.

●    Good experience exposing and consuming REST APIs on the Java 8/Spring Boot platform with Swagger API contracts.

●    Good experience building NodeJS middleware performing transformations, routing, aggregation, orchestration, and authentication (JWT/OAuth).

●    Experience supporting and working with cross-functional teams in a dynamic environment.

●    Experience working with the Agile Scrum methodology.

●    Very good problem-solving skills.

●    A quick learner with a passion for technology.

●    Excellent verbal and written communication skills in English.

●    Ability to communicate effectively with team members and business stakeholders.


Secondary Skill Requirements:

● Experience working with any of LoopBack, NestJS, hapi.js, Sails.js, Passport.js


Why Explore a Career at Kloud9:

With job opportunities in prime locations in the US, London, Poland, and Bengaluru, we help you build a career path in cutting-edge technologies like AI, machine learning, and data science. Be part of an inclusive and diverse workforce that's changing the face of retail technology with its creativity and innovative solutions. Our vested interest in our employees translates into delivering the best products and solutions to our customers.

Gurugram, Pune, Bengaluru (Bangalore), Delhi, Noida, Ghaziabad, Faridabad
2 - 9 yrs
₹8L - ₹20L / yr
Python
Hadoop
Big Data
Spark
Data engineering
+3 more

Key Responsibilities (Data Developer - Python, Spark)

Experience: 2 to 9 yrs

Development of data platforms, integration frameworks, processes, and code.

Develop and deliver APIs in Python or Scala for Business Intelligence applications built using a range of web languages.

Develop comprehensive automated tests for features via end-to-end integration tests, performance tests, acceptance tests, and unit tests.

Elaborate stories in a collaborative agile environment (Scrum or Kanban).

Familiarity with cloud platforms like GCP, AWS, or Azure.

Experience with large data volumes.

Familiarity with writing REST-based services.

Experience with distributed processing and systems.

Experience with Hadoop/Spark toolsets (see the sketch below).

Experience with relational database management systems (RDBMS).

Experience with Data Flow development.

Knowledge of Agile and associated development techniques.
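
As a rough sketch of the Hadoop/Spark line above, a small PySpark batch transformation follows; the input path, column names, and output location are illustrative assumptions.

```python
# Minimal PySpark sketch of a batch transformation: read raw JSON
# events, aggregate per user per day, and write Parquet. Paths and
# column names are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("event-aggregation").getOrCreate()

events = spark.read.json("s3a://example-bucket/raw/events/")  # assumed path

daily_counts = (
    events
    .withColumn("day", F.to_date("timestamp"))        # assumed column
    .groupBy("user_id", "day")
    .agg(F.count("*").alias("event_count"))
)

daily_counts.write.mode("overwrite").parquet(
    "s3a://example-bucket/curated/daily_counts/"
)
```
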

Banyan Data Services
Posted by Sathish Kumar
Bengaluru (Bangalore)
3 - 15 yrs
₹6L - ₹20L / yr
Data Science
Data Scientist
MongoDB
Java
Big Data
+14 more

Senior Big Data Engineer

Note: Notice period of 45 days.

Banyan Data Services (BDS) is a US-based data-focused company that specializes in comprehensive data solutions and services, headquartered in San Jose, California, USA.

We are looking for a Senior Hadoop Big Data Engineer with expertise in solving complex data problems across a big data platform. You will be part of our development team based out of Bangalore. This team focuses on the most innovative and emerging data infrastructure software and services to support highly scalable and available infrastructure.

It's a once-in-a-lifetime opportunity to join our rocket-ship startup, run by a world-class executive team. We are looking for candidates who aspire to be a part of the cutting-edge solutions and services we offer to address next-gen data evolution challenges.

Key Qualifications

·   5+ years of experience working with Java and Spring technologies

· At least 3 years of programming experience working with Spark on big data, including experience with data profiling and building transformations

· Knowledge of microservices architecture is a plus

· Experience with NoSQL databases such as HBase, MongoDB, or Cassandra

· Experience with Kafka or any streaming tools

· Knowledge of Scala would be preferable

· Experience with agile application development

· Exposure to cloud technologies, including containers and Kubernetes

· Demonstrated experience performing DevOps for platforms

· Strong skills in data structures and algorithms, with an eye for efficient, low-complexity code

· Exposure to graph databases

· Passion for learning new technologies and the ability to do so quickly

· A Bachelor's degree in a computer-related field or equivalent professional experience is required

Key Responsibilities

· Scope and deliver solutions, with the ability to design solutions independently based on high-level architecture

· Design and develop big data-focused microservices

· Work on big data infrastructure, distributed systems, data modeling, and query processing

· Build software with cutting-edge technologies in the cloud

· Willingness to learn new technologies and take on research-oriented projects

· Proven interpersonal skills; contribute to team efforts by accomplishing related results as needed

Accolite Software
Posted by Nikita Sadarangani
Remote, Bengaluru (Bangalore)
3 - 10 yrs
₹5L - ₹24L / yr
Data Science
R Programming
Python
Deep Learning
Neural networks
+3 more
  • Adept at machine learning techniques and algorithms.
  • Feature selection, dimensionality reduction, and building and optimizing classifiers using machine learning techniques.
  • Data mining using state-of-the-art methods.
  • Doing ad-hoc analysis and presenting results.
  • Proficiency in using query languages such as N1QL and SQL.
  • Experience with data visualization tools such as D3.js, ggplot, Plotly, PyPlot, etc.
  • Creating automated anomaly detection systems and constantly tracking their performance (see the sketch after this list).
  • Strong Python skills are a must.
  • Strong data analysis and mining skills are a must.
  • Deep learning, neural networks, CNNs, image processing (must).
  • Building analytic systems: data collection, cleansing, and integration.
  • Experience with NoSQL databases such as Couchbase, MongoDB, Cassandra, and HBase.
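
For the anomaly detection point above, a minimal sketch using scikit-learn's IsolationForest follows; the library choice and synthetic data are assumptions, since the posting requires strong Python but no specific toolkit.

```python
# Minimal sketch of automated anomaly detection with scikit-learn's
# IsolationForest. The library choice and synthetic data are
# illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Mostly "normal" points with a few injected outliers.
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 2))
outliers = rng.uniform(low=-8.0, high=8.0, size=(10, 2))
X = np.vstack([normal, outliers])

model = IsolationForest(contamination=0.02, random_state=42).fit(X)

# predict() returns -1 for anomalies and 1 for inliers.
labels = model.predict(X)
print(f"flagged {np.sum(labels == -1)} of {len(X)} points as anomalous")
```

In production, the "constant tracking" part would mean re-scoring new data on a schedule and alerting on the flagged fraction.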

Fragma Data Systems
Posted by Priyanka U
Remote only
4 - 10 yrs
₹12L - ₹23L / yr
Informatica
ETL
Big Data
Spark
SQL
Skill: Informatica with Big Data Management (BDM)

1. Minimum 6 to 8 years of experience in Informatica BDM development
2. Experience working with Spark/SQL
3. Develops Informatica mappings/SQL
4. Should have experience with Hadoop, Spark, etc.

Work days: Sunday to Thursday
Day shift

Bengaluru (Bangalore)
3 - 5 yrs
₹12L - ₹14L / yr
Data Engineer
Big Data
Python
Amazon Web Services (AWS)
SQL
+2 more
  • We are looking for a Data Engineer with 3-5 years of experience in Python, SQL, AWS (EC2, S3, Elastic Beanstalk, API Gateway), and Java.
  • The applicant must be able to perform data mapping (data type conversion, schema harmonization) using Python, SQL, and Java (see the sketch after this list).
  • The applicant must be familiar with and have programmed ETL interfaces (OAuth, REST API, ODBC) using the same languages.
  • The company is looking for someone who shows an eagerness to learn and who asks concise questions when communicating with teammates.
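
A minimal pandas sketch of the data-mapping task above: renaming columns to a common schema and coercing types. All column names and date formats are illustrative assumptions.

```python
# Minimal sketch of data mapping with pandas: harmonize two source
# schemas into one target schema with type conversion. All column
# names and date formats are illustrative assumptions.
import pandas as pd

# Two sources exposing the same data under different schemas.
source_a = pd.DataFrame({"CustomerID": ["1", "2"], "signup": ["2023-01-05", "2023-02-11"]})
source_b = pd.DataFrame({"cust_id": [3, 4], "signup_date": ["05/03/2023", "17/04/2023"]})

TARGET_COLUMNS = ["customer_id", "signup_date"]

def harmonize(df, rename_map, date_format):
    """Rename columns to the target schema and coerce types."""
    out = df.rename(columns=rename_map)
    out["customer_id"] = out["customer_id"].astype("int64")
    out["signup_date"] = pd.to_datetime(out["signup_date"], format=date_format)
    return out[TARGET_COLUMNS]

unified = pd.concat(
    [
        harmonize(source_a, {"CustomerID": "customer_id", "signup": "signup_date"}, "%Y-%m-%d"),
        harmonize(source_b, {"cust_id": "customer_id"}, "%d/%m/%Y"),
    ],
    ignore_index=True,
)
print(unified.dtypes)
```
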
Maveric Systems
Posted by Rashmi Poovaiah
Bengaluru (Bangalore), Chennai, Pune
4 - 10 yrs
₹8L - ₹15L / yr
Big Data
Hadoop
Spark
Apache Kafka
HiveQL
+2 more

Role Summary/Purpose:

We are looking for Developers/Senior Developers to be part of building an advanced analytical platform leveraging Big Data technologies and transforming legacy systems. This is an exciting, fast-paced, constantly changing and challenging work environment, and you will play an important role in resolving and influencing high-level decisions.

Requirements:

  • The candidate must be a self-starter who can work under general guidelines in a fast-paced environment.
  • Overall, a minimum of 4 to 8 years of software development experience and 2 years of Data Warehousing domain knowledge.
  • Must have 3 years of hands-on working knowledge of Big Data technologies such as Hadoop, Hive, HBase, Spark, Kafka, Spark Streaming, Scala, etc. (see the sketch after this list).
  • Excellent knowledge of SQL and Linux shell scripting.
  • Bachelor's/Master's/Engineering degree from a well-reputed university.
  • Strong communication, interpersonal, learning, and organizing skills, matched with the ability to manage stress, time, and people effectively.
  • Proven experience in coordinating many dependencies and multiple demanding stakeholders in a complex, large-scale deployment environment.
  • Ability to manage a diverse and challenging stakeholder community.
  • Diverse knowledge and experience of working on Agile deliveries and Scrum teams.
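
For the Kafka and Spark Streaming requirement above, a minimal Spark Structured Streaming sketch follows; the broker address and topic are assumptions, and the job must be submitted with the spark-sql-kafka connector package available.

```python
# Minimal sketch of consuming a Kafka topic with Spark Structured
# Streaming. The broker address and topic name are assumptions, and
# the job needs the spark-sql-kafka-0-10 package on the classpath.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-stream").getOrCreate()

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # assumed broker
    .option("subscribe", "events")                        # assumed topic
    .load()
)

# Kafka delivers key/value as binary; decode the value for processing.
decoded = events.select(F.col("value").cast("string").alias("raw_event"))

query = (
    decoded.writeStream
    .format("console")       # console sink, purely for illustration
    .outputMode("append")
    .start()
)
query.awaitTermination()
```
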

 

Responsibilities

  • Work as a senior developer/individual contributor as the situation requires
  • Be part of Scrum discussions and gather requirements
  • Adhere to the Scrum timeline and deliver accordingly
  • Participate in a team environment for design, development, and implementation
  • Take on L3 activities on a need basis
  • Prepare unit/SIT/UAT test cases and log the results
  • Coordinate SIT and UAT testing; take feedback and provide necessary remediation/recommendations in time
  • Treat quality delivery and automation as a top priority
  • Coordinate changes and deployments in time
  • Create healthy harmony within the team
  • Own interaction points with members of the core team (e.g., BA, testing, and business teams) and any other relevant stakeholders

Jiva adventures
Posted by Bharat Chinhara
Bengaluru (Bangalore)
1 - 4 yrs
₹5L - ₹15L / yr
Data Science
Python
Machine Learning (ML)
Natural Language Processing (NLP)
  • Experienced in building machine learning pipelines (see the sketch below).
  • Proficient in Python and scientific packages like pandas, NumPy, scikit-learn, Matplotlib, etc.
  • Experience with techniques such as data mining, distributed computing, applied mathematics and algorithms, and probability and statistics.
  • Strong problem-solving and conceptual-thinking abilities.
  • Hands-on experience in model building.
  • Building highly customized and optimized data pipelines integrating third-party APIs and in-house data sources.
  • Extracting features from text data using tools like spaCy.
  • Deep learning for NLP using any modern framework.
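
As a minimal sketch of the machine learning pipeline skills above, using scikit-learn; the synthetic dataset and chosen steps are illustrative assumptions.

```python
# Minimal sketch of a machine learning pipeline with scikit-learn:
# scaling followed by a classifier, evaluated on a held-out split.
# The synthetic dataset and chosen steps are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipeline = Pipeline([
    ("scale", StandardScaler()),                 # normalize features
    ("clf", LogisticRegression(max_iter=1000)),  # simple baseline model
])

pipeline.fit(X_train, y_train)
print(f"held-out accuracy: {pipeline.score(X_test, y_test):.3f}")
```

Bundling preprocessing and the model in one Pipeline keeps the exact same transformations applied at training and prediction time.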