

Propellor.ai
https://www.propellor.ai/About
Who we are
At Propellor, we are passionate about solving the Data Unification challenges our clients face. We build solutions using the latest tech stack, and we believe all solutions lie in the congruence of Business, Technology, and Data Science. Combining the three, our team of young data professionals solves real-world problems. Here's what we live by:
Skin in the game
We believe that Individual and Collective success orientations both propel us ahead.
Cross Fertility
Borrowing from and building on one another’s varied perspectives means we are always viewing business problems with a fresh lens.
Sub 25's
A bunch of young Turks who keep our explorer mindset alive and kicking.
Future-proofing
Keeping an eye ahead, we are upskilling constantly, staying relevant at any given point in time.
Tech Agile
Tech changes quickly. Whatever your stack, we adapt speedily and easily.
If you are evaluating us to be your next employer, we urge you to read more about our team and culture here: https://bit.ly/3ExSNA2. We assure you, it's worth a read!
Jobs at Propellor.ai




Job Description: Data Scientist
At Propellor.ai, we derive insights that allow our clients to make scientific decisions. We believe in demanding more from the fields of Mathematics, Computer Science, and Business Logic; combining these, we show our clients a 360-degree view of their business. In this role, the Data Scientist will be expected to work on Procurement problems along with a team based across the globe.
We are a Remote-First Company.
Read more about us here: https://www.propellor.ai/consulting
What will help you be successful in this role
- Articulate
- High Energy
- Passion to learn
- High sense of ownership
- Ability to work in a fast-paced and deadline-driven environment
- Loves technology
- Highly skilled at Data Interpretation
- Problem solver
- Ability to narrate the story to the business stakeholders
- Ability to generate insights and turn them into actions and decisions
Skills to work in a challenging, complex project environment
- Naturally curious, with a passion for understanding consumer behavior
- A high level of motivation, passion, and a strong sense of ownership
- Excellent communication skills needed to manage an incredibly diverse slate of work, clients, and team personalities
- Flexibility to work on multiple projects in a deadline-driven, fast-paced environment
- Ability to work with ambiguity and manage the chaos
Key Responsibilities
- Analyze data to unlock insights: Ability to identify relevant insights and actions from data. Use regression, cluster analysis, time series, etc. to explore relationships and trends in response to stakeholder questions and business challenges (a minimal sketch of this kind of analysis follows this list).
- Bring in experience with AI and ML: Apply industry experience to build efficient and optimal Machine Learning solutions.
- Exploratory Data Analysis (EDA) and Generate Insights: Analyse internal and external datasets using analytical techniques, tools, and visualization methods. Ensure pre-processing/cleansing of data and evaluate data points across the enterprise landscape and/or external data points that can be leveraged in machine learning models to generate insights.
- DS and ML Model Identification and Training: Identify, test, and train machine learning models that need to be leveraged for business use cases. Evaluate models based on interpretability, performance, and accuracy as required. Experiment and identify features from datasets that will help influence model outputs. Determine which models will need to be deployed and which data points need to be fed into them, and aid in the deployment and maintenance of models.
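As a rough illustration of the analysis mentioned above, here is a minimal sketch using pandas and scikit-learn. The CSV path and the column names (spend, impressions, conversions) are hypothetical placeholders, not an actual Propellor dataset.

    # Minimal EDA + modelling sketch; file path and column names are hypothetical.
    import pandas as pd
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LinearRegression

    df = pd.read_csv("campaign_metrics.csv")  # hypothetical dataset
    df = df.dropna(subset=["spend", "impressions", "conversions"])

    # Quick exploratory summary of the numeric columns.
    print(df[["spend", "impressions", "conversions"]].describe())

    # Cluster campaigns into rough spend/performance segments.
    features = df[["spend", "impressions", "conversions"]]
    df["segment"] = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(features)

    # Simple regression: how do spend and impressions relate to conversions?
    X, y = df[["spend", "impressions"]], df["conversions"]
    model = LinearRegression().fit(X, y)
    print(dict(zip(X.columns, model.coef_)), model.intercept_)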
Technical Skills
We are looking for an enthusiastic individual with the following skills. Please do not hesitate to apply if you do not match all of them; we are open to promising candidates who are passionate about their work, fast learners, and team players.
- Strong experience with machine learning and AI, including regression, forecasting, time series, cluster analysis, classification, image recognition, NLP, text analytics, and computer vision.
- Strong experience with advanced analytics tools and object-oriented/functional scripting in languages such as Python or similar.
- Strong experience with popular database programming languages including SQL.
- Strong experience in Spark/PySpark (a short sketch follows this list)
- Experience working in Databricks
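For reference, a minimal PySpark sketch of the kind of aggregation work this implies; the parquet path and column names are hypothetical, and on Databricks the spark session is already provided.

    # Minimal PySpark aggregation sketch; path and column names are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("propellor-sketch").getOrCreate()

    orders = spark.read.parquet("/data/orders")  # hypothetical dataset
    daily = (
        orders
        .withColumn("order_date", F.to_date("order_ts"))
        .groupBy("order_date", "channel")
        .agg(F.sum("revenue").alias("revenue"),
             F.countDistinct("customer_id").alias("customers"))
    )
    daily.orderBy("order_date").show(10)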
What company benefits do you get when you join us?
- Permanent Work from Home Opportunity
- Opportunity to work with Business Decision Makers and an internationally based team
- A work environment that offers limitless learning
- A culture void of any bureaucracy or hierarchy
- A culture of being open, direct, and with mutual respect
- A fun, high-caliber team that trusts you and provides the support and mentorship to help you grow
- The opportunity to work on high-impact business problems that are already defining the future of Marketing and improving real lives
To know more about how we work: https://bit.ly/3Oy6WlE
Whom will you work with?
You will closely work with other Senior Data Scientists and Data Engineers.
Immediate to 15-day Joiners will be preferred.


Job Description - Data Engineer
About us
Propellor is aimed at bringing Marketing Analytics and other Business Workflows to the Cloud ecosystem. We work with international clients to make their analytics ambitions come true by deploying the latest tech stack and data science and engineering methods, making their business data insightful and actionable.
What is the role?
This team is responsible for building a Data Platform for many different units. The platform will be built on the Cloud, so in this role the individual will be organizing and orchestrating different data sources and giving recommendations on the services that fulfil goals based on the type of data.
Qualifications:
• Experience with Python, SQL, Spark
• Working knowledge of JavaScript
• Knowledge of data processing, data modeling, and algorithms
• Strong in data, software, and system design patterns and architecture
• API building and maintaining
• Strong soft skills, communication
Nice to have:
• Experience with cloud: Google Cloud Platform, AWS, Azure
• Knowledge of Google Analytics 360 and/or GA4.
Key Responsibilities
• Design and develop platform based on microservices architecture.
• Work on the core backend and ensure it meets the performance benchmarks.
• Work on the front end with ReactJS.
• Designing and developing APIs for the front end to consume (a minimal sketch follows this list).
• Constantly improve the architecture of the application by clearing the technical backlog.
• Meeting both technical and consumer needs.
• Staying abreast of developments in web applications and programming languages.
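As a loose illustration of the API work described above, here is a minimal Flask endpoint sketch. The route, payload fields, and in-memory store are hypothetical placeholders, not Propellor's actual backend.

    # Minimal Flask API sketch; route and fields are hypothetical.
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    # In-memory stand-in for a real datastore; a production service would use a DB.
    SOURCES = {}

    @app.post("/api/data-sources")
    def register_source():
        payload = request.get_json(force=True)
        source_id = len(SOURCES) + 1
        SOURCES[source_id] = {"name": payload.get("name"), "type": payload.get("type")}
        return jsonify({"id": source_id, **SOURCES[source_id]}), 201

    @app.get("/api/data-sources/<int:source_id>")
    def get_source(source_id):
        source = SOURCES.get(source_id)
        return (jsonify(source), 200) if source else (jsonify({"error": "not found"}), 404)

    if __name__ == "__main__":
        app.run(debug=True)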
What are we looking for?
We are looking for an enthusiastic individual with the following skills. Please do not hesitate to apply if you do not match all of them. We are open to promising candidates who are passionate about their work and are team players.
• Education - BE/MCA or equivalent.
• Agnostic/Polyglot with multiple tech stacks.
• Worked on open-source technologies – NodeJS, ReactJS, MySQL, NoSQL, MongoDB, DynamoDB.
• Good experience with Front-end technologies like ReactJS.
• Backend exposure – good knowledge of building API.
• Worked on serverless technologies.
• Efficient at building microservices that combine the server and front end.
• Knowledge of cloud architecture.
• Should have sound working experience with relational and columnar databases.
• Should be innovative and communicative in approach.
• Will be responsible for the functional/technical track of a project.
Whom will you work with?
You will closely work with the engineering team and support the Product Team.
Hiring Process includes:
a. Written Test on Python and SQL
b. 2 - 3 rounds of Interviews
Immediate Joiners will be preferred


Big Data Engineer/Data Engineer
What we are solving
Welcome to today’s business data world where:
• Unification of all customer data into one platform is a challenge
• Extraction is expensive
• Business users do not have the time/skill to write queries
• High dependency on tech team for written queries
These facts may look scary, but real-time self-serve analytics offers solutions:
• Fully automated data integration from any kind of data source into a universal schema
• An analytics database that streamlines data indexing, query, and analysis into a single platform
• Start generating value from Day 1 through deep dives, root-cause analysis, and micro-segmentation
At Propellor.ai, this is what we do.
• We help our clients reduce effort and increase effectiveness quickly
• By clearly defining the scope of projects
• By using dependable, scalable, future-proof technology solutions like Big Data and Cloud Platforms
• By engaging Data Scientists and Data Engineers to provide end-to-end solutions, leading to the industrialisation of Data Science model development and deployment
What we have achieved so far
Since we started in 2016,
• We have worked across 9 countries with 25+ global brands and 75+ projects
• We have 50+ clients, 100+ Data Sources and 20TB+ data processed daily
Work culture at Propellor.ai
We are a small, remote team that believes in
• Working with a few, but only the highest-quality team members who want to become the very best in their fields.
• With each member's belief and faith in what we are solving, we collectively see the Big Picture.
• Having no hierarchy means anyone can reach the decision maker without hesitation, so our actions have fruitful and aligned outcomes.
• Each one is the CEO of their domain. So, every choice we make is aimed at letting our employees and clients succeed together!
To read more about us click here:
https://bit.ly/3idXzs0
About the role
We are building an exceptional team of Data Engineers who are passionate developers and want to push the boundaries to solve complex business problems using the latest tech stack. As a Big Data Engineer, you will work with various Technology and Business teams to deliver our Data Engineering offerings to our clients across the globe.
Role Description
• The role would involve big data pre-processing & reporting workflows including collecting, parsing, managing, analysing, and visualizing large sets of data to turn information into business insights
• Develop the software and systems needed for end-to-end execution on large projects
• Work across all phases of SDLC, and use Software Engineering principles to build scalable solutions
• Build the knowledge base required to deliver increasingly complex technology projects
• The role would also involve testing various machine learning models on Big Data and deploying learned models for ongoing scoring and prediction (a minimal sketch follows this list).
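As a hedged illustration of that last point, here is a minimal Spark MLlib sketch of training a model once, persisting it, and reloading it for batch scoring. The table paths, column names, and model name are hypothetical.

    # Minimal Spark MLlib sketch: train, persist, and reload for batch scoring.
    # Table paths and column names are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.ml import Pipeline, PipelineModel
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.classification import LogisticRegression

    spark = SparkSession.builder.appName("scoring-sketch").getOrCreate()

    train = spark.read.parquet("/data/train")  # needs 'label' plus the feature columns
    assembler = VectorAssembler(inputCols=["spend", "visits"], outputCol="features")
    lr = LogisticRegression(labelCol="label", featuresCol="features")
    model = Pipeline(stages=[assembler, lr]).fit(train)
    model.write().overwrite().save("/models/churn_lr")  # persist for ongoing scoring

    # Later, in the recurring scoring job:
    scored = PipelineModel.load("/models/churn_lr").transform(spark.read.parquet("/data/new_batch"))
    scored.select("prediction", "probability").show(5)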
Education & Experience
• B.Tech. or equivalent degree in CS/CE/IT/ECE/EEE
• 3+ years of experience designing technological solutions to complex data problems, and developing & testing modular, reusable, efficient, and scalable code to implement those solutions
Must have (hands-on) experience
• Python and SQL expertise
• Distributed computing frameworks (Hadoop Ecosystem & Spark components)
• Must be proficient in at least one cloud computing platform (AWS/Azure/GCP)
• Experience in any cloud platform would be preferred – GCP (BigQuery/Bigtable, Pub/Sub, Dataflow, App Engine), AWS, or Azure (a brief sketch follows the Desirable list below)
• Linux environment, SQL, and shell scripting
Desirable
• Statistical or machine learning DSL like R
• Distributed and low latency (streaming) application architecture
• Row-store distributed DBMSs such as Cassandra, CouchDB, MongoDB, etc.
• Familiarity with API design
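As a small example of the cloud-warehouse side of this work, here is a hedged sketch of running a query against BigQuery from Python. The project, dataset, and table names are hypothetical placeholders.

    # Minimal BigQuery query sketch; project/dataset/table names are hypothetical.
    from google.cloud import bigquery

    client = bigquery.Client(project="my-analytics-project")  # hypothetical project id

    sql = """
        SELECT channel, SUM(revenue) AS revenue
        FROM `my-analytics-project.marketing.daily_orders`
        GROUP BY channel
        ORDER BY revenue DESC
    """
    df = client.query(sql).to_dataframe()  # requires pandas and db-dtypes installed
    print(df.head())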
Hiring Process:
1. One phone screening round to gauge your interest and knowledge of fundamentals
2. An assignment to test your skills and ability to come up with solutions in a certain time
3. Interview 1 with our Data Engineer lead
4. Final Interview with our Data Engineer Lead and the Business Teams
Immediate joiners will be preferred.
