
We take pride in letting you know that you are interviewing with the world's largest edtech company, and the reason for our exceptional growth lies in our DNA. A true Byjuite defines his or her own limits in terms of effort and reward. There is no cap on the incentive you can earn: the average incentive earned by a BDA in the system is around 30k monthly, but our best associates make 70-80k in incentives every month on top of their fixed salary. On average, a performing associate gets his or her first opportunity to become a manager and head a team of 20 people within 1.5-2 years, and we have seen the best associates become managers before even completing a year.
The entire development journey has two phases:
- A 5-day SGDP (Sales Grooming & Development Program)
- A 4-week OJT (On the Job Training)
1) SGDP (Sales Grooming & Development Program)
To begin with, you will be enrolled in a 5-day Sales Grooming & Development Program. The SGDP will be virtual (work from home) and will cover the theoretical aspects of BYJU’S sales process.
- You will get well-versed with the organization, our products, and how to interact with customers.
- You will be involved in various activities during this phase and will be rigorously evaluated at different intervals.
- During this period, you will gain knowledge, a platform to practice your sales skills, and feedback from our team of experienced facilitators to enhance your capabilities.
- These 5 days are purely invested in your learning, and no remuneration is provided during this period.
Candidates who satisfactorily complete the SGDP phase will be given the opportunity to move to the next phase, On the Job Training (OJT).
OJT is a 4-week training program conducted in Bangalore or Delhi. On successful completion of this training program, applicants will be given the opportunity to join BYJU'S as a Business Development Associate.
2) OJT (On the Job Training)
- The OJT Phase will start for the candidates who have successfully cleared the SGDP.
- During the 4 weeks of the OJT phase, candidates will be expected to generate revenue by putting the theoretical knowledge acquired in the first phase into practice.
- Following the completion of the OJT, all input and output numbers will be closely monitored and reviewed to evaluate your eligibility to apply for the Business Development Associate (BDA) position at BYJU'S.
Only qualifying trainees will be offered the BDA role (subject to approval from management).
The entire role is for 6 days a week (Monday is the weekly off). The assessment parameters include performance metrics, process adherence, behavioural aspects, quality audits, and feedback from the manager/trainer and HR team. Upon successfully passing the training program, you become eligible for the role of BDA - Inside Sales at 8 LPA (5 fixed + 3 variable). You will be able to start in the offered role location only after you join the organization as a BDA.
Allowance (ATP during OJT): INR 1,250/week + INR 1,250 travel allowance, once per tenure
Stipend (ATP during OJT): an additional stipend of INR 18,000 for completing the target of 3 valid sales and 6 valid conductions
CTC (BDA, post qualification after training): INR 8,00,000 (5 lakh fixed pay + 3 lakh performance pay) for Inside Sales
You might be feeling that working at BYJU'S has many benefits, but frankly, there is no such thing as a free lunch. You will have to be extremely committed and target-oriented in every aspect. An applicant trainee works a 6-day week during the SGDP and a 6-day week during the OJT tenure. The day starts at 9:30 am and ends at 9 in the evening; in addition, a few assignments need to be completed daily during the SGDP phase. Your main work throughout the day is to find potential customers by connecting with customers in your region who download our app. On average, you will be expected to connect with 120-140 customers and schedule one-on-one meetings with the potential ones. We measure everything from your input effort to quality and output. In a nutshell, you work with strict targets: call customers, book counselling sessions, visit them in person, sell our courses, and take home truckloads of money in incentives and salary.

About the Role:
As an Intune Engineer, you will primarily focus on migrating endpoints from SCCM to Microsoft Intune, managing co-management environments, and overseeing Intune infrastructure to ensure scalable, secure, and high-performance device management. You'll work closely with Microsoft 365 ecosystem tools and support enterprise-wide device provisioning, compliance, and security.
What You'll Do:
- Design, implement, and maintain Microsoft Intune infrastructure with ongoing integration to Microsoft 365 services
- Develop and enforce policies around secure device access, compliance, and application management
- Handle device enrollment, provisioning, and lifecycle management across Windows, iOS, and Android
- Develop automation scripts and collaborate with teams to streamline workflows
- Provide advanced Tier 3 support and training for complex Intune-related issues
- Manage Windows Autopilot and Azure AD integration for hybrid identity and seamless onboarding
- Lead security compliance checks and audit readiness through Intune policies
What We’re Looking For:
- 3-4 years of endpoint management experience with 2+ years in Microsoft Intune (SCCM migration experience is a big plus)
- Strong technical skills in Windows OS, Azure AD, Autopilot, conditional access, and device security
- Expertise in troubleshooting device enrollment, app deployment, and security settings
- Familiarity with scripting (PowerShell, Graph API) and enterprise automation
- Microsoft 365 Certified Endpoint Administrator Associate or equivalent preferred
- Excellent communication skills with a team-first mindset
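For context on the scripting requirement above (PowerShell, Graph API): a minimal sketch of how an Intune device query might be assembled against the Microsoft Graph API in Python. Token acquisition is assumed to happen elsewhere (e.g. via MSAL), and the selected fields are illustrative.

```python
# Sketch: building a Microsoft Graph request to list Intune-managed devices.
# Assumes an OAuth2 bearer token obtained separately (e.g. via MSAL).
GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def managed_devices_request(token: str, top: int = 50) -> dict:
    """Build the URL, headers, and query parameters for a managed-devices listing."""
    return {
        "url": f"{GRAPH_BASE}/deviceManagement/managedDevices",
        "headers": {"Authorization": f"Bearer {token}"},
        "params": {"$top": top, "$select": "deviceName,complianceState,operatingSystem"},
    }

# In practice this would be sent with an HTTP client, e.g.:
# resp = requests.get(**managed_devices_request(token))
# for device in resp.json()["value"]:
#     print(device["deviceName"], device["complianceState"])
```

Separating request construction from transport like this keeps the compliance-reporting logic easy to unit test without a live tenant.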
🚀 We’re Hiring: Senior Python Backend Developer 🚀
📍 Location: Baner, Pune (Work from Office)
💰 Compensation: ₹6 LPA
🕑 Experience Required: Minimum 2 years as a Python Backend Developer
About Us
Foto Owl AI is a fast-growing product-based company headquartered in Baner, Pune.
We specialize in:
⚡ Hyper-personalized fan engagement
🤖 AI-powered real-time photo sharing
📸 Advanced media asset management
What You’ll Do
As a Senior Python Backend Developer, you’ll play a key role in designing, building, and deploying scalable backend systems that power our cutting-edge platforms.
Architect and develop complex, secure, and scalable backend services
Build and maintain APIs & data pipelines for web, mobile, and AI-driven platforms
Optimize SQL & NoSQL databases for high performance
Manage AWS infrastructure (EC2, S3, RDS, Lambda, CloudWatch, etc.)
Implement observability, monitoring, and security best practices
Collaborate cross-functionally with product & AI teams
Mentor junior developers and conduct code reviews
Troubleshoot and resolve production issues with efficiency
What We’re Looking For
✅ Strong expertise in Python backend development
✅ Solid knowledge of Data Structures & Algorithms
✅ Hands-on experience with SQL (PostgreSQL/MySQL) and NoSQL (MongoDB, Redis, etc.)
✅ Proficiency in RESTful APIs & Microservice design
✅ Knowledge of Docker, Kubernetes, and cloud-native systems
✅ Experience managing AWS-based deployments
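As a sketch of one pattern behind the Redis and performance bullets above, here is a cache-aside read in plain Python. A dict-backed `TTLCache` stands in for a real Redis client so the example stays self-contained, and `db_fetch` is a hypothetical database call.

```python
import time

class TTLCache:
    """Toy stand-in for a Redis client with per-key expiry."""
    def __init__(self, ttl_seconds: float = 60.0):
        self.ttl = ttl_seconds
        self._store = {}                      # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self._store.get(key)
        if entry and entry[1] > time.monotonic():
            return entry[0]
        return None                           # missing or expired

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

def get_user(cache: TTLCache, user_id: int, db_fetch) -> dict:
    """Cache-aside read: try the cache, fall back to the database, populate the cache."""
    cached = cache.get(f"user:{user_id}")
    if cached is not None:
        return cached
    user = db_fetch(user_id)                  # e.g. a PostgreSQL query in production
    cache.set(f"user:{user_id}", user)
    return user
```

The same shape carries over directly to `redis-py` (`GET`/`SETEX`) once the dict is swapped for a real client.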
Why Join Us?
At Foto Owl AI, you’ll be part of a passionate team building world-class media tech products used in sports, events, and fan engagement platforms. If you love scalable backend systems, real-time challenges, and AI-driven products, this is the place for you.
Skillset:
PHP (Mandatory)
Java (Mandatory)
jQuery (Good to have)
HTML/CSS (Good to have)
JavaScript (Good to have)
Angular (Good to have)
Thanks
Ravindra
Job Description:
As a Machine Learning Engineer, you will:
- Operationalize AI models for production, ensuring they are scalable, robust, and efficient.
- Work closely with data scientists to optimize machine learning model performance.
- Utilize Docker and Kubernetes for the deployment and management of AI models in a production environment.
- Collaborate with cross-functional teams to integrate AI models into products and services.
Responsibilities:
- Develop and deploy scalable machine learning models into production environments.
- Optimize models for performance and scalability.
- Implement continuous integration and deployment (CI/CD) pipelines for machine learning projects.
- Monitor and maintain model performance in production.
Key Performance Indicators (KPI) For Role:
- Success in deploying scalable and efficient AI models into production.
- Improvement in model performance and scalability post-deployment.
- Efficiency in model deployment and maintenance processes.
- Positive feedback from team members and stakeholders on AI model integration and performance.
- Adherence to best practices in machine learning engineering and deployment.
Prior Experience Required:
- 2-4 years of experience in machine learning or data science, with a focus on deploying machine learning models into production.
- Proficient in Python and familiar with data science libraries and frameworks (e.g., TensorFlow, PyTorch).
- Experience with Docker and Kubernetes for containerization and orchestration of machine learning models.
- Demonstrated ability to optimize machine learning models for performance and scalability.
- Familiarity with machine learning lifecycle management tools and practices.
- Experience in developing and maintaining scalable and robust AI systems.
- Knowledge of best practices in AI model testing, versioning, and deployment.
- Strong understanding of data preprocessing, feature engineering, and model evaluation metrics.
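To illustrate the deployment-focused skills above, here is a minimal sketch of a production inference wrapper with input validation and latency logging. The model interface (a `.predict()` method) is an assumption in the scikit-learn style, and `StubModel` stands in for a real trained artifact.

```python
# Sketch of a production inference wrapper: validate input, run the model,
# log latency for monitoring. The .predict() interface is assumed
# (scikit-learn style); a real deployment would load a trained artifact.
import time
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("inference")

def serve_prediction(model, features: list, n_features: int = 4):
    """Validate the feature vector, run inference, and log latency."""
    if len(features) != n_features:
        raise ValueError(f"expected {n_features} features, got {len(features)}")
    start = time.perf_counter()
    result = model.predict([features])
    log.info("inference latency: %.1f ms", (time.perf_counter() - start) * 1e3)
    return result[0]

class StubModel:
    """Stand-in for a trained model; sums the feature vector."""
    def predict(self, batch):
        return [sum(row) for row in batch]

print(serve_prediction(StubModel(), [1.0, 2.0, 3.0, 4.0]))  # prints 10.0
```

Wrapping the model this way gives the monitoring and maintenance KPIs above something concrete to measure (error rates from validation failures, latency from the log line).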
Employer:
RaptorX.ai
Location:
Hyderabad
Collaboration:
The role requires collaboration with data engineers, software developers, and product managers to ensure the seamless integration of AI models into products and services.
Salary:
Competitive, based on experience.
Education:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
Language Skills:
- Strong command of Business English, both verbal and written, is required.
Other Skills Required:
- Strong analytical and problem-solving skills.
- Proficiency in code versioning tools, such as Git.
- Ability to work in a fast-paced and evolving environment.
- Excellent teamwork and communication skills.
- Familiarity with agile development methodologies.
- Understanding of cloud computing services (AWS, Azure, GCP) and their use in deploying machine learning models is a plus.
Other Requirements:
- Proven track record of successfully deploying machine learning models into production.
- Ability to manage multiple projects simultaneously and meet deadlines.
- A portfolio showcasing successful AI/ML projects.
Founders and Leadership
RaptorX is led by seasoned founders with deep expertise in security, AI, and enterprise solutions. Our leadership team has held senior positions at global tech giants like Microsoft, Palo Alto Networks, Akamai, and Zscaler, solving critical problems at scale.
We bring not just technical excellence, but also a relentless passion for innovation and impact.
The Market Opportunity
Fraud costs the global economy trillions of dollars annually, and traditional fraud detection methods simply can't keep up. The demand for intelligent, adaptive solutions like RaptorX is massive and growing exponentially across industries like:
- Fintech and Banking
- E-commerce
- Payments
This is your chance to work on a product that addresses a multi-billion-dollar market with huge growth potential.
The Tech Space at RaptorX
We are solving large-scale, real-world problems using modern technologies, offering specialized growth paths for every tech role.
Why You Should Join Us
- Opportunity to Grow: As an early-stage startup, every contribution you make will have a direct impact on the company’s growth and success. You’ll wear multiple hats, learn fast, and grow exponentially.
- Innovate Every Day: Solve complex, unsolved problems using the latest in AI, Graph Databases, and advanced analytics.
- Collaborate with the Best: Work alongside some of the brightest minds in the industry. Learn from leaders who have built and scaled successful products globally.
- Make an Impact: Help businesses reduce losses, secure customers, and prevent fraud globally. Your work will create a tangible difference.
The Sr AWS/Azure/GCP Databricks Data Engineer at Koantek will use comprehensive modern data engineering techniques and methods with Advanced Analytics to support business decisions for our clients. Your goal is to support the use of data-driven insights to help our clients achieve business outcomes and objectives. You will collect, aggregate, and analyze structured/unstructured data from multiple internal and external sources and communicate patterns, insights, and trends to decision-makers. You will help design and build data pipelines, data streams, reporting tools, information dashboards, data service APIs, data generators, and other end-user information portals and insight tools. You will be a critical part of the data supply chain, ensuring that stakeholders can access and manipulate data for routine and ad hoc analysis to drive business outcomes using Advanced Analytics. You are expected to function as a productive member of a team, working and communicating proactively with engineering peers, technical leads, project managers, product owners, and resource managers.
Requirements:
- Strong experience as an AWS/Azure/GCP Data Engineer; AWS/Azure/GCP Databricks experience is a must
- Expert proficiency in Spark, Scala, and Python
- Must have data migration experience from on-prem to cloud
- Hands-on experience with Kinesis to process and analyze stream data, Event/IoT Hubs, and Cosmos DB
- In-depth understanding of Azure/AWS/GCP cloud, Data Lake, and Analytics solutions
- Expert-level hands-on experience designing and developing applications on Databricks
- Extensive hands-on experience implementing data migration and data processing using AWS/Azure/GCP services
- In-depth understanding of Spark architecture, including Spark Streaming, Spark Core, Spark SQL, DataFrames, RDD caching, and Spark MLlib
- Hands-on experience with the industry technology stack for data management, data ingestion, capture, processing, and curation: Kafka, StreamSets, Attunity, GoldenGate, MapReduce, Hadoop, Hive, HBase, Cassandra, Spark, Flume, Impala, etc.
- Hands-on knowledge of data frameworks, data lakes, and open-source projects such as Apache Spark, MLflow, and Delta Lake
- Good working knowledge of code versioning tools such as Git, Bitbucket, or SVN
- Hands-on experience using Spark SQL with various data sources such as JSON, Parquet, and key-value pairs
- Experience preparing data for data science and machine learning, with exposure to model selection, model lifecycle, hyperparameter tuning, model serving, deep learning, etc.
- Demonstrated experience preparing data and automating and building data pipelines for AI use cases (text, voice, image, IoT data, etc.)
- Good to have: programming experience with .NET or Spark/Scala
- Experience creating tables, partitioning, bucketing, and loading and aggregating data using Spark Scala, Spark SQL/PySpark
- Knowledge of AWS/Azure/GCP DevOps processes such as CI/CD, as well as Agile tools and processes including Git, Jenkins, Jira, and Confluence
- Working experience with Visual Studio, PowerShell scripting, and ARM templates; able to build ingestion to ADLS and enable a BI layer for analytics
- Strong understanding of data modeling and defining conceptual, logical, and physical data models
- Big data/analytics/information analysis/database management in the cloud
- IoT/event-driven/microservices in the cloud; experience with private and public cloud architectures, their pros and cons, and migration considerations
- Ability to stay up to date with industry standards and technological advancements that enhance data quality and reliability to advance strategic initiatives
- Working knowledge of RESTful APIs, the OAuth2 authorization framework, and security best practices for API gateways
- Guide customers in transforming big data projects, including development and deployment of big data and AI applications
- Guide customers on data engineering best practices; provide proofs of concept, architect solutions, and collaborate as needed
- 2+ years of hands-on experience designing and implementing multi-tenant solutions using AWS/Azure/GCP Databricks for data governance, data pipelines for near-real-time data warehousing, and machine learning solutions
- 5+ years' overall experience in software development, data engineering, or data analytics using Python, PySpark, Scala, Spark, Java, or equivalent technologies, with hands-on expertise in Apache Spark (Scala or Python)
- 3+ years of experience in query tuning, performance tuning, troubleshooting, and debugging Spark and other big data solutions
- Bachelor's or Master's degree in Big Data, Computer Science, Engineering, Mathematics, or a similar area of study, or equivalent work experience
- Ability to manage competing priorities in a fast-paced environment
- Ability to resolve issues
- Basic experience with or knowledge of Agile methodologies
- AWS Certified Solutions Architect - Professional
- Databricks Certified Associate Developer for Apache Spark
- Microsoft Certified: Azure Data Engineer Associate
- Google Cloud Certified Professional
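As an illustration of the data-migration experience called for above, here is a hedged sketch of a post-migration validation step: comparing row counts and an order-independent checksum between source and target extracts. In a real Databricks migration these rows would come from Spark queries; the tuples here are stand-ins.

```python
# Sketch: validating an on-prem-to-cloud table migration by comparing
# row counts and an order-independent checksum of the two extracts.
import hashlib

def table_checksum(rows) -> str:
    """Order-independent checksum over rows (illustrative, not audit-grade)."""
    digest = hashlib.sha256()
    for line in sorted(repr(r) for r in rows):
        digest.update(line.encode())
    return digest.hexdigest()

def validate_migration(source_rows, target_rows) -> bool:
    """Pass only if both row count and content checksum match."""
    return (len(source_rows) == len(target_rows)
            and table_checksum(source_rows) == table_checksum(target_rows))
```

Sorting before hashing makes the check insensitive to row ordering, which typically differs between the source system and a partitioned cloud table.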
• Build web and mobile applications using the React ecosystem
• Work as part of a small team that includes other React Native developers, designers, QA experts, and managers
• Build app and UI components from prototypes and wireframes
• Work with native modules as and when required
• Use native APIs for tight integrations with both platforms – Android and iOS
• Write automated tests to ensure error-free code and performance
• Improve front-end performance by eliminating performance bottlenecks
• Create front-end modules with maximum code reusability and efficiency
• Implement clean, smooth animations to provide an excellent user Interface
• Work with third-party dependencies and APIs
• Work with Redux architecture to improve performance of the websites/mobile apps
• Significant experience working with React web and mobile along with tools like Flux, Flow, Redux, etc.
• In-depth knowledge of JavaScript, CSS, HTML, functional programming, and front-end languages
• Strong knowledge of React fundamentals such as Virtual DOM, component lifecycle, and component state
• A complete understanding of the full mobile app development lifecycle right from prototyping
• Knowledge of type checking, unit testing, TypeScript, prop types, and code debugging
• Experience working with REST APIs, document request models, offline storage, and third-party libraries
• Strong understanding of web technologies like JavaScript, HTML, and CSS
• Knowledge of user interface design and responsive designs
• Well-versed in a variety of React Native software and technologies such as Jest, Enzyme, ESLint, and so on
• Experience working on large, complex web and mobile apps
• Ability to create and maintain smooth continuous integration and continuous delivery pipeline of React Native applications
• Understanding of React Native best practices and design aesthetics
• A positive mindset and continuous-learning attitude
• Stay updated with new updates, technologies, and news of React Native
• Ability to solve issues and contribute to libraries as and when needed
• Experience working in an agile development environment
• Strong verbal and written communication skills to communicate strategy
• Familiarity with modern front-end tools and building pipelines
• A collaborative approach to build apps and solve complex problems
• Attention to detail and problem-solving skills
• Client-focused approach with a goal of creating user-centric designs
• Good interpersonal, communication, and collaboration skills
• Ability to write clean, well-documented code that follows good coding practices
- Circuit Design (Schematic Entry), Digital, Analog & Power.
- High-speed PCB Layout Design, up to 8-Layer, routing high current tracks (up to 30A)
- Experience with “Altium Designer” will be an added advantage.
- Experience with ARM processors & AVR Controllers will be preferred.
- Preparing Design and test procedure documentation.
- System integration and Testing.
- Hardware Troubleshooting.
- Awareness of PCB Fabrication & assembly Processes.
- Awareness of component procurement and finding substitutes.
- Design transfer to production by defining production processes.
- Assist production in optimizing operations, component sourcing, and troubleshooting.
- Interact with PCB Design team.
- Interact with hardware team/consultants in India, USA
- Should have leadership qualities for handling a team.
- Communication with Component manufacturers/vendors for part choices & design support.
- Preparing documentation for design, for design transfer to production, for submittal to various labs for certifications
- B.E (Electronics and communication, Instrumentation and Electronics, E.E.E)
➢ Exposure to the sales domain will be preferred.
➢ Excellent Verbal & Written Communications Skills
➢ Custodian of all qualitative parameters, including Quality Scores, Compliance, and CSAT
➢ Very good presentation, feedback, and coaching skills
➢ Ability to observe, analyze, and identify process improvement opportunities
➢ Must be comfortable working in 24/7 rotational shifts
➢ Ability to work under pressure; customer-service attitude with an analytical bent of mind
➢ Highly energetic and enthusiastic
➢ Should be able to work as an individual contributor and as a good team player
➢ Hands-on experience with MS Office, preferably MS Excel and PowerPoint
➢ Basic data handling and data interpretation skills
➢ Graduation mandatory
➢ Package: 4 - 6 Lac
➢ Location: Gurgaon
➢ Should be from neighbouring/ accessible locations from Gurgaon.
➢ Should be comfortable with 24*7 rotational shifts & self-commuting.
Key Responsibilities:
➢ Audit call (voice) interactions on product, process, communication, and compliance parameters
➢ Conduct one-on-one and group feedback sessions for the targeted population to improve their performance
➢ Custodian of all qualitative parameters, including Quality Scores, Compliance, and CSAT
➢ Data analysis and preparation of quality reports and review decks
➢ Conduct calibration sessions with partner BPO teams and act as master calibrator to ensure a consistent scoring approach
➢ Identify process improvements and make recommendations
➢ Use process knowledge to proactively identify areas of concern and highlight them to the change team
➢ Identify training needs and work in close coordination with the training team to help advisors come up the learning curve
➢ Conduct certification for new hires
➢ Conduct compliance audits to trace malpractices and share internal compliance feedback with management
1. Ability to read the documentation and perform 3rd party API integrations
2. Experience with Postgres and Redis
3. Experience with AWS - EC2, RDS, DynamoDB, etc
4. Experience with Python
Dashboard
Skills
1. Experience with Django
a. Django Web
b. Django REST
c. Django Channels
d. Django Celery (Queues and Brokers)
2. Experience creating a dashboard with login, user profiles, roles and permissions, reports
3. Experience with Facebook and Twitter OAuth
4. Experience with handling database migrations
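The roles-and-permissions requirement above can be sketched in its simplest form as a role-to-permission lookup. In a real Django dashboard this would live in the auth layer (groups and permissions); the role and permission names here are hypothetical.

```python
# Toy role-based access check for a dashboard with user roles and
# permissions. Role and permission names are made up for illustration;
# Django projects would normally use its built-in Group/Permission models.
ROLE_PERMISSIONS = {
    "admin":   {"view_reports", "edit_users", "export_data"},
    "analyst": {"view_reports", "export_data"},
    "viewer":  {"view_reports"},
}

def has_permission(role: str, permission: str) -> bool:
    """True if the role grants the permission; unknown roles grant nothing."""
    return permission in ROLE_PERMISSIONS.get(role, set())
```

A view decorator or middleware would call `has_permission` (or Django's equivalent) before rendering each report or admin page.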
Chatbot
Skills
1. Basic understanding of NLP - intents and entities
2. Strong understanding of dialogue management and conversation flow
3. Integrations with Facebook and Twitter APIs.
4. Creating and managing Facebook and Twitter apps
5. Understanding of webhooks and REST APIs.
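The intents-and-entities point above can be illustrated with a toy keyword matcher. Production chatbots use trained NLU models rather than keyword sets; the intents and keywords here are made up.

```python
# Toy intent detection: map a message to the first intent whose keyword
# set overlaps the message tokens. Real NLU replaces this with a model.
import re

INTENT_KEYWORDS = {
    "greeting": {"hi", "hello", "hey"},
    "order_status": {"order", "tracking", "shipped"},
}

def detect_intent(message: str) -> str:
    """Return the first matching intent, or 'fallback' if nothing matches."""
    tokens = set(re.findall(r"[a-z]+", message.lower()))  # strip punctuation
    for intent, keywords in INTENT_KEYWORDS.items():
        if tokens & keywords:
            return intent
    return "fallback"
```

A webhook handler for the Facebook or Twitter APIs would call `detect_intent` on each inbound message and route the conversation flow accordingly.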











