50+ Python Jobs in India


Data Scientist
Job Title: Data Scientist – Data and Artificial Intelligence
Location: Hyderabad
Job Type: Full-time
Company Description:
Qylis is a leading provider of innovative IT solutions, specializing in Cloud, Data & AI,
and Cyber Security. We help businesses unlock the full potential of these technologies
to achieve their goals and gain a competitive edge. Our unique approach focuses on
delivering value through bespoke solutions tailored to customer-specific needs. We are
driven by a customer-centric mindset and committed to delivering continuous value
through intellectual property accelerators and automation. Our team of experts is
passionate about technology and dedicated to making a positive impact. We foster an
environment of growth and innovation, constantly pushing the boundaries to deliver
exceptional results. Website: www.qylis.com, LinkedIn:
www.linkedin.com/company/qylis
Job Summary
We are an engineering organization collaborating directly with clients to address
challenges using the latest technologies. Our focus is on joint code development with
clients' engineers for cloud-based solutions, accelerating organizational progress.
Working with product teams, partners, and open-source communities, we contribute to
open source, striving for platform improvement. This role involves creating impactful
solution patterns and open-source assets. As a team member, you'll collaborate with
engineers from both teams, applying your skills and creativity to solve complex
challenges and contribute to open source, while fostering professional growth.
Responsibilities
• Research and develop production-grade models (forecasting, anomaly detection, optimization, clustering, etc.) for the global cloud business using statistical and machine learning techniques.
• Manage large volumes of data, and create new and improved solutions for data
collection, management, analyses, and data science model development.
• Drive the onboarding of new data and the refinement of existing data sources
through feature engineering and feature selection.
• Apply statistical concepts and cutting-edge machine learning techniques to
analyze cloud demand and optimize data science model code for distributed
computing platforms and task automation.
• Work closely with other data scientists and data engineers to deploy models that
drive cloud infrastructure capacity planning.
• Present analytical findings and business insights to project managers, stakeholders, and senior leadership. Keep abreast of new statistical / machine learning techniques and implement them as appropriate to improve predictive performance.
• Oversee the analysis of data and lead the team in identifying trends, patterns, correlations, and insights to develop new forecasting models and improve existing ones.
• Lead collaboration across the team and leverage data to identify pockets of opportunity for applying state-of-the-art algorithms to improve a solution to a business problem.
• Consistently leverage knowledge of techniques to optimize analysis using algorithms.
• Modify statistical analysis tools for evaluating machine learning models. Solve deep and challenging problems in circumstances such as when model predictions are incorrect, when models do not match the training data or the design outcomes, when the data is not clean, when it is unclear which analyses to run, or when the process is ambiguous.
• Provide coaching to team members on business context, interpretation, and the implications of findings. Interpret findings and their implications for multiple businesses, and champion methodological rigour by calling attention to the limitations of knowledge wherever biases exist in data, methods, and analysis.
• Generate and leverage insights that inform future studies and reframe the research agenda. Inform current business decisions by implementing and adapting supply-chain strategies through complex business intelligence.
Qualifications
• M.Sc. in Statistics, Applied Mathematics, Applied Economics, Computer
Science or Engineering, Data Science, Operations Research or similar applied
quantitative field
• 7+ years of industry experience in developing production-grade statistical and
machine learning code in a collaborative team environment.
• Prior experience in machine learning using R or Python (scikit-learn / NumPy / pandas / statsmodels).
• Prior experience working on computer vision projects is a plus.
• Knowledge of AWS and Azure cloud platforms.
• Prior experience in time series forecasting.
• Prior experience with typical data management systems and tools such as SQL.
• Knowledge and ability to work within a large-scale computing or big data context, and hands-on experience with Hadoop, Spark, Databricks, or similar.
• Excellent analytical skills; ability to understand business needs and translate
them into technical solutions, including analysis specifications and models.
• Experience in machine learning using R or Python (scikit-learn / NumPy / pandas / statsmodels) with skill level at or near fluency.
• Experience with deep learning frameworks (e.g., TensorFlow, PyTorch, CNTK) and solid knowledge of theory and practice.
• Practical and professional experience contributing to and maintaining a large
code base with code versioning systems such as Git.
• Creative thinking skills with emphasis on developing innovative methods to solve
hard problems under ambiguity.
• Good interpersonal and communication (verbal and written) skills, including the
ability to write concise and accurate technical documentation and communicate
technical ideas to non-technical audiences.
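The forecasting and anomaly-detection work described above can be illustrated with a minimal sketch. This is not the team's actual stack (production work would use statsmodels / scikit-learn, per the qualifications); it uses plain Python, and the demand series, smoothing constant, and z-score threshold are all invented for illustration.

```python
# Minimal sketch: single exponential smoothing for forecasting, plus a
# z-score rule for anomaly detection. Illustrative only; a production
# model would come from statsmodels / scikit-learn.
from statistics import mean, pstdev

def ses_forecast(series, alpha=0.5):
    """Return the one-step-ahead forecast from single exponential smoothing."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

def zscore_anomalies(series, threshold=3.0):
    """Indices of points more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(series), pstdev(series)
    if sigma == 0:
        return []
    return [i for i, x in enumerate(series) if abs(x - mu) / sigma > threshold]

demand = [100, 102, 101, 103, 250, 104, 105]
print(ses_forecast(demand))                      # 122.5
print(zscore_anomalies(demand, threshold=2.0))   # [4] — the 250 spike
```

The same shape (fit on history, flag outliers, forecast ahead) carries over directly to the distributed-computing setting the role mentions.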

A BIT ABOUT US
Appknox is one of the top Mobile Application security companies recognized by Gartner and G2. A profitable B2B SaaS startup headquartered in Singapore & working from Bengaluru.
The primary goal of Appknox is to help businesses and mobile developers secure their mobile applications with a focus on delivery speed and high-quality security audits.
Our customers include Fortune 500 companies and major brands across India, South-East Asia, the Middle East, Japan, and the US, and we are expanding rapidly.
The Opportunity:
We are seeking a highly skilled Senior Software Engineer (Backend) to join our dynamic software development team. In this role, you will contribute to key backend projects, collaborate across teams, and play a vital part in delivering robust, scalable, and high-performance software solutions. As a senior engineer, you will work independently, make impactful technical decisions, and help shape the backend architecture while collaborating with a passionate, high-performing team.
You will work hands-on with products primarily built in Python, with opportunities to contribute to Golang. These technologies are at the core of our development stack, and your focus will be on building, scaling, and maintaining distributed services. Distributed systems are integral to our architecture, providing a chance to gain hands-on experience with maintaining and optimizing them in a fast-paced environment.
As a Senior Engineer, you are expected to:
- Write clean, maintainable, and testable code while following best practices.
- Architect solutions, address complex problems, and deliver well-thought-out technical designs.
- Take ownership of assigned modules and features, delivering them with minimal supervision.
- Contribute to code reviews and technical discussions, ensuring high-quality deliverables.
We highly value open source contributions and encourage you to check out our work on GitHub at Appknox GitHub. While no prior experience in security is required, our experienced security professionals are available to support you in understanding the domain.
This role offers a unique opportunity to work on cutting-edge technology, drive impactful solutions, and grow within a collaborative environment that values autonomy, innovation, and technical excellence.
Responsibilities:
- Drive backend development for a disruptive product in the Security domain, focusing on innovation, performance, scalability, and maintainability.
- Take ownership of the software design process, including designing workflows, system architecture, and implementation plans.
- Translate functional and technical requirements into detailed architecture and design, making independent decisions to ensure efficiency and scalability.
- Collaborate with cross-functional teams, including frontend and security teams, to deliver cohesive and high-quality solutions.
- Conduct thorough code reviews to ensure adherence to best practices, maintainability, and coding standards.
- Write clean, maintainable, and testable code using Django and Python, adhering to industry best practices.
- Design and implement scalable software components, frameworks, and APIs using Django and Django REST Framework (DRF).
- Troubleshoot, debug, and optimize existing systems to improve functionality and performance.
- Create detailed technical documentation, including flowcharts, layouts, and system requirements, to ensure clarity and alignment.
- Develop and enforce robust software verification plans, quality assurance procedures, and deployment strategies.
- Ensure timely delivery of software updates while addressing user feedback to enhance solutions.
- Provide technical expertise to solve backend challenges and participate in critical decision-making processes.
- Support team growth by sharing knowledge, fostering collaboration, and mentoring junior engineers informally as needed.
Requirements:
- 5–6 years of professional experience in backend development with a strong focus on Django and Python.
- Proficiency in Django REST Framework (DRF), relational databases, SQL, and ORMs (e.g., Django ORM, SQLAlchemy).
- Strong problem-solving skills with the ability to make independent technical decisions regarding system design and implementation.
- Hands-on experience in designing and developing scalable, maintainable, and high-performing backend systems.
- Deep understanding of software engineering practices, including Test-Driven Development (TDD), CI/CD pipelines, and deployment processes.
- Excellent communication skills, with the ability to document and present technical specifications and workflows clearly.
- Familiarity with cloud infrastructure, deployment pipelines, and microservices architectures is a bonus.
- Self-motivated and capable of working independently in a fast-paced environment with minimal supervision.
- Ability to handle ambiguity and adapt to rapidly changing business needs while maintaining focus on delivering quality solutions.
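As a sketch of the API-design concerns in the requirements above: DRF's default `PageNumberPagination` wraps list endpoints in a `count`/`next`/`previous`/`results` envelope. The version below reproduces that shape in plain Python so it runs without Django installed; it is illustrative, not Appknox's code.

```python
# Plain-Python sketch of the response envelope DRF's PageNumberPagination
# produces (field names mirror DRF's defaults). Illustrative only.
import math

def paginate(items, page=1, page_size=10):
    """Slice `items` into one page and wrap it in DRF-style metadata."""
    pages = max(1, math.ceil(len(items) / page_size))
    page = min(max(1, page), pages)          # clamp out-of-range pages
    start = (page - 1) * page_size
    return {
        "count": len(items),
        "next": page + 1 if page < pages else None,
        "previous": page - 1 if page > 1 else None,
        "results": items[start:start + page_size],
    }

print(paginate(list(range(25)), page=2, page_size=10))
```

In a real DRF view the framework builds this envelope for you; the point here is only the contract a frontend consumer can rely on.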
Work Expectations:
Within 1 month-
- Attend KT sessions conducted by the engineering and product teams to gain a deep understanding of the product, its architecture, and workflows.
- Learn about the team's development processes, tools, and key challenges.
- Work closely with the product team to understand product requirements and contribute to the design and development of features.
- Dive deep into the existing backend architecture, including database structures, APIs, and integration points, to fully understand the technical landscape
- Begin addressing minor technical challenges and bugs, while understanding the underlying architecture and tech stack.
- Begin to participate in creating action plans for new features, ensuring that design and implementation are aligned with product goals.
Within 3 months-
- Achieve full autonomy in working on the codebase, demonstrating the ability to independently deliver high-quality features from design to deployment.
- Take complete ownership of critical modules, ensuring they are optimized for performance and maintainability.
- Act as a technical resource for the team, offering support and guidance to peers on complex issues.
- Collaborate with DevOps to optimize deployment pipelines, debug production issues, and improve backend infrastructure.
- Lead discussions for technical solutions and provide recommendations for architectural improvements.
- Contribute to the design of new features by translating functional requirements into detailed technical specifications.
- Prepare regular updates on assigned tasks and communicate effectively with the engineering manager and other stakeholders.
Within 6 months-
- Be fully independent in their development tasks, contributing to key features and solving critical challenges.
- Demonstrate strong problem-solving skills and the ability to take ownership of technical modules.
- Actively participate in code reviews and technical discussions, ensuring high-quality deliverables.
- Collaborate seamlessly with cross-functional teams to align technical solutions with business requirements.
- Establish themselves as a reliable and proactive team member, contributing to the team’s growth and success.
Personality traits we really admire :-
- Great attitude to ask questions, learn and suggest process improvements.
- Has attention to details and helps identify edge cases.
- Highly motivated and coming up with ideas and perspective to help us move towards our goals faster.
- Follows timelines and shows absolute commitment to deadlines.
Interview Process
- Round 1 Interview - Profile Evaluation (EM)
- Round 2 Interview - Assignment Evaluation & Technical Problem Solving Discussion (Tech Team)
- Round 3 Interview - System Design (Sr. Architect)
- Round 4 Interview - Engineering Team & Technical Founder (CTO)
- Round 5 Interview - HR
Compensation
- Best in industry
We prefer that every employee also holds equity in the company. In this role, you will be awarded equity after 12 months, based on the impact you have created.
Please be aware that all your customers are Enterprises and Fortune 500 companies.
Why Join Us :-
- Freedom & Responsibility: If you are a person who enjoys challenging work & pushing your boundaries, then this is the right place for you. We appreciate new ideas & ownership as well as flexibility with working hours.
- Great Salary & Equity: We keep up with the market standards & provide pay packages considering updated standards. Also as Appknox continues to grow, you’ll have a great opportunity to earn more & grow with us. Moreover, we also provide equity options for our top performers.
- Holistic Growth: We foster a culture of continuous learning and take a much more holistic approach to training and developing our assets: the employees. We shall also support you all on that journey of yours.
- Transparency: Being a part of a start-up is an amazing experience one of the reasons being the open communication & transparency at multiple levels. Working with Appknox will give you the opportunity to experience it all first hand.
- Health insurance: We offer health insurance coverage up to 5 lakhs for you and your family, including parents.


AccioJob is conducting an offline hiring drive with OneLab Ventures for the position of:
- AI/ML Engineer / Intern - Python, FastAPI, Flask/Django, PyTorch, TensorFlow, Scikit-learn, GenAI Tools
Apply Now: https://links.acciojob.com/44MJQSB
Eligibility:
- Degree: BTech / BSc / BCA / MCA / MTech / MSc / BCS / MCS
- Graduation Year:
- For Interns - 2024 and 2025
- For experienced - 2024 and before
- Branch: All Branches
- Location: Pune (work from office)
Salary:
- For interns - 25K for 6 months and 5–6 LPA PPO
- For experienced - Hike on the current CTC
Evaluation Process:
- Assessment at AccioJob Pune Skill Centre.
- Company side process: 2 rounds of tech interviews (virtual + F2F) + 1 HR round
Apply Now: https://links.acciojob.com/44MJQSB
Important: Please bring your laptop & earphones for the test.


AccioJob is conducting an offline hiring drive with OneLab Ventures for the position of:
- Python Full Stack Engineer / Intern - Python, FastAPI, Flask/Django, HTML, CSS, JavaScript, and frameworks like React.js or Node.js
Apply Now: https://links.acciojob.com/4d0Gtd6
Eligibility:
- Degree: BTech / BSc / BCA / MCA / MTech / MSc / BCS / MCS
- Graduation Year:
- For Interns - 2024 and 2025
- For experienced - 2024 and before
- Branch: All Branches
- Location: Pune (work from office)
Salary:
- For interns - 25K for 6 months and 5–6 LPA PPO
- For experienced - Hike on the current CTC
Evaluation Process:
- Assessment at AccioJob Pune Skill Centre.
- Company side process: 2 rounds of tech interviews (virtual + F2F) + 1 HR round
Apply Now: https://links.acciojob.com/4d0Gtd6
Important: Please bring your laptop & earphones for the test.

What You Need:
✅ Strong experience in backend development using Python (Django, Flask, or FastAPI).
✅ Hands-on experience with Azure Cloud services (Azure Functions, App Services, AKS, CosmosDB, etc.).
✅ Experience leading a development team and managing projects.
✅ Expertise in designing and managing APIs, microservices, and event-driven architectures.
✅ Strong database experience with MongoDB, PostgreSQL, MySQL, or CosmosDB.
✅ Knowledge of DevOps practices, including CI/CD pipelines, Docker, and Kubernetes.
✅ Ability to develop Proof of Concepts (POCs) and evaluate new technology solutions.
✅ Strong problem-solving and debugging skills.
We’re looking for an experienced SQL Developer with 3+ years of hands-on experience to join our growing team. In this role, you’ll be responsible for designing, developing, and maintaining SQL queries, procedures, and data systems that support our business operations and decision-making processes. You should be passionate about data, highly analytical, and capable of working both independently and collaboratively with cross-functional teams.
Key Responsibilities:
Design, develop, and maintain complex SQL queries, stored procedures, functions, and views.
Optimize existing queries for performance and efficiency.
Collaborate with data analysts, developers, and stakeholders to understand requirements and translate them into robust SQL solutions.
Design and implement ETL processes to move and transform data between systems.
Perform data validation, troubleshooting, and quality checks.
Maintain and improve existing databases, ensuring data integrity, security, and accessibility.
Document code, processes, and data models to support scalability and maintainability.
Monitor database performance and provide recommendations for improvement.
Work with BI tools and support dashboard/report development as needed.
Requirements:
3+ years of proven experience as an SQL Developer or in a similar role.
Strong knowledge of SQL and relational database systems (e.g., MS SQL Server, PostgreSQL, MySQL, Oracle).
Experience with performance tuning and optimization.
Proficiency in writing complex queries and working with large datasets.
Experience with ETL tools and data pipeline creation.
Familiarity with data warehousing concepts and BI reporting.
Solid understanding of database security, backup, and recovery.
Excellent problem-solving skills and attention to detail.
Good communication skills and ability to work in a team environment.
Nice to Have:
Experience with cloud-based databases (AWS RDS, Google BigQuery, Azure SQL).
Knowledge of Python, Power BI, or other scripting/analytics tools.
Experience working in Agile or Scrum environments.
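The kind of work the SQL Developer role describes (aggregate queries, indexing for performance, parameterized access) can be sketched briefly. The posting's databases are MS SQL Server / PostgreSQL / MySQL / Oracle; the standard-library `sqlite3` module stands in here, and the `orders` table and its columns are invented for illustration.

```python
# Hedged sketch: sqlite3 stands in for the production RDBMS.
# Shows a parameterized aggregate query plus the typical first
# optimization step — an index on the filter/grouping column.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders (region, amount) VALUES (?, ?)",
    [("north", 120.0), ("north", 80.0), ("south", 200.0)],
)
conn.execute("CREATE INDEX idx_orders_region ON orders (region)")

rows = conn.execute(
    "SELECT region, SUM(amount) AS total FROM orders "
    "WHERE amount >= ? GROUP BY region ORDER BY region",
    (50.0,),                       # bound parameter, never string-formatted
).fetchall()
print(rows)  # [('north', 200.0), ('south', 200.0)]
```

Parameter binding (the `?` placeholder) is the habit that transfers unchanged to any of the engines the posting lists.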

About the company:
Ketto is Asia's largest tech-enabled crowdfunding platform with a vision - Healthcare for all. We are a profit-making organization with a valuation of more than 100 Million USD. With over 1,100 crores raised from more than 60 lakh donors, we have positively impacted the lives of 2 lakh+ campaigners. Ketto has embarked on a high-growth journey, and we would like you to be part of our family, helping us to create a large-scale impact on a daily basis by taking our product to the next level.
Role Overview:
Ketto, Asia's largest crowdfunding platform, is looking for an innovative Product Analyst to take charge of our data systems, reporting frameworks, and generative AI initiatives. This role is pivotal in ensuring data integrity and reliability, driving key insights that fuel strategic decisions, and implementing automation through AI. This position encompasses the full data and analytics lifecycle—from requirements gathering to design planning—alongside implementing advanced analytics and generative AI solutions to support Ketto’s mission.
Key Responsibilities
● Data Strategy & Automation:
○ Lead data collection, processing, and quality assurance processes to ensure accuracy, completeness, and relevance.
○ Explore opportunities to incorporate generative AI models to automate and optimize processes, enhancing efficiencies in analytics, reporting, and decision-making.
● Data Analysis & Insight Generation:
○ Conduct in-depth analyses of user behaviour, campaign performance, and platform metrics to uncover insights that support crowdfunding success.
○ Translate complex data into clear, actionable insights that drive strategic decisions, providing stakeholders with the necessary information to enhance business outcomes.
● Reporting & Quality Assurance:
○ Design and maintain a robust reporting framework to deliver timely insights, enhancing data reliability and ensuring stakeholders are well-informed.
○ Monitor and improve data accuracy, consistency, and integrity across all data processes, identifying and addressing areas for enhancement.
● Collaboration & Strategic Planning:
○ Work closely with Business, Product, and IT teams to align data initiatives with Ketto’s objectives and growth strategy.
○ Propose data-driven strategies that leverage AI and automation to tackle business challenges and scale impact across the platform.
○ Mentor junior data scientists and analysts, fostering a culture of data-driven decision-making.
Required Skills and Qualifications
● Technical Expertise:
○ Strong background in SQL, Statistics and Maths
● Analytical & Strategic Mindset:
○ Proven ability to derive meaningful, actionable insights from large data sets and translate findings into business strategies.
○ Experience with statistical analysis, advanced analytics
● Communication & Collaboration:
○ Exceptional written and verbal communication skills, capable of explaining complex data insights to non-technical stakeholders.
○ Strong interpersonal skills to work effectively with cross-functional teams, aligning data initiatives with organisational goals.
● Preferred Experience:
○ Proven experience in advanced analytics roles
○ Experience leading data lifecycle management, model deployment, and quality assurance initiatives.
Why Join Ketto?
At Ketto, we’re committed to driving social change through innovative data and AI solutions. As our Sr. Product Analyst, you’ll have the unique opportunity to leverage advanced data science and generative AI to shape the future of crowdfunding in Asia. If you’re passionate about using data and AI for social good, we’d love to hear from you!

Mandatory
Strong Senior / Lead Software Engineer profile
Mandatory (Experience 1) - Must have a minimum of 6 years of experience in software development, including 1-2 years in a Senior or Lead role.
Mandatory (Experience 2) - Must have experience with Python + Django / Flask or similar framework
Mandatory (Experience 3) - Must have experience with Relational Databases (like MySQL, PostgreSQL, Oracle etc)
Mandatory (Experience 4) - Must have good experience with microservices or distributed-system frameworks (e.g., Kafka, Google Pub/Sub, AWS SNS, Azure Service Bus) or message brokers (e.g., RabbitMQ)
Mandatory (Location) - Candidate must be from Bengaluru
Mandatory (Company) - Product / Start-up companies only
Mandatory (Stability) - Should have worked for at least 2 years in one company in the last 3 years.
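The message-broker requirement above boils down to the publish/subscribe pattern. The in-process sketch below shows only the pattern's shape; Kafka / Pub/Sub / SNS / RabbitMQ add durability, partitioning, and delivery guarantees on top, and the topic and payload names here are invented.

```python
# Minimal in-process publish/subscribe sketch (illustrative only;
# a real broker adds persistence and delivery semantics).
from collections import defaultdict

class Broker:
    def __init__(self):
        self._subs = defaultdict(list)   # topic -> list of handlers

    def subscribe(self, topic, handler):
        self._subs[topic].append(handler)

    def publish(self, topic, message):
        for handler in self._subs[topic]:
            handler(message)

broker = Broker()
received = []
broker.subscribe("orders.created", received.append)
broker.publish("orders.created", {"order_id": 42})
print(received)  # [{'order_id': 42}]
```

The decoupling shown here (publisher knows the topic, not the consumers) is what makes the listed systems suitable for microservices.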

Job Summary:
We are looking for a motivated and detail-oriented Data Engineer with 1–2 years of experience to join our data engineering team. The ideal candidate should have solid foundational skills in SQL and Python, along with exposure to building or maintaining data pipelines. You’ll play a key role in helping to ingest, process, and transform data to support various business and analytical needs.
Key Responsibilities:
- Assist in the design, development, and maintenance of scalable and efficient data pipelines.
- Write clean, maintainable, and performance-optimized SQL queries.
- Develop data transformation scripts and automation using Python.
- Support data ingestion processes from various internal and external sources.
- Monitor data pipeline performance and help troubleshoot issues.
- Collaborate with data analysts, data scientists, and other engineers to ensure data quality and consistency.
- Work with cloud-based data solutions and tools (e.g., AWS, Azure, GCP – as applicable).
- Document technical processes and pipeline architecture.
Core Skills Required:
- Proficiency in SQL (data querying, joins, aggregations, performance tuning).
- Experience with Python, especially in the context of data manipulation (e.g., pandas, NumPy).
- Exposure to ETL/ELT pipelines and data workflow orchestration tools (e.g., Airflow, Prefect, Luigi – preferred).
- Understanding of relational databases and data warehouse concepts.
- Familiarity with version control systems like Git.
Preferred Qualifications:
- Experience with cloud data services (AWS S3, Redshift, Azure Data Lake, etc.)
- Familiarity with data modeling and data integration concepts.
- Basic knowledge of CI/CD practices for data pipelines.
- Bachelor’s degree in Computer Science, Engineering, or related field.
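The ingestion-and-transformation duties above can be sketched with a tiny ETL step. A real pipeline would likely use pandas, as the core skills note; the version below sticks to the standard library, and the column names and sample rows are invented for illustration.

```python
# Minimal ETL sketch in standard-library Python: parse CSV, drop rows
# with missing dates or unparseable amounts, cast types. Illustrative
# column names; pandas would do this in a few vectorized calls.
import csv
import io

RAW = """user_id,signup_date,amount
1,2024-01-05,10.50
2,,3.00
3,2024-02-11,not_a_number
4,2024-03-01,7.25
"""

def transform(raw_csv):
    """Return cleaned, typed records; silently drop rows failing validation."""
    out = []
    for row in csv.DictReader(io.StringIO(raw_csv)):
        if not row["signup_date"]:          # missing date -> drop
            continue
        try:
            amount = float(row["amount"])   # bad amount -> drop
        except ValueError:
            continue
        out.append({"user_id": int(row["user_id"]),
                    "signup_date": row["signup_date"],
                    "amount": amount})
    return out

clean = transform(RAW)
print(clean)
```

In production the same validate-cast-filter step would be one task in an orchestrated pipeline (Airflow/Prefect/Luigi, per the preferred tools).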

Assignment Details
Our client, a global leader in energy management and automation, is seeking a skilled and experienced Test Automation Engineer with strong expertise in developing automation frameworks for Windows and Web applications. The ideal candidate will have hands-on experience with Python and Robot Framework, and a solid background in software development, debugging, and unit testing. This role requires the ability to work independently, contribute to the entire testing lifecycle, and collaborate with cross-functional teams in an Agile environment.
Key Responsibilities:
- Design and develop robust Test Automation Frameworks for both Windows and Web applications.
- Implement automated test cases using Python and Robot Framework.
- Collaborate with development teams to understand feature requirements and break them down into actionable tasks.
- Use version control and issue tracking tools like TFS/ADO, GitHub, Jira, SVN, etc.
- Perform code reviews, unit testing, and debugging of automation scripts.
- Clearly document and report test results, defects, and automation progress.
- Maintain and enhance existing test automation suites to support continuous delivery pipelines.
Skills Required
- 5–10 years of professional experience in Test Automation and Software Development.
- Strong proficiency in Python and Robot Framework.
- Solid experience with Windows and Web application testing.
- Familiarity with version control systems such as TFS, GitHub, SVN and project tracking tools like Jira.
- Strong analytical and problem-solving skills.
- Ability to work independently with minimal supervision.
- Excellent written and verbal communication skills for documentation and reporting.
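One concrete bridge between the Python and Robot Framework skills above: Robot discovers plain Python classes and methods as keyword libraries, so automation building blocks can be written and unit-tested as ordinary Python. The library and keyword names below are invented for illustration.

```python
# Sketch of a custom Robot Framework keyword library. Robot maps method
# names to keywords (add_numbers -> `Add Numbers`), and arguments arrive
# as strings from .robot files, hence the float() casts. Illustrative only.

class CalculatorLibrary:
    """Example keywords a Windows/Web automation suite might expose."""

    def add_numbers(self, a, b):
        """Robot usage:  ${sum}=    Add Numbers    2    3"""
        return float(a) + float(b)

    def should_be_close(self, actual, expected, tolerance=1e-6):
        """Fail the test if `actual` differs from `expected` beyond `tolerance`."""
        if abs(float(actual) - float(expected)) > float(tolerance):
            raise AssertionError(f"{actual} != {expected} (tol={tolerance})")

lib = CalculatorLibrary()
print(lib.add_numbers("2", "3"))  # 5.0
```

Because the library is plain Python, the code-review and unit-testing responsibilities in the list apply to it directly, before any .robot suite runs.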

Role description:
You will build curated, enterprise-grade solutions for deploying GenAI applications at production scale for clients. The role requires solid, hands-on development and engineering skills across GenAI application development, including data ingestion, choosing the right-fit LLMs, simple and advanced RAG, guardrails, prompt engineering for optimisation, traceability, security, LLM evaluation, observability, and deployment at scale on cloud or on-premise. As this space evolves very rapidly, candidates must also demonstrate knowledge of agentic AI frameworks. Candidates with a strong ML background plus engineering skills are highly preferred for the LLMOps role.
Required skills:
- 4-8 years of experience in working on ML projects that includes business requirement gathering, model development, training, deployment at scale and monitoring model performance for production use cases
- Strong knowledge of Python, NLP, data engineering, LangChain, Langtrace, Langfuse, RAGAS, AgentOps (optional)
- Should have worked with proprietary and open-source large language models
- Experience with LLM fine-tuning and creating distilled models from hosted LLMs
- Experience building data pipelines for model training
- Experience on model performance tuning, RAG, guardrails, prompt engineering, evaluation and observability
- Experience in GenAI application deployment on cloud and on-premise at scale for production
- Experience in creating CI/CD pipelines
- Working knowledge on Kubernetes
- Experience in minimum one cloud: AWS / GCP / Azure to deploy AI services
- Experience in creating workable prototypes using Agentic AI frameworks like CrewAI, Taskweaver, AutoGen
- Experience in lightweight UI development using Streamlit or Chainlit (optional)
- Desired experience on open-source tools for ML development, deployment, observability and integration
- Background on DevOps and MLOps will be a plus
- Experience working on collaborative code versioning tools like GitHub/GitLab
- Team player with good communication and presentation skills
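The retrieval half of the RAG work described above can be shown in miniature. Real deployments use embedding models and a vector store (the LangChain/RAGAS stack the skills list names); this toy ranks documents by cosine similarity of bag-of-words vectors, and the corpus and query are invented.

```python
# Toy sketch of the "R" in RAG: rank documents by cosine similarity of
# bag-of-words vectors. Embeddings + a vector store replace this in
# production; corpus and query are illustrative.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Return the top-k documents most similar to the query."""
    qv = Counter(query.lower().split())
    scored = [(cosine(qv, Counter(d.lower().split())), d) for d in docs]
    return [d for _, d in sorted(scored, reverse=True)[:k]]

docs = [
    "guardrails validate llm output before it reaches users",
    "prompt engineering tunes instructions for the model",
    "kubernetes schedules containers across a cluster",
]
print(retrieve("how do guardrails validate output", docs))
```

The retrieved passages would then be injected into the prompt; evaluation tools like RAGAS score exactly this retrieve-then-generate loop.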

Job Description:
We are seeking a talented and detail-oriented Senior Rigging Artist to join our team. The ideal candidate will specialize in creating high-quality rigs for character designs and animations tailored for gaming. The artist must have expertise in Unreal Engine, Blender, and Character Creator/iClone workflows, ensuring the rigs are optimized for real-time performance and advanced character animations.
Key Responsibilities:
- Design and develop robust rigs for 3D characters with detailed anatomy and mechanics.
- Create rigs compatible with Unreal Engine, Blender, and Character Creator/iClone.
- Ensure the rigs support complex animation requirements, including facial rigging, dynamic bones, and advanced deformations.
- Work closely with animators and 3D modelers to ensure seamless integration of rigs into game pipelines.
- Optimize rigs for real-time applications, focusing on gaming environments.
- Troubleshoot and resolve rigging-related issues in character animation workflows.
Requirements:
- Proven experience in character rigging for gaming projects.
- Proficiency in tools like Blender, Unreal Engine, and Character Creator/iClone.
- Strong understanding of anatomy, mechanics, and skin weighting.
- Experience with advanced rigging techniques, including IK/FK systems, dynamic bones, and facial rigging.
- Familiarity with game optimization techniques for rigs.
- Ability to work collaboratively in a team environment and take feedback constructively.
Preferred Qualifications:
- Experience in scripting and automation (e.g., Python for Blender or MEL for Maya).
- Knowledge of physics-based rigging and dynamic simulations.
- Portfolio showcasing previous rigging work for gaming projects.
- Experience in integrating rigs into Unreal Engine workflows for gameplay or cinematics.


Job description:
Design, develop, and deploy ML models.
Build scalable AI solutions for real-world problems.
Optimize model performance and infrastructure.
Collaborate with the Technical Team and execute any other tasks assigned by the company/its representatives.
Required Candidate profile:
Strong Python & ML frameworks (TensorFlow/PyTorch).
Experience with data pipelines & model deployment.
Problem-solving & teamwork skills.
Passion for AI innovation.
Perks and benefits:
Learning Environment, Guidance & Support

Role Summary:
AuxoAI is seeking a skilled and experienced Data Engineer to join our dynamic team. The ideal candidate will have 2+ years of prior experience in data engineering, with a strong background in AWS (Amazon Web Services) technologies. This role offers an exciting opportunity to work on diverse projects, collaborating with cross-functional teams to design, build, and optimize data pipelines and infrastructure.
Responsibilities:
- Design, develop, and maintain scalable data pipelines and ETL processes leveraging AWS services such as S3, Glue, EMR, Lambda, and Redshift.
- Collaborate with data scientists and analysts to understand data requirements and implement solutions that support analytics and machine learning initiatives.
- Optimize data storage and retrieval mechanisms to ensure performance, reliability, and cost-effectiveness.
- Implement data governance and security best practices to ensure compliance and data integrity.
- Troubleshoot and debug data pipeline issues, providing timely resolution and proactive monitoring.
- Stay abreast of emerging technologies and industry trends, recommending innovative solutions to enhance data engineering capabilities.
Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 2+ years of prior experience in data engineering, with a focus on designing and building data pipelines.
- Proficiency in AWS services, particularly S3, Glue, EMR, Lambda, and Redshift.
- Strong programming skills in languages such as Python, Java, or Scala.
- Experience with SQL and NoSQL databases, data warehousing concepts, and big data technologies.
- Familiarity with containerization technologies (e.g., Docker, Kubernetes) and orchestration tools (e.g., Apache Airflow) is a plus.
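For candidates gauging the day-to-day, the pipeline work described above boils down to extract-transform-load steps. Here is a minimal, cloud-agnostic sketch in plain Python; the record schema and function names are invented for illustration and are not tied to any specific AWS service:

```python
import csv
import io
import json

def extract(raw_csv: str) -> list[dict]:
    """Extract: parse raw CSV rows into dictionaries."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows: list[dict]) -> list[dict]:
    """Transform: normalize types and drop incomplete records."""
    out = []
    for row in rows:
        if not row.get("amount"):
            continue  # skip rows missing the required field
        out.append({"id": row["id"], "amount": float(row["amount"])})
    return out

def load(rows: list[dict]) -> str:
    """Load: serialize to newline-delimited JSON, a shape many
    warehouses accept for bulk loading."""
    return "\n".join(json.dumps(r) for r in rows)

raw = "id,amount\n1,10.5\n2,\n3,7.0\n"
ndjson = load(transform(extract(raw)))
```

In a real pipeline each stage would be backed by a managed service (e.g., Glue for transform, Redshift for load), but the shape of the code stays the same.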


About Us
Intain is building a blockchain-based servicing platform for structured finance, backed by top VCs and already managing $5B+ in transactions. We're a 40-member team across India, Singapore & NYC, and 50% of our team are women. We blend AI + blockchain to solve real problems in capital markets.
🎥 What we do →
🧠 What You’ll Work On
- Build full-stack AI-driven fintech applications from scratch
- Design scalable microservices & integrate APIs with external systems (banking, RPA tools like Blue Prism/UI Path)
- Use GenAI tools (ChatGPT-4, Gemini) to solve real NLP use cases
- Drive cloud-native development on Azure, CI/CD, and DevSecOps workflows
- Collaborate with cross-functional teams in a flat, Agile environment
🛠️ Skills We're Looking For
- Frontend: React.js
- Backend: Python (Flask preferred), REST APIs
- AI/NLP: ChatGPT / Gemini / GenAI tools
- DBs: PostgreSQL / MySQL, MongoDB
- Tools: RabbitMQ, Git, Jenkins, Docker
- Cloud: Azure (preferred)
- Testing: Jest / Cypress
- Agile, startup-ready mindset
🌟 Bonus Points
- Angular, Redis, Elasticsearch
- UI/UX knowledge
- Security & accessibility best practices
🚀 Why Join Us?
- Work on cutting-edge AI & blockchain tech
- Flat team, fast decisions, global clients
- Remote flexibility + strong team culture
- Competitive compensation

Job Description:
We are seeking a highly analytical and detail-oriented Data Analyst to join our team. The ideal candidate will have strong problem-solving skills, proficiency in SQL and AWS QuickSight, and a passion for extracting meaningful insights from data. You will be responsible for analyzing complex datasets, building reports and dashboards, and providing data-driven recommendations to support business decisions.
Key Responsibilities:
- Extract, transform, and analyze data from multiple sources to generate actionable insights.
- Develop interactive dashboards and reports in AWS QuickSight to visualize trends and key metrics.
- Write optimized SQL queries to retrieve and manipulate data efficiently.
- Collaborate with stakeholders to understand business requirements and provide analytical solutions.
- Identify patterns, trends, and statistical correlations in data to support strategic decision-making.
- Ensure data integrity, accuracy, and consistency across reports.
- Continuously explore new tools, techniques, and methodologies to enhance analytical capabilities.
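As a flavor of the SQL work this role centers on, here is a self-contained sketch using Python's built-in SQLite driver; the orders table and its columns are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("north", 100.0), ("north", 50.0), ("south", 75.0)],
)

# Aggregate revenue per region -- the kind of query that would feed
# a QuickSight dashboard once pointed at a real warehouse.
rows = conn.execute(
    "SELECT region, SUM(amount) AS revenue "
    "FROM orders GROUP BY region ORDER BY revenue DESC"
).fetchall()
```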
Qualifications & Skills:
- Strong proficiency in SQL for querying and data manipulation.
- Hands-on experience with AWS QuickSight for data visualization and reporting.
- Strong analytical thinking and problem-solving skills with the ability to interpret complex data.
- Experience working with large datasets and relational databases.
- Passion for slicing and dicing data to uncover key insights.
- Exceptional communication skills to effectively understand business requirements and present insights.
- A growth mindset and a strong appetite for continuous learning and improvement.
Preferred Qualifications:
- Experience with Python is a plus.
- Familiarity with cloud-based data environments (AWS etc).
- Familiarity with leveraging existing LLMs/AI tools to enhance productivity, automate repetitive tasks, and improve analysis efficiency.

At Palcode.ai, we're on a mission to fix the massive inefficiencies in pre-construction. Think about it - in a $10 trillion industry, estimators still spend weeks analyzing bids, project managers struggle with scattered data, and costly mistakes slip through complex contracts. We're fixing this with purpose-built AI agents that work. Our platform cuts pre-construction workflows from weeks to hours. It's not just about AI - it's about bringing real, measurable impact to an industry ready for change. We are backed by names like AWS for Startups, Upekkha Accelerator, and Microsoft for Startups.
Why Palcode.ai
Tackle Complex Problems: Build AI that reads between the lines of construction bids, spots hidden risks in contracts, and makes sense of fragmented project data
High-Impact Code: Your code won't sit in a backlog – it goes straight to estimators and project managers who need it yesterday
Tech Challenges That Matter: Design systems that process thousands of construction documents, handle real-time pricing data, and make intelligent decisions
Build & Own: Shape our entire tech stack, from data processing pipelines to AI model deployment
Quick Impact: Small team, huge responsibility. Your solutions directly impact project decisions worth millions
Learn & Grow: Master the intersection of AI, cloud architecture, and construction tech while working with founders who've built and scaled construction software
Your Role:
- Design and build our core AI services and APIs using Python
- Create reliable, scalable backend systems that handle complex data
- Work on our web frontend using React JS
- Knowledge of Redux, React JS, HTML, and CSS is a must
- Help set up cloud infrastructure and deployment pipelines
- Collaborate with our AI team to integrate machine learning models
- Write clean, tested, production-ready code
You'll fit right in if:
- You have 1 year of hands-on Python development experience
- You have 1 year of hands-on React JS development experience
- You're comfortable with full-stack development and cloud services
- You write clean, maintainable code and follow good engineering practices
- You're curious about AI/ML and eager to learn new technologies
- You enjoy fast-paced startup environments and take ownership of your work
How we will set you up for success
- You will work closely with the Founding team to understand what we are building.
- You will be given comprehensive training about the tech stack, with an opportunity to avail virtual training as well.
- You will be involved in a monthly one-on-one with the founders to discuss feedback
- A unique opportunity to learn from the best - we are Gold partners of the AWS, Razorpay, and Microsoft startup programs, with access to experienced mentors to discuss and brainstorm ideas with.
- You’ll have a lot of creative freedom to execute new ideas. As long as you can convince us, and you’re confident in your skills, we’re here to back you in your execution.
Location: Bangalore
Compensation: Competitive salary + Meaningful equity
If you get excited about solving hard problems that have real-world impact, we should talk.
All the best!!

Role & Responsibilities
Take end-to-end ownership of critical backend services — from architecture and development to deployment and scale.
Design systems for performance, reliability, and observability. Identify bottlenecks and eliminate them proactively.
Collaborate with product and design to deeply understand user pain points and build the right solutions.
Work independently and own complex modules with minimal oversight.
Champion clean, maintainable code and help set a high bar for engineering excellence across the team.
Stay up-to-date with new tools, technologies, and backend trends — and bring the best ideas into our stack.
Ideal Candidate
2+ years of backend development experience, ideally with Kotlin and Spring Boot (or willingness to ramp up quickly).
You’ve worked in fast-moving teams and thrive when given room to figure things out.
You take initiative and can drive complex modules to completion without needing constant guidance.
Strong with both low-level and high-level design; you know how to build scalable, reliable RESTful APIs.
Proficient with relational databases and aware of common performance pitfalls.
Confident with debugging and optimizing — memory leaks, latency issues, and other hard-to-find problems don’t scare you.
You write clean, testable code and know how to leave systems better than you found them.
You bring a product mindset — caring not just about what’s built, but why and how it delivers value to users.
We are hiring a skilled Backend Developer to design and manage server-side applications, APIs, and database systems.
Key Responsibilities:
- Develop and manage APIs with Node.js and Express.js.
- Work with MongoDB and Mongoose for database management.
- Implement secure authentication using JWT.
- Optimize backend systems for performance and scalability.
- Deploy backend services on VPS and manage servers.
- Collaborate with frontend teams and use Git/GitHub for version control.
Required Skills:
- Node.js, Express.js
- MongoDB, Mongoose
- REST API, JWT
- Git, GitHub, VPS hosting
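The posting targets Node.js/Express (where one would typically reach for the jsonwebtoken package), but the JWT mechanics behind "secure authentication using JWT" are language-independent. A stdlib-only Python sketch of HS256 signing and verification, with invented helper names:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, per the JWT compact format."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: bytes) -> str:
    """Build a compact HS256 JWT: header.payload.signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_jwt(token: str, secret: bytes) -> bool:
    """Recompute the signature and compare in constant time."""
    header, body, sig = token.split(".")
    expected = b64url(
        hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    )
    return hmac.compare_digest(sig, expected)

token = sign_jwt({"sub": "user-42"}, b"dev-secret")
```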
Qualifications:
- Bachelor’s degree in Computer Science or related field.
- Strong portfolio or GitHub profile preferred.

We are looking for a talented Frontend Developer to create modern, responsive, and interactive web applications using the latest technologies.
Key Responsibilities:
- Develop user interfaces with React.js, Redux, JavaScript (ES6+), HTML5, CSS3, and Tailwind CSS.
- Optimize applications for speed, scalability, and cross-browser compatibility.
- Integrate REST APIs and ensure seamless UI/UX.
- Manage deployments using Vercel and Netlify.
- Collaborate with backend teams and use Git/GitHub for version control.
Required Skills:
- React.js, Redux, JavaScript (ES6+)
- HTML5, CSS3, Tailwind CSS
- REST API integration
- Git, GitHub, Vercel, Netlify
Qualifications:
- Bachelor’s degree in Computer Science or related field.
- Strong portfolio or GitHub profile preferred.

Work Mode: Hybrid
Need B.Tech, BE, M.Tech, ME candidates - Mandatory
Must-Have Skills:
● Educational qualification: B.Tech, BE, M.Tech, or ME in any field.
● Minimum of 3 years of proven experience as a Data Engineer.
● Strong proficiency in Python programming language and SQL.
● Experience in DataBricks and setting up and managing data pipelines, data warehouses/lakes.
● Good comprehension and critical thinking skills.
● Note: the salary bracket will vary according to the candidate's experience -
- 4 to 6 years of experience: up to 22 LPA
- 5 to 8 years of experience: up to 30 LPA
- More than 8 years of experience: up to 40 LPA




Company
Crypto made easy 🚀
We are the bridge between your crypto world and everyday life; trade pairs, book flights and hotels, and purchase gift cards with your favourite currencies. All in one best-in-class digital experience. It's not rocket science.
Why Join?
By joining CryptoXpress, you'll be at the cutting edge of merging digital currency with real-world services and products. We offer a stimulating work environment where innovation and creativity are highly valued. This remote role provides the flexibility to work from any location, promoting a healthy work-life balance. We are dedicated to fostering growth and learning, offering ample opportunities for professional development in the rapidly expanding fields of AI, blockchain technology, cryptocurrency, digital marketing and e-commerce.
Role Description
We are seeking an Application Developer for a full-time remote position at CryptoXpress. In this role, you will be responsible for developing and maintaining state-of-the-art mobile and web applications that integrate seamlessly with our blockchain and API technologies. The ideal candidate will bring a passion for creating exceptional user experiences, a deep understanding of React Native and JavaScript, and experience in building responsive and scalable applications.
Job Requirements:
- Exposure and hands-on experience in mobile application development.
- Significant experience working with React web and mobile along with tools like Flux, Flow, Redux, etc.
- In-depth knowledge of JavaScript, CSS, HTML, and functional programming.
- Strong knowledge of React fundamentals, including Virtual DOM, component lifecycle, and component state.
- Comprehensive understanding of the full mobile app development lifecycle, including prototyping.
- Proficiency in type checking, unit testing, TypeScript, PropTypes, and code debugging.
- Experience working with REST APIs, document request models, offline storage, and third-party libraries.
- Solid understanding of user interface design, responsive design, and web technologies.
- Familiarity with React Native tools such as Jest, Enzyme, and ESLint.
- Basic knowledge of blockchain technology.
Essential Skill Set:
- React Native & ReactJS
- Python (Flask)
- Node.js, Next.js
- Web3.js / Ethers.js integration experience
- MongoDB, Strapi, Firebase
- API design and integration
- In-app analytics / messaging tools (e.g., Firebase Messaging)
- Wallet integrations or crypto payment gateways
How to Apply:
Interested candidates must complete the application form at
https://forms.gle/J1giXJeg993fZViX6
Join us and help shape the future of crypto in everyday life!
💡Pro Tip: Tips for Application Success
- Show your enthusiasm for crypto, travel, and digital innovation
- Mention any self-learning initiatives or personal crypto experiments
- Be honest about what you don’t know — we value growth mindsets
- Explore CryptoXpress before applying — take 2 minutes to download and try the app so you understand what we’re building



Title - Principal Software Engineer
Company Summary :
As the recognized global standard for project-based businesses, Deltek delivers software and information solutions to help organizations achieve their purpose. Our market leadership stems from the work of our diverse employees who are united by a passion for learning, growing and making a difference. At Deltek, we take immense pride in creating a balanced, values-driven environment, where every employee feels included and empowered to do their best work. Our employees put our core values into action daily, creating a one-of-a-kind culture that has been recognized globally. Thanks to our incredible team, Deltek has been named one of America's Best Midsize Employers by Forbes, a Best Place to Work by Glassdoor, a Top Workplace by The Washington Post and a Best Place to Work in Asia by World HRD Congress. www.deltek.com
Business Summary :
The Deltek Engineering and Technology team builds best-in-class solutions to delight customers and meet their business needs. We are laser-focused on software design, development, innovation and quality. Our team of experts has the talent, skills and values to deliver products and services that are easy to use, reliable, sustainable and competitive. If you're looking for a safe environment where ideas are welcome, growth is supported and questions are encouraged – consider joining us as we explore the limitless opportunities of the software industry.
Principal Software Engineer
Position Responsibilities :
- Develop and manage integrations with third-party services and APIs using industry-standard protocols like OAuth2 for secure authentication and authorization.
- Develop scalable, performant APIs for Deltek products
- Take accountability for the successful implementation of requirements by the team.
- Troubleshoot, debug, and optimize code and workflows for better performance and scalability.
- Undertake analysis, design, coding and testing activities of complex modules
- Support the company’s development processes and development guidelines including code reviews, coding style and unit testing requirements.
- Participate in code reviews and provide mentorship to junior developers.
- Stay up-to-date with emerging technologies and best practices in Python development, AWS, and frontend frameworks like React, and suggest optimisations based on them
- Adopt industry best practices in all your projects - TDD, CI/CD, Infrastructure as Code, linting
- Pragmatic enough to deliver an MVP, but aspirational enough to think about how it will work with millions of users and adapt to new challenges
- Readiness to hit the ground running – you may not know how to solve everything right off the bat, but you will put in the time and effort to understand so that you can design architecture of complex features with multiple components.
Qualifications :
- A college degree in Computer Science, Software Engineering, Information Science or a related field is required
- Minimum 8-10 years of experience; sound programming skills in Python, the .Net platform (VB & C#), TypeScript/JavaScript, frontend technologies like React.js/Ember.js, and SQL databases (e.g., PostgreSQL)
- Experience in backend development and Apache Airflow (or equivalent framework).
- Experience building APIs and optimizing SQL queries with performance in mind.
- Experience with Agile Development
- Experience in writing and maintaining unit tests and using testing frameworks is desirable
- Exposure to Amazon Web Services (AWS) technologies, Terraform, Docker is a plus
- Strong desire to continually improve knowledge and skills through personal development activities and apply their knowledge and skills to continuous software improvement.
- The ability to work under tight deadlines, tolerate ambiguity and work effectively in an environment with multiple competing priorities.
- Strong problem-solving and debugging skills.
- Ability to work in an Agile environment and collaborate with cross-functional teams.
- Familiarity with version control systems like Git.
- Excellent communication skills and the ability to work effectively in a remote or hybrid team setting.


Title - Sr Software Engineer
Company Summary :
As the recognized global standard for project-based businesses, Deltek delivers software and information solutions to help organizations achieve their purpose. Our market leadership stems from the work of our diverse employees who are united by a passion for learning, growing and making a difference. At Deltek, we take immense pride in creating a balanced, values-driven environment, where every employee feels included and empowered to do their best work. Our employees put our core values into action daily, creating a one-of-a-kind culture that has been recognized globally. Thanks to our incredible team, Deltek has been named one of America's Best Midsize Employers by Forbes, a Best Place to Work by Glassdoor, a Top Workplace by The Washington Post and a Best Place to Work in Asia by World HRD Congress. www.deltek.com
Business Summary :
The Deltek Engineering and Technology team builds best-in-class solutions to delight customers and meet their business needs. We are laser-focused on software design, development, innovation and quality. Our team of experts has the talent, skills and values to deliver products and services that are easy to use, reliable, sustainable and competitive. If you're looking for a safe environment where ideas are welcome, growth is supported and questions are encouraged – consider joining us as we explore the limitless opportunities of the software industry.
External Job Title :
Sr Software Engineer
Position Responsibilities :
- Develop and manage integrations with third-party services and APIs using industry-standard protocols like OAuth2 for secure authentication and authorization.
- Develop scalable, performant APIs for Deltek products
- Take accountability for the successful implementation of requirements by the team.
- Troubleshoot, debug, and optimize code and workflows for better performance and scalability.
- Undertake analysis, design, coding and testing activities of complex modules
- Support the company’s development processes and development guidelines including code reviews, coding style and unit testing requirements.
- Participate in code reviews and provide mentorship to junior developers.
- Stay up-to-date with emerging technologies and best practices in Python development, AWS, and frontend frameworks like React.
- Adopt industry best practices in all your projects - TDD, CI/CD, Infrastructure as Code, linting
- Pragmatic enough to deliver an MVP, but aspirational enough to think about how it will work with millions of users and adapt to new challenges
- Readiness to hit the ground running – you may not know how to solve everything right off the bat, but you will put in the time and effort to understand so that you can design architecture of complex features with multiple components.
Qualifications :
- A college degree in Computer Science, Software Engineering, Information Science or a related field is required
- Minimum 4-6 years of experience; sound programming skills in Python, the .Net platform (VB & C#), TypeScript/JavaScript, frontend technologies like React.js/Ember.js, and SQL databases (e.g., PostgreSQL)
- Experience in backend development and Apache Airflow (or equivalent framework).
- Experience building APIs and optimizing SQL queries with performance in mind.
- Experience with Agile Development
- Experience in writing and maintaining unit tests and using testing frameworks is desirable
- Exposure to Amazon Web Services (AWS) technologies, Terraform, Docker is a plus
- Strong desire to continually improve knowledge and skills through personal development activities and apply their knowledge and skills to continuous software improvement.
- The ability to work under tight deadlines, tolerate ambiguity and work effectively in an environment with multiple competing priorities.
- Strong problem-solving and debugging skills.
- Ability to work in an Agile environment and collaborate with cross-functional teams.
- Familiarity with version control systems like Git.
- Excellent communication skills and the ability to work effectively in a remote or hybrid team setting.


About Moative
Moative, an Applied AI Services company, designs AI roadmaps, builds co-pilots and predictive AI solutions for companies in energy, utilities, packaging, commerce, and other primary industries. Through Moative Labs, we aspire to build micro-products and launch AI startups in vertical markets.
Work you’ll do
As an ML/AI Engineer, you will be responsible for designing and developing intelligent software to solve business problems. You will collaborate with data scientists and domain experts to incorporate ML and AI technologies into existing or new workflows. You'll analyze new opportunities and ideas. You'll train and evaluate ML models, conduct experiments, and develop PoCs and prototypes.
Responsibilities
- Design, train, improve, and launch machine learning models using tools such as XGBoost, TensorFlow, and PyTorch.
- Own the end-to-end ML lifecycle and MLOps, including model deployment, performance tuning, ongoing evaluation and maintenance.
- Improve the way we evaluate and monitor model and system performance.
- Propose and implement ideas that directly impact our operational and strategic metrics.
- Create tools and frameworks that accelerate the delivery of ML/AI products.
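The posting names XGBoost, TensorFlow, and PyTorch; as a dependency-free illustration of the train-evaluate loop those frameworks wrap, here is gradient descent fitting a one-feature linear model in pure Python. The data and learning rate are invented for the example:

```python
# Fit y = w*x + b by gradient descent on mean squared error.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]  # generated from y = 2x + 1

w, b, lr = 0.0, 0.0, 0.05
for _ in range(2000):
    # Gradients of MSE with respect to w and b, averaged over the data.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

# Evaluate: mean squared error on the training data.
mse = sum((w * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
```

Real frameworks add autodiff, minibatching, and GPU execution, but the fit-then-evaluate structure is the same.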
Who you are
You are an engineer who is passionate about using AI/ML to improve processes, products and delight customers. You have experience working with less-than-clean data, developing ML models, and orchestrating their deployment to production. You thrive on taking initiative, are very comfortable with ambiguity and can passionately defend your decisions.
Requirements and skills
- 4+ years of experience in programming languages such as Python, PySpark, or Scala.
- Proficient knowledge of cloud platforms (e.g., AWS, Azure, GCP), containerization and DevOps tooling (Docker, Kubernetes), and MLOps practices and platforms such as MLflow.
- Strong understanding of ML algorithms and frameworks (e.g., TensorFlow, PyTorch).
- Experience with AI foundational models and associated architectural and solution development frameworks
- Broad understanding of data structures, data engineering, statistical methodologies and machine learning models.
- Strong communication skills and teamwork.
Working at Moative
Moative is a young company, but we believe strongly in thinking long-term, while acting with urgency. Our ethos is rooted in innovation, efficiency and high-quality outcomes. We believe the future of work is AI-augmented and boundaryless.
Here are some of our guiding principles:
- Think in decades. Act in hours. As an independent company, our moat is time. While our decisions are for the long-term horizon, our execution will be fast – measured in hours and days, not weeks and months.
- Own the canvas. Throw yourself in to build, fix or improve – anything that isn’t done right, irrespective of who did it. Be selfish about improving across the organization – because once the rot sets in, we waste years in surgery and recovery.
- Use data or don’t use data. Use data where you ought to but not as a ‘cover-my-back’ political tool. Be capable of making decisions with partial or limited data. Get better at intuition and pattern-matching. Whichever way you go, be mostly right about it.
- Avoid work about work. Process creeps in unless we constantly question it. We are deliberate about which rituals we commit to, because they take time away from the actual work. We truly believe that a meeting that could be an email should be an email, and you don't need the person with the highest title to say that out loud.
- High revenue per person. We work backwards from this metric. Our default is to automate instead of hiring. We multi-skill our people to own more outcomes than hiring someone who has less to do. We don’t like squatting and hoarding that comes in the form of hiring for growth. High revenue per person comes from high quality work from everyone. We demand it.
If this role and our work is of interest to you, please apply. We encourage you to apply even if you believe you do not meet all the requirements listed above.
That said, you should demonstrate that you are in the 90th percentile or above. This may mean that you have studied in top-notch institutions, won competitions that are intellectually demanding, built something of your own, or rated as an outstanding performer by your current or previous employers.
The position is based out of Chennai. Our work currently involves significant in-person collaboration and we expect you to work out of our offices in Chennai.

Role Overview
We are looking for a QA Engineer with 2–5 years of experience in manual testing and software quality assurance. The ideal candidate is detail-oriented, analytical, and a strong team player, capable of ensuring a high standard of product quality across our web and mobile applications. While the role is primarily focused on manual testing, a foundational understanding of automation and scripting will be helpful when collaborating with the automation team.
Responsibilities
- Design, write, and execute comprehensive manual test cases for web and mobile applications.
- Perform various types of testing including functional, regression, exploratory, UI, and API testing.
- Log and track bugs using test management and defect tracking tools.
- Collaborate with developers, product managers, and other QA members to ensure thorough test coverage.
- Support the automation team by identifying automatable scenarios and reviewing scripts when necessary.
- Participate in all phases of the QA lifecycle – including test planning, execution, and release sign-off.
- Ensure timely delivery of high-quality software releases.
Requirements
Education:
- Bachelor’s degree in Engineering (Computer Science, IT, or a related field)
Experience:
- 2–5 years of hands-on experience in manual testing for web and mobile applications
Must-Have Skills:
- Strong knowledge of test case writing, planning, and defect management
- Basic familiarity with automation tools like Selenium
- Basic knowledge of Python
- Experience with API testing using tools like Postman
- Understanding of Agile/Scrum development methodologies
- Excellent communication skills and ability to collaborate across functions
Good-to-Have:
- Exposure to version control systems like Git and CI/CD tools like Jenkins
- Basic understanding of performance or security testing
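Since the role asks for basic Python alongside manual testing, here is the kind of small automated check a manual test case often maps to, using the standard unittest module; the apply_discount function is a made-up system under test:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Function under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    """Normal, boundary, and negative scenarios -- the same structure
    a written manual test case follows."""
    def test_normal_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_boundary_full_discount(self):
        self.assertEqual(apply_discount(99.99, 100), 0.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(50.0, 150)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTests)
)
```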


About the Company – Gruve
Gruve is an innovative software services startup dedicated to empowering enterprise customers in managing their Data Life Cycle. We specialize in Cybersecurity, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence.
As a well-funded early-stage startup, we offer a dynamic environment, backed by strong customer and partner networks. Our mission is to help customers make smarter decisions through data-driven business strategies.
Why Gruve
At Gruve, we foster a culture of:
- Innovation, collaboration, and continuous learning
- Diversity and inclusivity, where everyone is encouraged to thrive
- Impact-focused work — your ideas will shape the products we build
We’re an equal opportunity employer and encourage applicants from all backgrounds. We appreciate all applications, but only shortlisted candidates will be contacted.
Position Summary
We are seeking a highly skilled Software Engineer to lead the development of an Infrastructure Asset Management Platform. This platform will assist infrastructure teams in efficiently managing and tracking assets for regulatory audit purposes.
You will play a key role in building a comprehensive automation solution to maintain a real-time inventory of critical infrastructure assets.
Key Responsibilities
- Design and develop an Infrastructure Asset Management Platform for tracking a wide range of assets across multiple environments.
- Build and maintain automation to track:
  - Physical Assets: Servers, power strips, racks, DC rooms & buildings, security cameras, network infrastructure.
  - Virtual Assets: Load balancers (LTM), communication equipment, IPs, virtual networks, VMs, containers.
  - Cloud Assets: Public cloud services, process registry, database resources.
- Collaborate with infrastructure teams to understand asset-tracking requirements and convert them into technical implementations.
- Optimize performance and scalability to handle large-scale asset data in real-time.
- Document system architecture, implementation, and usage.
- Generate reports for compliance and auditing.
- Ensure integration with existing systems for streamlined asset management.
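To make the asset-tracking idea concrete, here is a toy Python sketch of the kind of inventory registry such a platform maintains; the Asset fields and category names are illustrative, not the actual design:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Asset:
    asset_id: str
    category: str  # "physical" | "virtual" | "cloud"
    kind: str      # e.g. "server", "vm", "bucket"
    last_seen: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

class Inventory:
    """Toy real-time inventory: collectors upsert whatever they discover,
    keyed by asset ID, so re-discovery updates rather than duplicates."""
    def __init__(self):
        self._assets: dict[str, Asset] = {}

    def upsert(self, asset: Asset) -> None:
        self._assets[asset.asset_id] = asset

    def by_category(self, category: str) -> list[Asset]:
        return [a for a in self._assets.values() if a.category == category]

inv = Inventory()
inv.upsert(Asset("rack-01", "physical", "rack"))
inv.upsert(Asset("vm-17", "virtual", "vm"))
inv.upsert(Asset("vm-17", "virtual", "vm"))  # re-discovery: update, not duplicate
```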
Basic Qualifications
- Bachelor’s or Master’s degree in Computer Science or a related field
- 3–6 years of experience in software development
- Strong proficiency in Golang and Python
- Hands-on experience with public cloud infrastructure (AWS, GCP, Azure)
- Deep understanding of automation solutions and parallel computing principles
Preferred Qualifications
- Excellent problem-solving skills and attention to detail
- Strong communication and teamwork skills


About Data Axle:
Data Axle Inc. has been an industry leader in data, marketing solutions, sales and research for over 45 years in the USA. Data Axle has set up a strategic global center of excellence in Pune. This center delivers mission critical data services to its global customers powered by its proprietary cloud-based technology platform and by leveraging proprietary business & consumer databases. Data Axle is headquartered in Dallas, TX, USA.
Roles and Responsibilities:
- Design, implement, and manage scalable analytical data infrastructure, enabling efficient access to large datasets and high-performance computing on Google Cloud Platform (GCP).
- Develop and optimize data pipelines using GCP-native services like BigQuery, Dataflow, Dataproc, Pub/Sub, Cloud Data Fusion, and Cloud Storage.
- Work with diverse data sources to extract, transform, and load data into enterprise-grade data lakes and warehouses, ensuring high availability and reliability.
- Implement and maintain real-time data streaming solutions using Pub/Sub, Dataflow, and Kafka.
- Research and integrate the latest big data and visualization technologies to enhance analytics capabilities and improve efficiency.
- Collaborate with cross-functional teams to implement machine learning models and AI-driven analytics solutions using Vertex AI and BigQuery ML.
- Continuously improve existing data architectures to support scalability, performance optimization, and cost efficiency.
- Enhance data security and governance by implementing industry best practices for access control, encryption, and compliance.
- Automate and optimize data workflows to simplify reporting, dashboarding, and self-service analytics using Looker and Data Studio.
Basic Qualifications
- 7+ years of experience in data engineering, software development, business intelligence, or data science, with expertise in large-scale data processing and analytics.
- Strong proficiency in SQL and experience with BigQuery for data warehousing.
- Hands-on experience in designing and developing ETL/ELT pipelines using GCP services (Cloud Composer, Dataflow, Dataproc, Data Fusion, or Apache Airflow).
- Expertise in distributed computing and big data processing frameworks, such as Apache Spark, Hadoop, or Flink, particularly within Dataproc and Dataflow environments.
- Experience with business intelligence and data visualization tools, such as Looker, Tableau, or Power BI.
- Knowledge of data governance, security best practices, and compliance requirements in cloud environments.
Preferred Qualifications:
- Degree/Diploma in Computer Science, Engineering, Mathematics, or a related technical field.
- Experience working with GCP big data technologies, including BigQuery, Dataflow, Dataproc, Pub/Sub, and Cloud SQL.
- Hands-on experience with real-time data processing frameworks, including Kafka and Apache Beam.
- Proficiency in Python, Java, or Scala for data engineering and pipeline development.
- Familiarity with DevOps best practices, CI/CD pipelines, Terraform, and infrastructure-as-code for managing GCP resources.
- Experience integrating AI/ML models into data workflows, leveraging BigQuery ML, Vertex AI, or TensorFlow.
- Understanding of Agile methodologies, software development life cycle (SDLC), and cloud cost optimization strategies.


Roles & Responsibilities:
We are looking for a Data Scientist to join the Data Science Client Services team to continue our success in identifying high-quality target audiences that generate profitable marketing returns for our clients. We are looking for experienced data science, machine learning, and MLOps practitioners to design, build, and deploy impactful predictive marketing solutions that serve a wide range of verticals and clients. The right candidate will enjoy contributing to and learning from a highly talented team and working on a variety of projects.
We are looking for a Manager, Data Scientist who will be responsible for:
- Ownership of design, implementation, and deployment of machine learning algorithms in a modern Python-based cloud architecture
- Design or enhance ML workflows for data ingestion, model design, model inference, and scoring
- Oversee team project execution and delivery
- Establish peer review guidelines for high quality coding to help develop junior team members’ skill set growth, cross-training, and team efficiencies
- Visualize and publish model performance results and insights to internal and external audiences
Qualifications:
- Masters in a relevant quantitative, applied field (Statistics, Econometrics, Computer Science, Mathematics, Engineering)
- Minimum of 12+ years of work experience in the end-to-end lifecycle of ML model development and deployment into production within a cloud infrastructure (Databricks is highly preferred)
- Proven ability to manage the output of a small team in a fast-paced environment and to lead by example in the fulfilment of client requests
- Exhibit deep knowledge of core mathematical principles relating to data science and machine learning (ML Theory + Best Practices, Feature Engineering and Selection, Supervised and Unsupervised ML, A/B Testing, etc.)
- Proficiency in Python and SQL required; PySpark/Spark experience a plus
- Ability to conduct productive peer reviews and maintain proper code structure in GitHub
- Proven experience developing, testing, and deploying various ML algorithms (neural networks, XGBoost, Bayes, and the like)
- Working knowledge of modern CI/CD methods
This position description is intended to describe the duties most frequently performed by an individual in this position. It is not intended to be a complete list of assigned duties but to describe a position level.


Key Responsibilities
● Develop and maintain web applications using Django and Flask frameworks.
● Design and implement RESTful APIs using Django Rest Framework (DRF).
● Deploy, manage, and optimize applications on AWS services, including EC2, S3, RDS, Lambda, and CloudFormation.
● Build and integrate APIs for AI/ML models into existing systems.
● Create scalable machine learning models using frameworks like PyTorch, TensorFlow, and scikit-learn.
● Implement transformer architectures (e.g., BERT, GPT) for NLP and other advanced AI use cases.
● Optimize machine learning models through advanced techniques such as hyperparameter tuning, pruning, and quantization.
● Deploy and manage machine learning models in production environments using tools like TensorFlow Serving, TorchServe, and AWS SageMaker.
● Ensure the scalability, performance, and reliability of applications and deployed models.
● Collaborate with cross-functional teams to analyze requirements and deliver effective technical solutions.
● Write clean, maintainable, and efficient code following best practices.
● Conduct code reviews and provide constructive feedback to peers.
● Stay up-to-date with the latest industry trends and technologies, particularly in AI/ML.
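One of the optimization techniques listed above, hyperparameter tuning, can be sketched as a simple grid search. This is a toy illustration only: the `validation_error` objective is a made-up stand-in, where a real workflow would train and score an actual model for each parameter combination.

```python
from itertools import product

# Made-up objective standing in for a validation metric; in practice this
# would train a model with the given hyperparameters and return its error.
def validation_error(lr, batch_size):
    return (lr - 0.01) ** 2 + (batch_size - 32) ** 2 / 1e4

def grid_search(param_grid):
    """Return the parameter combination with the lowest validation error."""
    best_params, best_score = None, float("inf")
    for lr, bs in product(param_grid["lr"], param_grid["batch_size"]):
        score = validation_error(lr, bs)
        if score < best_score:
            best_params, best_score = {"lr": lr, "batch_size": bs}, score
    return best_params, best_score

grid = {"lr": [0.001, 0.01, 0.1], "batch_size": [16, 32, 64]}
best, score = grid_search(grid)
print(best)  # {'lr': 0.01, 'batch_size': 32}
```

Libraries such as scikit-learn provide the same idea productized (e.g., exhaustive search over a parameter grid with cross-validation), but the control flow is essentially the loop above.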
Required Skills and Qualifications
● Bachelor’s degree in Computer Science, Engineering, or a related field.
● 3+ years of professional experience as a Python Developer.
● Proficient in Python with a strong understanding of its ecosystem.
● Extensive experience with Django and Flask frameworks.
● Hands-on experience with AWS services for application deployment and management.
● Strong knowledge of Django Rest Framework (DRF) for building APIs.
● Expertise in machine learning frameworks such as PyTorch, TensorFlow, and scikit-learn.
● Experience with transformer architectures for NLP and advanced AI solutions.
● Solid understanding of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB).
● Familiarity with MLOps practices for managing the machine learning lifecycle.
● Basic knowledge of front-end technologies (e.g., JavaScript, HTML, CSS) is a plus.
● Excellent problem-solving skills and the ability to work independently and as part of a team.
● Strong communication skills and the ability to articulate complex technical concepts to non-technical stakeholders.




● Hands-on development experience as a Data Analyst and/or ML Engineer.
● Coding experience in Python.
● Good experience with ML models and ML algorithms.
● Experience with statistical modelling of large data sets.
● Looking for immediate joiners or candidates with a maximum notice period of 30 days.
● Candidates based in Bangalore, Pune, Hyderabad, or Mumbai will be preferred.
What You will do:
● Play the role of Data Analyst / ML Engineer
● Collection, cleanup, exploration and visualization of data
● Perform statistical analysis on data and build ML models
● Implement ML models using some of the popular ML algorithms
● Use Excel to perform analytics on large amounts of data
● Understand, model and build to bring actionable business intelligence out of data that is available in different formats
● Work with data engineers to design, build, test and monitor data pipelines for ongoing business operations
Basic Qualifications:
● Experience: 4+ years.
● Hands-on development experience playing the role of Data Analyst and/or ML Engineer.
● Experience in working with Excel for data analytics
● Experience with statistical modelling of large data sets
● Experience with ML models and ML algorithms
● Coding experience in Python
Nice to have Qualifications:
● Experience with wide variety of tools used in ML
● Experience with Deep learning
Benefits:
● Competitive salary.
● Hybrid work model.
● Learning and gaining experience rapidly.
● Reimbursement for basic working set up at home.
● Insurance (including top-up insurance for COVID).

The CRM team is responsible for communications across email, mobile push and web push channels. We focus on our existing customers and manage our interactions and touchpoints to ensure that we optimise revenue generation, drive traffic to the website and app, and extend the active customer lifecycle. We also work closely with the Marketing and Product teams to ensure that any initiatives are integrated with CRM activities.
Our setup is highly data driven and requires the understanding and skill set to work with large datasets, employing data science techniques to create personalised content at a 1:1 level. The candidate for this role will have to demonstrate a strong background working in this environment, and have a proven track record of striving to find technical solutions for the many projects and situations that the business encounters.
Overview of role :
- Setting up automation pipelines in Python and SQL to flow data in and out of CRM platform for reporting, personalisation and use in data warehousing (Redshift)
- Writing, managing, and troubleshooting template logic written in Freemarker.
- Building proprietary algorithms for use in CRM campaigns, targeted at improving all areas of customer lifecycle.
- Working with big datasets to segment audiences on a large scale.
- Driving innovation by planning and implementing a range of A/B tests.
- Acting as a technical touchpoint for developer and product teams to push projects over the line.
- Integrating product initiatives into CRM, and performing user acceptance testing (UAT)
- Interacting with multiple departments, and presenting to our executive team to help them understand CRM activities and plan new initiatives.
- Working with third party suppliers to optimise and improve their offering.
- Creating alert systems and troubleshooting tools to check in on health of automated jobs running in Jenkins and CRM platform.
- Setting up automated reporting in Amazon QuickSight.
- Assisting other teams with any technical advice/information they may require.
- When necessary, working in JavaScript to set up Marketing and CRM tags in Adobe Launch.
- Training team members and working with them to make processes more efficient.
- Working with REST APIs to integrate CRM System with a range of technologies from third party vendors to in-house services.
- Contributing to discussions on future strategy, interpretation of test results, and helping resolve any major CRM issues
Key skills required :
- Strong background in SQL
- Experience with a programming language (preferably Python or Freemarker)
- Understanding of REST APIs and how to utilise them
- Tech-savvy: you cast a creative eye on all activities of the team and business and suggest new ideas and improvements
- Comfortable presenting and interacting with all levels of the business and able to communicate technical information in a clear and concise manner.
- Ability to work under pressure and meet tight deadlines.
- Strong attention to detail
- Experience working with large datasets, and able to spot and pick up on important trends
- Understanding of key CRM metrics on performance and deliverability
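The REST API integration work described above often starts with building authenticated requests in Python. Below is a minimal standard-library sketch; the endpoint URL, token, and field names are hypothetical, and no request is actually sent.

```python
import json
import urllib.request

# Hypothetical CRM endpoint; illustrative only.
BASE_URL = "https://crm.example.com/api/v1"

def build_update_request(contact_id, fields, token):
    """Build an authenticated JSON PATCH request for a CRM contact record."""
    body = json.dumps(fields).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/contacts/{contact_id}",
        data=body,
        method="PATCH",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )

req = build_update_request(42, {"email": "user@example.com"}, "secret-token")
print(req.get_method())                 # PATCH
print(req.get_header("Content-type"))   # application/json
```

In production this request would be executed with `urllib.request.urlopen(req)` (or a client such as `requests`), with error handling and retries around the call.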


Exp: 4-6 years
Position: Backend Engineer
Job Location: Bangalore (office near Cubbon Park, opposite JW Marriott)
Work Mode : 5 days work from office
Requirements:
● Engineering graduate with 3-5 years of experience in software product development.
● Proficient in Python, Node.js, Go
● Good knowledge of SQL and NoSQL
● Strong Experience in designing and building APIs
● Experience with working on scalable interactive web applications
● A clear understanding of software design constructs and their implementation
● Understanding of the threading limitations of Python and multi-process architecture
● Experience implementing Unit and Integration testing
● Exposure to the Finance domain is preferred
● Strong written and oral communication skills
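The threading limitation referred to above is CPython's Global Interpreter Lock (GIL). A small sketch of why it matters: the thread pool below computes CPU-bound results correctly, but the GIL lets only one thread execute Python bytecode at a time, so there is no parallel speedup; that is why CPU-bound services use a multi-process architecture instead.

```python
from concurrent.futures import ThreadPoolExecutor

def cpu_bound(n):
    """A CPU-bound task: sum of squares below n."""
    return sum(i * i for i in range(n))

# Threads interleave under the GIL, so this is concurrent but not parallel
# for CPU-bound work. Swapping ThreadPoolExecutor for ProcessPoolExecutor
# gives each task its own interpreter (and GIL), enabling true parallelism;
# threads remain the right tool for I/O-bound work, where the GIL is
# released while waiting.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(cpu_bound, [1000, 2000, 3000]))

print(results)
```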

We are looking for skilled and passionate Data Engineers with a strong foundation in Python programming and hands-on experience working with APIs, AWS cloud, and modern development practices. The ideal candidate will have a keen interest in building scalable backend systems and working with big data tools like PySpark.
Key Responsibilities:
- Write clean, scalable, and efficient Python code.
- Work with Python frameworks such as PySpark for data processing.
- Design, develop, update, and maintain APIs (RESTful).
- Deploy and manage code using GitHub CI/CD pipelines.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Work on AWS cloud services for application deployment and infrastructure.
- Basic database design and interaction with MySQL or DynamoDB.
- Debugging and troubleshooting application issues and performance bottlenecks.
Required Skills & Qualifications:
- 4+ years of hands-on experience with Python development.
- Proficient in Python basics with a strong problem-solving approach.
- Experience with AWS Cloud services (EC2, Lambda, S3, etc.).
- Good understanding of API development and integration.
- Knowledge of GitHub and CI/CD workflows.
- Experience in working with PySpark or similar big data frameworks.
- Basic knowledge of MySQL or DynamoDB.
- Excellent communication skills and a team-oriented mindset.
Nice to Have:
- Experience in containerization (Docker/Kubernetes).
- Familiarity with Agile/Scrum methodologies.
As an RPA (Robotic Process Automation) Lead, you will drive the strategic implementation of automation solutions, lead a team in designing and deploying robotic workflows, and collaborate with stakeholders to optimize business processes, ensuring efficiency and innovation.
We are looking for you!
You are a team player, a get-it-done person, intellectually curious, customer focused, self-motivated, and responsible, able to work under pressure with a positive attitude. You have the zeal to think differently, understand that a career is a journey, and make the right choices. Ideal candidates are creative, proactive go-getters, motivated to look for ways to add value to their work.
You are self-motivated with a strong work ethic, positive attitude, and demeanor, enthusiastic when embracing new challenges, able to multitask and prioritize (good time management skills), willingness to learn new technology/methodologies, adaptable and flexible when new products are assigned. You prefer to work independently with less or no supervision. You are process oriented, have a methodical approach and demonstrate quality first approach and preferably who have worked in a result-oriented team(s).
What you’ll do
- Work in customer facing roles, understanding business requirements, Process Assessment.
- Conducts architectural evaluation, design and analysis of automation deployments.
- Hands-on experience in bot design, bot development, testing, and debugging.
- Responsible for preparing and reviewing technical documentation (Solution Design Document).
- Driving best practice design - identifying reusable components, queues, configurable parameters.
- Experience with customer interaction and software development lifecycle, as well as Agile project management methodology.
- Researching, recommending and implementing new processes and technology to improve the quality of services provided.
- Partnering with the Pre-sales team to estimate efforts and craft solutions.
What you will Bring
- Bachelor's degree in Computer Science, or any related field.
- 8 to 12 years of experience with hands-on experience in RPA development and deployment.
- Certifications in RPA platforms, preferably UiPath or Power Automate.
- Hands-on experience working in development or support projects.
- Experience working in Agile SCRUM environment.
- Strong communication, organizational, analytical and problem-solving skills.
- Ability to succeed in a collaborative and fast paced environment.
- Ensures the delivery of high-quality solutions meeting clients' expectations.
- Ability to Lead Teams with Developers and Junior Developers.
- Programming languages: knowledge on at least one of these – C#, Visual Basic, Python, .Net, Java
Why join us?
- Work with a passionate and innovative team in a fast-paced, growth-oriented environment.
- Gain hands-on experience in content marketing with exposure to real-world projects.
- Opportunity to learn from experienced professionals and enhance your marketing skills.
- Contribute to exciting initiatives and make an impact from day one.
- Competitive stipend and potential for growth within the company.
- Recognized for excellence in data and AI solutions with industry awards and accolades.

We are looking for a highly experienced and visionary Tech Lead / Solution Architect with deep expertise in the MERN stack and AWS to join our organization. In this role, you will be responsible for providing technical leadership across multiple projects, guiding architecture decisions, and ensuring scalable, maintainable, and high-quality solutions. You will work closely with cross-functional teams to define technical strategies, mentor developers, and drive the successful execution of complex projects. Your leadership, architectural insight, and hands-on development skills will be key to the team’s success and the organization's technological growth.
Responsibilities:
- You will be responsible for all the technical decisions related to the project.
- Lead and mentor a team of developers, providing technical guidance and expertise.
- Collaborate with product managers, business analysts, and other stakeholders.
- Architect and design technical solutions that align with project goals and industry best practices.
- Develop and maintain scalable, reliable, and efficient software applications.
- Conduct code reviews, ensure code quality, and enforce coding standards.
- Identify technical risks and challenges, and propose solutions to mitigate them.
- Stay updated with emerging technologies and trends in software development.
- Collaborate with cross-functional teams to ensure seamless integration of software components.
Requirements:
- Bachelor's degree / Graduate
- Proven experience 7-10 years as a Technical Lead or similar role in software development (start-up experience preferred)
- Strong technical skills in the MERN stack (MongoDB, Express, React, Node.js), Python, and databases such as Postgres and MySQL.
- Knowledge of cloud technologies (e.g., AWS, Azure, Google Cloud Platform) and microservices architecture.
- Excellent leadership, communication, and interpersonal skills.
- Ability to prioritize tasks, manage multiple projects, and work in a fast-paced environment.
Benefits:
- Competitive salary and benefits package
- Opportunities for professional growth and development
- Collaborative and innovative work environment
- Certifications on us
Joining : Immediate
Location : Malad (West) - Work From Office
This opportunity is work from office. Apply for this job if your current location is Mumbai.

Job Title: QA Tester – Security & Vulnerability Testing
Experience: 3+ Years
Location: Gurugram (6 Days WFO)
Job Summary :
We’re seeking a QA Tester with strong experience in Vulnerability and Security Testing.
The ideal candidate will perform manual and automated penetration testing, identify security flaws, and work closely with development teams to ensure secure, compliant applications.
Key Responsibilities :
- Perform vulnerability assessments on web, mobile, and cloud apps.
- Conduct tests for OWASP Top 10 issues (e.g., SQLi, XSS, CSRF, SSRF).
- Use tools like Burp Suite, OWASP ZAP, Metasploit, Kali Linux, Nessus, etc.
- Automate security testing and integrate with CI/CD (Jenkins, GitHub, GitLab).
- Test and secure APIs, including auth mechanisms (OAuth, JWT, SAML).
- Ensure compliance with ISO 27001, GDPR, HIPAA, PCI-DSS.
Requirements :
- 3+ Years in QA with a focus on Security/Vulnerability Testing.
- Experience in manual & automated security testing.
- Knowledge of scripting (Python, Bash, JS).
- Familiarity with cloud platforms (AWS, Azure, GCP).
- Bonus: Certifications like CEH, OSCP, Security+, etc.
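As a toy illustration of the pattern matching behind automated vulnerability scanning (in no way a substitute for tools like Burp Suite or OWASP ZAP), a naive flagger for two OWASP Top 10 categories could look like the sketch below; the regexes are simplified assumptions.

```python
import re
from urllib.parse import unquote

# Simplified patterns for two OWASP Top 10 issues (SQL injection, XSS).
# Real scanners use far richer detection and active probing.
SUSPICIOUS = {
    "sqli": re.compile(r"('|--|;)\s*(or|and|drop|union)\b|\bunion\s+select\b", re.I),
    "xss": re.compile(r"<\s*script\b|on\w+\s*=", re.I),
}

def flag_payload(value):
    """Return the attack categories a raw or URL-encoded input value matches."""
    decoded = unquote(value)  # attackers often URL-encode payloads
    return [name for name, rx in SUSPICIOUS.items() if rx.search(decoded)]

print(flag_payload("1' OR 1=1 --"))                      # ['sqli']
print(flag_payload("%3Cscript%3Ealert(1)%3C/script%3E"))  # ['xss']
print(flag_payload("plain user input"))                   # []
```

Checks like this are useful in CI smoke tests for input validation, while full penetration testing still requires dedicated tooling and manual review.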


Requirement:
● Role: Fullstack Developer
● Location: Noida (Hybrid)
● Experience: 1-3 years
● Type: Full-Time
Role Description: We’re seeking a Fullstack Developer to join our fast-moving team at Velto. You’ll be responsible for building robust backend services and user-facing features using a modern tech stack. In this role, you’ll also get hands-on exposure to applied AI, contributing to the development of LLM-powered workflows, agentic systems, and custom fine-tuning pipelines.
Responsibilities:
● Develop and maintain backend services using Python and FastAPI
● Build interactive frontend components using React
● Work with SQL databases, design schema, and integrate data models with Python
● Integrate and build features on top of LLMs and agent frameworks (e.g., LangChain, OpenAI, HuggingFace)
● Contribute to AI fine-tuning pipelines, retrieval-augmented generation (RAG) setups, and contract intelligence workflows
● Write unit tests using libraries like Jest, React Testing Library, and pytest
● Collaborate in agile sprints to deliver high-quality, testable, and scalable code
● Ensure end-to-end performance, security, and reliability of the stack
Required Skills:
● Proficient in Python and experienced with web frameworks like FastAPI
● Strong grasp of JavaScript and React for frontend development
● Solid understanding of SQL and relational database integration with Python
● Exposure to LLMs, vector databases, and AI-based applications (projects, internships, or coursework count)
● Familiar with Git, REST APIs, and modern software development practices
● Bachelor’s degree in Computer Science or an equivalent field
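The relational database integration listed above can be sketched with the standard library's `sqlite3` module. The table and data below are invented for illustration; note that the parameterised `?` placeholders are what guard against SQL injection.

```python
import sqlite3

# In-memory database purely for illustration; schema names are made up.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE contracts (id INTEGER PRIMARY KEY, vendor TEXT, value REAL)"
)
# Parameterised inserts: never build SQL by string concatenation.
conn.executemany(
    "INSERT INTO contracts (vendor, value) VALUES (?, ?)",
    [("Acme", 1200.0), ("Globex", 800.0), ("Acme", 300.0)],
)
conn.commit()

# Aggregate spend per vendor.
rows = conn.execute(
    "SELECT vendor, SUM(value) FROM contracts GROUP BY vendor ORDER BY vendor"
).fetchall()
print(rows)  # [('Acme', 1500.0), ('Globex', 800.0)]
conn.close()
```

The same pattern transfers to PostgreSQL or MySQL via their respective DB-API drivers, or to an ORM such as SQLAlchemy for larger schemas.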
Nice to Have:
● Experience working with LangChain, RAG pipelines, or building agentic workflows
● Familiarity with containerization (Docker), basic DevOps, or cloud deployment
● Prior project or internship involving AI/ML, NLP, or SaaS products
Why Join Us?
● Work on real-world applications of AI in enterprise SaaS
● Fast-paced, early-stage startup culture with direct ownership
● Learn by doing—no layers, no red tape
● Hybrid work setup and merit-based growth


Job Description:
We are looking for a highly skilled and experienced Python Developer to join our dynamic team. The ideal candidate will have a robust background in developing web applications using Django and Flask, with experience in deploying and managing applications on AWS.
Proficiency in Django Rest Framework (DRF) and a solid understanding of machine learning concepts and their practical applications are essential.
Key Responsibilities:
Develop and maintain web applications using Django and Flask frameworks.
Design and implement RESTful APIs using Django Rest Framework (DRF).
Deploy, manage, and optimize applications on AWS.
Develop and maintain APIs for AI/ML models and integrate them into existing systems.
Create and deploy scalable AI and ML models using Python.
Ensure the scalability, performance, and reliability of applications.
Write clean, maintainable, and efficient code following best practices.
Perform code reviews and provide constructive feedback to peers.
Troubleshoot and debug applications, identifying and fixing issues in a timely manner.
Stay up-to-date with the latest industry trends and technologies to ensure our applications remain current and competitive.
Required Skills and Qualifications:
Bachelor’s degree in Computer Science, Engineering, or a related field.
3+ years of professional experience as a Python Developer.
Proficient in Python with a strong understanding of its ecosystem.
Extensive experience with Django and Flask frameworks.
Hands-on experience with AWS services, including but not limited to EC2, S3, RDS, Lambda, and CloudFormation.
Strong knowledge of Django Rest Framework (DRF) for building APIs.
Experience with machine learning libraries and frameworks, such as scikit-learn, TensorFlow, or PyTorch.
Solid understanding of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB).
Familiarity with front-end technologies (e.g., JavaScript, HTML, CSS) is a plus.
Excellent problem-solving skills and the ability to work independently and as part of a team.
Strong communication skills and the ability to articulate complex technical concepts to non-technical stakeholders.



Experience:
- Junior Level: 4+ years
- Senior Level: 8+ years
Work Mode:Remote
About the Role:
We are seeking a highly skilled and motivated Data Scientist with deep expertise in Machine Learning (ML), Deep Learning, and Large Language Models (LLMs) to join our forward-thinking AI & Data Science team. This is a unique opportunity to contribute to real-world impact in the healthcare industry, transforming the way patients and providers interact with health data through Generative AI and NLP-driven solutions.
Key Responsibilities:
- LLM Development & Fine-Tuning:
- Fine-tune and customize LLMs (e.g., GPT, LLaMA2, Mistral) for use cases such as text classification, NER, summarization, Q&A, and sentiment analysis.
- Experience with other transformer-based models (e.g., BERT) is a plus.
- Data Engineering & Pipeline Design:
- Collaborate with data engineering teams to build scalable, high-quality data pipelines for training/fine-tuning LLMs on structured and unstructured healthcare datasets.
- Experimentation & Evaluation:
- Design rigorous model evaluation and testing frameworks (e.g., with tools like TruLens) to assess performance and optimize model outcomes.
- Deployment & MLOps Integration:
- Work closely with MLOps teams to ensure seamless integration of models into production environments on cloud platforms (AWS, Azure, GCP).
- Predictive Modeling in Healthcare:
- Apply ML/LLM techniques to build predictive models for use cases in oncology (e.g., survival analysis, risk prediction, RWE generation).
- Cross-functional Collaboration:
- Engage with domain experts, product managers, and clinical teams to translate healthcare challenges into actionable AI solutions.
- Mentorship & Knowledge Sharing:
- Mentor junior team members and contribute to the growth of the team’s technical expertise.
Qualifications:
- Master’s or Doctoral degree in Computer Science, Data Science, Artificial Intelligence, or related field.
- 5+ years of hands-on experience in machine learning and deep learning, with at least 12 months of direct work on LLMs.
- Strong coding skills in Python, with experience in libraries like HuggingFace Transformers, spaCy, NLTK, TensorFlow, or PyTorch.
- Experience with prompt engineering, RAG pipelines, and evaluation techniques in real-world NLP deployments.
- Hands-on experience in deploying models on cloud platforms (AWS, Azure, or GCP).
- Familiarity with the healthcare domain and working on Real World Evidence (RWE) datasets is highly desirable.
Preferred Skills:
- Strong understanding of healthcare data regulations (HIPAA, PHI handling, etc.)
- Prior experience in speech and text-based AI applications
- Excellent communication and stakeholder engagement skills
- A passion for impactful innovation in the healthcare space


Job Profile : Python Developer
Job Location : Ahmedabad, Gujarat - On site
Job Type : Full time
Experience - 1-3 Years
Key Responsibilities:
Design, develop, and maintain Python-based applications and services.
Collaborate with cross-functional teams to define, design, and ship new features.
Write clean, maintainable, and efficient code following best practices.
Optimize applications for maximum speed and scalability.
Troubleshoot, debug, and upgrade existing systems.
Integrate user-facing elements with server-side logic.
Implement security and data protection measures.
Work with databases (SQL/NoSQL) and integrate data storage solutions.
Participate in code reviews to ensure code quality and share knowledge with the team.
Stay up-to-date with emerging technologies and industry trends.
Requirements:
1-3 years of professional experience in Python development.
Strong knowledge of Python frameworks such as Django, Flask, or FastAPI.
Experience with RESTful APIs and web services.
Proficiency in working with databases (e.g., PostgreSQL, MySQL, MongoDB).
Familiarity with front-end technologies (e.g., HTML, CSS, JavaScript) is a plus.
Experience with version control systems (e.g., Git).
Knowledge of cloud platforms (e.g., AWS, Azure, Google Cloud) is a plus.
Understanding of containerization tools like Docker and orchestration tools like Kubernetes is good to have
Strong problem-solving skills and attention to detail.
Excellent communication and teamwork skills.
Good to Have:
Experience with data analysis and visualization libraries (e.g., Pandas, NumPy, Matplotlib).
Knowledge of asynchronous programming and event-driven architecture.
Familiarity with CI/CD pipelines and DevOps practices.
Experience with microservices architecture.
Knowledge of machine learning frameworks (e.g., TensorFlow, PyTorch) is a plus.
Hands-on experience in RAG and LLM model integration would be a plus.
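The asynchronous programming mentioned above can be sketched with `asyncio`. The coroutines below are stand-ins for I/O-bound calls (HTTP requests, database queries) and run concurrently rather than one after another.

```python
import asyncio

async def fetch(source, delay):
    """Stand-in for an I/O-bound call such as an HTTP request."""
    await asyncio.sleep(delay)  # simulates waiting on the network
    return f"{source}: done"

async def main():
    # gather() runs the three "fetches" concurrently, so the total wall time
    # is roughly the slowest one (~0.03s), not the sum of all three.
    return await asyncio.gather(
        fetch("db", 0.01), fetch("api", 0.02), fetch("cache", 0.03)
    )

results = asyncio.run(main())
print(results)  # ['db: done', 'api: done', 'cache: done']
```

This is the core of event-driven Python; frameworks like FastAPI build their request handling on the same event loop.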


Job Description:
We are looking for a Python Lead who has the following experience and expertise -
- Proficiency in developing RESTful APIs using the Flask, Django, or FastAPI framework
- Hands-on experience of using ORMs for database query mapping
- Unit test cases for code coverage and API testing
- Using Postman for validating APIs
- Experience with Git workflows and code management, including knowledge of ticket management systems like JIRA
- Have at least 2 years of experience in any cloud platform
- Hands-on leadership experience
- Experience of direct communication with the stakeholders
Skills and Experience:
- Good academics
- Strong teamwork and communications
- Advanced troubleshooting skills
- Ready and immediately available candidates will be preferred.
Real-World Evidence (RWE) Analyst
Summary:
As an experienced Real-World Evidence (RWE) Analyst, you will leverage our cutting-edge healthcare data platform (accessing over 60 million lives in Asia, with ambitious growth plans across Africa and the Middle East) to deliver impactful clinical insights to our pharmaceutical clients. You will be involved in the full project lifecycle, from designing analyses to execution and delivery, within our agile data science team. This is an exciting opportunity to contribute significantly to a growing early-stage company focused on improving precision medicine and optimizing patient care for diverse populations.
Responsibilities:
· Contribute to the design and execution of retrospective and prospective real-world research, including epidemiological and patient outcomes studies.
· Actively participate in problem-solving discussions by clearly defining issues and proposing effective solutions.
· Manage the day-to-day progress of assigned workstreams, ensuring seamless collaboration with the data engineering team on analytical requests.
· Provide timely and clear updates on project status to management and leadership.
· Conduct in-depth quantitative and qualitative analyses, driven by project objectives and your intellectual curiosity.
· Ensure the quality and accuracy of analytical outputs, and contextualize findings by reviewing relevant published research.
· Synthesize complex findings into clear and compelling presentations and written reports (e.g., slides, documents).
· Contribute to the development of standards and best practices for future RWE analyses.
Requirements:
· Undergraduate or post-graduate degree (MS or PhD preferred) in a quantitative analytical discipline such as Epidemiology, (Bio)statistics, Data Science, Engineering, Econometrics, or Operations Research.
· 8+ years of relevant work experience demonstrating:
o Strong analytical and problem-solving capabilities.
o Experience conducting research relevant to the pharmaceutical/biotech industry.
· Proficiency in technical skills including SQL and at least one programming language (R, Python, or similar).
· Solid understanding of the healthcare/medical and pharmaceutical industries.
· Proven experience in managing workstream or project management activities.
· Excellent written and verbal communication, and strong interpersonal skills with the ability to build collaborative partnerships.
· Exceptional attention to detail.
· Proficiency in Microsoft Office Suite (Excel, PowerPoint, Word).
Other Desirable Skills:
· Demonstrated dedication to teamwork and the ability to collaborate effectively across different functions.
· A strong desire to contribute to the growth and development of the RWE analytics function.
· A proactive and innovative mindset with an entrepreneurial spirit, eager to take on a key role in a dynamic, growing company.
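A first-pass RWE analysis of the kind described above might look like the following pandas sketch; the dataset, cohort labels, and outcome column are invented purely for illustration.

```python
# Illustrative sketch of a retrospective outcomes analysis in pandas.
# The patient-level data and column names are assumptions, not a real dataset.
import pandas as pd

patients = pd.DataFrame({
    "patient_id": [1, 2, 3, 4, 5, 6],
    "cohort": ["treated", "treated", "treated", "control", "control", "control"],
    "readmitted_30d": [0, 1, 0, 1, 1, 0],
})

# Crude 30-day readmission rate by cohort, a typical first deliverable
rates = patients.groupby("cohort")["readmitted_30d"].mean()
print(rates)
```

In practice the extract would come from SQL against the healthcare data platform, with adjustment for confounders before any comparison is reported.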


Join CD Edverse, an innovative EdTech app, as an AI Specialist! You will develop a deep-research tool that generates comprehensive courses and enhances AI mentors. Must have strong Python, NLP, and API integration skills. Be part of transforming education! Apply now.

About the job
Location: Bangalore, India
Job Type: Full-Time | On-Site
Job Description
We are looking for a highly skilled and motivated Python Backend Developer to join our growing team in Bangalore. The ideal candidate will have a strong background in backend development with Python, deep expertise in relational databases like MySQL, and hands-on experience with AWS cloud infrastructure.
Key Responsibilities
- Design, develop, and maintain scalable backend systems using Python.
- Architect and optimize relational databases (MySQL), including complex queries and indexing.
- Manage and deploy applications on AWS cloud services (EC2, S3, RDS, DynamoDB, API Gateway, Lambda).
- Automate cloud infrastructure using CloudFormation or Terraform.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Mentor junior developers and contribute to a culture of technical excellence.
- Proactively identify issues and provide solutions to challenging backend problems.
Mandatory Requirements
- Minimum 3 years of professional experience in Python backend development.
- Expert-level knowledge in MySQL database creation, optimization, and query writing.
- Strong experience with AWS services, particularly EC2, S3, RDS, DynamoDB, API Gateway, and Lambda.
- Hands-on experience with infrastructure as code using CloudFormation or Terraform.
- Proven problem-solving skills and the ability to work independently.
- Demonstrated leadership abilities and team collaboration skills.
- Excellent verbal and written communication.
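The "expert-level MySQL optimization" requirement above comes down to index-aware query design. A small sketch (using the stdlib `sqlite3` so it is self-contained; the table, column names, and SQL pattern are illustrative but portable to MySQL):

```python
# Illustrative sketch of index-aware query design. sqlite3 (stdlib) stands in
# for MySQL so the example runs anywhere; the schema is hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                 [(i % 100, i * 1.5) for i in range(1000)])

# A covering index lets per-customer aggregates avoid a full table scan
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id, total)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT SUM(total) FROM orders WHERE customer_id = 7"
).fetchall()
print(plan)
```

MySQL exposes the same check via `EXPLAIN`; the design choice (composite index covering both the filter and the aggregated column) carries over directly.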

Should have strong hands-on experience of 8-10 years in Java development.
Should have strong knowledge of Java 11+, Spring, Spring Boot, Hibernate, and REST web services.
Strong knowledge of J2EE design patterns and microservices design patterns.
Should have strong hands-on knowledge of SQL / PostgreSQL. Good to have exposure to NoSQL databases.
Should have strong knowledge of AWS services (Lambda, EC2, RDS, API Gateway, S3, CloudFront, Airflow).
Good to have Python and PySpark as secondary skills.
Should have good knowledge of CI/CD pipelines.
Should be strong in writing unit test cases and debugging Sonar issues.
Should be able to lead/guide a team of junior developers.
Should be able to collaborate with BAs and solution architects to create HLD and LLD documents.

Job Description: Data Engineer
Position Overview:
Role Overview
We are seeking a skilled Python Data Engineer with expertise in designing and implementing data solutions using the AWS cloud platform. The ideal candidate will be responsible for building and maintaining scalable, efficient, and secure data pipelines while leveraging Python and AWS services to enable robust data analytics and decision-making processes.
Key Responsibilities
· Design, develop, and optimize data pipelines using Python and AWS services such as Glue, Lambda, S3, EMR, Redshift, Athena, and Kinesis.
· Implement ETL/ELT processes to extract, transform, and load data from various sources into centralized repositories (e.g., data lakes or data warehouses).
· Collaborate with cross-functional teams to understand business requirements and translate them into scalable data solutions.
· Monitor, troubleshoot, and enhance data workflows for performance and cost optimization.
· Ensure data quality and consistency by implementing validation and governance practices.
· Work on data security best practices in compliance with organizational policies and regulations.
· Automate repetitive data engineering tasks using Python scripts and frameworks.
· Leverage CI/CD pipelines for deployment of data workflows on AWS.
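The ETL responsibilities above typically reduce to a transform step that a Glue or Lambda job wraps around S3 I/O. A hedged sketch (bucket, key, and column names are hypothetical; the transform is kept pure so it can be tested without AWS):

```python
# Minimal sketch of an ETL transform of the kind a Glue/Lambda job runs.
# The CSV schema and the commented boto3 wiring are illustrative assumptions.
import csv
import io

def transform(raw_csv: str) -> list:
    """Extract rows from CSV text, drop incomplete records, normalise types."""
    rows = csv.DictReader(io.StringIO(raw_csv))
    clean = []
    for row in rows:
        if not row.get("amount"):
            continue  # basic data-quality gate: skip records missing the amount
        clean.append({"user_id": int(row["user_id"]), "amount": float(row["amount"])})
    return clean

# In a Lambda handler this would be wired to S3, e.g.:
# body = s3.get_object(Bucket="my-bucket", Key="raw/events.csv")["Body"].read().decode()
# records = transform(body)

sample = "user_id,amount\n1,9.99\n2,\n3,4.50\n"
print(transform(sample))
```

Keeping the transform free of AWS calls is what makes the validation and governance bullet practical: the data-quality rules get unit-tested in CI before the pipeline ever deploys.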
Required Skills and Qualifications
· Professional Experience: 5+ years of experience in data engineering or a related field.
· Programming: Strong proficiency in Python, with experience in libraries like pandas, pySpark, or boto3.
· AWS Expertise: Hands-on experience with core AWS services for data engineering, such as:
· AWS Glue for ETL/ELT.
· S3 for storage.
· Redshift or Athena for data warehousing and querying.
· Lambda for serverless compute.
· Kinesis or SNS/SQS for data streaming.
· IAM Roles for security.
· Databases: Proficiency in SQL and experience with relational (e.g., PostgreSQL, MySQL) and NoSQL (e.g., DynamoDB) databases.
· Data Processing: Knowledge of big data frameworks (e.g., Hadoop, Spark) is a plus.
· DevOps: Familiarity with CI/CD pipelines and tools like Jenkins, Git, and CodePipeline.
· Version Control: Proficient with Git-based workflows.
· Problem Solving: Excellent analytical and debugging skills.
Optional Skills
· Knowledge of data modeling and data warehouse design principles.
· Experience with data visualization tools (e.g., Tableau, Power BI).
· Familiarity with containerization (e.g., Docker) and orchestration (e.g., Kubernetes).
· Exposure to other programming languages like Scala or Java.
Education
· Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
Why Join Us?
· Opportunity to work on cutting-edge AWS technologies.
· Collaborative and innovative work environment.

Requirements:
- Must have proficiency in Python
- At least 3 years of professional experience in software application development.
- Good understanding of REST APIs and a solid experience in testing APIs.
- Should have built APIs and have practical working knowledge of them
- Must have experience in API testing tools like Postman and in setting up the prerequisites and post-execution validations using these tools
- Ability to develop applications for test automation
- Should have worked in a distributed micro-service environment
- Hands-on experience with Python packages for testing (preferably pytest).
- Should be able to create fixtures, mock objects and datasets that can be used by tests across different micro-services
- Proficiency in Git
- Strong in writing SQL queries
- Experience with tools like Jira, Asana, or a similar bug-tracking tool; Confluence (wiki); Jenkins (CI)
- Excellent written and oral communication and organisational skills with the ability to work within a growing company with increasing needs
- Proven track record of ability to handle time-critical projects
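The pytest fixture and mocking requirements above can be sketched in a few lines; the service function, client, and URL here are illustrative assumptions, not a real API.

```python
# Sketch of the pytest patterns listed above: a fixture providing shared test
# data, and a mock standing in for a downstream microservice call.
from unittest import mock

import pytest

def fetch_user(client, user_id):
    # Hypothetical service code under test; the URL is an assumption
    resp = client.get(f"https://users.internal/api/users/{user_id}")
    resp.raise_for_status()
    return resp.json()

@pytest.fixture
def user_payload():
    # Fixtures let the same dataset be reused by tests across microservices
    return {"id": 42, "name": "Asha"}

def test_fetch_user(user_payload):
    client = mock.Mock()
    client.get.return_value.json.return_value = user_payload
    assert fetch_user(client, 42)["name"] == "Asha"
    client.get.assert_called_once_with("https://users.internal/api/users/42")
```

The same fixture can be promoted to a shared `conftest.py` so every service's test suite draws on one set of canonical datasets.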
Good to have:
- Good understanding of CI/CD
- Knowledge of queues, especially Kafka
- Ability to independently manage test environment deployments and handle issues around them
- Experience performing load testing of API endpoints
- Should have built an API test automation framework from scratch and maintained it
- Knowledge of cloud platforms like AWS, Azure
- Knowledge of different browsers and cross-platform operating systems
- Knowledge of JavaScript
- Web Programming, Docker & 3-Tier Architecture Knowledge is preferred.
- Should have knowledge of API creation; coding experience would be an added advantage.
- 5+ years of experience in test automation using tools like TestNG, Selenium WebDriver (Grid, parallel execution, SauceLabs), and Mocha/Chai for front-end and backend test automation
- Bachelor's degree in Computer Science / IT / Computer Applications


Job Title: Full Stack Engineer
Location: Delhi-NCR
Type: Full-Time
Responsibilities
Frontend:
- Develop responsive, intuitive interfaces using HTML, CSS (SASS), React, and Vanilla JS.
- Implement real-time features using sockets for dynamic, interactive user experiences.
- Collaborate with designers to ensure consistent UI/UX patterns and deliver visually compelling products.
Backend:
- Design, implement, and maintain APIs using Python (FastAPI).
- Integrate AI-driven features to enhance user experience and streamline processes.
- Ensure the code adheres to best practices in performance, scalability, and security.
- Troubleshoot and resolve production issues, minimizing downtime and improving reliability.
Database & Data Management:
- Work with PostgreSQL for relational data, ensuring optimal queries and indexing.
- Utilize ClickHouse or MongoDB where appropriate to handle specific data workloads and analytics needs.
- Contribute to building dashboards and tools for analytics and reporting.
- Leverage AI/ML concepts to derive insights from data and improve system performance.
General:
- Use Git for version control; conduct code reviews, ensure clean commit history, and maintain robust documentation.
- Collaborate with cross-functional teams to deliver features that align with business goals.
- Stay updated with industry trends, particularly in AI and emerging frameworks, and apply them to enhance our platform.
- Mentor junior engineers and contribute to continuous improvement in team processes and code quality.
- Required: Minimum 3 years of experience as a Data Engineer
- Database Knowledge: Experience with time-series and graph databases is a must, along with SQL databases (PostgreSQL, MySQL) or NoSQL databases (Firestore, MongoDB)
- Data Pipelines: Understanding of data pipeline processes such as ETL, ELT, and streaming pipelines, with tools like AWS Glue, Google Dataflow, Apache Airflow, and Apache NiFi
- Data Modeling: Knowledge of snowflake schema and fact & dimension tables
- Data Warehousing Tools: Experience with Google BigQuery, Snowflake, Databricks
- Performance Optimization: Indexing, partitioning, caching, and query optimization techniques
- Python or SQL Scripting: Ability to write scripts for data processing and automation
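The time-series and scripting bullets above can be illustrated with a short pandas sketch; the readings and column names are made up for the example.

```python
# Quick sketch of time-series aggregation in pandas, the kind of data-processing
# script the last bullet refers to. The sensor data below is invented.
import pandas as pd

readings = pd.DataFrame({
    "ts": pd.date_range("2024-01-01", periods=6, freq="h"),
    "value": [10, 12, 11, 30, 28, 31],
})

# Downsample hourly readings to 3-hour means, a common pipeline step before
# loading into a warehouse fact table
hourly = readings.set_index("ts")["value"].resample("3h").mean()
print(hourly)
```

A dedicated time-series database would push this resample down to the storage engine; the pandas form is the portable scripting equivalent.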