50+ Python Jobs in India
Apply to 50+ Python Jobs on CutShort.io. Find your next job, effortlessly. Browse Python Jobs and apply today!

EDI Developer / Map Conversion Specialist
Role Summary:
Responsible for converting 441 existing EDI maps into the PortPro-compatible format and testing them for 147 customer configurations.
Key Responsibilities:
- Analyze existing EDI maps in Profit Tools.
- Convert, reconfigure, or rebuild maps for PortPro.
- Ensure accuracy in mapping and transformation logic.
- Unit test and debug EDI transactions.
- Support system integration and UAT phases.
Skills Required:
- Proficiency in EDI standards (X12, EDIFACT) and transaction sets.
- Hands-on experience in EDI mapping tools.
- Familiarity with both Profit Tools and PortPro data structures.
- SQL and XML/JSON data handling skills.
- Experience with scripting for automation (Python, Shell scripting preferred).
- Strong troubleshooting and debugging skills.
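As an illustration only (not part of the role description), the scripting-for-automation bullet above might look like this in practice: a minimal sketch that splits a raw X12 interchange into segments and elements with plain Python. The sample segment and delimiters are the common defaults; real maps read them from the ISA envelope.

```python
# Minimal X12 parsing sketch: split an interchange into segments and elements.
# Assumes the common default delimiters ("~" segment terminator, "*" element
# separator); production code should read them from the ISA header instead.

def parse_x12(raw: str, seg_term: str = "~", elem_sep: str = "*") -> list[list[str]]:
    """Return a list of segments, each a list of element values."""
    segments = []
    for seg in raw.strip().split(seg_term):
        seg = seg.strip()
        if seg:
            segments.append(seg.split(elem_sep))
    return segments

# A made-up fragment of a 204 (Motor Carrier Load Tender) transaction set.
sample = "ST*204*0001~B2**SCAC**SHIP123**CC~SE*3*0001~"
for segment in parse_x12(sample):
    print(segment[0], segment[1:])
```

A real conversion job would layer the map's transformation logic on top of a structure like this, but the segment/element split is the common starting point.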


About the Role
As a Senior Analyst at JustAnswer, you will be a Subject Matter Expert (SME) on Chatbot Strategy, driving long-term growth and incorporating the newest technologies, such as LLMs.
In this role, you will deliver tangible business impact by providing high-quality insights and recommendations, combining strategic thinking and problem-solving with detailed analysis.
This position offers a unique opportunity to collaborate closely with Product Managers and cross-functional teams, uncover valuable business insights, devise optimization strategies, and validate them through experiments.
What You’ll Do
- Collaborate with Product and Analytics leadership to conceive and structure analysis, delivering highly actionable insights from “deep dives” into specific business areas.
- Analyze large volumes of internal & external data to identify growth and optimization opportunities.
- Package and communicate findings and recommendations to a broad audience, including senior leadership.
- Perform both Descriptive & Prescriptive Analytics, including experimentation (A/B, MAB), and build reporting to track trends.
- Perform advanced modeling (NLP, Text Mining) – preferable.
- Implement and track business metrics to help drive the business.
- Contribute to growth strategy from a marketing and operations perspective.
- Operate independently as a lead analyst to understand the JA audience and guide strategic decisions & executions.
What We’re Looking For
- 5+ years of experience in e-commerce/customer experience products.
- Proficiency in analysis and business modeling using Excel.
- Experience with Google Analytics, BigQuery, Google Ads, PowerBI, and Python / R is a plus.
- Strong SQL skills with the ability to write complex queries.
- Expertise in Descriptive and Inferential Statistical Analysis.
- Strong experience in setting up and analyzing A/B Testing or Hypothesis Testing.
- Ability to translate analytical results into actionable business recommendations.
- Excellent written and verbal communication skills; ability to communicate with all levels of management.
- Advanced English proficiency.
- App-related experience – preferable.
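As an aside, the A/B-testing analysis the requirements above describe often reduces to a two-proportion z-test. A stdlib-only sketch, with made-up conversion counts (not JustAnswer data):

```python
# Two-proportion z-test sketch for an A/B experiment (stdlib only).
# The conversion counts below are invented illustrative numbers.
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Return (z, two_sided_p) for H0: rate_a == rate_b."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF via erf; two-sided p-value.
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Control: 100/1000 conversions; variant: 120/1000.
z, p = two_proportion_z(100, 1000, 120, 1000)
print(f"z = {z:.3f}, p = {p:.3f}")
```

With these numbers the lift is not significant at the usual 0.05 level, which is exactly the kind of call the role would translate into a business recommendation.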
About Us
We are a San Francisco-based company founded in 2003 with a simple mission: we help people. We have democratized professional services by connecting customers with verified Experts who provide reliable answers anytime, on any budget.
- 12,000+ Experts across various domains (doctors, lawyers, tech support, mechanics, vets, home repair, and more).
- 10 million customers in 196 countries.
- 16 million+ questions answered in 20 years.
- Investors include Charles Schwab, Crosslink Capital, and Glynn Capital Management.
Why Join the Team
- 1,000+ employees and growing rapidly.
- Hiring criteria: Smart. Fun. Get things done.
- Profitable and fast-growing company.
- We love what we do and celebrate success together.
Our JustAnswer Promise
We strive together to make the world a better place, one answer at a time.
Our values (“The JA Way”):
- Data Driven: Data decides, not egos.
- Courageous: We take risks and challenge the status quo.
- Innovative: Constantly learning, creating, and adapting.
- Lean: Customer-focused, using lean testing to learn and improve.
- Humble: Past success is not a guarantee of future success.
Work Environment
- Remote-first/hybrid model in most locations.
- Optional in-person meetings for collaboration and social events.
- Employee well-being is a top priority.
- Where legally permissible, employees must be fully vaccinated against Covid-19 to attend in-person events.
Our Commitment to Diversity
We embrace workplace diversity, believing it drives richer insights, fuels innovation, and creates better outcomes.
We are committed to attracting and developing an inclusive workforce. Individuals seeking opportunities are considered without regard to race, color, religion, national origin, age, sex, marital status, ancestry, disability, veteran status, sexual orientation, gender identity, or any other protected status under applicable laws.

Python with FastAPI Developer:
- 4+ years of Python engineering experience with FastAPI.
- Strong hands-on experience with Docker.
- Experience with REST APIs; Flask and FastAPI experience is a must.
- Experience writing effective and scalable code.
- Experience with unit testing using PyTest or an equivalent framework.
- Experience with multithreading and authentication/authorization techniques.
- Experience with AWS cloud services.
- Experience with PostgreSQL.
- Prior experience working as a Python developer.
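Purely for illustration, the multithreading requirement above can be sketched with the stdlib: a ThreadPoolExecutor fanning out blocking I/O-style work, the pattern a FastAPI service typically uses to keep blocking calls off the event loop. The lookup function is invented.

```python
# Sketch: fan out blocking work across threads with concurrent.futures.
# fetch() is a made-up stand-in for a blocking call (DB or HTTP lookup).
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(order_id: int) -> dict:
    """Stand-in for a blocking I/O call."""
    time.sleep(0.1)
    return {"order_id": order_id, "status": "ok"}

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(fetch, range(5)))
elapsed = time.perf_counter() - start

# Five 0.1s lookups complete in roughly 0.1s, not 0.5s.
print(f"{len(results)} lookups in {elapsed:.2f}s")
```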


About the Role
As an Analyst at JustAnswer, you will be a Subject Matter Expert (SME) on Chatbot Strategy, driving long-term growth and incorporating the newest technologies, such as LLMs.
In this role, you will deliver tangible business impact by providing high-quality insights and recommendations, combining strategic thinking and problem-solving with detailed analysis.
This position offers a unique opportunity to collaborate closely with Product Managers and cross-functional teams, uncover valuable business insights, devise optimization strategies, and validate them through experiments.
What You’ll Do
- Collaborate with Product and Analytics leadership to conceive and structure analysis, delivering highly actionable insights from “deep dives” into specific business areas.
- Analyze large volumes of internal & external data to identify growth and optimization opportunities.
- Package and communicate findings and recommendations to a broad audience, including senior leadership.
- Perform both Descriptive & Prescriptive Analytics, including experimentation (A/B, MAB), and build reporting to track trends.
- Perform advanced modeling (NLP, Text Mining) – preferable.
- Implement and track business metrics to help drive the business.
- Contribute to growth strategy from a marketing and operations perspective.
- Operate independently as a lead analyst to understand the JA audience and guide strategic decisions & executions.
What We’re Looking For
- 5+ years of experience in e-commerce/customer experience products.
- Proficiency in analysis and business modeling using Excel.
- Experience with Google Analytics, BigQuery, Google Ads, PowerBI, and Python / R is a plus.
- Strong SQL skills with the ability to write complex queries.
- Expertise in Descriptive and Inferential Statistical Analysis.
- Strong experience in setting up and analyzing A/B Testing or Hypothesis Testing.
- Ability to translate analytical results into actionable business recommendations.
- Excellent written and verbal communication skills; ability to communicate with all levels of management.
- Advanced English proficiency.
- App-related experience – preferable.

5+ years of hands-on experience in the design, development, and implementation of complex software solutions using Python and FastAPI.
- LLM experience with closed-source LLMs such as GPT-3.5, GPT-4, GPT-4 Turbo, Cohere, etc.
- Excellent hands-on expertise in Python coding, PEP 8 standards, best practices/design patterns, and object-oriented design methodologies.
- Hands-on experience deploying AI/ML solutions as a service/REST API endpoints on cloud platforms or Kubernetes.
- Experience with development methodologies and writing unit tests in Python.
- Some experience with open-source LLMs such as LLaMA 2 and Falcon, and with fine-tuning, is preferred.
- Experience with PostgreSQL or another RDBMS and SQL query writing.
- Familiarity with GitLab, CI/CD, and Swagger.
- Strong problem-solving and communication skills; ability to work independently and as part of Agile teams.
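To make the unit-testing expectation above concrete, here is a pytest-style sketch. The `PromptBuilder` class is invented purely to have something to test; it is not part of the role.

```python
# Pytest-style unit test sketch. PromptBuilder is a made-up helper that
# renders a prompt template with named fields; run with `pytest` or
# execute the test functions directly as done at the bottom.
import string

class PromptBuilder:
    def __init__(self, template: str):
        self.template = template

    def field_names(self):
        """Names of the {placeholders} in the template."""
        return [f[1] for f in string.Formatter().parse(self.template) if f[1]]

    def render(self, **fields) -> str:
        missing = [k for k in self.field_names() if k not in fields]
        if missing:
            raise ValueError(f"missing fields: {missing}")
        return self.template.format(**fields)

def test_render_fills_fields():
    pb = PromptBuilder("Summarize {text} in {n} words.")
    assert pb.render(text="the report", n=10) == "Summarize the report in 10 words."

def test_render_rejects_missing_fields():
    pb = PromptBuilder("Hello {name}")
    try:
        pb.render()
    except ValueError as exc:
        assert "name" in str(exc)
    else:
        raise AssertionError("expected ValueError")

test_render_fills_fields()
test_render_rejects_missing_fields()
print("all tests passed")
```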


PortOne is re-imagining payments in Korea and other international markets. We are a Series B funded startup backed by prominent VC firms SoftBank and Hanwha Capital.
PortOne provides a unified API for merchants to integrate with and manage all of the payment options available in Korea and SEA markets - Thailand, Singapore, Indonesia, etc. It is currently used by 2,000+ companies and processes multiple billions of dollars in annualized volume. We are building a team to take this product to international markets, and we are looking for engineers with a passion for fintech and digital payments.
Culture and Values at PortOne
- You will be joining a team that stands for Making a difference.
- You will be joining a culture that identifies more with sports teams than with a 9-to-5 workplace.
- This will be a remote role that gives you the flexibility to save time on your commute.
- You will have peers who are/have:
- Highly Self Driven with A sense of purpose
- High Energy Levels - Building stuff is your sport
- Ownership - Solve customer problems end to end - Customer is your Boss
- Hunger to learn - Highly motivated to keep developing new tech skill sets
Who you are?
- You are an athlete and building apps is your sport.
- Your passion drives you to learn and build stuff and not because your manager tells you to.
- Your work ethic is that of an athlete preparing for your next marathon. Your sport drives you and you like being in the zone.
- You are NOT a clockwatcher renting out your time, and do NOT have an attitude of "I will do only what is asked for"
Skills and Experience
- Preferably 1 to 2 years of experience shipping high-quality products.
- Experience working at payment gateways or aggregators or payment related APIs will be strongly preferred.
- Uphold best practices in engineering, security, and design
- Enjoy being a generalist working on both the frontend, backend, and anything it takes to solve problems and delight users both internally and externally
- Take pride in working on projects to successful completion involving a wide variety of technologies and systems
What will you do?
- Contribute to our payments stack to expand the payment coverage across multiple markets in Asia
- Build new features for internal and external users.
- Uphold our high engineering standards and bring consistency to the many codebases and processes you will encounter
- Collaborate with stakeholders across the organization such as experts in - product, design, infrastructure, and operations.
Here are some examples of our work
- Building intuitive, easy-to-use APIs for payment processing.
- Integrating with local payment gateways in international markets.
- Building dashboards to manage gateways and transactions.
- Building an analytics platform to provide insights.


Job Description :
This is an exciting opportunity for an experienced industry professional with strong expertise in artificial intelligence and machine learning to join and add value to a dedicated and friendly team. We are looking for an AIML Engineer who is passionate about developing AI-driven technology solutions. As a core member of the Development Team, the candidate will take ownership of AI/ML projects by working independently with little supervision.
The ideal candidate is a highly resourceful and skilled professional with experience in applying AI to practical and comprehensive technology solutions. You must also possess expertise in machine learning, deep learning, TensorFlow, Python, and NLP, along with a strong understanding of algorithms, functional design principles, and best practices.
You will be responsible for leading AI/ML initiatives, ensuring scalable and optimized solutions, and integrating AI capabilities into applications. Additionally, you will work on REST API development, NoSQL database design, and RDBMS optimizations.
Key Responsibilities :
- Develop and implement AI/ML models to solve real-world problems.
- Utilize machine learning, deep learning, TensorFlow, and NLP techniques.
- Lead AI-driven projects with program leadership, governance, and change enablement.
- Apply best practices in algorithm development, object-oriented and functional design principles.
- Design and optimize NoSQL and RDBMS databases for AI applications.
- Develop and integrate AI-powered REST APIs for seamless application functionality.
- Collaborate with cross-functional teams to deliver AI-driven solutions.
- Stay updated with the latest advancements in AI and ML technologies.
Required Qualifications :
- Qualification: Bachelor's or Master's degree in Computer Science or a related field.
- Minimum 2 years of experience in AI/ML application development.
- Strong expertise in machine learning, deep learning, TensorFlow, Python, and NLP.
- Experience in program leadership, governance, and change enablement.
- Knowledge of basic algorithms, object-oriented and functional design principles.
- Experience in REST API development, NoSQL database design, and RDBMS optimization.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills.
Location: Patna, Bihar
Why make a career at Plus91:
At Plus91, we believe we make a better world together. We value the diversity, creativity, and experience of our people. And it's your ideas that help us improve our products and customer experiences and create value for the world of healthcare.
We help our people become better professionals, as well as human beings. We are a hands-on company, and our team is all about getting things done. We nurture experiential learning. Bring passion and dedication to your job, and there's no telling what you can accomplish at Plus91.
We are always on the lookout for bright and innovative people to help us reach our business goals and your personal goals. If this role with us fits your career goals and you think you can fit into our hands-on and go-getting culture, do apply.


About Moative
Moative, an Applied AI company, designs and builds transformative AI solutions for traditional industries in energy, utilities, healthcare & life sciences, and more. Through Moative Labs, we build AI micro-products and launch AI startups with partners in vertical markets that align with our theses.
Our Past: We have built and sold two companies, one of which was an AI company. Our founders and leaders are Math PhDs, Ivy League University Alumni, Ex-Googlers, and successful entrepreneurs.
Our Team: Our team of 20+ employees consists of data scientists, AI/ML engineers, and mathematicians from top engineering and research institutes such as the IITs, CERN, IISc, and UZH. Our team includes Ph.D.s, academicians, IBM Research Fellows, and former founders.
Work you’ll do
As a Data Engineer, you will work on data architecture, large-scale processing systems, and data flow management. You will build and maintain optimal data architecture and data pipelines, assemble large, complex data sets, and ensure that data is readily available to data scientists, analysts, and other users. In close collaboration with ML engineers, data scientists, and domain experts, you’ll deliver robust, production-grade solutions that directly impact business outcomes. Ultimately, you will be responsible for developing and implementing systems that optimize the organization’s data use and data quality.
Responsibilities
- Create and maintain optimal data architecture and data pipelines on cloud infrastructure (such as AWS/ Azure/ GCP)
- Assemble large, complex data sets that meet functional / non-functional business requirements
- Identify, design, and implement internal process improvements
- Build the pipeline infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources
- Support development of analytics that utilize the data pipeline to provide actionable insights into key business metrics
- Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs
Who you are
You are a passionate and results-oriented engineer who understands the importance of data architecture and data quality to impact solution development, enhance products, and ultimately improve business applications. You thrive in dynamic environments and are comfortable navigating ambiguity. You possess a strong sense of ownership and are eager to take initiative, advocating for your technical decisions while remaining open to feedback and collaboration.
You have experience in developing and deploying data pipelines to support real-world applications. You have a good understanding of data structures and are excellent at writing clean, efficient code to extract, create and manage large data sets for analytical uses. You have the ability to conduct regular testing and debugging to ensure optimal data pipeline performance. You are excited at the possibility of contributing to intelligent applications that can directly impact business services and make a positive difference to users.
Skills & Requirements
- 3+ years of hands-on experience as a data engineer, data architect or similar role, with a good understanding of data structures and data engineering.
- Solid knowledge of cloud infra and data-related services on AWS (EC2, EMR, RDS, Redshift) and/ or Azure.
- Advanced knowledge of SQL, including writing complex queries, stored procedures, views, etc.
- Strong experience with data pipeline and workflow management tools (such as Luigi, Airflow).
- Experience with common relational SQL, NoSQL and Graph databases.
- Strong experience with scripting languages: Python, PySpark, Scala, etc.
- Practical experience with basic DevOps concepts: CI/CD, containerization (Docker, Kubernetes), etc
- Experience with big data tools (Spark, Kafka, etc) and stream processing.
- Excellent communication skills to collaborate with colleagues from both technical and business backgrounds, discuss and convey ideas and findings effectively.
- Ability to analyze complex problems, think critically for troubleshooting and develop robust data solutions.
- Ability to identify and tackle issues efficiently and proactively, conduct thorough research and collaborate to find long-term, scalable solutions.
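For flavor, the "complex queries" requirement above might look like this in practice: a window-function query run through the stdlib sqlite3 module (the events table and its rows are invented for illustration; production pipelines would target Redshift, RDS, or similar).

```python
# SQL sketch with a window function, using stdlib sqlite3 (3.25+ required
# for window support). The events table and rows are made up.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE events (user_id TEXT, ts INTEGER, amount REAL);
    INSERT INTO events VALUES
        ('u1', 1, 10.0), ('u1', 2, 25.0), ('u2', 1, 5.0), ('u2', 3, 7.5);
""")

# Latest event per user via ROW_NUMBER() over a per-user window.
rows = conn.execute("""
    SELECT user_id, ts, amount FROM (
        SELECT *, ROW_NUMBER() OVER (
            PARTITION BY user_id ORDER BY ts DESC
        ) AS rn
        FROM events
    ) WHERE rn = 1
    ORDER BY user_id
""").fetchall()

print(rows)  # [('u1', 2, 25.0), ('u2', 3, 7.5)]
```

The same "latest row per key" pattern comes up constantly when deduplicating ingested data in ELT pipelines.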
Working at Moative
Moative is a young company, but we believe strongly in thinking long-term while acting with urgency. Our ethos is rooted in innovation, efficiency, and high-quality outcomes. We believe the future of work is AI-augmented and boundaryless. Here are some of our guiding principles:
- Think in decades. Act in hours. As an independent company, our moat is time. While our decisions are for the long-term horizon, our execution will be fast – measured in hours and days, not weeks and months.
- Own the canvas. Throw yourself in to build, fix or improve – anything that isn’t done right, irrespective of who did it. Be selfish about improving across the organization – because once the rot sets in, we waste years in surgery and recovery.
- Use data or don’t use data. Use data where you ought to but not as a ‘cover-my-back’ political tool. Be capable of making decisions with partial or limited data. Get better at intuition and pattern-matching. Whichever way you go, be mostly right about it.
- Avoid work about work. Process creep happens on its own unless we constantly question it. We are deliberate about committing to rituals that take time away from the actual work. We truly believe that a meeting that could be an email should be an email, and you don't need the person with the highest title to say that out loud.
- High revenue per person. We work backwards from this metric. Our default is to automate instead of hiring. We multi-skill our people to own more outcomes than hiring someone who has less to do. We don’t like squatting and hoarding that comes in the form of hiring for growth. High revenue per person comes from high quality work from everyone. We demand it.
If this role and our work is of interest to you, please apply. We encourage you to apply even if you believe you do not meet all the requirements listed above.
That said, you should demonstrate that you are in the 90th percentile or above. This may mean that you have studied in top-notch institutions, won competitions that are intellectually demanding, built something of your own, or rated as an outstanding performer by your current or previous employers.
The position is based out of Chennai. Our work currently involves significant in-person collaboration and we expect you to work out of our offices in Chennai.

Job Title: Senior Data Engineer
Location: Bangalore | Hybrid
Company: krtrimaIQ Cognitive Solutions
Role Overview:
As a Senior Data Engineer, you will design, build, and optimize robust data foundations and end-to-end solutions to unlock maximum value from data across the organization. You will play a key role in fostering data-driven thinking, not only within the IT function but also among broader business stakeholders. You will serve as a technology and subject matter expert, providing mentorship to junior engineers and translating the company’s vision and Data Strategy into actionable, high-impact IT solutions.
Key Responsibilities:
- Design, develop, and implement scalable data solutions to support business objectives and drive digital transformation.
- Serve as a subject matter expert in data engineering, providing guidance and mentorship to junior team members.
- Enable and promote data-driven culture throughout the organization, engaging both technical and business stakeholders.
- Lead the design and delivery of Data Foundation initiatives, ensuring adoption and value realization across business units.
- Collaborate with business and IT teams to capture requirements, design optimal data models, and deliver high-value insights.
- Manage and drive change management, incident management, and problem management processes related to data platforms.
- Present technical reports and actionable insights to stakeholders and leadership teams, acting as the expert in Data Analysis and Design.
- Continuously improve efficiency and effectiveness of solution delivery, driving down costs and reducing implementation times.
- Contribute to organizational knowledge-sharing and capability building (e.g., Centers of Excellence, Communities of Practice).
- Champion best practices in code quality, DevOps, CI/CD, and data governance throughout the solution lifecycle.
Key Characteristics:
- Technology expert with a passion for continuous learning and exploring multiple perspectives.
- Deep expertise in the data engineering/technology domain, with hands-on experience across the full data stack.
- Excellent communicator, able to bridge the gap between technical teams and business stakeholders.
- Trusted leader, respected across levels for subject matter expertise and collaborative approach.
Mandatory Skills & Experience:
- Mastery in public cloud platforms: AWS, Azure, SAP
- Mastery in ELT (Extract, Load, Transform) operations
- Advanced data modeling expertise for enterprise data platforms
Hands-on skills:
- Data Integration & Ingestion
- Data Manipulation and Processing
- Source/version control and DevOps tools: GitHub, GitHub Actions, Azure DevOps
- Data engineering/data platform tools: Azure Data Factory, Databricks, SQL Database, Synapse Analytics, Stream Analytics, AWS Glue, Apache Airflow, AWS Kinesis, Amazon Redshift, SonarQube, PyTest
- Experience building scalable and reliable data pipelines for analytics and other business applications
Optional/Preferred Skills:
- Project management experience, especially running or contributing to Scrum teams
- Experience working with BPC (Business Planning and Consolidation), Planning tools
- Exposure to working with external partners in the technology ecosystem and vendor management
What We Offer:
- Opportunity to leverage cutting-edge technologies in a high-impact, global business environment
- Collaborative, growth-oriented culture with strong community and knowledge-sharing
- Chance to influence and drive key data initiatives across the organization


Quidcash is seeking a skilled Backend Developer to architect, build, and optimize mission-critical financial systems. You'll leverage your expertise in JavaScript, Python, and OOP to develop scalable backend services that power our fintech/lending solutions. This role offers the chance to solve complex technical challenges, integrate cutting-edge technologies, and directly impact the future of financial services for Indian SMEs.
If you are a leader who thrives on technical challenges, loves building high-performing teams, and is excited by the potential of AI/ML in fintech, we want to hear from you!
What You'll Do:
Design & Development: Build scalable backend services using JavaScript (Node.js) and Python, adhering to OOP principles and microservices architecture.
Fintech Integration: Develop secure APIs (REST/gRPC) for financial workflows (e.g., payments, transactions, data processing) and ensure compliance with regulations (PCI-DSS, GDPR).
System Optimization: Enhance performance, reliability, and scalability of cloud-native applications on AWS.
Collaboration: Partner with frontend, data, and product teams to deliver end-to-end features in Agile/Scrum cycles.
Quality Assurance: Implement automated testing (unit/integration), CI/CD pipelines, and DevOps practices.
Technical Innovation: Contribute to architectural decisions and explore AI/ML integration opportunities in financial products.
What You'll Bring (Must-Haves):
Experience:
3–5 years of backend development with JavaScript (Node.js) and Python.
Proven experience applying OOP principles, design patterns, and microservices.
Background in fintech, banking, or financial systems (e.g., payment gateways, risk engines, transactional platforms).
Technical Acumen:
Languages/Frameworks:
JavaScript (Node.js + Express.js/Fastify)
Python (Django/Flask/FastAPI)
Databases: SQL (PostgreSQL/MySQL) and/or NoSQL (MongoDB/Redis).
Cloud & DevOps: AWS/GCP/Azure, Docker, Kubernetes, CI/CD tools (Jenkins/GitLab).
Financial Tech: API security (OAuth2/JWT), message queues (Kafka/RabbitMQ), and knowledge of financial protocols (e.g., ISO 20022).
Mindset:
Problem-solver with a passion for clean, testable code and continuous improvement.
Adaptability in fast-paced environments and commitment to deadlines.
Collaborative spirit with strong communication skills.
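To make the API-security bullet above (OAuth2/JWT) concrete, here is a stdlib-only sketch of HS256 token signing and verification in the JWT style. The secret and claims are made up, and a real service would use a vetted library such as PyJWT rather than hand-rolling this.

```python
# HS256 JWT-style sign/verify sketch, stdlib only. Illustrative; use a
# vetted library (e.g. PyJWT) in production. Secret and claims are made up.
import base64, hashlib, hmac, json

SECRET = b"demo-secret"  # hypothetical key, never hard-code in real code

def b64url(data: bytes) -> str:
    """Base64url without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign(claims: dict) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify(token: str) -> dict:
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = b64url(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):  # constant-time comparison
        raise ValueError("bad signature")
    padded = payload + "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

token = sign({"sub": "merchant-42", "scope": "payments:read"})
print(verify(token))
```

The constant-time `hmac.compare_digest` is the detail interviews tend to probe: a naive `==` comparison leaks timing information about the signature.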
Why Join Quidcash?
Impact: Play a pivotal role in shaping a product that directly impacts Indian SMEs' business growth.
Innovation: Work with cutting-edge technologies, including AI/ML, in a forward-thinking environment.
Growth: Opportunities for professional development and career advancement in a growing company.
Culture: Be part of a collaborative, supportive, and brilliant team that values every contribution.
Benefits: Competitive salary, comprehensive benefits package, and be a part of the next fintech evolution.
If you are interested, please share your profile at smitha@quidcash.in

Key Responsibilities
- Data Architecture & Pipeline Development
  - Design, implement, and optimize ETL/ELT pipelines using Azure Data Factory, Databricks, and Synapse Analytics.
  - Integrate structured, semi-structured, and unstructured data from multiple sources.
- Data Storage & Management
  - Develop and maintain Azure SQL Database, Azure Synapse Analytics, and Azure Data Lake solutions.
  - Ensure proper indexing, partitioning, and storage optimization for performance.
- Data Governance & Security
  - Implement role-based access control, data encryption, and compliance with GDPR/CCPA.
  - Ensure metadata management and data lineage tracking with Azure Purview or similar tools.
- Collaboration & Stakeholder Engagement
  - Work closely with BI developers, analysts, and business teams to translate requirements into data solutions.
  - Provide technical guidance and best practices for data integration and transformation.
- Monitoring & Optimization
  - Set up monitoring and alerting for data pipelines.
About HelloRamp.ai
HelloRamp is on a mission to revolutionize media creation for automotive and retail using AI. Our platform powers 3D/AR experiences for leading brands like Cars24, Spinny, and Samsung. We’re now building the next generation of Computer Vision + AI products, including cutting-edge NeRF pipelines and AI-driven video generation.
What You’ll Work On
- Develop and optimize Computer Vision pipelines for large-scale media creation.
- Implement NeRF-based systems for high-quality 3D reconstruction.
- Build and fine-tune AI video generation models using state-of-the-art techniques.
- Optimize AI inference for production (CUDA, TensorRT, ONNX).
- Collaborate with the engineering team to integrate AI features into scalable cloud systems.
- Research latest AI/CV advancements and bring them into production.
Skills & Experience
- Strong Python programming skills.
- Deep expertise in Computer Vision and Machine Learning.
- Hands-on with PyTorch/TensorFlow.
- Experience with NeRF frameworks (Instant-NGP, Nerfstudio, Plenoxels) and/or video synthesis models.
- Familiarity with 3D graphics concepts (meshes, point clouds, depth maps).
- GPU programming and optimization skills.
Nice to Have
- Knowledge of Three.js or WebGL for rendering AI outputs on the web.
- Familiarity with FFmpeg and video processing pipelines.
- Experience in cloud-based GPU environments (AWS/GCP).
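Tangentially, the depth-map item in the skills list above can be sketched in pure Python: back-projecting a pixel plus depth into a 3D camera-space point with the pinhole camera model. The focal lengths and principal point below are invented values.

```python
# Pinhole back-projection sketch: pixel (u, v) + depth -> 3D camera-space
# point. Intrinsics (fx, fy, cx, cy) below are made-up illustrative values.

def backproject(u: float, v: float, depth: float,
                fx: float, fy: float, cx: float, cy: float):
    """Invert the pinhole projection u = fx*X/Z + cx, v = fy*Y/Z + cy."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# A pixel 100 px right of the principal point, at 2 m depth, fx = fy = 500.
point = backproject(u=420.0, v=240.0, depth=2.0,
                    fx=500.0, fy=500.0, cx=320.0, cy=240.0)
print(point)  # (0.4, 0.0, 2.0)
```

Applying this per pixel over a full depth map is exactly how a point cloud is lifted out of a single RGB-D frame before meshing or NeRF-style reconstruction.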
Why Join Us?
- Work on cutting-edge AI and Computer Vision projects with global impact.
- Join a small, high-ownership team where your work matters.
- Opportunity to experiment, publish, and contribute to open-source.
- Competitive pay and flexible work setup.

AccioJob is conducting a Walk-In Hiring Drive with one of the top global consulting & services companies for the position of Python Automation Engineer.
To apply, register and select your slot here: https://go.acciojob.com/raeUXs
Required Skills: Python, OOPs, DSA, Aptitude
Eligibility:
- Degree: BTech./BE, MTech./ME, BCA, MCA, BSc., MSc
- Branch: All
- Graduation Year: 2023, 2024, 2025
Work Details:
- Work Location: Pune (Onsite)
- CTC: 3 LPA to 6 LPA
Evaluation Process:
Round 1: Offline Assessment at AccioJob Pune Centre
Further Rounds (for shortlisted candidates only):
- Profile & Background Screening Round

- Technical Interview 1
- Technical Interview 2
- Tech+Managerial Round
Important Note: Bring your laptop & earphones for the test.
Register here: https://go.acciojob.com/raeUXs

Quidcash seeks a versatile full-stack developer to build transformative fintech applications from end to end. You'll leverage Flutter for frontend development and JavaScript/Python for backend systems to create seamless, high-performance solutions for Indian SMEs. This role
blends UI craftsmanship with backend logic, offering the chance to architect responsive web/mobile experiences while integrating financial workflows and AI-driven features. If you excel at turning complex requirements into intuitive interfaces, thrive in full lifecycle development, and are passionate about fintech innovation – join us!
What You’ll Do:
Full-stack Development:
Design and build responsive cross-platform applications using Flutter (Dart) for web and mobile native app development.
Develop robust backend services with JavaScript (Node.js) and Python, applying OOP principles and RESTful/gRPC APIs.
Integrations:
Implement secure financial features (e.g., payment processing, dashboards, transaction workflows) with regulatory compliance.
Connect frontend UIs to backend systems (databases, cloud APIs, AI/ML models).
System Architecture: Architect scalable solutions using microservices, state management (Provider/Bloc), and cloud patterns (AWS/GCP).
Collaboration & Delivery:
Partner with product, UX, and QA teams in Agile/Scrum cycles to ship features from concept to production.
Quality & Innovation:
Enforce testing (unit/widget/integration), CI/CD pipelines, and DevOps practices.
Explore AI/ML integration for data-driven UI/UX enhancements.
What You’ll Bring (Must-Haves):
Experience:
3–5 years in full-stack development, including:
Flutter (Dart) for cross-platform apps (iOS, Android, Web).
JavaScript (Node.js + React/Express) and Python (Django/Flask).
Experience with OOP, design patterns, and full SDLC in Agile environments.
Technical Acumen:
Frontend:
Flutter (state management, animations, custom widgets).
HTML/CSS, responsive design, and performance optimization.
Backend:
Node.js/Python frameworks, API design, and database integration (SQL/NoSQL).
Tools & Practices:
Cloud platforms (AWS/GCP/Azure), Docker, CI/CD (Jenkins/GitHub Actions).
Git, testing suites (Jest/Pytest, Flutter Test), and financial security standards.
Mindset:
User-centric approach with a passion for intuitive, accessible UI/UX.
Ability to bridge technical gaps between frontend and backend teams.
Agile problem-solver thriving in fast-paced fintech environments.
Why Join Quidcash?
Impact: Play a pivotal role in shaping a product that directly impacts Indian SMEs' business growth.
Innovation: Work with cutting-edge technologies, including AI/ML, in a forward-thinking environment.
Growth: Opportunities for professional development and career advancement in a growing company.
Culture: Be part of a collaborative, supportive, and brilliant team that values every contribution.
Benefits: Competitive salary, a comprehensive benefits package, and the chance to be part of the next fintech evolution.


Job Description: Software Engineer - Backend ( 3-5 Years)
Location: Bangalore
WHO WE ARE:
TIFIN is a fintech platform backed by industry leaders including JP Morgan, Morningstar, Broadridge, Hamilton Lane, Franklin Templeton, Motive Partners and a who’s who of the financial service industry. We are creating engaging wealth experiences to better financial lives
through AI and investment intelligence powered personalization. We are working to change the world of wealth in ways that personalization has changed the world of movies, music and more but with the added responsibility of delivering better wealth outcomes.
We use design and behavioral thinking to enable engaging experiences through software and application programming interfaces (APIs). We use investment science and intelligence to build algorithmic engines inside the software and APIs to enable better investor outcomes.
In a world where every individual is unique, we match them to financial advice and investments with a recognition of their distinct needs and goals across our investment marketplace and our advice and planning divisions.
OUR VALUES: Go with your GUT
●Grow at the Edge: We embrace personal growth by stepping out of our comfort zones to discover our genius zones, driven by self-awareness and integrity. No excuses.
●Understanding through Listening and Speaking the Truth: Transparency, radical candor, and authenticity define our communication. We challenge ideas, but once decisions are made, we commit fully.
●I Win for Teamwin: We operate within our genius zones, taking ownership of our work and inspiring our team with energy and attitude to win together.
Responsibilities:
• Contribute to the entire implementation process including driving the definition of improvements based on business needs and architectural improvements.
• Review code for quality and implementation of best practices.
• Promote coding, testing, and deployment best practices through hands-on research and demonstration.
• Write testable code that enables extremely high levels of code coverage.
• Ability to review frameworks and design principles toward suitability in the project context.
• Identify opportunities, lay out a rational plan for pursuing them, and see them through to completion.
Requirements:
• Engineering graduate with 3+ years of experience in software product development.
• Proficient in Python, Django, Pandas, GitHub, and AWS.
• Good knowledge of PostgreSQL and MongoDB.
• Strong experience in designing REST APIs.
• Experience with working on scalable interactive web applications.
• A clear understanding of software design constructs and their implementation.
• Understanding of the threading limitations of Python and multi-process architecture.
• Familiarity with some ORM (Object Relational Mapper) libraries.
• Good understanding of Test Driven Development.
• Unit and Integration testing.
• Preferred exposure to Finance domain.
• Strong written and oral communication skills.
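The "threading limitations of Python" requirement above refers to the GIL: CPU-bound threads do not run in parallel in CPython, which is why multi-process architectures are common. A minimal stdlib sketch (illustrative, not TIFIN code) of pushing CPU-bound work into worker processes:

```python
# Each worker process has its own interpreter and its own GIL, so
# CPU-bound tasks actually run in parallel (unlike with threads).
from concurrent.futures import ProcessPoolExecutor

def cpu_bound(n: int) -> int:
    # Stand-in for CPU-heavy work: sum of squares below n.
    return sum(i * i for i in range(n))

def run_parallel(inputs):
    with ProcessPoolExecutor() as pool:
        return list(pool.map(cpu_bound, inputs))

if __name__ == "__main__":          # guard required for spawn/forkserver
    print(run_parallel([10, 100]))  # [285, 328350]
```

For I/O-bound work (API calls, database queries), threads or asyncio remain the better fit, since the GIL is released while waiting on I/O.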


Technical Lead
The ideal candidate should possess the following qualifications:
- Education: Bachelor's degree in Computer Science, Software Engineering, or a related field.
- Experience: 9+ years in software development with a proven track record of delivering scalable applications.
- Leadership Skills: 4+ years of experience in a technical leadership role, demonstrating strong mentoring abilities.
- Lead and mentor a team of software developers and validation engineers.
- Technical Skills: Proficiency in C#, React JS, SQL, MySQL, JavaScript, and Web API is required, along with .NET or Python and the frameworks and tools used in software development.
- General working knowledge of Selenium to support current business automation tools and future automation requirements.
- General working knowledge of PHP is desired to support current legacy applications, which are on the roadmap for future modernization.
- Strong understanding of the software development lifecycle (SDLC).
- Experience with agile methodologies (Scrum/Kanban or similar).
- Knowledge of version control systems (Git or similar).
- Development Methodologies: Experience with Agile development methodologies and experience with CI/CD pipelines.
- Problem-Solving Skills: Strong analytical and problem-solving abilities that enable the identification of complex technical issues.
- Collaboration: Excellent communication and collaboration skills, with the ability to work effectively within a team environment.
- Innovation: A passion for technology and innovation, with a keen interest in exploring new technologies to find the best solutions.


Backend Engineer – AI Video Intelligence
Location: Remote
Type: Full-Time
Experience Required: 3–6 years
About the Role
We are hiring a Backend Engineer who is passionate about building highly performant, scalable backend systems that integrate deeply with AI and video technologies. This is an opportunity to work with a world-class engineering team, solving problems that power AI-driven products in production at scale.
If you enjoy crafting APIs, scaling backend architecture, and optimizing systems to millisecond-level response times — this is the role for you.
Responsibilities
● Build and scale backend systems using Node.js, Python, and TypeScript/JavaScript
● Architect microservices for real-time video analysis and LLM integrations
● Optimize database queries, caching layers, and backend workflows
● Work closely with frontend, data, and DevOps teams
● Deploy, monitor, and scale services on AWS
● Own the backend stack from design to implementation
Must-Have Skills
● 3+ years of experience with backend systems in production
● Strong in Node.js, Python, and TypeScript
● Experience designing REST APIs and microservices
● Proficient in PostgreSQL, MySQL, and Redis
● Hands-on with AWS services like EC2, S3, Lambda, etc.
● Real-world experience integrating with AI/LLM APIs
● Strong debugging and profiling skills
Bonus Skills
● Familiarity with Rust, Golang, or Next.js
● Prior experience in video tech, data platforms, or SaaS products
● Knowledge of message queues (Kafka, RabbitMQ, etc.)
● Exposure to containerization and orchestration (Docker/Kubernetes)

Position: Python Developer
Location: Andheri East, Mumbai
Work Mode: 5 Days WFO
Availability: Immediate joiners only (or notice period completed)
What We're Looking For:
✅ 2+ years of solid Python development experience
✅ Django framework expertise - must have!
✅ FastAPI framework knowledge - essential!
✅ Database skills in MongoDB OR PostgreSQL
✅ Ready to work from office 5 days a week

The Opportunity
We’re looking for a Senior Data Engineer to join our growing Data Platform team. This role is a hybrid of data engineering and business intelligence, ideal for someone who enjoys solving complex data challenges while also building intuitive and actionable reporting solutions.
You’ll play a key role in designing and scaling the infrastructure and pipelines that power analytics, dashboards, machine learning, and decision-making across Sonatype. You’ll also be responsible for delivering clear, compelling, and insightful business intelligence through tools like Looker Studio and advanced SQL queries.
What You’ll Do
- Design, build, and maintain scalable data pipelines and ETL/ELT processes.
- Architect and optimize data models and storage solutions for analytics and operational use.
- Create and manage business intelligence reports and dashboards using tools like Looker Studio, Power BI, or similar.
- Collaborate with data scientists, analysts, and stakeholders to ensure datasets are reliable, meaningful, and actionable.
- Own and evolve parts of our data platform (e.g., Airflow, dbt, Spark, Redshift, or Snowflake).
- Write complex, high-performance SQL queries to support reporting and analytics needs.
- Implement observability, alerting, and data quality monitoring for critical pipelines.
- Drive best practices in data engineering and business intelligence, including documentation, testing, and CI/CD.
- Contribute to the evolution of our next-generation data lakehouse and BI architecture.
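As an illustration of the reporting-style SQL mentioned above, here is a window-function query runnable against SQLite for demonstration; the schema and data are invented for the example, and production work would target a warehouse like Redshift or Snowflake.

```python
# A daily-count report with a per-component running total -- the kind of
# aggregate a BI dashboard tile is built on.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE downloads (day TEXT, component TEXT, count INTEGER);
    INSERT INTO downloads VALUES
        ('2024-01-01', 'core', 10),
        ('2024-01-02', 'core', 30),
        ('2024-01-03', 'core', 20);
""")

rows = conn.execute("""
    SELECT day,
           count,
           SUM(count) OVER (PARTITION BY component ORDER BY day) AS running_total
    FROM downloads
    ORDER BY day
""").fetchall()
print(rows)  # [('2024-01-01', 10, 10), ('2024-01-02', 30, 40), ('2024-01-03', 20, 60)]
```

Window functions like `SUM() OVER` avoid self-joins for running totals; note they require SQLite 3.25+ if you test locally this way.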
What We’re Looking For
Minimum Qualifications
- 5+ years of experience as a Data Engineer or in a hybrid data/reporting role.
- Strong programming skills in Python, Java, or Scala.
- Proficiency with data tools such as Databricks, data modeling techniques (e.g., star schema, dimensional modeling), and data warehousing solutions like Snowflake or Redshift.
- Hands-on experience with modern data platforms and orchestration tools (e.g., Spark, Kafka, Airflow).
- Proficient in SQL with experience in writing and optimizing complex queries for BI and analytics.
- Experience with BI tools such as Looker Studio, Power BI, or Tableau.
- Experience in building and maintaining robust ETL/ELT pipelines in production.
- Understanding of data quality, observability, and governance best practices.
Bonus Points
- Experience with dbt, Terraform, or Kubernetes.
- Familiarity with real-time data processing or streaming architectures.
- Understanding of data privacy, compliance, and security best practices in analytics and reporting.
Why You’ll Love Working Here
- Data with purpose: Work on problems that directly impact how the world builds secure software.
- Full-spectrum impact: Use both engineering and analytical skills to shape product, strategy, and operations.
- Modern tooling: Leverage the best of open-source and cloud-native technologies.
- Collaborative culture: Join a passionate team that values learning, autonomy, and real-world impact.

About the Role
We’re hiring a Data Engineer to join our Data Platform team. You’ll help build and scale the systems that power analytics, reporting, and data-driven features across the company. This role works with engineers, analysts, and product teams to make sure our data is accurate, available, and usable.
What You’ll Do
- Build and maintain reliable data pipelines and ETL/ELT workflows.
- Develop and optimize data models for analytics and internal tools.
- Work with team members to deliver clean, trusted datasets.
- Support core data platform tools like Airflow, dbt, Spark, Redshift, or Snowflake.
- Monitor data pipelines for quality, performance, and reliability.
- Write clear documentation and contribute to test coverage and CI/CD processes.
- Help shape our data lakehouse architecture and platform roadmap.
What You Need
- 2–4 years of experience in data engineering or a backend data-related role.
- Strong skills in Python or another backend programming language.
- Experience working with SQL and distributed data systems (e.g., Spark, Kafka).
- Familiarity with NoSQL stores like HBase or similar.
- Comfortable writing efficient queries and building data workflows.
- Understanding of data modeling for analytics and reporting.
- Exposure to tools like Airflow or other workflow schedulers.
Bonus Points
- Experience with DBT, Databricks, or real-time data pipelines.
- Familiarity with cloud infrastructure tools like Terraform or Kubernetes.
- Interest in data governance, ML pipelines, or compliance standards.
Why Join Us?
- Work on data that supports meaningful software security outcomes.
- Use modern tools in a cloud-first, open-source-friendly environment.
- Join a team that values clarity, learning, and autonomy.
If you're excited about building impactful software and helping others do the same, this is an opportunity to grow as a technical leader and make a meaningful impact.


What You’ll Do
- Build & tune models: embeddings, transformers, retrieval pipelines, evaluation frameworks.
- Architect Python services (FastAPI/Flask) to embed ML/LLM workflows end-to-end.
- Translate AI research into production features for data extraction, document reasoning, and risk analytics.
- Own the full user flow: back-end → front-end (React/TS) → CI/CD on Azure & Docker.
- Leverage AI coding tools (Copilot, Cursor, Jules) to meet our 1 dev = 4 devs productivity bar.
Core Tech Stack:
- Primary:
Python · FastAPI/Flask · Pandas · SQL/NoSQL · Hugging Face · LangChain/RAG · REST/GraphQL · Azure · Docker
- Bonus:
React.js · Vector Databases · Kubernetes
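For a taste of the "evaluation frameworks" item above, here is a toy retrieval metric (precision@k) in plain Python; the document ids and labels are illustrative only, not Intain data.

```python
# precision@k: of the top-k retrieved documents, what fraction were
# actually relevant? A labeled query set turns this into a regression
# test for an embedding/retrieval pipeline.
def precision_at_k(retrieved: list[str], relevant: set[str], k: int) -> float:
    top = retrieved[:k]
    return sum(1 for doc in top if doc in relevant) / k

retrieved = ["d3", "d1", "d7", "d2"]   # ranked ids from the retriever
relevant = {"d1", "d2"}                # ground-truth labels for the query
print(precision_at_k(retrieved, relevant, 2))  # 0.5
```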
You Bring:
- Proven track record shipping Python features and training/serving ML or LLM models.
- Comfort reading research papers/blogs, prototyping ideas, and measuring model performance.
- 360° product mindset: tests, reviews, secure code, quick iterations.
- Strong ownership and output focus — impact beats years of experience.
Why Join Intain?
- Small, expert team where your code and models hit production fast.
- Work on real-world AI problems powering billions in structured-finance transactions.
- Compensation & ESOPs tied directly to the value you ship.
📍 About Us
Intain is transforming structured finance using AI — from data ingestion to risk analytics. Our platform, powered by IntainAI and Ida, helps institutions manage and scale transactions seamlessly.
We are seeking a Software Engineer in Test to join our Quality Engineering team. In this role, you will be responsible for designing, developing, and maintaining automation frameworks to enhance our test coverage and ensure the delivery of high-quality software. You will collaborate closely with developers, product managers, and other stakeholders to drive test automation strategies and improve software reliability.
Key Responsibilities
● Design, develop, and maintain robust test automation frameworks for web, API, and backend services.
● Implement automated test cases to improve software quality and test coverage.
● Develop and execute performance and load tests to ensure the application behaves reliably in self-hosted environments.
● Integrate automated tests into CI/CD pipelines to enable continuous testing.
● Collaborate with software engineers to define test strategies, acceptance criteria, and quality standards.
● Conduct performance, security, and regression testing to ensure application stability.
● Investigate test failures, debug issues, and work with development teams to resolve defects.
● Advocate for best practices in test automation, code quality, and software reliability.
● Stay updated with industry trends and emerging technologies in software testing.
Qualifications & Experience
● Bachelor's or Master’s degree in Computer Science, Engineering, or a related field.
● 3+ years of experience in software test automation.
● Proficiency in programming languages such as Java, Python, or JavaScript.
● Hands-on experience with test automation tools like Selenium, Cypress, Playwright, or similar.
● Strong knowledge of API testing using tools such as Postman, RestAssured, or Karate.
● Experience with CI/CD tools such as Jenkins, GitHub Actions, or GitLab CI/CD.
● Understanding of containerization and cloud technologies (Docker, Kubernetes, AWS, or similar).
● Familiarity with performance testing tools like JMeter or Gatling is a plus.
● Excellent problem-solving skills and attention to detail.
● Strong communication and collaboration skills.
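The API-test responsibilities above can be sketched with nothing but the standard library: a tiny HTTP service started in-process and exercised with assertions. The endpoint and payload are invented for illustration; a real suite would use pytest and requests/RestAssured against a deployed service.

```python
# Minimal API test harness: spin up a throwaway HTTP server on a free
# port, hit its endpoint, and assert on status and body.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

def start_server():
    server = HTTPServer(("127.0.0.1", 0), HealthHandler)  # port 0 = any free port
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def check_health(port):
    with urllib.request.urlopen(f"http://127.0.0.1:{port}/health") as resp:
        assert resp.status == 200
        return json.loads(resp.read())

server = start_server()
print(check_health(server.server_port))  # {'status': 'ok'}
server.shutdown()
```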

Job Type : Contract
Location : Bangalore
Experience : 5+yrs
The role focuses on cloud security engineering with a strong emphasis on GCP, while also covering AWS and Azure.
Required Skills:
- 5+ years of experience in software and/or cloud platform engineering, particularly focused on GCP environment.
- Knowledge of the Shared Responsibility Model; keen understanding of the security risks inherent in hosting cloud-based applications and data.
- Experience developing across the security assurance lifecycle (including prevent, detect, respond, and remediate controls).
- Experience in configuring Public Cloud native security tooling and capabilities, with a focus on Google Cloud Organizational policies/constraints, VPC SC, IAM policies, and GCP APIs.
- Experience with Cloud Security Posture Management (CSPM) 3rd Party tools such as Wiz, Prisma, Check Point CloudGuard, etc.
- Experience in Policy-as-code (Rego) and OPA platform.
- Experience solutioning and configuring event-driven serverless-based security controls in Azure, including but not limited to technologies such as Azure Function, Automation Runbook, AWS Lambda and Google Cloud Functions.
- Deep understanding of DevOps processes and workflows.
- Working knowledge of the Secure SDLC process
- Experience with Infrastructure as Code (IaC) tooling, preferably Terraform.
- Familiarity with Logging and data pipeline concepts and architectures in cloud.
- Strong in scripting languages such as PowerShell, Python, Bash, or Go.
- Knowledge of Agile best practices and methodologies
- Experience creating technical architecture documentation.
- Excellent communication, written, and interpersonal skills.
- Practical experience in designing and configuring CI/CD pipelines, including GitHub Actions and Jenkins.
- Experience in ITSM.
- Ability to articulate complex technical concepts to non-technical stakeholders.
- Experience with risk control frameworks and engagements with risk and regulatory functions
- Experience in the financial industry would be a plus.

Job Title: Python Backend Engineer (Full-time)
Location: Gurgaon, Onsite
Working Days: 5 days
Experience Required: 4+ Years
Interview: Screening and Face-to-face Interview
Job Summary
We are looking for a Python Backend Engineer with strong experience in FastAPI/Django and hands-on exposure to MLOps and LLMOps. You will work on building APIs, deploying ML/LLM models, and enabling intelligent systems using modern AI tools and frameworks.
Responsibilities
- Build and maintain scalable backend systems using Python, FastAPI, or Django
- Deploy and manage ML models (e.g., OpenCV, scikit-learn) and LLMs in production
- Develop APIs for ML/LLM inference and support RAG pipelines
- Implement model versioning, CI/CD, and monitoring for production use
- Collaborate with AI teams to integrate models into applications
Skills Required
- Strong Python backend development (FastAPI/Django)
- Experience with ML libraries: scikit-learn, OpenCV, NumPy, Pandas
- MLOps tools: MLflow, DVC, Airflow, Docker, Kubernetes
- LLMOps tools: LangChain, LlamaIndex, RAG pipelines, FAISS, Pinecone
- Integration experience with OpenAI, Hugging Face, or other LLM APIs
- Familiarity with CI/CD tools (GitHub Actions, Jenkins)
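The retrieval step of the RAG pipelines mentioned above can be sketched in plain Python, with a toy bag-of-words "embedding" standing in for a real encoder (e.g. a Hugging Face model) and cosine similarity replacing a vector store like FAISS or Pinecone:

```python
# Toy RAG retrieval: embed query and documents, rank by cosine
# similarity, and return the top-k documents as LLM context.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in embedding: lowercase word counts. A production pipeline
    # would call an embedding model here instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "FastAPI serves ML model inference endpoints",
    "Django handles the admin dashboard",
]
print(retrieve("model inference with FastAPI", docs))
```

The retrieved snippets would then be stuffed into the LLM prompt; frameworks like LangChain and LlamaIndex wrap exactly this embed-rank-assemble loop.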

AccioJob is conducting a Walk-In Hiring Drive with MakunAI Global for the position of Python Engineer.
To apply, register and select your slot here: https://go.acciojob.com/cE8XQy
Required Skills: DSA, Python, Django, Fast API
Eligibility:
- Degree: All
- Branch: All
- Graduation Year: 2022, 2023, 2024, 2025
Work Details:
- Work Location: Noida (Hybrid)
- CTC: 3.2 LPA to 3.5 LPA
Evaluation Process:
Round 1: Offline Assessment at AccioJob Skill Centers located in Noida, Greater Noida, and Delhi
Further Rounds (for shortlisted candidates only):
- Profile & Background Screening Round
- Technical Interview Round 1
- Technical Interview Round 2
Important Note: Bring your laptop & earphones for the test.
Register here: https://go.acciojob.com/cE8XQy

Job Title: Senior Software Engineer - Backend
About the firm:
Sustainability lies at the core of Stantech AI. Our vision is to empower organizations to derive actionable insights—effectuating a smarter way of working. We operate on the premise that each client is unique and as such requires their own idiosyncratic solutions. Putting this principle into practice, we deliver tailor-made solutions to digitalize, optimize, and strategize fundamental processes underpinning client organizations. For more information, please refer to our website: www.stantech.ai
Job Description:
As a Senior Software Engineer at Stantech AI, you will play a pivotal role in designing, developing, and maintaining enterprise-grade backend services and APIs that cater to the unique needs of our clients. You will be a key member of our engineering team and will contribute to the success of projects by leveraging your expertise in Python, SQL, and modern DevOps practices.
Key Responsibilities:
- Design, develop, and maintain high-performance backend applications and RESTful APIs using Python FastAPI framework.
- Optimize and maintain relational databases with SQL (data modeling, query optimization, and sharding) to ensure data integrity and scalability.
- Create, configure, and manage CI/CD pipelines using GitLab CI for automated build, test, and deployment workflows.
- Collaborate with cross-functional teams (data scientists, frontend engineers, DevOps) to gather requirements and deliver robust, scalable, and user-friendly solutions.
- Participate in architectural and technical decisions to drive innovation, ensure reliability, and improve system performance.
- Conduct code reviews, enforce best practices, and mentor junior engineers.
- Troubleshoot, diagnose, and resolve production issues in a timely manner.
- Stay up-to-date with industry trends, emerging technologies, and best practices.
- Bonus: Hands-on experience with server-level configuration and infrastructure—setting up load balancers, API gateways, and reverse proxies.
Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
- Minimum 3 years of professional experience in backend development, with strong expertise in Python and SQL.
- Proven track record building and maintaining CI/CD pipelines using GitLab CI.
- Familiarity with containerization and orchestration technologies: Docker, Kubernetes.
- Solid understanding of software development lifecycle (SDLC) best practices, design patterns, and version control (Git).
- Excellent problem-solving, debugging, and communication skills.
- Ability to work independently and collaboratively in a fast-paced environment.
- Plus: Experience with front-end technologies (HTML, CSS, JavaScript) and cloud platforms (AWS, GCP, Azure).
Financial Package:
Competitive salary in line with experience: ₹10–20 Lakhs per annum, contingent on qualifications and experience.

Position Overview:
We are looking for a talented and self-motivated Back-end Developer to join our development team. The ideal candidate will be responsible for writing clean, efficient, and maintainable code that enhances workflow organization and automates various internal processes within the organization. The role involves continuous improvement of our software solutions to save man-hours and ensure overall organizational efficiency. Key tasks include development, testing, debugging, troubleshooting, and maintenance of both new and existing programs.
Key responsibilities:
1) Software Development: Write clean, efficient, and maintainable code/flows that automate internal processes and improve workflow efficiency.
2) Testing and Maintenance: Perform testing, debugging, troubleshooting, and daily maintenance of created or integrated programs.
3) Adherence to Standards: Follow preferred development methodologies and adhere to organizational development standards.
4) Teamwork: Work closely with other team members to ensure the successful implementation of projects. Maintain clear and concise documentation of code, APIs, and software components to aid in knowledge sharing and future development.
5) Stay Current: Keep up to date with the latest developments in the Python and RPA ecosystems and engage in best practices of software engineering.

Position Overview:
We are seeking a skilled Software Developer with a focus on Front-End Development with Strong proficiency in HTML, JavaScript, CSS, Sass, Bootstrap and modern JavaScript frameworks including ReactJS and jQuery to join our team. The successful candidate will be responsible for writing clean, efficient, and maintainable code that enhances workflow organization and automates various internal processes within the organization. The role involves continuous improvement of our software solutions to save man-hours and ensure overall organizational efficiency. Key tasks include development, testing, debugging, troubleshooting, and maintenance of both new and existing programs.
Key responsibilities:
1) Software Development: Write clean, efficient, and maintainable code/flows that automate internal processes and improve workflow efficiency.
2) Testing and Maintenance: Perform testing, debugging, troubleshooting, and daily maintenance of created or integrated programs.
3) UI/UX Development and Design: Design and develop intuitive and visually appealing user interfaces in the software.
4) Web Services Integration: Integrate UI with web services to ensure seamless functionality.
5) Adherence to Standards: Follow preferred development methodologies and adhere to organizational development standards.
6) Collaboration: Work closely with other team members to ensure the successful implementation of projects. Maintain clear and concise documentation of code, APIs, and software components to aid in knowledge sharing and future development.
7) Stay Current: Keep up to date with the latest developments in the front-end development ecosystem and engage in best practices of software engineering.

Position: General Cloud Automation Engineer/General Cloud Engineer
Location: Balewadi High Street, Pune
Key Responsibilities:
- Strategic Automation Leadership
- Drive automation to improve deployment speed and reduce manual work.
- Promote scalable, long-term automation solutions.
- Infrastructure as Code (IaC) & Configuration Management
- Develop IaC using Terraform, CloudFormation, Ansible.
- Maintain infrastructure via Ansible, Puppet, Chef.
- Scripting in Python, Bash, PowerShell, JavaScript, GoLang.
- CI/CD & Cloud Optimization
- Enhance CI/CD using Jenkins, GitHub Actions, GitLab CI/CD.
- Automate across AWS, Azure, GCP, focusing on performance, networking, and cost-efficiency.
- Integrate monitoring tools such as Prometheus, Grafana, Datadog, ELK.
- Security Automation
- Enforce security with tools like Vault, Snyk, Prisma Cloud.
- Implement automated compliance and access controls.
- Innovation & Continuous Improvement
- Evaluate and adopt emerging automation tools.
- Foster a forward-thinking automation culture.
Required Skills & Tools:
Strong background in automation, DevOps, and cloud engineering.
Expert in:
IaC: Terraform, CloudFormation, Azure ARM, Bicep
Config Mgmt: Ansible, Puppet, Chef
Cloud Platforms: AWS, Azure, GCP
CI/CD: Jenkins, GitHub Actions, GitLab CI/CD
Scripting: Python, Bash, PowerShell, JavaScript, GoLang
Monitoring & Security: Prometheus, Grafana, ELK, Vault, Prisma Cloud
Network Automation: Private Endpoints, Transit Gateways, Firewalls, etc.
Certifications Preferred:
AWS DevOps Engineer
Terraform Associate
Red Hat Certified Engineer


At Dolat Capital, we blend cutting-edge technology with quantitative finance to drive high-performance trading across Equities, Futures, and Options. We're a fast-moving team of traders, engineers, and data scientists building ultra-low latency systems and intelligent trading strategies.
🎯 What You’ll Work On
1. Designing and deploying high-frequency, high-sharpe trading strategies
2. Building low-latency, high-throughput trading infrastructure (C++/Python/Linux).
3. Leveraging AI/ML to detect alpha and market patterns from large datasets
4. Building real-time risk systems, simulation tools, and performance optimizations
5. Collaborating across tech and trading teams to push innovation in live markets.
🧠 What We’re Looking For
1. Master’s (U.S.) in CS or Computational Finance (MANDATORY)
2. 1–2 years of experience in a quant/tech-heavy role
3. Strong in C++, Python, algorithms, Linux, TCP/UDP
4. Experience with AI/ML tools like TensorFlow, PyTorch, or Scikit-learn
5. Passion for high-performance systems and market innovation.

Job Title : Python Backend Engineer (with MLOps & LLMOps Experience)
Experience : 4 to 8 Years
Location : Gurgaon Sector - 43
Employment Type : Full-time
Job Summary :
We are looking for an experienced Python Backend Engineer with a strong background in FastAPI, Django, and hands-on exposure to MLOps and LLMOps practices.
The ideal candidate will be responsible for building scalable backend solutions, integrating AI/ML models into production environments, and implementing efficient pipelines for machine learning and large language model operations.
Mandatory Skills : Python, FastAPI, Django, MLOps, LLMOps, REST API development, Docker, Kubernetes, Cloud (AWS/Azure/GCP), CI/CD.
Key Responsibilities :
- Develop, optimize, and maintain backend services using Python (FastAPI, Django).
- Design and implement API endpoints for high-performance and secure data exchange.
- Collaborate with data science teams to deploy ML/LLM models into production using MLOps/LLMOps best practices.
- Build and manage CI/CD pipelines for ML models and ensure seamless integration with backend systems.
- Implement model monitoring, versioning, and retraining workflows for machine learning and large language models.
- Optimize backend performance for scalability and reliability in AI-driven applications.
- Work with Docker, Kubernetes, and cloud platforms (AWS/Azure/GCP) for deployment and orchestration.
- Ensure best practices in code quality, testing, and security for all backend and model deployment workflows.
Required Skills & Qualifications :
- 4 to 8 years of experience as a Backend Engineer with strong expertise in Python.
- Proficient in FastAPI and Django frameworks for API and backend development.
- Hands-on experience with MLOps and LLMOps workflows (model deployment, monitoring, scaling).
- Familiarity with machine learning model lifecycle and integration into production systems.
- Strong knowledge of RESTful APIs, microservices architecture, and asynchronous programming.
- Experience with Docker, Kubernetes, and cloud environments (AWS, Azure, or GCP).
- Exposure to CI/CD pipelines and DevOps tools.
- Good understanding of Git, version control, and testing frameworks.
Nice to Have :
- Experience with LangChain, Hugging Face, or similar LLM frameworks.
- Knowledge of data pipelines, feature engineering, and ML frameworks (TensorFlow, PyTorch, etc.).
- Understanding of vector databases (Pinecone, Chroma, etc.).
Education :
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
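The responsibilities above mention model versioning and retraining workflows. As an illustration only (real MLOps stacks use tools such as MLflow; the `ModelRegistry` class and the "churn" model below are invented for this sketch), the core versioning idea can be shown in a few lines:

```python
# Minimal in-memory model registry -- a sketch of the versioning idea only.
from dataclasses import dataclass, field
from typing import Callable, Dict, Optional

@dataclass
class ModelRegistry:
    # name -> {version number -> model callable}
    _models: Dict[str, Dict[int, Callable]] = field(default_factory=dict)

    def register(self, name: str, model: Callable) -> int:
        """Store a new version of `name` and return its version number."""
        versions = self._models.setdefault(name, {})
        version = max(versions, default=0) + 1
        versions[version] = model
        return version

    def get(self, name: str, version: Optional[int] = None) -> Callable:
        """Fetch a specific version, or the latest if none is given."""
        versions = self._models[name]
        return versions[version if version is not None else max(versions)]

registry = ModelRegistry()
v1 = registry.register("churn", lambda x: 0.1 * x)   # baseline model
v2 = registry.register("churn", lambda x: 0.2 * x)   # retrained model
```

A production registry would persist artifacts and metadata rather than hold callables in memory, but the register/resolve-latest contract is the same.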

About Us
CLOUDSUFI, a Google Cloud Premier Partner, is a Data Science and Product Engineering organization building Products and Solutions for Technology and Enterprise industries. We firmly believe in the power of data to transform businesses and drive better decisions. We combine unmatched experience in business processes with cutting-edge infrastructure and cloud services. We partner with our customers to monetize their data and make enterprise data dance.
Our Values
We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.
Equal Opportunity Statement
CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, and national origin status. We provide equal opportunities in employment, advancement, and all other areas of our workplace. Please explore more at https://www.cloudsufi.com/.
Role Overview:
As a Senior Data Scientist / AI Engineer, you will be a key player in our technical leadership. You will be responsible for designing, developing, and deploying sophisticated AI and Machine Learning solutions, with a strong emphasis on Generative AI and Large Language Models (LLMs). You will architect and manage scalable AI microservices, drive research into state-of-the-art techniques, and translate complex business requirements into tangible, high-impact products. This role requires a blend of deep technical expertise, strategic thinking, and leadership.
Key Responsibilities:
- Architect & Develop AI Solutions: Design, build, and deploy robust and scalable machine learning models, with a primary focus on Natural Language Processing (NLP), Generative AI, and LLM-based Agents.
- Build AI Infrastructure: Create and manage AI-driven microservices using frameworks like Python FastAPI, ensuring high performance and reliability.
- Lead AI Research & Innovation: Stay abreast of the latest advancements in AI/ML. Lead research initiatives to evaluate and implement state-of-the-art models and techniques for performance and cost optimization.
- Solve Business Problems: Collaborate with product and business teams to understand challenges and develop data-driven solutions that create significant business value, such as building business rule engines or predictive classification systems.
- End-to-End Project Ownership: Take ownership of the entire lifecycle of AI projects—from ideation, data processing, and model development to deployment, monitoring, and iteration on cloud platforms.
- Team Leadership & Mentorship: Lead learning initiatives within the engineering team, mentor junior data scientists and engineers, and establish best practices for AI development.
- Cross-Functional Collaboration: Work closely with software engineers to integrate AI models into production systems and contribute to the overall system architecture.
Required Skills and Qualifications
- Master’s (M.Tech.) or Bachelor's (B.Tech.) degree in Computer Science, Artificial Intelligence, Information Technology, or a related field.
- 6+ years of professional experience in a Data Scientist, AI Engineer, or related role.
- Expert-level proficiency in Python and its core data science libraries (e.g., PyTorch, Huggingface Transformers, Pandas, Scikit-learn).
- Demonstrable, hands-on experience building and fine-tuning Large Language Models (LLMs) and implementing Generative AI solutions.
- Proven experience in developing and deploying scalable systems on cloud platforms, particularly AWS. Experience with GCS is a plus.
- Strong background in Natural Language Processing (NLP), including experience with multilingual models and transcription.
- Experience with containerization technologies, specifically Docker.
- Solid understanding of software engineering principles and experience building APIs and microservices.
Preferred Qualifications
- A strong portfolio of projects. A track record of publications in reputable AI/ML conferences is a plus.
- Experience with full-stack development (Node.js, Next.js) and various database technologies (SQL, MongoDB, Elasticsearch).
- Familiarity with setting up and managing CI/CD pipelines (e.g., Jenkins).
- Proven ability to lead technical teams and mentor other engineers.
- Experience developing custom tools or packages for data science workflows.

Sr. Software Engineer
Role expects you to
• Design, code, test, debug, and deploy Web and Mobile Apps
• Collaborate with SMEs to resolve technical issues and achieve goals
• Lead projects and teams
• Demonstrate strong problem-solving skills
• Stay open to learning new technologies
Qualification
▪ 4+ years of IT Development Experience
▪ Comprehensive Programming Experience in Web and Mobile app development using
MEAN or MERN stack
▪ Knowledge of Python, Ruby
▪ Knowledge of RDBMS like MS SQL, PostgreSQL, MySQL
▪ Understanding of Dev-Ops Tools like git, Jenkins, Azure Dev-Ops
▪ Exposure to Cloud technologies, Agile development process, and estimation techniques
▪ Understanding of End to End Development process

Company Overview:
We are a small, dynamic, and growing AI-focused company delivering cutting-edge AI solutions to our clients. We specialize in implementing real-world applications using Large Language Models (LLMs), Small Language Models (SLMs), Retrieval-Augmented Generation (RAG), Agentic AI systems, and cloud-native services across AWS, Azure, and GCP.
Job Summary:
We are looking for a passionate AI Developer & Solution Provider who thrives on solving real client problems using AI. The ideal candidate will have solid experience with Python, language models, chatbot development, and building scalable solutions using cloud services and modern databases. You will work directly with clients to understand their needs and deliver end-to-end AI-powered solutions.
Key Responsibilities:
- Design, build, and deploy AI/ML solutions using LLMs (e.g., GPT-4, Claude), SLMs, and open-source models.
- Develop and fine-tune chatbots, intelligent agents, and Agentic AI workflows for various client use-cases.
- Implement Retrieval-Augmented Generation (RAG) pipelines to enhance LLM capabilities.
- Build secure, scalable backends using Python and integrate AI components with APIs, databases, and cloud systems.
- Understand client requirements and translate them into technically sound and scalable AI solutions.
- Deploy applications and services using cloud platforms (AWS, Azure, GCP).
- Work with structured and unstructured data, including setting up and managing databases (SQL, NoSQL).
- Stay updated with the latest trends in AI/ML and help bring innovative solutions to the team and clients.
- Document solutions and provide technical support during client handover and implementation.
Required Skills and Experience:
- Strong Python programming experience, especially for AI/ML development.
- Hands-on experience with LLMs, SLMs, OpenAI, Hugging Face, etc.
- Practical knowledge of RAG, vector databases (e.g., FAISS, Pinecone, Weaviate, Chroma).
- Experience with Agentic AI systems, frameworks (e.g., LangChain, AutoGen, CrewAI), and chatbot development.
- Good understanding of databases – both SQL (PostgreSQL, MySQL) and NoSQL (MongoDB, Redis).
- Solid experience with cloud platforms: AWS (SageMaker, Lambda), Azure (OpenAI, ML Studio), GCP (Vertex AI, Functions).
- Comfortable interacting with clients and delivering tailored AI solutions.
- Excellent problem-solving and communication skills.
Nice to Have:
- Experience with fine-tuning/custom training of models.
- Experience in deploying scalable APIs (e.g., FastAPI, Flask).
- Background in data engineering or MLOps.
- Familiarity with DevOps, Docker, Kubernetes is a plus.
Why Join Us?
- Opportunity to work on cutting-edge AI solutions with real-world impact.
- Collaborative and flexible work culture.
- Direct exposure to diverse clients and industries.
- Fast-paced environment with learning and growth opportunities.
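Several responsibilities above center on Retrieval-Augmented Generation. As a purely illustrative sketch of the retrieval step (production systems embed text with a real model and query FAISS/Pinecone/Weaviate; the hand-made 3-dimensional "embeddings" and document names below are stand-ins):

```python
# Toy retrieval step of a RAG pipeline, using only the standard library.
import math
from typing import List, Tuple

def cosine(a: List[float], b: List[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec: List[float],
             store: List[Tuple[str, List[float]]],
             k: int = 2) -> List[str]:
    """Return the k documents whose vectors are closest to the query."""
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [doc for doc, _ in ranked[:k]]

store = [
    ("refund policy",  [0.9, 0.1, 0.0]),
    ("shipping times", [0.1, 0.9, 0.0]),
    ("warranty terms", [0.8, 0.2, 0.1]),
]
top = retrieve([1.0, 0.0, 0.0], store, k=2)
```

In a full pipeline the retrieved text is then prepended to the LLM prompt as grounding context.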


Job Requirement :
- 3-5 Years of experience in Data Science
- Strong expertise in statistical modeling, machine learning, deep learning, data warehousing, ETL, and reporting tools.
- Bachelor's/Master's in Data Science, Statistics, Computer Science, Business Intelligence, or a related field.
- Experience with relevant programming languages and tools such as Python, R, SQL, Spark, Tableau, Power BI.
- Experience with machine learning frameworks like TensorFlow, PyTorch, or Scikit-learn
- Ability to think strategically and translate data insights into actionable business recommendations.
- Excellent problem-solving and analytical skills
- Adaptability and openness towards changing environment and nature of work
- This is a startup environment with evolving systems and procedures; the ideal candidate will be comfortable working in a fast-paced, dynamic environment and will have a strong desire to make a significant impact on the business.
Job Roles & Responsibilities:
- Conduct in-depth analysis of large-scale datasets to uncover insights and trends.
- Build and deploy predictive and prescriptive machine learning models for various applications.
- Design and execute A/B tests to evaluate the effectiveness of different strategies.
- Collaborate with product managers, engineers, and other stakeholders to drive data-driven decision-making.
- Stay up-to-date with the latest advancements in data science and machine learning.
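The responsibilities above include designing and evaluating A/B tests. As a hedged sketch of one common evaluation, a two-proportion z-test on conversion counts (scipy.stats is the usual tool; the stdlib-only version and the sample numbers below are for illustration):

```python
# Two-proportion z-test for an A/B experiment, standard library only.
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Return (z statistic, two-sided p-value) for two conversion counts."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF via the error function; p-value is the two-sided tail.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: variant B converts 120/1000 vs. A's 100/1000.
z, p = two_proportion_z(100, 1000, 120, 1000)
```

Here the lift looks promising but the p-value stays above common significance thresholds, which is exactly the kind of call this analysis exists to make.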

Basic Qualifications :
● Experience: 4+ years.
● Hands-on development experience with a broad mix of languages such as Java, Python, JavaScript, etc.
● Server-side development experience, mainly in Java (Python and Node.js can also be considered).
● UI development experience in ReactJS, AngularJS, PolymerJS, EmberJS, jQuery, etc., is good to have.
● Passion for software engineering and following the best coding concepts.
● Good to great problem solving and communication skills.

Nice to have Qualifications :
● Product and customer-centric mindset.
● Great OO skills, including design patterns.
● Experience with devops, continuous integration & deployment.
● Exposure to big data technologies, Machine Learning and NLP will be a plus.

Role Overview
We're looking for experienced Data Engineers who can independently design, build, and manage scalable data platforms. You'll work directly with clients and internal teams to develop robust data pipelines that support analytics, AI/ML, and operational systems.
You’ll also play a mentorship role and help establish strong engineering practices across our data projects.
Key Responsibilities
- Design and develop large-scale, distributed data pipelines (batch and streaming)
- Implement scalable data models, warehouses/lakehouses, and data lakes
- Translate business requirements into technical data solutions
- Optimize data pipelines for performance and reliability
- Ensure code is clean, modular, tested, and documented
- Contribute to architecture, tooling decisions, and platform setup
- Review code/design and mentor junior engineers
Must-Have Skills
- Strong programming skills in Python and advanced SQL
- Solid grasp of ETL/ELT, data modeling (OLTP & OLAP), and stream processing
- Hands-on experience with frameworks like Apache Spark, Flink, etc.
- Experience with orchestration tools like Airflow
- Familiarity with CI/CD pipelines and Git
- Ability to debug and scale data pipelines in production
Preferred Skills
- Experience with cloud platforms (AWS preferred, GCP or Azure also fine)
- Exposure to Databricks, dbt, or similar tools
- Understanding of data governance, quality frameworks, and observability
- Certifications (e.g., AWS Data Analytics, Solutions Architect, Databricks) are a bonus
What We’re Looking For
- Problem-solver with strong analytical skills and attention to detail
- Fast learner who can adapt across tools, tech stacks, and domains
- Comfortable working in fast-paced, client-facing environments
- Willingness to travel within India when required
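The batch pipelines described above follow the classic extract-transform-load shape; in an orchestrated setup each stage would be wrapped in an Airflow task. A minimal conceptual sketch (the record schema and function names are invented for illustration):

```python
# Conceptual batch ETL step; each function maps to one pipeline task.
from typing import Dict, List

def extract() -> List[Dict]:
    # Stand-in for reading from an API, file, or source table.
    return [{"id": 1, "amount": "10.5"},
            {"id": 2, "amount": None},
            {"id": 3, "amount": "7"}]

def transform(rows: List[Dict]) -> List[Dict]:
    # Simple data-quality gate: drop incomplete rows, cast types.
    return [{"id": r["id"], "amount": float(r["amount"])}
            for r in rows if r["amount"] is not None]

def load(rows: List[Dict], sink: List[Dict]) -> int:
    # Stand-in for writing to a warehouse table; returns rows loaded.
    sink.extend(rows)
    return len(rows)

warehouse: List[Dict] = []
loaded = load(transform(extract()), warehouse)
```

Real pipelines add idempotency, retries, and observability on top, but the stage boundaries stay the same.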
At WeAssemble, we connect global businesses with top-tier talent to build dedicated offshore teams. Our mission is to deliver exceptional services through innovation, collaboration, and transparency. We pride ourselves on a vibrant work culture and are constantly on the lookout for passionate professionals to join our journey.
Job Description:
We are looking for a highly skilled Automation Tester with 3 years of experience to join our dynamic team in Mumbai. The ideal candidate should be proactive, detail-oriented, and ready to hit the ground running. If you’re passionate about quality assurance and test automation, we’d love to meet you!
Key Responsibilities:
Design, develop, and execute automated test scripts using industry-standard tools and frameworks.
Collaborate with developers, business analysts, and other stakeholders to understand requirements and ensure quality.
Maintain and update automation test suites as per application changes.
Identify, record, document, and track bugs.
Ensure the highest quality of deliverables with minimal supervision.
Contribute to the continuous improvement of QA processes and automation strategies.
Skills & Qualifications:
Minimum 3 years of hands-on experience in automation testing.
Proficiency in automation tools such as Selenium, TestNG, JUnit, etc.
Solid knowledge of programming/scripting languages (Java, Python, etc.).
Familiarity with CI/CD tools like Jenkins, Git, etc.
Good understanding of software development lifecycle and agile methodologies.
Excellent analytical and problem-solving skills.
Strong communication and teamwork abilities.
Location: Mumbai (Work from Office)
Notice Period: Candidates who can join immediately or within 15 days will be preferred.
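Automated suites of the kind described above, whether Selenium, TestNG, or JUnit, share the same arrange-act-assert shape. A dependency-free sketch using the stdlib `unittest` (the `apply_discount` system-under-test is invented; a browser driver would take its place in a UI suite):

```python
# Arrange-act-assert pattern with stdlib unittest.
import io
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Toy system-under-test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    def test_applies_discount(self):
        # arrange (inputs) -> act (call) -> assert (expected result)
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_rejects_bad_input(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 120)

suite = unittest.TestLoader().loadTestsFromTestCase(DiscountTests)
result = unittest.TextTestRunner(stream=io.StringIO(), verbosity=0).run(suite)
```

The same loader/runner pair is what CI tools like Jenkins invoke on every commit.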

Role Overview
As an AI/ML Engineer at Hotelzify, you will be at the forefront of designing and deploying AI components that power our conversational agent (voice + chat). You’ll be responsible for NLP/NLU pipelines, real-time inference optimization, dialogue management, retrieval-augmented generation (RAG), and LLM integration, helping the agent understand user intents, answer questions, and complete bookings live.
Key Responsibilities
- Design and implement NLP/NLU models to understand real-time user intents from text and voice.
- Build and fine-tune LLM-based conversational flows using RAG, prompt engineering, and retrieval mechanisms.
- Integrate external tools for hotel availability, pricing APIs, CRM data, and transactional workflows.
- Develop efficient real-time inference pipelines with latency under 300ms for voice and chat.
- Collaborate with frontend/backend teams to ensure seamless LLM API orchestration.
- Optimize prompt logic, dialogue memory, and fallback strategies for natural conversations.
- Conduct A/B experiments and continuous learning pipelines for feedback-driven improvement.
- Use vector databases (e.g., FAISS, Pinecone, Weaviate) for retrieval over hotel-related data.
- Work on voice-specific challenges: STT (Speech-to-Text), TTS, and intent detection over audio streams.
Tech Stack
- Languages: Python, Node.js
- AI/ML: LangChain, Transformers (Hugging Face), OpenAI APIs, LlamaIndex, RAG, Whisper, NVIDIA NeMo
- Infra: AWS (EC2, RDS, EKS, Lambda), Docker, Redis
- Databases: PostgreSQL, MongoDB, Pinecone / Weaviate / Qdrant
- Voice APIs: Plivo, Twilio, Google Speech, AssemblyAI
What We’re Looking For
- 3+ years in AI/ML/NLP-focused roles, preferably in production environments.
- Strong understanding of modern LLM pipelines, LangChain/RAG architectures.
- Experience building or integrating real-time conversational AI systems.
- Comfortable with voice-based systems: STT, TTS, and real-time latency tuning.
- Hands-on experience with fine-tuning or prompt-tuning transformer models.
- Bonus: Experience working in travel, hospitality, or e-commerce domains.
Nice to Have
- Prior work with agents that use [tool calling] / [function calling] paradigms.
- Knowledge of reinforcement learning for dialogue optimization (e.g., RLHF).
- Experience deploying on GPU-based infrastructure (e.g., AWS EC2 with NVIDIA).
Why Join Us?
- Work on a real product used by thousands of guests every day.
- Build India’s first real-time AI agent for hotel sales.
- Flexible work environment with deep ownership and autonomy.
- Get to experiment and deploy bleeding-edge ML/AI tech in production.
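The role above calls for real-time inference pipelines with latency under 300 ms. One way to keep that budget visible, sketched with an invented stub in place of the actual model call (only the 300 ms figure comes from the posting):

```python
# Guarding a latency budget around an inference call.
import time
from functools import wraps

def latency_budget(max_ms: float):
    """Decorator that records elapsed time and flags budget overruns."""
    def wrap(fn):
        @wraps(fn)
        def inner(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            elapsed_ms = (time.perf_counter() - start) * 1000
            return result, elapsed_ms, elapsed_ms <= max_ms
        return inner
    return wrap

@latency_budget(max_ms=300)
def answer_intent(text: str) -> str:
    time.sleep(0.01)            # stand-in for model inference (~10 ms)
    return "booking" if "book" in text else "faq"

reply, ms, within_budget = answer_intent("book a room for Friday")
```

In production this measurement would feed a monitoring system rather than a return tuple, so overruns alert rather than silently pass through.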


The requirements are as follows:
1) Familiarity with the Django REST Framework.
2) Experience with the FastAPI framework will be a plus.
3) Strong grasp of basic Python programming concepts (we ask a lot of questions on this in our interviews :) ).
4) Experience with databases like MongoDB, Postgres, Elasticsearch, and Redis will be a plus.
5) Experience with any ML library will be a plus.
6) Familiarity with Git, writing unit test cases for all code written, and CI/CD concepts will be a plus as well.
7) Familiarity with basic code patterns like MVC.
8) Grasp of basic data structures.
You can contact me on nine three one six one two zero one three two


Job Title: Python Developer (Full-time)
Location: Gurgaon, Onsite
Working Days: 5 days
Experience Required: 5+ Years
About the Role
We are seeking a Senior Backend Developer with strong expertise in Python-based web frameworks such as Django, Flask, or Starlette. The ideal candidate should have experience in designing and building scalable APIs and services in a high-performance environment. This is a great opportunity to work on backend systems powering critical products at scale.
Key Responsibilities
- Design, develop, and maintain robust backend services using Python
- Build and optimize RESTful APIs and microservices
- Architect and implement scalable, secure, and maintainable solutions
- Collaborate with frontend, DevOps, and QA teams to ensure smooth delivery
- Write clean, testable, and efficient code
- Troubleshoot and debug production systems
Requirements
- Minimum 5 years of experience in backend development using Python
- Strong hands-on knowledge of Django, Flask, or Starlette/FastAPI
- Proven experience in API development and integration
- Experience building scalable and high-performance systems
- Familiarity with async programming (especially for Starlette/FastAPI)
- Good understanding of database design (SQL/NoSQL)
- Knowledge of containerization tools like Docker is a plus
- Strong problem-solving and debugging skills
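The async programming called out above is the concurrency style Starlette/FastAPI endpoints rely on: overlapping I/O-bound waits instead of serializing them. A minimal stdlib sketch (the simulated "db" and "cache" calls are stand-ins for real awaitable clients):

```python
# Overlapping I/O-bound calls with asyncio.gather, standard library only.
import asyncio
import time

async def fetch(name: str, delay: float) -> str:
    await asyncio.sleep(delay)          # stands in for a DB or HTTP call
    return f"{name}:done"

async def handler() -> list:
    # Both calls wait concurrently, so total time tracks the slower one,
    # not the sum -- the core win of an async request handler.
    return await asyncio.gather(fetch("db", 0.05), fetch("cache", 0.05))

start = time.perf_counter()
results = asyncio.run(handler())
elapsed = time.perf_counter() - start
```

Inside a FastAPI route the framework owns the event loop, so the handler body is the same but without the `asyncio.run` wrapper.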
Springer Capital is a cross-border asset management firm specializing in real estate investment banking between China and the USA. We are offering a remote internship for aspiring data engineers interested in data pipeline development, data integration, and business intelligence.
The internship offers flexible start and end dates. A short quiz or technical task may be required as part of the selection process.
Responsibilities:
- Design, build, and maintain scalable data pipelines for structured and unstructured data sources
- Develop ETL processes to collect, clean, and transform data from internal and external systems
- Support integration of data into dashboards, analytics tools, and reporting systems
- Collaborate with data analysts and software developers to improve data accessibility and performance
- Document workflows and maintain data infrastructure best practices
- Assist in identifying opportunities to automate repetitive data tasks

Job Title: Sr. DevOps Engineer
Location: Bengaluru, India (Hybrid)
Reports to: Sr. Engineering Manager
About Our Client:
We are a solution-based, fast-paced tech company with a team that thrives on collaboration and innovative thinking. Our client's IoT solutions provide real-time visibility and actionable insights for logistics and supply chain management. Cloud-based, AI-enhanced metrics coupled with patented hardware optimize processes, inform strategic decision-making, and enable intelligent supply chains without costly infrastructure.
About the role : We're looking for a passionate DevOps Engineer to optimize our software delivery and infrastructure. You'll build and maintain CI/CD pipelines for our microservices, automate infrastructure, and ensure our systems are reliable, scalable, and secure. If you thrive on enhancing performance and fostering operational excellence, this role is for you.
What You'll Do 🛠️
- Cloud Platform Management: Administer and optimize AWS resources, ensuring efficient billing and cost management.
- Billing & Cost Optimization: Monitor and optimize cloud spending.
- Containerization & Orchestration: Deploy and manage applications and orchestrate them.
- Database Management: Deploy, manage, and optimize database instances and their lifecycles.
- Authentication Solutions: Implement and manage authentication systems.
- Backup & Recovery: Implement robust backup and disaster recovery strategies, for Kubernetes cluster and database backups.
- Monitoring & Alerting: Set up and maintain robust systems using tools for application and infrastructure health and integrate with billing dashboards.
- Automation & Scripting: Automate repetitive tasks and infrastructure provisioning.
- Security & Reliability: Implement best practices and ensure system performance and security across all deployments.
- Collaboration & Support: Work closely with development teams, providing DevOps expertise and support for their various application stacks.
What You'll Bring 💼
- Minimum of 4 years of experience in a DevOps or SRE role.
- Strong proficiency in AWS Cloud, including services like Lambda, IoT Core, ElastiCache, CloudFront, and S3.
- Solid understanding of Linux fundamentals and command-line tools.
- Extensive experience with CI/CD tools, GitLab CI.
- Hands-on experience with Docker and Kubernetes, specifically AWS EKS.
- Proven experience deploying and managing microservices.
- Expertise in database deployment, optimization, and lifecycle management (MongoDB, PostgreSQL, and Redis).
- Experience with Identity and Access management solutions like Keycloak.
- Experience implementing backup and recovery solutions.
- Familiarity with optimizing scaling, ideally with Karpenter.
- Proficiency in scripting (Python, Bash).
- Experience with monitoring tools such as Prometheus, Grafana, AWS CloudWatch, Elastic Stack.
- Excellent problem-solving and communication skills.
Bonus Points ➕
- Basic understanding of MQTT or general IoT concepts and protocols.
- Direct experience optimizing React.js (Next.js), Node.js (Express.js, Nest.js) or Python (Flask) deployments in a containerized environment.
- Knowledge of specific AWS services relevant to application stacks.
- Contributions to open-source projects related to Kubernetes, MongoDB, or any of the mentioned frameworks.
- AWS Certifications (AWS Certified DevOps Engineer, AWS Certified Solutions Architect, AWS Certified SysOps Administrator, AWS Certified Advanced Networking).
Why this role:
• You will help build the company from the ground up, shaping our culture and having an impact from Day 1 as part of the foundational team.
🚀 Why join Bound AI (OIP Insurtech)?
We build real-world AI workflows that transform insurance operations—from underwriting to policy issuance. You’ll join a fast-growing, global team of engineers and innovators tackling the toughest problems in document intelligence and agent orchestration. We move fast, ship impact, and value autonomy over bureaucracy.
🧭 What You'll Be Doing
- Design and deliver end‑to‑end AI solutions: from intake of SOVs, loss runs, and documents to deployed intelligent agent workflows.
- Collaborate closely with stakeholders (product, operations, engineering) to architect scalable ML & GenAI systems that solve real insurance challenges.
- Translate business needs into architecture diagrams, data flows, and system integrations.
- Choose and integrate components such as RAG pipelines, LLM orchestration (LangChain, DSPy), vector databases, and MLOps tooling.
- Oversee technical proof-of-concepts, pilot projects, and production rollout strategies.
- Establish governance and best practices for model lifecycle, monitoring, error handling, and versioning.
- Act as a trusted advisor and technical leader—mentor engineers and evangelize design principles across teams.
🎯 What We’re Looking For
- 6+ years of experience delivering technical solutions in machine learning, AI engineering or solution architecture.
- Proven track record leading design, deployment, and integration of GenAI-based systems (LLM tuning, RAG, multi-agent orchestration).
- Fluency with Python production code, cloud platforms (AWS, GCP, Azure), and container orchestration tools.
- Excellent communication skills—able to bridge technical strategy and business outcomes with clarity.
- Startup mindset—resourceful, proactive, and hands‑on when needed.
- Bonus: experience with insurance-specific workflows or document intelligence domains (SOVs, loss runs, ACORD forms).
🛠️ Core Skills & Tools
- Foundation models, LLM pipelines, and vector-based retrieval (embedding search, RAG).
- Architecture modeling and integration: APIs, microservices, orchestration frameworks (LangChain, Haystack, DSPy).
- MLOps: CI/CD for models, monitoring, feedback loops, and retraining pipelines.
- Data engineering: preprocessing, structured/unstructured data integration, pipelines.
- Infrastructure: Kubernetes, Docker, cloud deployment, serverless components.
📈 Why This Role Matters
As an AI Solution Architect, you’ll shape the blueprint for how AI transforms insurance workflows—aligning product strategy, operational impact, and technical scalability. You're not just writing code; you’re orchestrating systems that make labor-intensive processes smarter, faster, and more transparent.

Job Description: Senior Full-Stack Engineer (MERN + Python)
Location: Noida (Onsite)
Experience: 5 to 10 years
We are hiring a Senior Full-Stack Engineer with proven expertise in MERN technologies and Python backend frameworks to deliver scalable, efficient, and maintainable software solutions. You will design and build web applications and microservices, leveraging FastAPI and advanced asynchronous programming techniques to ensure high performance and reliability.
Key Responsibilities:
- Develop and maintain web applications using the MERN stack alongside Python backend microservices.
- Build efficient and scalable APIs with Python frameworks like FastAPI and Flask, utilizing AsyncIO, multithreading, and multiprocessing for optimal performance.
- Lead architecture and technical decisions spanning both MERN frontend and Python microservices backend.
- Collaborate with UX/UI designers to create intuitive and responsive user interfaces.
- Mentor junior developers and conduct code reviews to ensure adherence to best practices.
- Manage and optimize databases such as MongoDB and PostgreSQL for application and microservices needs.
- Deploy, monitor, and maintain applications and microservices on AWS cloud infrastructure (EC2, Lambda, S3, RDS).
- Implement CI/CD pipelines to automate integration and deployment processes.
- Participate in Agile development practices including sprint planning and retrospectives.
- Ensure application scalability, security, and performance across frontend and backend systems.
- Design cloud-native microservices architectures focused on high availability and fault tolerance.
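The AsyncIO techniques named above can be illustrated with a standard-library sketch; `fetch_order` is a hypothetical stand-in for the downstream call a FastAPI endpoint would await.

```python
import asyncio

async def fetch_order(order_id: int) -> dict:
    # Simulate a non-blocking I/O call (e.g., a database or downstream service).
    await asyncio.sleep(0.01)
    return {"id": order_id, "status": "shipped"}

async def fetch_all(order_ids: list[int]) -> list[dict]:
    # asyncio.gather runs the coroutines concurrently on one event loop,
    # so total latency is roughly one call, not the sum of all calls.
    return await asyncio.gather(*(fetch_order(i) for i in order_ids))

results = asyncio.run(fetch_all([1, 2, 3]))
print(results)
```

FastAPI builds on exactly this model: declaring a path operation `async def` lets the framework interleave many in-flight requests on one worker while each awaits its I/O.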
Required Skills and Experience:
- Strong hands-on experience with the MERN stack: MongoDB, Express.js, React.js, Node.js.
- Proven Python backend development expertise with FastAPI and Flask.
- Deep understanding of asynchronous programming using AsyncIO, multithreading, and multiprocessing.
- Experience designing and developing microservices and RESTful/GraphQL APIs.
- Skilled in database design and optimization for MongoDB and PostgreSQL.
- Familiar with AWS services such as EC2, Lambda, S3, and RDS.
- Experience with Git, CI/CD tools, and automated testing/deployment workflows.
- Ability to lead teams, mentor developers, and make key technical decisions.
- Strong problem-solving, debugging, and communication skills.
- Comfortable working in Agile environments and collaborating cross-functionally.


Job Title: Senior Python Backend Developer
Experience Required: 5+ Years
Location: Gurgaon
Joining: Immediate Joiner Preferred
Employment Type: Full-Time
Job Summary:
We are looking for a highly skilled Senior Python Backend Developer with a minimum of 5 years of experience in Python and its modern web frameworks.
The ideal candidate will be responsible for developing scalable backend services, designing robust APIs, and ensuring optimal performance and security of backend systems.
Mandatory Skills: Python, Django, Flask, Streamlit, Starlette, REST API development, scalable backend services.
Key Responsibilities:
- Design, build, and maintain RESTful APIs and backend systems using Python.
- Work with frameworks such as Django, Flask, Streamlit, Starlette.
- Develop scalable and high-performance backend services.
- Collaborate with frontend developers and product teams to deliver seamless integrations.
- Write clean, maintainable, and testable code.
- Troubleshoot and resolve performance and scalability issues.
- Ensure code quality through automated testing and code reviews.
Required Skills:
- Minimum 5 years of backend development experience in Python.
- Strong expertise in Django, Flask, Streamlit, and/or Starlette.
- Proven experience with API design and development.
- Strong understanding of system architecture, data modeling, and scalability best practices.
- Familiarity with CI/CD pipelines, Docker, and cloud environments is a plus.
Nice to Have:
- Experience with async programming (e.g., using FastAPI, Starlette).
- Familiarity with PostgreSQL, MongoDB, or other relational/NoSQL databases.
- Exposure to microservices architecture.

About Us
1E9 Advisors is a technology company delivering strategy and solutions. We create and manage software products and provide related support services across industries.
We have a deep understanding of energy and commodity markets, risk management, and technology, which we weave together to solve problems simply and innovatively. Our team builds modern, reliable systems that help businesses operate efficiently and make smarter decisions.
About the Role
We are looking for highly motivated Python Developer Interns who are eager to learn, take ownership, and contribute meaningfully from day one. This role provides a high-responsibility, high-learning environment where you'll work closely with experienced engineers to build real products and systems. Successful interns will be given the chance to continue in a full-time role.
Important Note: This position is not open to applicants who are currently enrolled in full-time degree programs. This internship is designed to transition into a full-time role for successful candidates, so we are seeking candidates who are available for immediate full-time employment upon completion of the internship.
Key Responsibilities
- Develop and maintain backend applications using Python and Django
- Collaborate with the team to design, build, test, and deploy features
- Debug issues and participate in daily problem-solving
- Take ownership of assigned modules or tasks
- Write clear and clean code with documentation
- Contribute to internal discussions and product planning
What We’re Looking For:
- Strong programming fundamentals, especially in Python (3.7+)
- Knowledge of data structures and algorithms
- Familiarity with Git and GitHub workflows
- Experience with general-purpose languages and basic web development
- Curiosity, analytical thinking, and attention to detail
- Clear communication and a proactive, collaborative mindset
Preferred Skills (Nice to Have)
- Django
- Django REST Framework (DRF) / FastAPI (for REST APIs)
- Frontend technologies: HTML5, CSS3, Bootstrap
- JavaScript frameworks (ReactJS / VueJS / AngularJS)
- Linux environment experience
- Shell scripting and Pandas
- Data visualization tools (D3.js / Observable)
What You’ll Gain:
- Real-world experience solving critical problems in the energy space and enterprise application development
- Mentorship from an experienced team and continuous hands-on learning
- Ownership of live modules and contributions to production systems
- A fast-paced, collaborative, and impact-driven work culture
- Potential pathway to a full-time opportunity


Are you passionate about the clean energy transition and looking to build real-world experience at the intersection of energy, data, and technology?
1E9 Advisors is seeking motivated Energy Analyst Interns to join our team. We’re a technology company that delivers strategy and software solutions across industries, with deep expertise in energy markets, commodities, and risk management.
This internship is ideal for candidates who are analytical, detail-oriented, and eager to explore how battery storage and market optimization work in real-world settings.
Important Note: This position is not open to applicants who are currently enrolled in full-time degree programs. This internship is designed to transition into a full-time role for successful candidates, so we are seeking candidates who are available for immediate full-time employment upon completion of the internship.
What You’ll Work On:
- Analyze and interpret energy market data, including pricing, generation, and capacity
- Track evolving US electricity market rules and support policy analysis
- Assist in valuation and optimization models for battery energy storage
- Collaborate on internal product development and customer-facing insights
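A battery-storage valuation of the kind described above can be sketched, under heavy simplifying assumptions, as a one-cycle price-arbitrage calculation; the prices, capacity, and round-trip efficiency figures here are hypothetical.

```python
def arbitrage_value(prices: list[float], capacity_mwh: float = 1.0,
                    efficiency: float = 0.9) -> float:
    # One-cycle arbitrage: charge at a cheap hour, discharge at a more
    # expensive later hour, discounted by round-trip efficiency.
    best = 0.0
    for buy in range(len(prices)):
        for sell in range(buy + 1, len(prices)):
            profit = (prices[sell] * efficiency - prices[buy]) * capacity_mwh
            best = max(best, profit)
    return round(best, 2)

# Hypothetical day-ahead hourly prices ($/MWh)
prices = [32.0, 28.5, 30.0, 55.0, 80.0, 60.0]
print(arbitrage_value(prices))  # buys at 28.5, sells at 80.0
```

Real models layer in multi-cycle dispatch, state-of-charge limits, degradation, and ancillary-service revenue, typically via linear programming rather than this brute-force scan.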
What We’re Looking For:
- Strong analytical and communication skills
- Proficiency in Python and/or Excel
- Interest in energy markets, clean technology, or battery storage
- Attention to detail and a proactive mindset
Preferred Skills:
- Understanding of US electricity markets
- Hands-on experience with battery storage valuation
- Effective written and verbal communication
What You’ll Gain:
- Practical exposure to energy markets and clean tech analytics
- Mentorship and hands-on project ownership
- Experience contributing to a production-grade software platform (BatteryOS)
- Potential pathway to a full-time opportunity