50+ Python Jobs in India
Apply to 50+ Python Jobs on CutShort.io. Find your next job, effortlessly. Browse Python Jobs and apply today!



Mandatory Skills
- Able to efficiently design and implement software features.
- Expertise in at least one object-oriented programming language or stack (Python, TypeScript, Java, Node.js, Angular, React.js, C#, C++).
- Good knowledge of data structures and their correct usage.
- Open to learning any new software development skill if needed for the project.
- Alignment and utilisation of the core enterprise technology stacks and integration capabilities throughout the transition states.
- Participate in planning, definition, and high-level design of the solution and in the exploration of solution alternatives.
- Identify bottlenecks and bugs, and devise appropriate solutions.
- Define, explore, and support the implementation of enablers to evolve solution intent, working directly with Agile teams to implement them.
- Good knowledge of the implications of cyber security on production.
- Experience architecting & estimating deep technical custom solutions & integrations.

Profile: AWS Data Engineer
Mode- Hybrid
Experience- 5-7 years
Locations - Bengaluru, Pune, Chennai, Mumbai, Gurugram
Roles and Responsibilities
- Design and maintain ETL pipelines using AWS Glue and Python/PySpark
- Optimize SQL queries for Redshift and Athena
- Develop Lambda functions for serverless data processing
- Configure AWS DMS for database migration and replication
- Implement infrastructure as code with CloudFormation
- Build optimized data models for performance
- Manage RDS databases and AWS service integrations
- Troubleshoot and improve data processing efficiency
- Gather requirements from business stakeholders
- Implement data quality checks and validation
- Document data pipelines and architecture
- Monitor workflows and implement alerting
- Keep current with AWS services and best practices
Required Technical Expertise:
- Python/PySpark for data processing
- AWS Glue for ETL operations
- Redshift and Athena for data querying
- AWS Lambda and serverless architecture
- AWS DMS and RDS management
- CloudFormation for infrastructure
- SQL optimization and performance tuning
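The ETL responsibilities above all follow the same extract-transform-load shape. As a rough, stdlib-only sketch of that pattern (Glue/PySpark are not assumed available here; the table name, columns, and sample data are invented for illustration), cleansing rows, rejecting bad records as a quality check, and loading a warehouse table might look like this:

```python
import csv
import io
import sqlite3

def transform(row):
    # Cleanse: strip whitespace, cast the amount; reject rows that fail (a
    # simple data-quality check, analogous to a Glue job's filtering step).
    try:
        return (row["id"].strip(), float(row["amount"]))
    except (KeyError, ValueError):
        return None

def run_pipeline(raw_csv, conn):
    # Extract from CSV, transform each record, load the valid ones.
    rows = [transform(r) for r in csv.DictReader(io.StringIO(raw_csv))]
    valid = [r for r in rows if r is not None]
    conn.execute("CREATE TABLE IF NOT EXISTS fact_sales (id TEXT, amount REAL)")
    conn.executemany("INSERT INTO fact_sales VALUES (?, ?)", valid)
    return len(valid), len(rows) - len(valid)

conn = sqlite3.connect(":memory:")  # stands in for Redshift/RDS
loaded, rejected = run_pipeline("id,amount\n a1 ,10.5\nb2,oops\nc3,7\n", conn)
print(loaded, rejected)  # → 2 1
```

In a real Glue job the same three stages would be a DynamicFrame read, a mapping/filter transform, and a write to Redshift or S3.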

Role: Data Scientist
Location: Bangalore (Remote)
Experience: 4 - 15 years
Skills Required - Radiology, visual images, text, classical models, multi-modal LLMs, primarily Generative AI, Prompt Engineering, Large Language Models, Speech & Text Domain AI, Python coding, AI skills, real-world evidence, healthcare domain
JOB DESCRIPTION
We are seeking an experienced Data Scientist with a proven track record in Machine Learning, Deep Learning, and a demonstrated focus on Large Language Models (LLMs) to join our cutting-edge Data Science team. You will play a pivotal role in developing and deploying innovative AI solutions that drive real-world impact to patients and healthcare providers.
Responsibilities
• LLM Development and Fine-tuning: fine-tune, customize, and adapt large language models (e.g., GPT, Llama2, Mistral, etc.) for specific business applications and NLP tasks such as text classification, named entity recognition, sentiment analysis, summarization, and question answering. Experience in other transformer-based NLP models such as BERT, etc. will be an added advantage.
• Data Engineering: collaborate with data engineers to develop efficient data pipelines, ensuring the quality and integrity of large-scale text datasets used for LLM training and fine-tuning
• Experimentation and Evaluation: develop rigorous experimentation frameworks to evaluate model performance, identify areas for improvement, and inform model selection. Experience in LLM testing frameworks such as TruLens will be an added advantage.
• Production Deployment: work closely with MLOps and Data Engineering teams to integrate models into scalable production systems.
• Predictive Model Design and Implementation: leverage machine learning/deep learning and LLM methods to design, build, and deploy predictive models in oncology (e.g., survival models)
• Cross-functional Collaboration: partner with product managers, domain experts, and stakeholders to understand business needs and drive the successful implementation of data science solutions
• Knowledge Sharing: mentor junior team members and stay up to date with the latest advancements in machine learning and LLMs
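The "Experimentation and Evaluation" responsibility above boils down to a rigorous metrics harness around model outputs. A minimal, stdlib-only sketch (the labels and predictions are invented; a real harness would wrap an LLM or classifier and frameworks like TruLens):

```python
def evaluate(gold, pred, positive="pos"):
    """Minimal evaluation harness: accuracy plus precision/recall/F1 for one
    class, the kind of check an LLM experimentation loop reports per run."""
    assert len(gold) == len(pred)
    acc = sum(g == p for g, p in zip(gold, pred)) / len(gold)
    tp = sum(g == p == positive for g, p in zip(gold, pred))
    fp = sum(p == positive and g != positive for g, p in zip(gold, pred))
    fn = sum(g == positive and p != positive for g, p in zip(gold, pred))
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return {"accuracy": acc, "precision": prec, "recall": rec, "f1": f1}

# Toy sentiment run: compare model predictions against gold labels.
m = evaluate(["pos", "neg", "pos", "neg"], ["pos", "pos", "pos", "neg"])
print(m["accuracy"], round(m["f1"], 3))  # → 0.75 0.8
```

Tracking these numbers per experiment is what makes model selection between fine-tuned LLM variants defensible.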
Qualifications Required
• Doctoral or master’s degree in Computer Science, Data Science, Artificial Intelligence, or a related field
• 5+ years of hands-on experience in designing, implementing, and deploying machine learning and deep learning models
• 12+ months of in-depth experience working with LLMs
• Proficiency in Python and NLP-focused libraries (e.g., spaCy, NLTK, Transformers, TensorFlow/PyTorch)
• Experience working with cloud-based platforms (AWS, GCP, Azure)
Additional Skills
• Excellent problem-solving and analytical abilities
• Strong communication skills, both written and verbal
• Ability to thrive in a collaborative and fast-paced environment

About the Company:
Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.
Why Gruve:
At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.
Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.
Position summary:
We are seeking a Senior Software Development Engineer – Data Engineering with 5-8 years of experience to design, develop, and optimize data pipelines and analytics workflows using Snowflake, Databricks, and Apache Spark. The ideal candidate will have a strong background in big data processing, cloud data platforms, and performance optimization to enable scalable data-driven solutions.
Key Roles & Responsibilities:
- Design, develop, and optimize ETL/ELT pipelines using Apache Spark, PySpark, Databricks, and Snowflake.
- Implement real-time and batch data processing workflows in cloud environments (AWS, Azure, GCP).
- Develop high-performance, scalable data pipelines for structured, semi-structured, and unstructured data.
- Work with Delta Lake and Lakehouse architectures to improve data reliability and efficiency.
- Optimize Snowflake and Databricks performance, including query tuning, caching, partitioning, and cost optimization.
- Implement data governance, security, and compliance best practices.
- Build and maintain data models, transformations, and data marts for analytics and reporting.
- Collaborate with data scientists, analysts, and business teams to define data engineering requirements.
- Automate infrastructure and deployments using Terraform, Airflow, or dbt.
- Monitor and troubleshoot data pipeline failures, performance issues, and bottlenecks.
- Develop and enforce data quality and observability frameworks using Great Expectations, Monte Carlo, or similar tools.
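The data-quality and observability responsibility above is, at its core, a set of declarative expectations run against each dataset. A hedged, stdlib-only sketch in the spirit of Great Expectations (the schema-rule names and sample rows are invented; real pipelines would run checks like these inside Airflow or dbt):

```python
def check_dataset(rows, schema):
    """Run simple expectations over a list of row dicts.
    Each expectation yields a (name, passed) pair for an observability report."""
    results = []
    for col, rule in schema.items():
        values = [r.get(col) for r in rows]
        if rule.get("not_null"):
            results.append((f"{col}:not_null", all(v is not None for v in values)))
        if rule.get("unique"):
            non_null = [v for v in values if v is not None]
            results.append((f"{col}:unique", len(non_null) == len(set(non_null))))
        if "min" in rule:
            results.append((f"{col}:min",
                            all(v is None or v >= rule["min"] for v in values)))
    return results

rows = [{"id": 1, "amount": 10}, {"id": 2, "amount": -5}, {"id": 2, "amount": 3}]
report = check_dataset(rows, {"id": {"not_null": True, "unique": True},
                              "amount": {"min": 0}})
print(report)
```

A failed expectation here would typically block the downstream load and page the on-call engineer via the pipeline's alerting hooks.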
Basic Qualifications:
- Bachelor’s or Master’s Degree in Computer Science or Data Science.
- 5–8 years of experience in data engineering, big data processing, and cloud-based data platforms.
- Hands-on expertise in Apache Spark, PySpark, and distributed computing frameworks.
- Strong experience with Snowflake (Warehouses, Streams, Tasks, Snowpipe, Query Optimization).
- Experience in Databricks (Delta Lake, MLflow, SQL Analytics, Photon Engine).
- Proficiency in SQL, Python, or Scala for data transformation and analytics.
- Experience working with data lake architectures and storage formats (Parquet, Avro, ORC, Iceberg).
- Hands-on experience with cloud data services (AWS Redshift, Azure Synapse, Google BigQuery).
- Experience in workflow orchestration tools like Apache Airflow, Prefect, or Dagster.
- Strong understanding of data governance, access control, and encryption strategies.
- Experience with CI/CD for data pipelines using GitOps, Terraform, dbt, or similar technologies.
Preferred Qualifications:
- Knowledge of streaming data processing (Apache Kafka, Flink, Kinesis, Pub/Sub).
- Experience in BI and analytics tools (Tableau, Power BI, Looker).
- Familiarity with data observability tools (Monte Carlo, Great Expectations).
- Experience with machine learning feature engineering pipelines in Databricks.
- Contributions to open-source data engineering projects.


Python Developer
We are looking for an enthusiastic and skilled Python Developer with a passion for AI-based application development to join our growing technology team. This position offers the opportunity to work at the intersection of software engineering and data analytics, contributing to innovative AI-driven solutions that drive business impact. If you have a strong foundation in Python, a flair for problem-solving, and an eagerness to build intelligent systems, we would love to meet you!
Key Responsibilities
• Develop and deploy AI-focused applications using Python and associated frameworks.
• Collaborate with Developers, Product Owners, and Business Analysts to design and implement machine learning pipelines.
• Create interactive dashboards and data visualizations for actionable insights.
• Automate data collection, transformation, and processing tasks.
• Utilize SQL for data extraction, manipulation, and database management.
• Apply statistical methods and algorithms to derive insights from large datasets.
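The SQL-extraction and statistics responsibilities above pair naturally. A small, stdlib-only sketch of the workflow (the table, regions, and amounts are invented for illustration; a production system would query a real database rather than an in-memory one):

```python
import sqlite3
import statistics

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("north", 120), ("north", 80), ("south", 200), ("south", 160)])

# SQL handles extraction and aggregation ...
totals = dict(conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"))

# ... and Python's statistics module derives a summary insight.
amounts = [a for (a,) in conn.execute("SELECT amount FROM orders")]
print(totals, statistics.mean(amounts))  # → {'north': 200.0, 'south': 360.0} 140
```

Pushing aggregation into SQL and keeping statistical post-processing in Python is the usual division of labour for this kind of role.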
Required Skills and Qualifications
• 2–3 years of experience as a Python Developer, with a strong portfolio of relevant projects.
• Bachelor’s degree in Computer Science, Data Science, or a related technical field.
• In-depth knowledge of Python, including frameworks and libraries such as NumPy, Pandas, SciPy, and PyTorch.
• Proficiency in front-end technologies like HTML, CSS, and JavaScript.
• Familiarity with SQL and NoSQL databases and their best practices.
• Excellent communication and team-building skills.
• Strong problem-solving abilities with a focus on innovation and self-learning.
• Knowledge of cloud platforms such as AWS is a plus.
Additional Requirements
This opportunity enhances your work-life balance with an allowance for remote work.
To be successful, your computer hardware and internet must meet these minimum requirements:
1. Laptop or Desktop:
- Type: Windows laptop or desktop
- Operating System: Windows
- Screen Size: 14 inches
- Screen Resolution: FHD (1920×1080)
- Processor: i5 or higher
- RAM: minimum 8 GB (must)
- Software: AnyDesk
- Internet Speed: 100 Mbps or higher
About ARDEM
ARDEM is a leading Business Process Outsourcing and Business Process Automation service provider. For over twenty years, ARDEM has successfully delivered business process outsourcing and business process automation services to clients in the USA and Canada. We are growing rapidly, constantly innovating to become a better service provider for our customers, and we continuously strive for excellence to become the best Business Process Outsourcing and Business Process Automation company.

We are looking for a passionate and motivated React Frontend Developer Intern to join our development team. As an intern, you will work closely with senior developers to build and enhance modern web applications. This is a great opportunity to gain hands-on experience with React and frontend technologies in a collaborative, real-world environment.
Selected intern's day-to-day responsibilities include:
1. Develop UI Components - Build reusable, responsive React components.
2. State Management - Handle app state using tools like Redux or Context API.
3. API Integration - Fetch and display data from backend services.
4. Bug Fixing - Identify and resolve UI/UX issues.
5. Code Optimization - Ensure performance and maintainable code.
6. Cross-Browser Testing - Ensure consistent UI across browsers.
7. Version Control - Use Git for code collaboration.
8. Responsive Design - Implement mobile-friendly layouts.
9. Collaboration - Work with designers, backend devs, and QA.
10. Documentation - Write clear code comments and technical notes.
Skill(s) required:
* React.js
* Git
* Docker
* Understanding of APIs
* Bonus: Python/Django/Figma
* Should be familiar with shadcn/ui components

Job Overview:
We are seeking an experienced AWS Data Engineer to join our growing data team. The ideal candidate will have hands-on experience with AWS Glue, Redshift, PySpark, and other AWS services to build robust, scalable data pipelines. This role is perfect for someone passionate about data engineering, automation, and cloud-native development.
Key Responsibilities:
- Design, build, and maintain scalable and efficient ETL pipelines using AWS Glue, PySpark, and related tools.
- Integrate data from diverse sources and ensure its quality, consistency, and reliability.
- Work with large datasets in structured and semi-structured formats across cloud-based data lakes and warehouses.
- Optimize and maintain data infrastructure, including Amazon Redshift, for high performance.
- Collaborate with data analysts, data scientists, and product teams to understand data requirements and deliver solutions.
- Automate data validation, transformation, and loading processes to support real-time and batch data processing.
- Monitor and troubleshoot data pipeline issues and ensure smooth operations in production environments.
Required Skills:
- 5 to 7 years of hands-on experience in data engineering roles.
- Strong proficiency in Python and PySpark for data transformation and scripting.
- Deep understanding and practical experience with AWS Glue, AWS Redshift, S3, and other AWS data services.
- Solid understanding of SQL and database optimization techniques.
- Experience working with large-scale data pipelines and high-volume data environments.
- Good knowledge of data modeling, warehousing, and performance tuning.
Preferred/Good to Have:
- Experience with workflow orchestration tools like Airflow or Step Functions.
- Familiarity with CI/CD for data pipelines.
- Knowledge of data governance and security best practices on AWS.


Knowledge of the Gen AI technology ecosystem, including top-tier LLMs, prompt engineering, development frameworks such as LlamaIndex and LangChain, LLM fine-tuning, and experience architecting RAGs and other LLM-based solutions for enterprise use cases.
1. Strong proficiency in programming languages like Python and SQL.
2. 3+ years of experience in predictive/prescriptive analytics, including Machine Learning algorithms (supervised and unsupervised), deep learning algorithms, and artificial neural networks such as regression, classification, ensemble models, RNN, LSTM, GRU.
3. 2+ years of experience in NLP, text analytics, Document AI, OCR, sentiment analysis, entity recognition, topic modeling.
4. Proficiency in LangChain and open LLM frameworks to perform summarization, classification, named entity recognition, question answering.
5. Proficiency in generative techniques: prompt engineering, vector DBs, and LLMs such as OpenAI, LlamaIndex, Azure OpenAI; open-source LLMs will be important.
6. Hands-on experience in GenAI technology areas including RAG architecture, fine-tuning techniques, inferencing frameworks, etc.
7. Familiarity with big data technologies/frameworks.
8. Sound knowledge of Microsoft Azure.
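The RAG architecture mentioned above has two halves: retrieve the most relevant documents for a query, then feed them to an LLM as grounding context. A toy, stdlib-only sketch of the retrieval half (the documents are invented; real systems use model-based embeddings and a vector DB rather than bag-of-words counts):

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real RAG stack would call an
    # embedding model and store vectors in a vector DB.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = ["invoices are stored in the finance portal",
        "vacation requests go through the hr system",
        "the finance portal exports invoices as pdf"]
index = [(d, embed(d)) for d in docs]  # stands in for the vector DB

def retrieve(query, k=2):
    q = embed(query)
    ranked = sorted(index, key=lambda p: cosine(q, p[1]), reverse=True)
    return [d for d, _ in ranked[:k]]

context = retrieve("where are invoices stored")
# The retrieved context would be stitched into the LLM prompt here.
print(context)
```

Swapping `embed` for a real embedding model and `index` for a vector store turns this skeleton into the standard enterprise RAG pattern.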

Job Title : Backend Developer (Node.js or Python/Django)
Experience : 2 to 5 Years
Location : Connaught Place, Delhi (Work From Office)
Job Summary :
We are looking for a skilled and motivated Backend Developer (Node.js or Python/Django) to join our in-house engineering team.
Key Responsibilities :
- Design, develop, test, and maintain robust backend systems using Node.js or Python/Django.
- Build and integrate RESTful APIs including third-party Authentication APIs (OAuth, JWT, etc.).
- Work with data stores like Redis and Elasticsearch to support caching and search features.
- Collaborate with frontend developers, product managers, and QA teams to deliver complete solutions.
- Ensure code quality, maintainability, and performance optimization.
- Write clean, scalable, and well-documented code.
- Participate in code reviews and contribute to team best practices.
Required Skills :
- 2 to 5 Years of hands-on experience in backend development.
- Proficiency in Node.js and/or Python (Django framework).
- Solid understanding and experience with Authentication APIs.
- Experience with Redis and Elasticsearch for caching and full-text search.
- Strong knowledge of REST API design and best practices.
- Experience working with relational and/or NoSQL databases.
- Must have completed at least 2 end-to-end backend projects.
Nice to Have :
- Experience with Docker or containerized environments.
- Familiarity with CI/CD pipelines and DevOps workflows.
- Exposure to cloud platforms like AWS, GCP, or Azure.
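Since the role above calls for integrating JWT-based authentication, here is a hedged, stdlib-only sketch of what an HS256 JWT sign/verify cycle involves under the hood (the payload and secret are invented; production code would normally use a maintained library such as PyJWT rather than hand-rolling this):

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> bytes:
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def sign_jwt(payload: dict, secret: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = b64url(hmac.new(secret, header + b"." + body, hashlib.sha256).digest())
    return b".".join([header, body, sig]).decode()

def verify_jwt(token: str, secret: bytes):
    header, body, sig = token.encode().split(b".")
    expected = b64url(hmac.new(secret, header + b"." + body,
                               hashlib.sha256).digest())
    # Constant-time comparison to avoid timing side channels.
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    return json.loads(base64.urlsafe_b64decode(body + b"=" * (-len(body) % 4)))

token = sign_jwt({"sub": "user-42"}, b"secret")
print(verify_jwt(token, b"secret"))  # → {'sub': 'user-42'}
```

The key points a backend interviewer tends to probe are the constant-time signature check and the fact that the payload is only base64-encoded, not encrypted.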

Company Overview:
Softlink Global is a leading global software provider for the Freight Forwarding, Logistics, and Supply Chain industry. Our comprehensive product portfolio includes superior technology solutions for the strategic and operational aspects of the logistics & freight forwarding business. At present, Softlink caters to 5,000+ logistics & freight companies spread across 45+ countries. Our global operations are handled by more than 300 highly experienced employees.
Company Website - https://softlinkglobal.com/
Role Overview:
Are you a testing ninja with a knack for Selenium Python Hybrid Frameworks? LogiBUILD is calling for an Automation Tester with 2–3 years of magic in test automation, Jenkins, GitHub, and all things QA! You’ll be the hero ensuring our software is rock-solid, crafting automated test scripts, building smart frameworks, and keeping everything running smooth with CI and version control. If “breaking things to make them unbreakable” sounds like your jam, we’ve got the perfect spot for you!
Key Responsibilities:
- Automation Framework Development: Design, develop, and maintain Selenium-based automated test scripts using Python, focusing on creating a hybrid automation framework to handle a variety of test scenarios.
- Framework Optimization & Maintenance: Continuously optimize and refactor automation frameworks for performance, reliability, and maintainability. Provide regular updates and improvements to automation processes.
- Test Automation & Execution: Execute automated tests for web applications, analyze results, and report defects, collaborating closely with QA engineers and developers for continuous improvement.
- Version Control Management: Manage source code repositories using GitHub, including branching, merging, and maintaining proper version control processes for test scripts and frameworks.
- Collaborative Work: Work closely with developers, QA, and other team members to ensure smooth collaboration between manual and automated testing efforts. Help in integrating automated tests into the overall SDLC/STLC.
- Documentation: Document the test strategy, framework design, and test execution reports to ensure clear communication and knowledge sharing across the team.
- Test Automation Knowledge: Experience in test automation for web-based applications, including functional, regression, and integration tests.
- Debugging & Troubleshooting: Strong problem-solving skills to debug and troubleshoot issues in automation scripts, Jenkins pipelines, and test environments.
Requirements:
- Bachelor’s degree in Computer Science, Engineering, or a related field.
- 2-3 years of experience in a similar role, with hands-on experience with the tools & frameworks mentioned above.
- Certifications in Selenium, Python, or automation testing.
- Familiarity with Agile or Scrum methodologies.
- Excellent problem-solving and communication skills.
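A "hybrid" Selenium framework like the one described above typically combines data-driven and keyword-driven layers: test steps live in data, and a thin engine maps keywords onto driver calls. A minimal skeleton of that idea (a fake driver stands in for `selenium.webdriver` so the sketch runs anywhere; the locators and URL are invented):

```python
class FakeDriver:
    """Stand-in for selenium.webdriver; records actions instead of driving a browser."""
    def __init__(self):
        self.log = []
    def get(self, url): self.log.append(("open", url))
    def type(self, locator, text): self.log.append(("type", locator, text))
    def click(self, locator): self.log.append(("click", locator))

# Data-driven half: the test case is pure data, editable without touching code.
LOGIN_TEST = [
    ("open", "https://example.test/login"),
    ("type", "#user", "qa"),
    ("type", "#pass", "secret"),
    ("click", "#submit"),
]

# Keyword-driven half: the engine maps keywords onto driver methods.
KEYWORDS = {
    "open": lambda d, url: d.get(url),
    "type": lambda d, loc, text: d.type(loc, text),
    "click": lambda d, loc: d.click(loc),
}

def run(steps, driver):
    for keyword, *args in steps:
        KEYWORDS[keyword](driver, *args)
    return driver.log

log = run(LOGIN_TEST, FakeDriver())
print(len(log))  # → 4
```

In the real framework, `FakeDriver` becomes a Selenium WebDriver, the step tables come from Excel/CSV, and Jenkins runs `run(...)` per suite in the CI pipeline.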

What We’re Looking For
- 4+ years of backend development experience in scalable web applications.
- Strong expertise in Python, Django ORM, and RESTful API design.
- Familiarity with relational databases such as PostgreSQL and MySQL
- Comfortable working in a startup environment with multiple priorities.
- Understanding of cloud-native architectures and SaaS models.
- Strong ownership mindset and ability to work with minimal supervision.
- Excellent communication and teamwork skills.


Job Title: AI & Machine Learning Developer
Location: Surat, near Railway Station
Experience: 1-2 Years
Responsibilities:
- Develop and optimize machine learning models for core product features.
- Collaborate with product and engineering teams to integrate AI solutions.
- Work with data pipelines, model training, and deployment workflows.
- Continuously improve models using feedback and new data.
Requirements:
- 1+ years of experience in ML or AI development.
- Strong Python skills; hands-on with libraries like scikit-learn, TensorFlow, or PyTorch.
- Experience in data preprocessing, model evaluation, and basic deployment.
- Familiarity with APIs and integrating ML into production (e.g., Flask/FastAPI).
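The preprocessing and model-evaluation workflow asked for above can be sketched end to end in a few lines. This is a hedged, stdlib-only stand-in (a nearest-centroid classifier on invented 2-D data) for what would normally be done with scikit-learn's `train_test_split` and estimators:

```python
import random
import statistics

def train_test_split(X, y, test_frac=0.25, seed=0):
    # Shuffle indices deterministically, then split: the basic guard
    # against evaluating a model on data it was trained on.
    idx = list(range(len(X)))
    random.Random(seed).shuffle(idx)
    cut = int(len(idx) * (1 - test_frac))
    tr, te = idx[:cut], idx[cut:]
    return ([X[i] for i in tr], [X[i] for i in te],
            [y[i] for i in tr], [y[i] for i in te])

class NearestCentroid:
    """Tiny classifier: predict the class whose training centroid is closest."""
    def fit(self, X, y):
        self.centroids = {
            c: [statistics.mean(x[j] for x, lbl in zip(X, y) if lbl == c)
                for j in range(len(X[0]))]
            for c in set(y)}
        return self
    def predict(self, X):
        def dist(a, b): return sum((u - v) ** 2 for u, v in zip(a, b))
        return [min(self.centroids, key=lambda c: dist(x, self.centroids[c]))
                for x in X]

# Two well-separated toy clusters.
X = [[0, 0], [1, 0], [0, 1], [1, 1], [9, 9], [10, 9], [9, 10], [10, 10]]
y = ["a", "a", "a", "a", "b", "b", "b", "b"]
Xtr, Xte, ytr, yte = train_test_split(X, y)
model = NearestCentroid().fit(Xtr, ytr)
acc = sum(p == t for p, t in zip(model.predict(Xte), yte)) / len(yte)
print(acc)  # → 1.0
```

The same fit/predict/score rhythm carries over directly to scikit-learn, TensorFlow, or PyTorch models in production.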

We are looking for passionate developers with 4 - 8 years of experience in software development to join Metron Security team as Software Engineer.
Metron Security provides automation and integration services to leading Cyber Security companies. Our engineering team works on leading security platforms including - Splunk, IBM’s QRadar, ServiceNow, Crowdstrike, Cybereason, and other SIEM and SOAR platforms.
Software Engineer is a challenging role within Cyber Security Engineering integration development. The role involves developing a product/service that achieves high-performance data exchange between two or more Cyber Security platforms. A Software Engineer is responsible for end-to-end delivery of the project, from gathering requirements from the customer to deploying the project for them on-prem or on the cloud, depending on the nature of the project. We follow engineering best practices and keep evolving; we are agile. The Software Engineer is at the core of this evolution process.
Each integration requires reskilling yourself in the technology needed for that project. If you are passionate about programming and believe in software engineering best practices, here is what we offer:
- Developer-centric culture - No bureaucracy and red-tapes
- Chance to work on 200+ security platforms
- Opportunity to engage with end-users (customers) rather than being just a cog in the wheel
Position: Senior Software Engineer
Location: Pune
Mandatory Skills
- Able to efficiently design and implement software features.
- Expertise in at least one object-oriented programming language or stack (Python, TypeScript, Java, Node.js, Angular, React.js, C#, C++).
- Good knowledge of data structures and their correct usage.
- Open to learning any new software development skill if needed for the project.
- Alignment and utilisation of the core enterprise technology stacks and integration capabilities throughout the transition states.
- Participate in planning, definition, and high-level design of the solution and in the exploration of solution alternatives.
- Identify bottlenecks and bugs, and devise appropriate solutions.
- Define, explore, and support the implementation of enablers to evolve solution intent, working directly with Agile teams to implement them.
- Good knowledge of the implications of cyber security on production.
- Experience architecting & estimating deep technical custom solutions & integrations.
Added advantage:
- You have experience in Cyber Security domain.
- You have developed software using web technologies.
- You have handled a project from start to end.
- You have worked in an Agile Development project and have experience of writing and estimating User Stories.
- Contribution to open source - Please share your link in the application/resume.

🚀 We’re Hiring! | AI/ML Engineer – Computer Vision
📍 Location: Noida | 🕘 Full-Time
🔍 What We’re Looking For:
• 4+ years in AI/ML (Computer Vision)
• Python, OpenCV, TensorFlow, PyTorch, etc.
• Hands-on with object detection, face recognition, classification
• Git, Docker, Linux experience
• Curious, driven, and ready to build impactful products
💡 Be part of a fast-growing team, build products used by brands like Biba, Zivame, Costa Coffee & more!

Sr. DevOps Engineer – 12+ Years of Experience
Key Responsibilities:
Design, implement, and manage CI/CD pipelines for seamless deployments.
Optimize cloud infrastructure (AWS, Azure, GCP) for high availability and scalability.
Manage and automate infrastructure using Terraform, Ansible, or CloudFormation.
Deploy and maintain Kubernetes, Docker, and container orchestration tools.
Ensure security best practices in infrastructure and deployments.
Implement logging, monitoring, and alerting solutions (Prometheus, Grafana, ELK, Datadog).
Troubleshoot and resolve system and network issues efficiently.
Collaborate with development, QA, and security teams to streamline DevOps processes.
Required Skills:
Strong expertise in CI/CD tools (Jenkins, GitLab CI/CD, ArgoCD).
Hands-on experience with cloud platforms (AWS, GCP, or Azure).
Proficiency in Infrastructure as Code (IaC) tools (Terraform, Ansible).
Experience with containerization and orchestration (Docker, Kubernetes).
Knowledge of networking, security, and monitoring tools.
Proficiency in scripting languages (Python, Bash, Go).
Strong troubleshooting and performance tuning skills.
Preferred Qualifications:
Certifications in AWS, Kubernetes, or DevOps.
Experience with service mesh, GitOps, and DevSecOps.
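The monitoring and troubleshooting duties above frequently reduce to one pattern: probe a service's health with exponential backoff before declaring a deploy failed. A small, stdlib-only sketch (the probe here is faked so the example is self-contained; a real check would hit an HTTP health endpoint):

```python
import time

def backoff_delays(base=0.5, factor=2, retries=5, cap=8):
    """Exponential backoff schedule with a cap, the usual shape behind
    health-check and deployment retries in CI/CD pipelines."""
    return [min(cap, base * factor ** i) for i in range(retries)]

def wait_for_healthy(probe, delays, sleep=time.sleep):
    # Try the probe once per delay; return as soon as it passes.
    for delay in delays:
        if probe():
            return True
        sleep(delay)
    return probe()

# Fake probe that succeeds on the third attempt (for demonstration only).
attempts = {"n": 0}
def probe():
    attempts["n"] += 1
    return attempts["n"] >= 3

ok = wait_for_healthy(probe, backoff_delays(), sleep=lambda d: None)
print(ok, attempts["n"])  # → True 3
```

Injecting `sleep` as a parameter is what makes the retry logic unit-testable without real waiting, a habit worth carrying into any pipeline automation.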

- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Minimum of 3 years of professional experience in developing and deploying AI/ML solutions.
- Strong proficiency in Python and relevant libraries such as TensorFlow, PyTorch, scikit-learn, and Transformers.
- Hands-on experience with at least one Python web framework (FastAPI or Flask) for building APIs and backend services.
- Solid understanding of machine learning algorithms, deep learning architectures, and GenAI concepts.
- Experience with containerization technologies (Docker) and orchestration platforms (Kubernetes).
- Proven experience working with the Azure cloud platform and its AI/ML and container services.
- Familiarity with data engineering concepts and tools for data processing and preparation.
- Experience with CI/CD pipelines and DevOps practices.
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration skills.
- Ability to work independently and as part of a team.
Preferred Qualifications:
- Experience with MLOps practices and tools for managing the ML lifecycle.
- Familiarity with other cloud platforms (e.g., AWS, GCP).
- Experience with specific AI/ML application domains (e.g., NLP, computer vision, time series analysis).
- Contributions to open-source AI/ML projects.
- Relevant certifications in AI/ML or cloud technologies.

Junior Python Developer – Web Scraping
Mumbai, Maharashtra
Work Type: Full Time
We’re looking for a Junior Python Developer who is passionate about web scraping and data extraction. If you love automating the web, navigating anti-bot mechanisms, and writing clean, efficient code, this role is for you!
Key Responsibilities:
- Design and build robust web scraping scripts using Python.
- Work with tools like Selenium, BeautifulSoup, Scrapy, and Playwright.
- Handle challenges like dynamic content, captchas, IP blocking, and rate limiting.
- Ensure data accuracy, structure, and cleanliness during extraction.
- Optimize scraping scripts for performance and scale.
- Collaborate with the team to align scraping outputs with project goals.
Requirements:
- 6 months to 2 years of experience in web scraping using Python.
- Hands-on with requests, Selenium, BeautifulSoup, Scrapy, etc.
- Strong understanding of HTML, DOM, and browser behavior.
- Good coding practices and ability to write clean, maintainable code.
- Strong communication skills and ability to explain scraping strategies clearly.
- Based in Mumbai and ready to join immediately.
Nice to Have:
- Familiarity with headless browsers, proxy handling, and rotating user agents.
- Experience storing scraped data in JSON, CSV, or databases.
- Understanding of anti-bot protection techniques and how to bypass them.
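The extraction side of the role above is about turning raw HTML into clean, structured data. A tiny, stdlib-only sketch of that step (using `html.parser` as a stand-in for BeautifulSoup; the markup and class names are invented):

```python
from html.parser import HTMLParser

class PriceScraper(HTMLParser):
    """Collects the text of every <span class="price"> in a page."""
    def __init__(self):
        super().__init__()
        self.in_price = False
        self.prices = []
    def handle_starttag(self, tag, attrs):
        if tag == "span" and ("class", "price") in attrs:
            self.in_price = True
    def handle_data(self, data):
        if self.in_price:
            self.prices.append(data.strip())
    def handle_endtag(self, tag):
        if tag == "span":
            self.in_price = False

html = ('<div><span class="price">₹499</span><span class="name">Kettle</span>'
        '<span class="price">₹1,299</span></div>')
scraper = PriceScraper()
scraper.feed(html)
print(scraper.prices)  # → ['₹499', '₹1,299']
```

For static pages this is the whole job; dynamic content is where Selenium or Playwright take over to render the DOM before a parser like this runs.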

🏢 Mega Style Apartments
🌍 Remote (India) | 🕓 ≈ 40 h / week | 💸 ₹17 000 – ₹24 000/m (net)
Software Engineer — Full-Stack (Next.js / React / TypeScript)
About Us
Mega Style Apartments operates premium 1–4-bedroom furnished flats for business travellers, families, students and professionals.
Technology is now core to our growth, and you’ll be our first in-house developer.
You’ll work directly with:
- our director (acting CTO) for product vision & architecture, and
- a senior operations manager for day-to-day priorities & domain insight.
If you thrive on green-field builds, decisive feedback and taking technical ownership (no equity implied), read on.
What You’ll Do
- Ship guest-facing features with Next.js 15 + React 19
- Build internal tools that automate operations & surface data (LLMs where useful)
- Own the cycle — idea → code → deploy → measure → iterate
- Set up CI/CD on Vercel and keep tech-debt low
- Use AI coding assistants (Cursor, Windsurf, v0 …) to move faster without sacrificing quality
- Post daily WhatsApp updates and demo something usable every Friday
What “Great” Looks Like
✓ Plan your week on our task board, hit the deadlines you set
✓ Surface blockers early — no chasing needed
✓ Measure output (features shipped, conversion uplift, load-time cuts)
✓ Keep the codebase clean, tested and documented
Must-Have Skills
- 2 + yrs delivering production web apps end-to-end
- Next.js App Router / RSC / Server Actions (or similar)
- React, TypeScript, Tailwind CSS, shadcn/ui, TanStack Query / Zustand / Jotai
- Node.js APIs, Prisma or Drizzle ORM, solid SQL design
- Auth.js / Better-Auth, security best practices
- GitHub flow, CI pipelines, Vercel deploys
- Clear English communication; able to explain trade-offs to non-tech peers
- Self-starter comfortable being the only dev (for now)
Nice-to-Haves
- A/B testing & CRO
- Python / ML
- ETL pipelines
- Advanced SEO & CWV
- Payment APIs
- Workflow automation with n8n
Benefits
- 100 % remote
- Flexible hours (~40 h/week)
- 12 paid days off
- Health-insurance reimbursement ₹1 700/m after probation
- Performance-based bonuses
- 6-month paid probation → permanent role & full benefits
- Chance to shape the entire tech stack and engineering culture from day one
Hiring Process — What to Expect
- 5-min application form and questionnaire
- Stage 1 test (≈ 1 h, online) – tests reasoning, instructions-reading and TypeScript coding fundamentals.
- Stage 2 test (≈ 1 h, online) – digs into Next.js / React depth plus Node & SQL basics.
- 30-min live Zoom call – meet the hiring manager.
Mega Style Apartments is an equal-opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.


Job Title: Multi Stack Trainer
Skills: JavaScript, React.js, Angular, HTML5, Node.js (Express, NestJS), Python (Django)


Job Title: Full Stack Senior SME
Job Role: Heading the full stack training department
Skills: HTML, JavaScript, Python, React.js, SQL, Node.js
Location: Coimbatore
At RtBrick, we are at the forefront of revolutionizing the internet's backbone through cutting-edge software development techniques. The internet, and broadband networks in particular, are among the most significant technological advancements of our time, powering OTT media content, multi-player gaming and AI interactions. RtBrick is reshaping the way these networks are constructed, moving away from traditional monolithic routing systems to a more agile, disaggregated infrastructure with distributed edge network functions. This shift mirrors the transformations seen in computing and cloud technologies, and marks the most profound change in networking since the inception of IP technology.
We're pioneering a cloud-native approach, harnessing the power of container-based software, microservices, a devops philosophy, and warehouse scale tools to drive innovation.
We are seeking a qualified and driven IT Infrastructure Engineer to become a member of our team in Bangalore (BLR). The ideal candidate will have a solid background in network infrastructure, Linux, CI/CD, any scripting language or Python, and virtualization. This position involves managing and optimizing our network operations to ensure secure and efficient connectivity across all systems.
Requirements:
· Expertise in configuring network routers and switches
· Skilled in using the Linux CLI (Command Line Interface)
· Practical experience with open-source networking tools, along with basic scripting skills in a language such as Python
· Two years of demonstrated network engineering expertise, with a solid grasp of network protocols and infrastructure and a good understanding of CI/CD tools
· Bachelor's degree in Computer Science, Information Technology, or a related field
· Demonstrated expertise in managing Linux systems and substantial hands-on experience with virtualization technologies
· Eagerness to master automation techniques
· Team player with a can-do attitude, able to work well in a group while contributing effectively as an individual
Responsibilities:
Network Infrastructure Management:
Configure, manage, and troubleshoot routers, switches, firewalls, and wireless networks.
Maintain and optimize network performance to ensure reliability and security.
System Administration:
Manage and maintain Linux-based systems, ensuring high availability and performance.
Implement and support virtualization solutions
Open-Source Tools & Scripting:
Utilize open-source tools for various network management and monitoring tasks.
Develop and maintain lightweight scripts for automation and task optimization.
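The "basic scripting" this role calls for often amounts to parsing the output of standard Linux networking tools. As a minimal sketch (the sample output and function name are illustrative, not from any specific RtBrick tooling), here is a script that extracts interface IPv4 addresses from `ip -o addr` output:

```python
import re

# Sample one-record-per-line output from the Linux `ip -o addr` command.
SAMPLE = """\
1: lo    inet 127.0.0.1/8 scope host lo
2: eth0    inet 192.168.1.10/24 brd 192.168.1.255 scope global eth0
2: eth0    inet6 fe80::1/64 scope link
"""

def parse_inet_addrs(text):
    """Extract (interface, IPv4 address) pairs; inet6 lines are skipped."""
    pattern = re.compile(r"^\d+:\s+(\S+)\s+inet\s+([\d.]+)/\d+", re.MULTILINE)
    return pattern.findall(text)

if __name__ == "__main__":
    for iface, addr in parse_inet_addrs(SAMPLE):
        print(f"{iface}: {addr}")
```

In practice the same function would consume `subprocess.run(["ip", "-o", "addr"], capture_output=True)` output rather than a hardcoded sample.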


We are looking for a Senior AI/ML Engineer with expertise in Generative AI (GenAI) integrations, APIs, and Machine Learning (ML) algorithms, and strong hands-on experience in Python and statistical and predictive modeling.
Key Responsibilities:
• Develop and integrate GenAI solutions using APIs and custom models.
• Design, implement, and optimize ML algorithms for predictive modeling and data-driven insights.
• Leverage statistical techniques to improve model accuracy and performance.
• Write clean, well-documented, and testable code while adhering to coding standards and best practices.
Required Skills:
• 4+ years of experience in AI/ML, with a strong focus on GenAI integrations and APIs.
• Proficiency in Python, including libraries like TensorFlow, PyTorch, Scikit-learn, and Pandas.
• Strong expertise in statistical modeling and ML algorithms (Regression, Classification, Clustering, NLP, etc.).
• Hands-on experience with RESTful APIs and AI model deployment.
Job Overview:
We are looking for a skilled Senior Backend Engineer to join our team. The ideal candidate will have a strong foundation in Java and Spring, with proven experience in building scalable microservices and backend systems. This role also requires familiarity with automation tools, Python development, and working knowledge of AI technologies.
Responsibilities:
- Design, develop, and maintain backend services and microservices.
- Build and integrate RESTful APIs across distributed systems.
- Ensure performance, scalability, and reliability of backend systems.
- Collaborate with cross-functional teams and participate in agile development.
- Deploy and maintain applications on AWS cloud infrastructure.
- Contribute to automation initiatives and AI/ML feature integration.
- Write clean, testable, and maintainable code following best practices.
- Participate in code reviews and technical discussions.
Required Skills:
- 4+ years of backend development experience.
- Strong proficiency in Java and Spring/Spring Boot frameworks.
- Solid understanding of microservices architecture.
- Experience with REST APIs, CI/CD, and debugging complex systems.
- Proficient in AWS services such as EC2, Lambda, S3.
- Strong analytical and problem-solving skills.
- Excellent communication in English (written and verbal).
Good to Have:
- Experience with automation tools like Workato or similar.
- Hands-on experience with Python development.
- Familiarity with AI/ML features or API integrations.
- Comfortable working with US-based teams (flexible hours).
We are hiring a skilled Backend Developer to design and manage server-side applications, APIs, and database systems.
Key Responsibilities:
- Develop and manage APIs with Node.js and Express.js.
- Work with MongoDB and Mongoose for database management.
- Implement secure authentication using JWT.
- Optimize backend systems for performance and scalability.
- Deploy backend services on VPS and manage servers.
- Collaborate with frontend teams and use Git/GitHub for version control.
Required Skills:
- Node.js, Express.js
- MongoDB, Mongoose
- REST API, JWT
- Git, GitHub, VPS hosting
Qualifications:
- Bachelor’s degree in Computer Science or related field.
- Strong portfolio or GitHub profile preferred.
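For intuition about the JWT authentication mentioned above, here is a from-scratch HS256 sketch using only standard-library primitives. This is purely illustrative of what a token contains and how verification works; production code should use a maintained library (e.g. jsonwebtoken in Node.js), and the secret and payload here are made up:

```python
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> str:
    """Base64url-encode without padding, as the JWT spec requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: bytes) -> str:
    """Build a compact HS256 JWT: header.payload.signature."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_jwt(token: str, secret: bytes) -> dict:
    """Check the signature in constant time, then decode the payload."""
    header, body, sig = token.split(".")
    signing_input = f"{header}.{body}".encode()
    expected = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid signature")
    padded = body + "=" * (-len(body) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

token = sign_jwt({"sub": "user-42"}, b"demo-secret")
print(verify_jwt(token, b"demo-secret"))
```

Note the sketch omits the `exp`/`iat` claims and algorithm allow-listing that a real auth layer must enforce.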

Job Description:
Wissen Technology is looking for a skilled Automation Anywhere Engineer to join our dynamic team in Bangalore. The ideal candidate will have hands-on experience in Automation Anywhere, Document Automation, SQL, and Python, with a strong background in designing and implementing automation solutions.
Key Responsibilities:
- Design, develop, and deploy automation solutions using Automation Anywhere.
- Work on Document Automation to extract, process, and validate structured/unstructured data.
- Develop scripts and automation solutions using Python for enhanced process efficiency.
- Optimize data processing workflows and database queries using SQL.
- Collaborate with cross-functional teams to identify automation opportunities and enhance business processes.
- Perform unit testing, debugging, and troubleshooting of automation scripts.
- Ensure adherence to industry best practices and compliance standards in automation processes.
- Provide support, maintenance, and enhancements to existing automation solutions.
Required Skills & Qualifications:
- 4 to 8 years of experience in RPA development using Automation Anywhere.
- Strong expertise in Automation Anywhere A360 (preferred).
- Hands-on experience with Document Automation tools and technologies.
- Proficiency in Python for scripting and automation.
- Strong knowledge of SQL for data processing and querying.
- Experience in troubleshooting, debugging, and optimizing automation workflows.
- Ability to work in an Agile environment and collaborate with cross-functional teams.
- Excellent problem-solving skills and attention to detail.
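The Automation Anywhere bots themselves are built in that platform's own tooling, but the Python and SQL side of this role often looks like the following sketch: loading records extracted by a document-automation step into a database and validating them with SQL. The invoice fields and business rule are hypothetical:

```python
import sqlite3

# Hypothetical records produced by a document-automation extraction step.
records = [
    {"invoice_id": "INV-001", "amount": 1200.50, "vendor": "Acme"},
    {"invoice_id": "INV-002", "amount": -15.00, "vendor": "Globex"},
    {"invoice_id": "INV-003", "amount": 860.00, "vendor": "Acme"},
]

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE invoices (invoice_id TEXT PRIMARY KEY, amount REAL, vendor TEXT)"
)
conn.executemany(
    "INSERT INTO invoices VALUES (:invoice_id, :amount, :vendor)", records
)

# Validation via SQL: flag rows that break a basic business rule.
bad = conn.execute("SELECT invoice_id FROM invoices WHERE amount <= 0").fetchall()

# Aggregation for downstream reporting, excluding the flagged rows.
totals = conn.execute(
    "SELECT vendor, SUM(amount) FROM invoices "
    "WHERE amount > 0 GROUP BY vendor ORDER BY vendor"
).fetchall()
print(bad, totals)
```

In a production bot the flagged rows would typically be routed to an exception queue for human review rather than simply printed.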
We are looking for passionate people who love solving interesting and complex technology challenges and are enthusiastic about building an industry-first, innovative product to solve new-age, real-world problems. This role requires strategic leadership, the ability to manage complex technical challenges, and the ability to drive innovation while ensuring operational excellence. As a Backend SDE-2, you will collaborate with key stakeholders across business, product management, and operations to ensure alignment with the organization's goals, and play a critical role in shaping the technology roadmap and engineering culture.
Key Responsibilities
- Strategic Planning: Work closely with senior leadership to develop and implement engineering strategies that support business objectives. Understand broader organization goals and prepare technology roadmaps.
- Technical Excellence: Guide the team in designing and implementing scalable, extensible and secure software systems. Drive the adoption of best practices in technical architecture, coding standards, and software testing to ensure product delivery with highest speed AND quality.
- Project and Program Management: Set aggressive yet realistic timelines with all stakeholders, and ensure engineering projects are delivered on schedule, to the highest quality standards, and within budget. Use agile methodologies to manage the development process and resolve bottlenecks.
- Cross-functional collaboration: Collaborate with Product Management, Design, Business, and Operations teams to define project requirements and deliverables. Ensure the smooth integration of engineering efforts across the organization.
- Risk Management: Anticipate and mitigate technical risks and roadblocks. Proactively identify areas of technical debt and drive initiatives to reduce it.
Required Qualifications
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- 1-3 years of experience in software engineering
- Excellent problem-solving skills, with the ability to diagnose and resolve complex technical challenges.
- Proven track record of successfully delivering large-scale, high-impact software projects.
- Strong understanding of software design principles and patterns.
- Expertise in multiple programming languages and modern development frameworks.
- Experience with cloud infrastructure (AWS), microservices, and distributed systems.
- Experience with relational and non-relational databases.
- Experience with Redis, ElasticSearch.
- Experience in DevOps, CI/CD pipelines, and infrastructure automation.
- Strong communication and interpersonal skills, with the ability to influence and inspire teams and stakeholders at all levels.
Skills: MySQL, Python, Django, AWS, NoSQL, Kafka, Redis, ElasticSearch

Role Description
This is a full-time on-site role for a Full Stack Developer at Zenius IT Services in Hyderabad. The Full Stack Developer will be responsible for back-end and front-end web development, software development, and utilizing CSS to enhance user interfaces.
Experience Required: 0-3 Years
Required Technical and Professional Expertise
- Bachelor’s degree in computer science, Engineering, or a related field
- Strong experience with front-end technologies such as HTML, CSS, JavaScript, and popular frameworks like React, Angular, or Vue.js
- Proven experience in database modeling and design using SQL
- Experience with back-end technologies such as Node.js, Python, .NET.
- Experience working with RESTful APIs and building scalable web applications
- Strong understanding of software development principles and best practices
- Ability to work independently and as part of a team.
- Passion for learning new technologies and solving challenging problems.
- Strong Problem-Solving skills with curiosity to learn, step out of comfort zones and explore new areas.
- Strong execution discipline.
- Able to deliver work with complete ownership.
- Strong communication and collaboration skills.

Job Description: Data Engineer
Position Overview
We are seeking a skilled Python Data Engineer with expertise in designing and implementing data solutions using the AWS cloud platform. The ideal candidate will be responsible for building and maintaining scalable, efficient, and secure data pipelines while leveraging Python and AWS services to enable robust data analytics and decision-making processes.
Key Responsibilities
· Design, develop, and optimize data pipelines using Python and AWS services such as Glue, Lambda, S3, EMR, Redshift, Athena, and Kinesis.
· Implement ETL/ELT processes to extract, transform, and load data from various sources into centralized repositories (e.g., data lakes or data warehouses).
· Collaborate with cross-functional teams to understand business requirements and translate them into scalable data solutions.
· Monitor, troubleshoot, and enhance data workflows for performance and cost optimization.
· Ensure data quality and consistency by implementing validation and governance practices.
· Work on data security best practices in compliance with organizational policies and regulations.
· Automate repetitive data engineering tasks using Python scripts and frameworks.
· Leverage CI/CD pipelines for deployment of data workflows on AWS.
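The ETL/ELT responsibilities above center on transform logic of the kind below. This is a minimal sketch using only the standard library, with made-up columns; in an AWS pipeline the same logic would typically sit inside a Glue job or Lambda handler, reading raw objects from S3 and writing cleansed output back via boto3:

```python
import csv
import io

# Hypothetical raw extract; row 1002 is missing its amount, row 1003 has
# an inconsistent country code.
RAW = """order_id,amount,country
1001,250.0,IN
1002,,US
1003,99.5,in
"""

def transform(csv_text):
    """Cleanse rows: drop records with missing amounts, normalise countries."""
    cleaned = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        if not row["amount"]:
            continue  # data-quality check: skip incomplete records
        row["amount"] = float(row["amount"])
        row["country"] = row["country"].upper()
        cleaned.append(row)
    return cleaned

print(transform(RAW))
```

Validation rules like these are usually driven by a schema or data-contract definition rather than hardcoded, so the same job can serve many sources.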
We're seeking passionate, next-gen minded engineers who are excited about solving complex technical challenges and building innovative, first-of-its-kind products which make a tangible difference for our customers. As a Backend SDE-1, you will play a key role in driving strategic initiatives, collaborating with cross-functional teams across business, product, and operations to solve exciting problems. This role demands strong technical acumen, leadership capabilities, and a mindset focused on innovation and operational excellence.
We value individuals who think independently, challenge the status quo, and bring creativity and curiosity to the table—not those who simply follow instructions. If you're passionate about solving problems and making an impact, we'd love to hear from you.
Key Responsibilities
- Strategic Planning: Work closely with senior leadership to develop and implement engineering strategies that support business objectives. Understand broader organization goals and constantly prioritise your own work.
- Technical Excellence: Understand on-the-ground problems, explore and evaluate possible solutions, and implement scalable, extensible and secure software systems. Implement and learn best practices in technical architecture, coding standards, and software testing to ensure product delivery with the highest speed AND quality.
- Project and Program Management: Set aggressive yet realistic timelines with all stakeholders, and ensure engineering projects are delivered on schedule, to the highest quality standards, and within budget. Use agile methodologies to manage the development process and resolve bottlenecks.
- Cross-functional collaboration: Collaborate with Product Managers, Design, Business, and Operations teams to define project requirements and deliverables. Ensure the smooth integration of engineering efforts across the organization.
- Risk Management: Anticipate and mitigate technical risks and roadblocks. Proactively identify areas of technical debt and drive initiatives to reduce it.
Required Qualifications
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- 1+ years of experience in software engineering
- Excellent problem-solving skills, with the ability to diagnose and resolve complex technical challenges.
- Strong understanding of software design principles and patterns.
- Hands-on experience with multiple programming languages and modern development frameworks.
- Understanding of relational and non-relational databases.
- Experience with Redis, ElasticSearch.
- Strong communication and interpersonal skills, with the ability to influence and inspire teams and stakeholders at all levels.
Skills: MySQL, Python, Django, AWS, NoSQL, Kafka, Redis, ElasticSearch

Key Responsibilities
- Design, develop, and maintain automated test scripts using Python, pytest, and Selenium for Salesforce and web applications.
- Create and manage test environments using Docker to ensure consistent testing conditions.
- Collaborate with developers, business analysts, and stakeholders to understand requirements and define test scenarios.
- Execute automated and manual tests, analyze results, and report defects using GitLab or other tracking tools.
- Perform regression, functional, and integration testing for Salesforce applications and customizations.
- Ensure test coverage for Salesforce features, including custom objects, workflows, and Apex code.
- Contribute to continuous integration/continuous deployment (CI/CD) pipelines in GitLab for automated testing.
- Document test cases, processes, and results to maintain a comprehensive testing repository.
- Stay updated on Salesforce updates, testing tools, and industry best practices.
Required Qualifications
- 1-3 years of experience in automation testing, preferably with exposure to Salesforce applications.
- Proficiency in Python, pytest, Selenium, Docker, and GitLab for test automation and version control.
- Understanding of software testing methodologies, including functional, regression, and integration testing.
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- Strong problem-solving skills and attention to detail.
- Excellent verbal and written communication skills.
- Ability to work in a collaborative, fast-paced team environment.
Preferred Qualifications
- Experience with Salesforce platform testing, including Sales Cloud, Service Cloud, or Marketing Cloud.
- Active Salesforce Trailhead profile with demonstrated learning progress (please include Trailhead profile link in application).
- Salesforce certifications (e.g., Salesforce Administrator or Platform Developer) are a plus.
- Familiarity with testing Apex code, Lightning components, or Salesforce integrations.
- Experience with Agile/Scrum methodologies.
- Knowledge of Webkul’s product ecosystem or e-commerce platforms is an advantage.
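As a sketch of the pytest side of this role, here is a tiny test module for a hypothetical validation helper standing in for a Salesforce custom-object rule (the function, field names and stage values are invented for illustration). A real suite would additionally drive the UI through Selenium WebDriver and run inside Docker in the GitLab CI pipeline:

```python
def validate_opportunity(record):
    """Hypothetical stand-in for a Salesforce Opportunity validation rule."""
    errors = []
    if not record.get("Name"):
        errors.append("Name is required")
    if record.get("Amount", 0) < 0:
        errors.append("Amount must be non-negative")
    if record.get("StageName") not in {"Prospecting", "Closed Won", "Closed Lost"}:
        errors.append("Unknown stage")
    return errors

# pytest auto-collects any test_* function; they also run as plain calls.
def test_valid_record():
    rec = {"Name": "Acme Renewal", "Amount": 5000, "StageName": "Closed Won"}
    assert validate_opportunity(rec) == []

def test_invalid_record():
    rec = {"Name": "", "Amount": -10, "StageName": "???"}
    assert len(validate_opportunity(rec)) == 3

if __name__ == "__main__":
    test_valid_record()
    test_invalid_record()
    print("ok")
```

Running `pytest` against a file like this is the unit-level layer; the regression and integration layers then exercise real orgs via the Salesforce APIs or browser automation.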
Role: Senior Software Engineer - Backend
Location: In-Office, Bangalore, Karnataka, India
Job Summary:
We are seeking a highly skilled and experienced Senior Backend Engineer with a minimum of 3 years of experience in product building to join our dynamic and innovative team. In this role, you will be responsible for designing, developing, and maintaining robust backend systems that power our applications. You will work closely with cross-functional teams to ensure seamless integration between frontend and backend components, leveraging your expertise to architect scalable, secure, and high-performance solutions. As a senior team member, you will mentor junior developers and lead technical initiatives to drive innovation and excellence.
Annual Compensation: 12-18 LPA
Responsibilities:
- Lead the design, development, and maintenance of scalable and efficient backend systems and APIs.
- Architect and implement complex backend solutions, ensuring high availability and performance.
- Collaborate with product managers, frontend developers, and other stakeholders to deliver comprehensive end-to-end solutions.
- Design and optimize data storage solutions using relational databases and NoSQL databases.
- Mentor and guide junior developers, fostering a culture of knowledge sharing and continuous improvement.
- Implement and enforce best practices for code quality, security, and performance optimization.
- Develop and maintain CI/CD pipelines to automate build, test, and deployment processes.
- Ensure comprehensive test coverage, including unit testing, and implement various testing methodologies and tools to validate application functionality.
- Utilize cloud services (e.g., AWS, Azure, GCP) for infrastructure deployment, management, and optimization.
- Conduct system design reviews and provide technical leadership in architectural discussions.
- Stay updated with industry trends and emerging technologies to drive innovation within the team.
- Implement secure authentication and authorization mechanisms and ensure data encryption for sensitive information.
- Design and develop event-driven applications utilizing serverless computing principles to enhance scalability and efficiency.
Requirements:
- Minimum of 3 years of proven experience as a Backend Engineer, with a strong portfolio of product-building projects.
- Strong proficiency in backend development using Java, Python, and JavaScript, with experience in building scalable and high-performance applications.
- Experience with popular backend frameworks and libraries for Java (e.g., Spring Boot) and Python (e.g., Django, Flask).
- Strong expertise in SQL and NoSQL databases (e.g., MySQL, MongoDB) with a focus on data modeling and scalability.
- Practical experience with caching mechanisms (e.g., Redis) to enhance application performance.
- Proficient in RESTful API design and development, with a strong understanding of API security best practices.
- In-depth knowledge of asynchronous programming and event-driven architecture.
- Familiarity with the entire web stack, including protocols, web server optimization techniques, and performance tuning.
- Experience with containerization and orchestration technologies (e.g., Docker, Kubernetes) is highly desirable.
- Proven experience working with cloud technologies (AWS/GCP/Azure) and understanding of cloud architecture principles.
- Strong understanding of fundamental design principles behind scalable applications and microservices architecture.
- Excellent problem-solving, analytical, and communication skills.
- Ability to work collaboratively in a fast-paced, agile environment and lead projects to successful completion.
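The asynchronous, event-driven requirement above is easiest to see in a small example. This sketch uses Python's asyncio (the role also lists Java and JavaScript, where the equivalent would be CompletableFuture or Promise.all); the resource names and delays are illustrative:

```python
import asyncio

async def fetch_resource(name, delay):
    """Simulate a non-blocking I/O call (e.g. a downstream API or cache read)."""
    await asyncio.sleep(delay)
    return f"{name}:done"

async def handle_request():
    # Fan out independent calls concurrently instead of awaiting them
    # one after another; total latency tracks the slowest call, not the sum.
    results = await asyncio.gather(
        fetch_resource("db", 0.02),
        fetch_resource("cache", 0.01),
        fetch_resource("api", 0.03),
    )
    return results

print(asyncio.run(handle_request()))
```

`asyncio.gather` preserves argument order in its results, which keeps downstream handling deterministic even though completion order varies.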


Job Summary:
- We are looking for a highly motivated and skilled Software Engineer to join our team.
- This role requires a strong understanding of the software development lifecycle, proficiency in coding, and excellent communication skills.
- The ideal candidate will be responsible for production monitoring, resolving minor technical issues, collecting client information, providing effective client interactions, and supporting our development team in resolving challenges
Key Responsibilities:
- Client Interaction: Serve as the primary point of contact for client queries, provide excellent communication, and ensure timely issue resolution.
- Issue Resolution: Troubleshoot and resolve minor issues related to software applications in a timely manner.
- Information Collection: Gather detailed technical information from clients, understand the problem context, and relay the information to the development leads for further action.
- Collaboration: Work closely with development leads and cross-functional teams to provide timely support and resolution for customer issues.
- Documentation: Document client issues, actions taken, and resolutions for future reference and continuous improvement.
- Software Development Lifecycle: Be involved in maintaining, supporting, and optimizing software through its lifecycle, including bug fixes and enhancements.
- Automating Redundant Support Tasks (good to have): Automate redundant, repetitive support tasks.
Required Skills and Qualifications:
Mandatory Skills:
- Expertise in at least one Object Oriented Programming language or framework (Python, Java, C#, C++, React.js, Node.js).
- Good knowledge of data structures and their correct usage.
- Open to learning any new software development skill needed for the project.
- Alignment and utilization of the core enterprise technology stacks and integration capabilities throughout the transition states.
- Participate in planning, definition, and high-level design of the solution and exploration of solution alternatives.
- Define, explore, and support the implementation of enablers to evolve solution intent, working directly with Agile teams to implement them.
- Good knowledge of the implications of cyber security on production.
- Experience architecting and estimating deep technical custom solutions and integrations.
Added advantage:
- You have developed software using web technologies.
- You have handled a project from start to end.
- You have worked on an Agile development project and have experience writing and estimating User Stories.
- Communication Skills: Excellent verbal and written communication skills, with the ability to clearly explain technical issues to non-technical clients.
- Client-Facing Experience: Strong ability to interact with clients, gather necessary information, and ensure a high level of customer satisfaction.
- Problem-Solving: Quick-thinking and proactive in resolving minor issues, with a focus on providing excellent user experience.
- Team Collaboration: Ability to collaborate with development leads, engineering teams, and other stakeholders to escalate complex issues or gather additional technical support when required.
Preferred Skills:
- Familiarity with Cloud Platforms and Cyber Security tools: Knowledge of cloud computing platforms and services (AWS, Azure, Google Cloud) and Cortex XSOAR, SIEM, SOAR, XDR tools is a plus.
- Automation and Scripting: Experience with automating processes or writing scripts to support issue resolution is an advantage.
Work Environment:
- This is a rotational shift position.
- During the evening shift (5 PM to 2 AM), you will be expected to work independently and efficiently.
- The position may require occasional weekend shifts depending on the project requirements.
- Additional benefit of night allowance.


Senior Data Engineer
Location: Bangalore, Gurugram (Hybrid)
Experience: 4-8 Years
Type: Full Time | Permanent
Job Summary:
We are looking for a results-driven Senior Data Engineer to join our engineering team. The ideal candidate will have hands-on expertise in data pipeline development, cloud infrastructure, and BI support, with a strong command of modern data stacks. You’ll be responsible for building scalable ETL/ELT workflows, managing data lakes and marts, and enabling seamless data delivery to analytics and business intelligence teams.
This role requires deep technical know-how in PostgreSQL, Python scripting, Apache Airflow, AWS or other cloud environments, and a working knowledge of modern data and BI tools.
Key Responsibilities:
PostgreSQL & Data Modeling
· Design and optimize complex SQL queries, stored procedures, and indexes
· Perform performance tuning and query plan analysis
· Contribute to schema design and data normalization
Data Migration & Transformation
· Migrate data from multiple sources to cloud or ODS platforms
· Design schema mapping and implement transformation logic
· Ensure consistency, integrity, and accuracy in migrated data
Python Scripting for Data Engineering
· Build automation scripts for data ingestion, cleansing, and transformation
· Handle file formats (JSON, CSV, XML), REST APIs, cloud SDKs (e.g., Boto3)
· Maintain reusable script modules for operational pipelines
Data Orchestration with Apache Airflow
· Develop and manage DAGs for batch/stream workflows
· Implement retries, task dependencies, notifications, and failure handling
· Integrate Airflow with cloud services, data lakes, and data warehouses
Cloud Platforms (AWS / Azure / GCP)
· Manage data storage (S3, GCS, Blob), compute services, and data pipelines
· Set up permissions, IAM roles, encryption, and logging for security
· Monitor and optimize cost and performance of cloud-based data operations
Data Marts & Analytics Layer
· Design and manage data marts using dimensional models
· Build star/snowflake schemas to support BI and self-serve analytics
· Enable incremental load strategies and partitioning
Modern Data Stack Integration
· Work with tools like DBT, Fivetran, Redshift, Snowflake, BigQuery, or Kafka
· Support modular pipeline design and metadata-driven frameworks
· Ensure high availability and scalability of the stack
BI & Reporting Tools (Power BI / Superset / Supertech)
· Collaborate with BI teams to design datasets and optimize queries
· Support development of dashboards and reporting layers
· Manage access, data refreshes, and performance for BI tools
Required Skills & Qualifications:
· 4–6 years of hands-on experience in data engineering roles
· Strong SQL skills in PostgreSQL (tuning, complex joins, procedures)
· Advanced Python scripting skills for automation and ETL
· Proven experience with Apache Airflow (custom DAGs, error handling)
· Solid understanding of cloud architecture (especially AWS)
· Experience with data marts and dimensional data modeling
· Exposure to modern data stack tools (DBT, Kafka, Snowflake, etc.)
· Familiarity with BI tools like Power BI, Apache Superset, or Supertech BI
· Version control (Git) and CI/CD pipeline knowledge is a plus
· Excellent problem-solving and communication skills
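The star-schema and data-mart responsibilities above boil down to a fact table keyed against dimension tables, queried with join-and-aggregate rollups. Here is a minimal runnable sketch using sqlite3 in place of PostgreSQL; the tables and figures are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Dimension table: descriptive attributes.
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
-- Fact table: measures, keyed to the dimension (star schema).
CREATE TABLE fact_sales (
    sale_id INTEGER PRIMARY KEY,
    product_id INTEGER REFERENCES dim_product(product_id),
    amount REAL
);
INSERT INTO dim_product VALUES (1, 'Books'), (2, 'Games');
INSERT INTO fact_sales VALUES (10, 1, 20.0), (11, 1, 5.0), (12, 2, 60.0);
""")

# Typical BI-style rollup: join fact to dimension, then aggregate.
rows = conn.execute("""
    SELECT p.category, SUM(f.amount)
    FROM fact_sales f JOIN dim_product p USING (product_id)
    GROUP BY p.category ORDER BY p.category
""").fetchall()
print(rows)
```

In the real warehouse the fact table would also carry a date dimension for partitioning and incremental loads, and tools like DBT would materialise these rollups as models.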

At Palcode.ai, we're on a mission to fix the massive inefficiencies in pre-construction. Think about it: in a $10 trillion industry, estimators still spend weeks analyzing bids, project managers struggle with scattered data, and costly mistakes slip through complex contracts. We're fixing this with purpose-built AI agents that work. Our platform cuts pre-construction workflows from weeks to hours. It's not just about AI; it's about bringing real, measurable impact to an industry ready for change. We are backed by names like AWS for Startups, Upekkha Accelerator, and Microsoft for Startups.
Why Palcode.ai
Tackle Complex Problems: Build AI that reads between the lines of construction bids, spots hidden risks in contracts, and makes sense of fragmented project data
High-Impact Code: Your code won't sit in a backlog – it goes straight to estimators and project managers who need it yesterday
Tech Challenges That Matter: Design systems that process thousands of construction documents, handle real-time pricing data, and make intelligent decisions
Build & Own: Shape our entire tech stack, from data processing pipelines to AI model deployment
Quick Impact: Small team, huge responsibility. Your solutions directly impact project decisions worth millions
Learn & Grow: Master the intersection of AI, cloud architecture, and construction tech while working with founders who've built and scaled construction software
Your Role:
- Design and build our core AI services and APIs using Python
- Create reliable, scalable backend systems that handle complex data
- Help set up cloud infrastructure and deployment pipelines
- Collaborate with our AI team to integrate machine learning models
- Write clean, tested, production-ready code
You'll fit right in if:
- You have 1 year of hands-on Python development experience
- You're comfortable with full-stack development and cloud services
- You write clean, maintainable code and follow good engineering practices
- You're curious about AI/ML and eager to learn new technologies
- You enjoy fast-paced startup environments and take ownership of your work
How we will set you up for success
- You will work closely with the Founding team to understand what we are building.
- You will be given comprehensive training about the tech stack, with an opportunity to avail virtual training as well.
- You will be involved in a monthly one-on-one with the founders to discuss feedback
- A unique opportunity to learn from the best - we are Gold partners of AWS, Razorpay, and Microsoft Startup programs, having access to rich talent to discuss and brainstorm ideas.
- You’ll have a lot of creative freedom to execute new ideas. As long as you can convince us, and you’re confident in your skills, we’re here to back you in your execution.
Location: Bangalore, Remote
Compensation: Competitive salary + Meaningful equity
If you get excited about solving hard problems that have real-world impact, we should talk.
All the best!!

Company
Crypto made easy 🚀
We are the bridge between your crypto world and everyday life; trade pairs, book flights and hotels, and purchase gift cards with your favourite currencies. All in one best-in-class digital experience. It's not rocket science.
🔗Apply link at the bottom of this post — don’t miss it!
Why Join?
By joining CryptoXpress, you'll be at the cutting edge of merging digital currency with real-world services and products. We offer a stimulating work environment where innovation and creativity are highly valued. This remote role provides the flexibility to work from any location, promoting a healthy work-life balance. We are dedicated to fostering growth and learning, offering ample opportunities for professional development in the rapidly expanding fields of AI, blockchain technology, cryptocurrency, digital marketing and e-commerce.
Role Description
We are seeking an Application Developer for a full-time remote position at CryptoXpress. In this role, you will be responsible for developing and maintaining state-of-the-art mobile and web applications that integrate seamlessly with our blockchain and API technologies. The ideal candidate will bring a passion for creating exceptional user experiences, a deep understanding of React Native and JavaScript, and experience in building responsive and scalable applications.
Job Requirements:
- Exposure and hands-on experience in mobile application development.
- Significant experience working with React web and mobile along with tools like Flux, Flow, Redux, etc.
- In-depth knowledge of JavaScript, CSS, HTML, and functional programming.
- Strong knowledge of React fundamentals, including Virtual DOM, component lifecycle, and component state.
- Comprehensive understanding of the full mobile app development lifecycle, including prototyping.
- Proficiency in type checking, unit testing, TypeScript, PropTypes, and code debugging.
- Experience working with REST APIs, document request models, offline storage, and third-party libraries.
- Solid understanding of user interface design, responsive design, and web technologies.
- Familiarity with React Native tools such as Jest, Enzyme, and ESLint.
- Basic knowledge of blockchain technology.
Essential Skill Set:
- React Native & ReactJS
- Python (Flask)
- Node.js, Next.js
- Web3.js / Ethers.js integration experience
- MongoDB, Strapi, Firebase
- API design and integration
- In-app analytics / messaging tools (e.g., Firebase Messaging)
- Wallet integrations or crypto payment gateways
How to Apply:
Interested candidates must complete the application form at
https://forms.gle/J1giXJeg993fZViX6
Join us and help shape the future of everyday crypto applications!
💡Pro Tip: Tips for Application Success
- Show your enthusiasm for crypto, travel, and digital innovation
- Mention any self-learning initiatives or personal crypto experiments
- Be honest about what you don’t know — we value growth mindsets
- Explore CryptoXpress before applying — take 2 minutes to download and try the app so you understand what we’re building


Job Description:
Interviews will be scheduled within two days.
We are seeking a highly skilled Scala Developer to join our team on an immediate basis. The ideal candidate will work remotely and collaborate with a US-based client, so excellent communication skills are essential.
Key Responsibilities:
Develop scalable and high-performance applications using Scala.
Collaborate with cross-functional teams to understand requirements and deliver quality solutions.
Write clean, maintainable, and testable code.
Optimize application performance and troubleshoot issues.
Participate in code reviews and ensure adherence to best practices.
Required Skills:
Strong experience in Scala development.
Solid understanding of functional programming principles.
Experience with frameworks like Akka, Play, or Spark is a plus.
Good knowledge of REST APIs, microservices architecture, and concurrency.
Familiarity with CI/CD, Git, and Agile methodologies.
Roles & Responsibilities
- Develop and maintain scalable backend services using Scala.
- Design and integrate RESTful APIs and microservices.
- Collaborate with cross-functional teams to deliver technical solutions.
- Write clean, efficient, and testable code.
- Participate in code reviews and ensure code quality.
- Troubleshoot issues and optimize performance.
- Stay updated on Scala and backend development best practices.
Immediate joiners preferred.


Title: Senior Software Engineer – Python (Remote: Africa, India, Portugal)
Experience: 9 to 12 Years
CTC: INR 40-50 LPA
Location Requirement: Candidates must be based in Africa, India, or Portugal. Applicants outside these regions will not be considered.
Must-Have Qualifications:
- 8+ years in software development with expertise in Python
- Kubernetes experience is important
- Strong understanding of async frameworks (e.g., asyncio)
- Experience with FastAPI, Flask, or Django for microservices
- Proficiency with Docker and Kubernetes/AWS ECS
- Familiarity with AWS, Azure, or GCP and IaC tools (CDK, Terraform)
- Knowledge of SQL and NoSQL databases (PostgreSQL, Cassandra, DynamoDB)
- Exposure to GenAI tools and LLM APIs (e.g., LangChain)
- CI/CD and DevOps best practices
- Strong communication and mentorship skills
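One of the must-haves above is comfort with async frameworks such as asyncio. As a quick illustration of why they matter for microservices, the sketch below (function names are made up) runs several simulated I/O-bound calls concurrently, so total latency tracks the slowest call rather than the sum:

```python
import asyncio

async def fetch(name: str, delay: float) -> str:
    # Stand-in for an I/O-bound call, e.g. an HTTP request to another service.
    await asyncio.sleep(delay)
    return f"{name}:done"

async def main() -> list[str]:
    # gather() schedules all three coroutines concurrently on the event loop.
    return await asyncio.gather(fetch("a", 0.01), fetch("b", 0.02), fetch("c", 0.01))

results = asyncio.run(main())
```

FastAPI builds directly on this model: declaring a route handler `async def` lets the server interleave many in-flight requests on one thread.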


What you will be doing at Webkul?
- Python Proficiency and API Integration:
- Demonstrate strong proficiency in Python programming language.
- Design and implement scalable, efficient, and maintainable code for machine learning applications.
- Integrate machine learning models with APIs to facilitate seamless communication between different software components.
- Machine Learning Model Deployment, Training, and Performance:
- Develop and deploy machine learning models for real-world applications.
- Conduct model training, optimization, and performance evaluation.
- Collaborate with cross-functional teams to ensure the successful integration of machine learning solutions into production systems.
- Large Language Model Understanding and Integration:
- Possess a deep understanding of large language models (LLMs) and their applications.
- Integrate LLMs into existing systems and workflows to enhance natural language processing capabilities.
- Stay abreast of the latest advancements in large language models and contribute insights to the team.
- LangChain and RAG-Based Systems (e.g., LlamaIndex):
- Familiarity with LangChain and RAG-based systems, such as LlamaIndex, will be a significant advantage.
- Work on the design and implementation of systems that leverage LangChain and RAG-based approaches for enhanced performance and functionality.
- LLM Integration with Vector Databases (e.g., Pinecone):
- Experience in integrating large language models with vector databases, such as Pinecone, for efficient storage and retrieval of information.
- Optimize the integration of LLMs with vector databases to ensure high-performance and low-latency interactions.
- Natural Language Processing (NLP):
- Expertise in NLP techniques such as tokenization, named entity recognition, sentiment analysis, and language translation.
- Experience with NLP libraries and frameworks like NLTK, SpaCy, Hugging Face Transformers
- Computer Vision:
- Proficiency in computer vision tasks such as image classification, object detection, segmentation, and image generation.
- Experience with computer vision libraries like OpenCV, PIL, and frameworks like TensorFlow, PyTorch, and Keras.
- Deep Learning:
- Strong understanding of deep learning concepts and architectures, including convolutional neural networks (CNNs) and recurrent neural networks (RNNs).
- Proficiency in using deep learning frameworks like TensorFlow, PyTorch, and Keras.
- Experience with model optimization, hyperparameter tuning, and transfer learning.
- Data Manipulation:
- Strong skills in data manipulation and analysis using libraries like Pandas, NumPy, and SciPy.
- Proficiency in data cleaning, preprocessing, and augmentation techniques.
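The data-manipulation skills listed above center on cleaning and preprocessing. A minimal, library-free sketch of the idea (field names are invented for illustration; real work would typically use Pandas): normalize text fields and impute missing numeric values with the column mean.

```python
from statistics import mean

def clean(records):
    """Impute missing 'score' values with the column mean and normalize names."""
    known = [r["score"] for r in records if r["score"] is not None]
    fill = mean(known)
    return [
        {
            "name": r["name"].strip().title(),
            "score": r["score"] if r["score"] is not None else fill,
        }
        for r in records
    ]

rows = [
    {"name": "  alice ", "score": 8.0},
    {"name": "BOB", "score": None},
    {"name": "carol", "score": 6.0},
]
cleaned = clean(rows)
```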


Title: Data Engineer II (Remote – India/Portugal)
Exp: 4- 8 Years
CTC: up to 30 LPA
Required Skills & Experience:
- 4+ years in data engineering or backend software development
- AI/ML experience is important
- Expert in SQL and data modeling
- Strong Python, Java, or Scala coding skills
- Experience with Snowflake, Databricks, AWS (S3, Lambda)
- Background in relational and NoSQL databases (e.g., Postgres)
- Familiar with Linux shell and systems administration
- Solid grasp of data warehouse concepts and real-time processing
- Excellent troubleshooting, documentation, and QA mindset
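To make "expert in SQL and data modeling" concrete, here is a small self-contained sketch using Python's built-in sqlite3 module (table and column names are invented): a dimension table, a fact table referencing it, and the kind of per-group aggregate a warehouse model is shaped around.

```python
import sqlite3

# In-memory database: a dimension (users) and a fact table (orders).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, region TEXT NOT NULL);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         user_id INTEGER REFERENCES users(id),
                         amount  REAL);
    INSERT INTO users  VALUES (1, 'IN'), (2, 'PT');
    INSERT INTO orders VALUES (10, 1, 100.0), (11, 1, 50.0), (12, 2, 70.0);
""")

# Revenue per region: join the fact table to its dimension, then aggregate.
totals = dict(conn.execute("""
    SELECT u.region, SUM(o.amount)
    FROM orders o JOIN users u ON u.id = o.user_id
    GROUP BY u.region
""").fetchall())
```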
If interested, kindly share your updated CV to 82008 31681

Role - MLOps Engineer
Location - Pune, Gurgaon, Noida, Bhopal, Bangalore
Mode - Hybrid
Role Overview
We are looking for an experienced MLOps Engineer to join our growing AI/ML team. You will be responsible for automating, monitoring, and managing machine learning workflows and infrastructure in production environments. This role is key to ensuring our AI solutions are scalable, reliable, and continuously improving.
Key Responsibilities
- Design, build, and manage end-to-end ML pipelines, including model training, validation, deployment, and monitoring.
- Collaborate with data scientists, software engineers, and DevOps teams to integrate ML models into production systems.
- Develop and manage scalable infrastructure using AWS, particularly AWS SageMaker.
- Automate ML workflows using CI/CD best practices and tools.
- Ensure model reproducibility, governance, and performance tracking.
- Monitor deployed models for data drift, model decay, and performance metrics.
- Implement robust versioning and model registry systems.
- Apply security, performance, and compliance best practices across ML systems.
- Contribute to documentation, knowledge sharing, and continuous improvement of our MLOps capabilities.
Required Skills & Qualifications
- 4+ years of experience in Software Engineering or MLOps, preferably in a production environment.
- Proven experience with AWS services, especially AWS SageMaker for model development and deployment.
- Working knowledge of AWS DataZone (preferred).
- Strong programming skills in Python, with exposure to R, Scala, or Apache Spark.
- Experience with ML model lifecycle management, version control, containerization (Docker), and orchestration tools (e.g., Kubernetes).
- Familiarity with MLflow, Airflow, or similar pipeline/orchestration tools.
- Experience integrating ML systems into CI/CD workflows using tools like Jenkins, GitHub Actions, or AWS CodePipeline.
- Solid understanding of DevOps and cloud-native infrastructure practices.
- Excellent problem-solving skills and the ability to work collaboratively across teams.
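Monitoring deployed models for data drift, listed above, can be illustrated with a deliberately simple sketch: compare the live mean of a feature against its training-time baseline, measured in baseline standard deviations. Production systems use richer statistics (PSI, KS tests); the threshold and numbers here are invented.

```python
from statistics import mean, pstdev

def drift_score(baseline, live):
    """Absolute shift of the live mean, in units of baseline standard deviations."""
    return abs(mean(live) - mean(baseline)) / pstdev(baseline)

baseline = [10.0, 12.0, 11.0, 9.0, 13.0]  # feature values seen at training time
stable   = [11.0, 10.5, 11.5]             # recent window: no shift
shifted  = [18.0, 19.0, 17.5]             # recent window: clear shift

THRESHOLD = 3.0  # alert if the live mean drifts more than 3 sigma
alerts = {name: drift_score(baseline, window) > THRESHOLD
          for name, window in [("stable", stable), ("shifted", shifted)]}
```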

ABOUT US
We are a fast-growing, excellence-oriented mutual fund distribution and fintech firm delivering exceptional solutions to domestic/NRI/retail and ultra-HNI clients. Cambridge Wealth is a respected brand in the wealth segment, having won awards from BSE and Mutual Fund houses. Learn more about us at www.cambridgewealth.in
JOB OVERVIEW
Drive product excellence through data-backed decisions while ensuring efficient delivery and continuous improvement.
KEY RESPONSIBILITIES
- Sprint & Timeline Management: Drive Agile sprints with clear milestones to prevent scope creep
- Process Optimization: Identify bottlenecks early and implement standardised workflows
- Market Research: Analyze competitive landscape and customer preferences to inform strategy
- Feature Development: Refine product features based on customer feedback and data analysis
- Performance Analysis: Create actionable dashboards tracking KPIs and user behavior metrics
- Risk Management: Proactively identify potential roadblocks and develop contingency plans
- User Testing: Conduct testing sessions and translate feedback into product improvements
- Documentation: Develop comprehensive specs and user stories for seamless implementation
- Cross-Team Coordination: Align stakeholders on priorities and deliverables throughout development
TECHNICAL REQUIREMENTS
- Data Analysis: SQL proficiency for data extraction and manipulation
- Project Management: Expert in Agile methods and tracking tools
- Advanced Excel/Google/Zoho sheets: Expertise in pivot tables, VLOOKUP, and complex formulas
- Analytics Platforms: Experience with Mixpanel, Amplitude, or Google Analytics, Zoho Analytics
- Financial Knowledge: Understanding of mutual funds and fintech industry metrics
QUALIFICATIONS
- 2+ years experience in product analysis or similar role
- Strong analytical skills with the ability to collect, analyse, and interpret data from various sources.
- Basic understanding of user experience (UX) principles and methodologies.
- Excellent verbal and written communication skills for translating complex findings
- Ability to work collaboratively in a team environment and adapt to changing priorities.
- Eagerness to learn, take initiative, and contribute ideas to improve products and processes
READY TO SHAPE THE FUTURE OF FINTECH?
Apply now to join our award-winning team
Our Hiring Process:
- You Apply and answer a couple of quick questions [5 min]
- Recruiter screening phone interview [30 min]
- Online Technical assessment [60 min]
- Technical interview [45 min]
- Founder's interview [30 min]
- We make you an offer and proceed for reference and BGV check


Job Title: Full stack Developer
Location: Coimbatore
Job Type: Full Time
Experience Level: 5-8 Years
Job Title : Senior Backend Engineer – Java, AI & Automation
Experience : 4+ Years
Location : Any Cognizant location (India)
Work Mode : Hybrid
Interview Rounds :
- Virtual
- Face-to-Face (In-person)
Job Description :
Join our Backend Engineering team to design and maintain services on the Intuit Data Exchange (IDX) platform.
You'll work on scalable backend systems powering millions of daily transactions across Intuit products.
Key Qualifications :
- 4+ years of backend development experience.
- Strong in Java, Spring framework.
- Experience with microservices, databases, and web applications.
- Proficient in AWS and cloud-based systems.
- Exposure to AI and automation tools (Workato preferred).
- Python development experience.
- Strong communication skills.
- Comfortable with occasional US shift overlap.


Role: Python Full Stack Developer with React
Hybrid: 2 days in a week (Noida, Bangalore, Chennai, Hyderabad)
Experience: 5+ Years
Contract Duration: 6 Months
Notice Period: Less than 15 days

A BIT ABOUT US
Appknox is one of the top Mobile Application security companies recognized by Gartner and G2. A profitable B2B SaaS startup headquartered in Singapore & working from Bengaluru.
The primary goal of Appknox is to help businesses and mobile developers secure their mobile applications with a focus on delivery speed and high-quality security audits.
Our customers include Fortune 500 companies and major brands across India, South-East Asia, the Middle East, Japan, and the US, and we are expanding rapidly.
The Opportunity:
We are seeking a highly skilled Senior Software Engineer (Backend) to join our dynamic software development team. In this role, you will contribute to key backend projects, collaborate across teams, and play a vital part in delivering robust, scalable, and high-performance software solutions. As a senior engineer, you will work independently, make impactful technical decisions, and help shape the backend architecture while collaborating with a passionate, high-performing team.
You will work hands-on with products primarily built in Python, with opportunities to contribute to Golang. These technologies are at the core of our development stack, and your focus will be on building, scaling, and maintaining distributed services. Distributed systems are integral to our architecture, providing a chance to gain hands-on experience with maintaining and optimizing them in a fast-paced environment.
As a Senior Engineer, you are expected to:
- Write clean, maintainable, and testable code while following best practices.
- Architect solutions, address complex problems, and deliver well-thought-out technical designs.
- Take ownership of assigned modules and features, delivering them with minimal supervision.
- Contribute to code reviews and technical discussions, ensuring high-quality deliverables.
We highly value open source contributions and encourage you to check out our work on GitHub at Appknox GitHub. While no prior experience in security is required, our experienced security professionals are available to support you in understanding the domain.
This role offers a unique opportunity to work on cutting-edge technology, drive impactful solutions, and grow within a collaborative environment that values autonomy, innovation, and technical excellence.
Responsibilities:
- Contribute to backend development for a cutting-edge product in the Security domain, with focus on performance, reliability, and maintainability.
- Implement software components and features based on high-level architectural guidance using Django and Django REST Framework (DRF).
- Collaborate with senior engineers to translate functional and technical requirements into robust backend implementations.
- Write clean, modular, and testable code following industry best practices and team standards.
- Participate in design discussions and code reviews to maintain code quality and continuously learn from peers.
- Work closely with frontend, QA, and security teams to deliver well-integrated, end-to-end solutions.
- Troubleshoot, debug, and resolve issues in existing systems, ensuring stability and efficiency.
- Contribute to the creation of technical documentation for components and modules you build or maintain.
- Follow defined verification plans and contribute to improving test coverage and automation.
- Participate in sprint planning, estimations, and agile ceremonies to support timely and effective project delivery.
- Proactively seek feedback and continuously improve your technical skills and understanding of system architecture.
- Support team success by collaborating effectively, sharing knowledge, and guiding junior team members when needed.
Requirements:
- 3-4 years of solid hands-on experience with Django and Django REST Framework.
- Good understanding of relational databases, SQL, and working with ORMs such as Django ORM.
- Ability to contribute to design discussions and implement well-structured, maintainable backend systems.
- Exposure to writing unit and integration tests; familiarity with CI/CD pipelines and version control systems
- Proficiency in debugging, performance optimization, and addressing scalability concerns under guidance.
- Strong fundamentals in data structures, algorithms, and clean coding practices.
- Able to collaborate effectively within a team and take ownership of moderately complex modules.
- Good communication skills, with the ability to document and explain solutions to peers.
- Familiarity with cloud platforms and microservices architecture is a plus.
- Self-driven with a growth mindset and willingness to learn from peers and feedback.
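To make the validation side of DRF work concrete, here is a framework-free sketch of the serializer pattern: take untrusted input, collect per-field errors, and return normalized data. Field names and rules are invented; in real code DRF's `Serializer` classes and `validate_<field>` hooks handle this.

```python
class ValidationError(Exception):
    """Carries a dict of field-name -> error message, like DRF's version."""

def validate_signup(payload: dict) -> dict:
    errors = {}
    username = str(payload.get("username", "")).strip()
    if not (3 <= len(username) <= 30):
        errors["username"] = "must be 3-30 characters"
    try:
        age = int(payload.get("age"))
        if age < 0:
            errors["age"] = "must be non-negative"
    except (TypeError, ValueError):
        errors["age"] = "must be an integer"
    if errors:
        # Reject the whole payload with every problem reported at once.
        raise ValidationError(errors)
    return {"username": username, "age": age}

ok = validate_signup({"username": " asha ", "age": "21"})
```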
Work Expectations:
Within 1 month-
- Attend KT sessions conducted by the engineering and product teams to gain a deep understanding of the product, its architecture, and workflows.
- Learn about the team's development processes, tools, and key challenges.
- Work closely with the product team to understand product requirements and contribute to the design and development of features.
- Dive deep into the existing backend architecture, including database structures, APIs, and integration points, to fully understand the technical landscape
- Begin addressing minor technical challenges and bugs, while understanding the underlying architecture and tech stack.
- Begin to participate in creating action plans for new features, ensuring that design and implementation are aligned with product goals.
Within 3 months-
- Achieve full autonomy in working on the codebase, demonstrating the ability to independently deliver high-quality features from design to deployment.
- Take complete ownership of critical modules, ensuring they are optimized for performance and maintainability.
- Act as a technical resource for the team, offering support and guidance to peers on complex issues.
- Collaborate with DevOps to optimize deployment pipelines, debug production issues, and improve backend infrastructure.
- Lead discussions for technical solutions and provide recommendations for architectural improvements.
- Contribute to the design of new features by translating functional requirements into detailed technical specifications.
- Prepare regular updates on assigned tasks and communicate effectively with the engineering manager and other stakeholders.
Within 6 months-
- Be fully independent in their development tasks, contributing to key features and solving critical challenges.
- Demonstrate strong problem-solving skills and the ability to take ownership of technical modules.
- Actively participate in code reviews and technical discussions, ensuring high-quality deliverables.
- Collaborate seamlessly with cross-functional teams to align technical solutions with business requirements.
- Establish themselves as a reliable and proactive team member, contributing to the team’s growth and success.
Personality traits we really admire:
- Great attitude to ask questions, learn and suggest process improvements.
- Has attention to details and helps identify edge cases.
- Proactive mindset.
- Highly motivated and coming up with ideas and perspective to help us move towards our goals faster.
- Follows timelines and shows absolute commitment to deadlines.
Interview Process
- Round 1 Interview - Profile Evaluation (EM)
- Round 2 Interview - Assignment Evaluation & Technical Problem Solving discussion (Tech Team)
- Round 3 Interview - Engineering Team & Technical Founder (CTO)
- Round 4 Interview - HR
Compensation
- Best in industry
We prefer that every employee also holds equity in the company. In this role, you will be awarded equity after 12 months, based on the impact you have created.
Please be aware that all our customers are enterprises and Fortune 500 companies.
Why Join Us:
- Freedom & Responsibility: If you are a person who enjoys challenging work & pushing your boundaries, then this is the right place for you. We appreciate new ideas & ownership as well as flexibility with working hours.
- Great Salary & Equity: We keep up with the market standards & provide pay packages considering updated standards. Also as Appknox continues to grow, you’ll have a great opportunity to earn more & grow with us. Moreover, we also provide equity options for our top performers.
- Holistic Growth: We foster a culture of continuous learning and take a much more holistic approach to training and developing our assets: the employees. We shall also support you all on that journey of yours.
- Transparency: Being a part of a start-up is an amazing experience one of the reasons being the open communication & transparency at multiple levels. Working with Appknox will give you the opportunity to experience it all first hand.
- Health insurance: We offer health insurance coverage of up to 5 lakhs for you and your family, including parents.

Job Title: Python Django Microservices Lead / Django Backend Lead Developer
Location: Indore/ Pune (Hybrid - Wednesday and Thursday WFO)
Timings: 12:30 PM to 9:30 PM
Experience Level: 8+ Years
Job Overview: We are seeking an experienced Django Backend Lead Developer to join our team. The ideal candidate will have a strong background in backend development, cloud technologies, and big data processing. This role involves leading technical projects, mentoring junior developers, and ensuring the delivery of high-quality solutions.
Responsibilities:
Lead the development of backend systems using Django.
Design and implement scalable and secure APIs.
Integrate Azure Cloud services for application deployment and management.
Utilize Azure Databricks for big data processing and analytics.
Implement data processing pipelines using PySpark.
Collaborate with front-end developers, product managers, and other stakeholders to deliver comprehensive solutions.
Conduct code reviews and ensure adherence to best practices.
Mentor and guide junior developers.
Optimize database performance and manage data storage solutions.
Ensure high performance and security standards for applications.
Participate in architecture design and technical decision-making.
Qualifications:
Bachelor's degree in Computer Science, Information Technology, or a related field.
8+ years of experience in backend development.
8+ years of experience with Django.
Proven experience with Azure Cloud services.
Experience with Azure Databricks and PySpark.
Strong understanding of RESTful APIs and web services.
Excellent communication and problem-solving skills.
Familiarity with Agile methodologies.
Experience with database management (SQL and NoSQL).
Skills: Django, Python, Azure Cloud, Azure Databricks, Delta Lake and Delta tables, PySpark, SQL/NoSQL databases, RESTful APIs, Git, and Agile methodologies
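The data-processing-pipeline responsibility above can be sketched, framework-free, as chained generator stages: extract, transform, load. This only illustrates the pattern (record fields are invented); the actual role implements it with PySpark on Azure Databricks.

```python
def extract(rows):
    # Source stage: yield raw records one at a time (stands in for a table scan).
    yield from rows

def transform(records):
    # Per-record cleanup, analogous to a PySpark map: parse amounts, drop bad rows.
    for r in records:
        try:
            yield {"city": r["city"].title(), "amount": float(r["amount"])}
        except (KeyError, ValueError):
            continue  # skip malformed records instead of failing the whole batch

def load(records):
    # Sink stage: aggregate per city (stands in for a write to a warehouse table).
    totals = {}
    for r in records:
        totals[r["city"]] = totals.get(r["city"], 0.0) + r["amount"]
    return totals

raw = [{"city": "pune", "amount": "10.5"},
       {"city": "indore", "amount": "4.0"},
       {"city": "pune", "amount": "oops"}]
result = load(transform(extract(raw)))
```

Because each stage is a generator, records stream through one at a time rather than being materialized in memory, which is the same idea Spark scales out across a cluster.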

As a Senior Backend & Infrastructure Engineer, you will take ownership of backend systems and cloud infrastructure. You’ll work closely with our CTO and cross-functional teams (hardware, AI, frontend) to design scalable, fault-tolerant architectures and ensure reliable deployment pipelines.
What You’ll Do:
- Backend Development: Maintain and evolve our Node.js (TypeScript) and Python backend services with a focus on performance and scalability.
- Cloud Infrastructure: Manage our infrastructure on GCP and Firebase (Auth, Firestore, Storage, Functions, AppEngine, PubSub, Cloud Tasks).
- Database Management: Handle Firestore and other NoSQL DBs. Lead database schema design and migration strategies.
- Pipelines & Automation: Build robust real-time and batch data pipelines. Automate CI/CD and testing for backend and frontend services.
- Monitoring & Uptime: Deploy tools for observability (logging, alerts, debugging). Ensure 99.9% uptime of critical services.
- Dev Environments: Set up and manage developer and staging environments across teams.
- Quality & Security: Drive code reviews, implement backend best practices, and enforce security standards.
- Collaboration: Partner with other engineers (AI, frontend, hardware) to integrate backend capabilities seamlessly into our global system.
Must-Haves:
- 5+ years of experience in backend development and cloud infrastructure.
- Strong expertise in Node.js (TypeScript) and/or Python.
- Advanced skills in NoSQL databases (Firestore, MongoDB, DynamoDB...).
- Deep understanding of cloud platforms, preferably GCP and Firebase.
- Hands-on experience with CI/CD, DevOps tools, and automation.
- Solid knowledge of distributed systems and performance tuning.
- Experience setting up and managing development & staging environments.
- Proficiency in English and remote communication.
Good to have :
- Event-driven architecture experience (e.g., Pub/Sub, MQTT).
- Familiarity with observability tools (Prometheus, Grafana, Google Monitoring).
- Previous work on large-scale SaaS products.
- Knowledge of telecommunication protocols (MQTT, WebSockets, SNMP).
- Experience with edge computing on Nvidia Jetson devices.
What We Offer :
- Competitive salary for the Indian market (depending on experience).
- Remote-first culture with async-friendly communication.
- Autonomy and responsibility from day one.
- A modern stack and a fast-moving team working on cutting-edge AI and cloud infrastructure.
- A mission-driven company tackling real-world environmental challenges.

Location: Remote (India only)
About Certa
At Certa, we’re revolutionizing process automation for top-tier companies, including Fortune 500 and Fortune 1000 leaders, from the heart of Silicon Valley. Our mission? Simplifying complexity through cutting-edge SaaS solutions. Join our thriving, global team and become a key player in a startup environment that champions innovation, continuous learning, and unlimited growth. We offer a fully remote, flexible workspace that empowers you to excel.
Role Overview
Ready to elevate your DevOps career by shaping the backbone of a fast-growing SaaS platform? As a Senior DevOps Engineer at Certa, you’ll lead the charge in building, automating, and optimizing our cloud infrastructure. Beyond infrastructure management, you’ll actively contribute with a product-focused mindset, understanding customer requirements, collaborating closely with product and engineering teams, and ensuring our AWS-based platform consistently meets user needs and business goals.
What You’ll Do
- Own SaaS Infrastructure: Design, architect, and maintain robust, scalable AWS infrastructure, enhancing platform stability, security, and performance.
- Orchestrate with Kubernetes: Utilize your advanced Kubernetes expertise to manage and scale containerized deployments efficiently and reliably.
- Collaborate on Enterprise Architecture: Align infrastructure strategies with enterprise architectural standards, partnering closely with architects to build integrated solutions.
- Drive Observability: Implement and evolve sophisticated monitoring and observability solutions (DataDog, ELK Stack, AWS CloudWatch) to proactively detect, troubleshoot, and resolve system anomalies.
- Lead Automation Initiatives: Champion an automation-first mindset across the organization, streamlining development, deployment, and operational workflows.
- Implement Infrastructure as Code (IaC): Master Terraform to build repeatable, maintainable cloud infrastructure automation.
- Optimize CI/CD Pipelines: Refine and manage continuous integration and deployment processes (currently GitHub Actions, transitioning to CircleCI), enhancing efficiency and reliability.
- Enable GitOps with ArgoCD: Deliver seamless GitOps-driven application deployments, ensuring accuracy and consistency in Kubernetes environments.
- Advocate for Best Practices: Continuously promote and enforce industry-standard DevOps practices, ensuring consistent, secure, and efficient operational outcomes.
- Innovate and Improve: Constantly evaluate and enhance current DevOps processes, tooling, and methodologies to maintain cutting-edge efficiency.
- Product Mindset: Actively engage with product and engineering teams, bringing infrastructure expertise to product discussions, understanding customer needs, and helping prioritize infrastructure improvements that directly benefit users and business objectives.
What You Bring
- Hands-On Experience: 3-5 years in DevOps roles, ideally within fast-paced SaaS environments.
- Kubernetes Mastery: Advanced knowledge and practical experience managing Kubernetes clusters and container orchestration.
- AWS Excellence: Comprehensive expertise across AWS services, infrastructure management, and security.
- IaC Competence: Demonstrated skill in Terraform for infrastructure automation and management.
- CI/CD Acumen: Proven proficiency managing pipelines with GitHub Actions; familiarity with CircleCI highly advantageous.
- GitOps Knowledge: Experience with ArgoCD for effective continuous deployment and operations.
- Observability Skills: Strong capabilities deploying and managing monitoring solutions such as DataDog, ELK, and AWS CloudWatch.
- Python Automation: Solid scripting and automation skills using Python.
- Architectural Awareness: Understanding of enterprise architecture frameworks and alignment practices.
- Proactive Problem-Solving: Exceptional analytical and troubleshooting skills, adept at swiftly addressing complex technical challenges.
- Effective Communication: Strong interpersonal and collaborative skills, essential for remote, distributed teamwork.
- Product Focus: Ability and willingness to understand customer requirements, prioritize tasks that enhance product value, and proactively suggest infrastructure improvements driven by user needs.
- Startup Mindset (Bonus): Prior experience or enthusiasm for dynamic startup cultures is a distinct advantage.
Why Join Us
- Compensation: Top-tier salary and exceptional benefits.
- Work-Life Flexibility: Fully remote, flexible scheduling.
- Growth Opportunities: Accelerate your career in a company poised for significant growth.
- Innovative Culture: Engineering-centric, innovation-driven work environment.
- Team Events: Annual offsites and quarterly Hackerhouse.
- Wellness & Family: Comprehensive healthcare and parental leave.
- Workspace: Premium workstation setup allowance, providing the tech you need to succeed.


What are we looking for
We are looking for an exceptional Senior AI/ML Engineer to join our growing team. In this role, you will design, develop, and optimize AI and Machine Learning solutions that deliver outstanding user experiences. The ideal candidate combines strong software engineering skills with deep knowledge of machine learning systems.
Responsibilities
As part of this role, you will:
- Design and implement advanced AI/ML systems with a focus on LLMs, embeddings, and retrieval-augmented generation (RAG) architectures.
- Develop and optimize information retrieval solutions including vector databases, semantic search, and knowledge graph implementations.
- Build production-grade AI pipelines for data processing, model training, fine-tuning, and serving at scale.
- Create and maintain graph-based knowledge systems that enhance LLM capabilities through structured relationship data.
- Implement and optimize hybrid RAG architectures combining traditional search, vector embeddings, and knowledge graphs.
- Lead technical initiatives for AI system integration into existing products and services.
- Collaborate with data scientists and ML researchers to implement and productionize new AI approaches and models.
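The hybrid RAG retrieval mentioned above can be sketched in a few lines: blend a sparse (keyword-overlap) score with a dense (embedding-similarity) score and rank documents by the combination. This is an illustrative toy, not a production implementation; `embed` is a deterministic stand-in for a real embedding model, and the function names are our own.

```python
import math
from collections import Counter

def sparse_score(query: str, doc: str) -> float:
    # Simplified keyword-overlap score (a stand-in for BM25-style matching).
    q = Counter(query.lower().split())
    d = Counter(doc.lower().split())
    return float(sum(min(q[t], d[t]) for t in q))

def embed(text: str, dim: int = 16) -> list:
    # Toy deterministic bag-of-words "embedding"; a real system would call
    # an embedding model here and store vectors in a vector database.
    vec = [0.0] * dim
    for tok in text.lower().split():
        vec[sum(ord(c) for c in tok) % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def dense_score(query: str, doc: str) -> float:
    # Cosine similarity (vectors are already unit-normalized).
    return sum(a * b for a, b in zip(embed(query), embed(doc)))

def hybrid_search(query: str, docs: list, alpha: float = 0.5) -> list:
    # Blend the two signals; alpha weights the dense side.
    scored = [
        (alpha * dense_score(query, d) + (1 - alpha) * sparse_score(query, d), d)
        for d in docs
    ]
    return [d for _, d in sorted(scored, reverse=True)]
```

In practice the sparse side would be served by a search engine, the dense side by an ANN index over model embeddings, and the blended results optionally re-ranked or augmented with knowledge-graph context before being passed to the LLM.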
Qualifications / Experience / Technical Skills
- Bachelor's degree in Computer Science or a related field, or equivalent practical experience.
- 5+ years in backend software development using modern programming languages (e.g., Python, Golang, or Java; Python strongly preferred).
- Demonstrated experience building production systems with LLMs (OpenAI, Anthropic, open-source models) including prompt engineering and fine-tuning.
- In-depth knowledge of vector databases and embedding models for semantic search and retrieval.
- Experience implementing RAG architectures with various retrieval strategies (sparse, dense, hybrid) and context optimization techniques.
- Proficiency with knowledge graph technologies (Neo4j, Neptune, or similar) and graph-based information retrieval.
- Strong background in information retrieval systems including BM25, TF-IDF, and modern neural search approaches.
- Experience with AI/ML infrastructure including containerization, orchestration, and scaling of model inference.
- Expertise in cloud platforms' AI offerings (AWS Bedrock, Azure OpenAI, Vertex AI) and their integration patterns.
- Familiarity with model optimization techniques including quantization, distillation, and efficient serving strategies.
- Experience with streaming data processing for real-time AI applications using technologies like Kafka, Kinesis, or similar.
- Proficiency with AI observability and evaluation tools for tracking model performance, drift, and quality.
- Demonstrated ability to balance technical innovation with production reliability when implementing cutting-edge AI systems.
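For reference, the classical Okapi BM25 scoring named in the qualifications above can be sketched as follows. This is a minimal, self-contained illustration using the standard formula with default parameters k1=1.5 and b=0.75; the function name and toy tokenization are our own.

```python
import math
from collections import Counter

def bm25_scores(query: str, docs: list, k1: float = 1.5, b: float = 0.75) -> list:
    """Score each document against the query with Okapi BM25."""
    tokenized = [d.lower().split() for d in docs]
    N = len(docs)
    avgdl = sum(len(toks) for toks in tokenized) / N
    # Document frequency: in how many docs each term appears.
    df = Counter()
    for toks in tokenized:
        for t in set(toks):
            df[t] += 1
    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        score = 0.0
        for t in query.lower().split():
            if t not in tf:
                continue
            # Standard BM25 IDF with +1 smoothing to keep it non-negative.
            idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1)
            # Term-frequency saturation with length normalization.
            score += idf * tf[t] * (k1 + 1) / (
                tf[t] + k1 * (1 - b + b * len(toks) / avgdl)
            )
        scores.append(score)
    return scores
```

Neural retrievers are typically evaluated against (and blended with) exactly this kind of lexical baseline in hybrid RAG setups.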