50+ Python Jobs in India
Review Criteria
- Strong Senior Data Scientist (AI/ML/GenAI) Profile
- 5+ years of experience in designing, developing, and deploying Machine Learning / Deep Learning (ML/DL) systems in production
- Must have strong hands-on experience in Python and deep learning frameworks such as PyTorch, TensorFlow, or JAX.
- 1+ years of experience in fine-tuning Large Language Models (LLMs) using techniques like LoRA/QLoRA, and building RAG (Retrieval-Augmented Generation) pipelines.
- Must have experience with MLOps and production-grade systems including Docker, Kubernetes, Spark, model registries, and CI/CD workflows
Preferred
- Prior experience in open-source GenAI contributions, applied LLM/GenAI research, or large-scale production AI systems
- Preferred (Education) – B.S./M.S./Ph.D. in Computer Science, Data Science, Machine Learning, or a related field.
Job Specific Criteria
- CV Attachment is mandatory
- Which is your preferred job location (Mumbai / Bengaluru / Hyderabad / Gurgaon)?
- Are you okay with 3 days of working from the office (WFO)?
- The virtual interview requires video to be on; are you okay with that?
Role & Responsibilities
The company is hiring a Senior Data Scientist with strong expertise in AI, machine learning engineering (MLE), and generative AI. You will play a leading role in designing, deploying, and scaling production-grade ML systems — including large language model (LLM)-based pipelines, AI copilots, and agentic workflows. This role is ideal for someone who thrives on balancing cutting-edge research with production rigor and loves mentoring while building impact-first AI applications.
Responsibilities:
- Own the full ML lifecycle: model design, training, evaluation, deployment
- Design production-ready ML pipelines with CI/CD, testing, monitoring, and drift detection
- Fine-tune LLMs and implement retrieval-augmented generation (RAG) pipelines (see the sketch after this list)
- Build agentic workflows for reasoning, planning, and decision-making
- Develop both real-time and batch inference systems using Docker, Kubernetes, and Spark
- Leverage state-of-the-art architectures: transformers, diffusion models, RLHF, and multimodal pipelines
- Collaborate with product and engineering teams to integrate AI models into business applications
- Mentor junior team members and promote MLOps, scalable architecture, and responsible AI best practices
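For illustration, here is a minimal, framework-agnostic sketch of the RAG pattern referenced above: embed a corpus, retrieve the most relevant passage for a query, and assemble an augmented prompt. The `call_llm` stub and the example documents are hypothetical; a production pipeline would add chunking, a vector store such as Weaviate or PGVector, and reranking.

```python
# Minimal RAG sketch: embed a corpus, retrieve the best match for a query,
# and assemble an augmented prompt. `call_llm` is a hypothetical stand-in
# for whatever hosted or local model client is actually used.
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Our refund policy allows returns within 30 days.",
    "Premium support is available 24/7 for enterprise plans.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = model.encode(docs, normalize_embeddings=True)

def retrieve(query: str, k: int = 1) -> list[str]:
    # Cosine similarity reduces to a dot product on normalized vectors.
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

def call_llm(prompt: str) -> str:  # hypothetical LLM client
    raise NotImplementedError

query = "How long do customers have to return a product?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
# answer = call_llm(prompt)
```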
Ideal Candidate
- 5+ years of experience in designing, deploying, and scaling ML/DL systems in production
- Proficient in Python and deep learning frameworks such as PyTorch, TensorFlow, or JAX
- Experience with LLM fine-tuning, LoRA/QLoRA, vector search (Weaviate/PGVector), and RAG pipelines
- Familiarity with agent-based development (e.g., ReAct agents, function-calling, orchestration)
- Solid understanding of MLOps: Docker, Kubernetes, Spark, model registries, and deployment workflows
- Strong software engineering background with experience in testing, version control, and APIs
- Proven ability to balance innovation with scalable deployment
- B.S./M.S./Ph.D. in Computer Science, Data Science, or a related field
- Bonus: Open-source contributions, GenAI research, or applied systems at scale
About the Company
Hypersonix.ai is disrupting the e-commerce space with AI, ML, and advanced decision-making capabilities to drive real-time business insights. Built from the ground up using modern technologies, Hypersonix simplifies data consumption for customers across various industry verticals. We are seeking a well-rounded, hands-on product leader to help manage key capabilities and features in our platform.
Position Overview
We are seeking a highly skilled Web Scraping Architect to join our team. The successful candidate will be responsible for designing, implementing, and maintaining web scraping processes to gather data from various online sources efficiently and accurately. As a Web Scraping Specialist, you will play a crucial role in collecting data for competitor analysis and other business intelligence purposes.
Responsibilities
- Scalability/Performance: Lead and provide expertise in scraping e-commerce marketplaces at scale.
- Data Source Identification: Identify relevant websites and online sources from which data needs to be scraped. Collaborate with the team to understand data requirements and objectives.
- Web Scraping Design: Develop and implement effective web scraping strategies to extract data from targeted websites. This includes selecting appropriate tools, libraries, or frameworks for the task.
- Data Extraction: Create and maintain web scraping scripts or programs to extract the required data. Ensure the code is optimized, reliable, and can handle changes in the website's structure.
- Data Cleansing and Validation: Cleanse and validate the collected data to eliminate errors, inconsistencies, and duplicates. Ensure data integrity and accuracy throughout the process.
- Monitoring and Maintenance: Continuously monitor and maintain the web scraping processes. Address any issues that arise due to website changes, data format modifications, or anti-scraping mechanisms.
- Scalability and Performance: Optimize web scraping procedures for efficiency and scalability, especially when dealing with a large volume of data or multiple data sources.
- Compliance and Legal Considerations: Stay up-to-date with legal and ethical considerations related to web scraping, including website terms of service, copyright, and privacy regulations.
- Documentation: Maintain detailed documentation of web scraping processes, data sources, and methodologies. Create clear and concise instructions for others to follow.
- Collaboration: Collaborate with other teams such as data analysts, developers, and business stakeholders to understand data requirements and deliver insights effectively.
- Security: Implement security measures to ensure the confidentiality and protection of sensitive data throughout the scraping process.
Requirements
- Proven experience of 6+ years as a Web Scraping Specialist or similar role, with a track record of successful web scraping projects
- Expertise in handling dynamic content, user-agent rotation, bypassing CAPTCHAs, rate limits, and use of proxy services
- Knowledge of browser fingerprinting
- Has leadership experience
- Proficiency in Python and in libraries and frameworks commonly used for web scraping, such as BeautifulSoup, Scrapy, or Selenium (a minimal sketch follows this list)
- Strong knowledge of HTML, CSS, XPath, and other web technologies relevant to web scraping and coding
- Knowledge and experience in best-of-class data storage and retrieval for large volumes of scraped data
- Understanding of web scraping best practices, including handling dynamic content, user-agent rotation, and IP address management
- Attention to detail and ability to handle and process large volumes of data accurately
- Familiarity with data cleansing techniques and data validation processes
- Good communication skills and ability to collaborate effectively with cross-functional teams
- Knowledge of web scraping ethics, legal considerations, and compliance with website terms of service
- Strong problem-solving skills and adaptability to changing web environments
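As a rough illustration of the basics named above (user-agent rotation, retries, backoff), here is a hedged sketch using requests and BeautifulSoup; the URL and CSS selectors are placeholders, and a real deployment would layer in proxy rotation, throttling, and compliance checks.

```python
import random
import time

import requests
from bs4 import BeautifulSoup

USER_AGENTS = [  # small illustrative pool; rotate a larger list in practice
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)",
]

def fetch(url: str, retries: int = 3) -> str:
    """GET with a rotated User-Agent and exponential backoff on failure."""
    for attempt in range(retries):
        resp = requests.get(
            url,
            headers={"User-Agent": random.choice(USER_AGENTS)},
            timeout=10,
        )
        if resp.status_code == 200:
            return resp.text
        time.sleep(2 ** attempt)  # back off, then retry
    resp.raise_for_status()  # surface the final error
    return resp.text

html = fetch("https://example.com/products?page=1")  # placeholder URL
soup = BeautifulSoup(html, "html.parser")
for card in soup.select("div.product-card"):  # placeholder selectors
    name = card.select_one("h2.title").get_text(strip=True)
    price = card.select_one("span.price").get_text(strip=True)
    print(name, price)
```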
Preferred Qualifications
- Bachelor’s degree in Computer Science, Data Science, Information Technology, or related fields
- Experience with cloud-based solutions and distributed web scraping systems
- Familiarity with APIs and data extraction from non-public sources
- Knowledge of machine learning techniques for data extraction and natural language processing is desired but not mandatory
- Prior experience in handling large-scale data projects and working with big data frameworks
- Understanding of various data formats such as JSON, XML, CSV, etc.
- Experience with version control systems like Git
About the Company
Hypersonix.ai is disrupting the e-commerce space with AI, ML and advanced decision capabilities to drive real-time business insights. Hypersonix.ai has been built ground up with new age technology to simplify the consumption of data for our customers in various industry verticals. Hypersonix.ai is seeking a well-rounded, hands-on product leader to help lead product management of key capabilities and features.
About the Role
We are looking for talented and driven Data Engineers at various levels to work with customers to build the data warehouse, analytical dashboards and ML capabilities as per customer needs.
Roles and Responsibilities
- Create and maintain optimal data pipeline architecture
- Assemble large, complex data sets that meet functional and non-functional business requirements; must be able to write complex queries in an optimized way
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies
- Run ad-hoc analysis utilizing the data pipeline to provide actionable insights
- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs
- Keep our data separated and secure across national boundaries through multiple data centers and AWS regions
- Work with analytics and data scientist team members and assist them in building and optimizing our product into an innovative industry leader
Requirements
- Advanced working knowledge of SQL, experience with relational databases and query authoring, and working familiarity with a variety of databases
- Experience building and optimizing ‘big data’ data pipelines, architectures and data sets
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
- Strong analytic skills related to working with unstructured datasets
- Experience building processes supporting data transformation, data structures, metadata, dependency management, and workload management
- A successful history of manipulating, processing and extracting value from large disconnected datasets
- Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores
- Experience supporting and working with cross-functional teams in a dynamic environment
- We are looking for a candidate with 4+ years of experience in a Data Engineer role who holds a graduate degree in Computer Science or Information Technology, or has completed an MCA.
About Role
We are looking for a highly driven Full Stack Developer who can build scalable, high-performance applications across both frontend and backend. You will be working closely with our engineering team to develop seamless user experiences, robust APIs, and production-ready systems. This role is perfect for someone who wants to work in a fast-growing AI automation company, take ownership of end-to-end development, and contribute to products used by enterprises, agencies, and SMBs globally.
Key Responsibilities
- Develop responsive and scalable frontend applications using React Native and Next.js.
- Build and maintain backend services using Python and Node.js.
- Develop structured, well-documented REST APIs.
- Work with databases such as MongoDB and PostgreSQL for efficient data storage and retrieval.
- Implement clean authentication workflows (JWT preferred; see the sketch after this list).
- Collaborate with UI/UX and product teams to deliver intuitive user experiences.
- Maintain high code quality through modular development, linting, and optimized folder structure.
- Debug, optimize, and enhance existing features and systems.
- Participate in code reviews and ensure best practices.
- Deploy, test, and monitor applications for performance and reliability.
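A minimal sketch of the JWT workflow mentioned above, using PyJWT with an HS256 shared secret; the secret, claims, and expiry policy are illustrative assumptions, not the company's actual implementation.

```python
import datetime

import jwt  # PyJWT

SECRET = "change-me"  # placeholder; load from an env var or secret manager

def issue_token(user_id: str) -> str:
    now = datetime.datetime.now(datetime.timezone.utc)
    payload = {
        "sub": user_id,                              # subject claim
        "iat": now,                                  # issued-at
        "exp": now + datetime.timedelta(hours=1),    # illustrative expiry
    }
    return jwt.encode(payload, SECRET, algorithm="HS256")

def verify_token(token: str) -> dict:
    # Raises jwt.ExpiredSignatureError / jwt.InvalidTokenError on failure;
    # expiry is checked automatically during decode.
    return jwt.decode(token, SECRET, algorithms=["HS256"])

token = issue_token("user-123")
print(verify_token(token)["sub"])  # -> user-123
```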
Required Qualifications
- Bachelor’s degree in Computer Science, Engineering, or related discipline (or equivalent experience).
- Proven experience as a Full Stack Developer with hands-on work in React Native and Next.js.
- Strong backend experience with Python (FastAPI preferred) and Node.js (Express.js preferred).
- Experience working with REST APIs, MongoDB, and PostgreSQL.
- Strong understanding of authentication flows (JWT, OAuth, or similar).
- Ability to write clean, maintainable, and well-documented code.
- Experience with Git/GitHub workflows.
Perks and Benefits
- Opportunity to work at a fast-scaling AI-driven product company.
- Work on advanced growth automation and CRM technologies.
- High ownership and autonomy in product development.
- Flexible remote work for the first 6 months.
- Skill development through real-world, high-impact projects.
- Collaborative culture with mentorship and growth opportunities.
About Vijay Sales
Vijay Sales is one of India’s leading electronics retail brands with 160+ stores nationwide and a fast-growing digital presence. We are on a mission to build the most advanced data-driven retail intelligence ecosystem—using AI, predictive analytics, LLMs, and real-time automation to transform customer experience, supply chain, and omnichannel operations.
Role Overview
We are looking for a highly capable AI Engineer who is passionate about building production-grade AI systems, designing scalable ML architecture, and working with cutting-edge AI/ML tools. This role involves hands-on work with Databricks, SQL, PySpark, modern LLM/GenAI frameworks, and full lifecycle ML system design.
Key Responsibilities
Machine Learning & AI Development
- Build, train, and optimize ML models for forecasting, recommendation, personalization, churn prediction, inventory optimization, anomaly detection, and pricing intelligence.
- Develop GenAI solutions using modern LLM frameworks (e.g., LangChain, LlamaIndex, HuggingFace Transformers).
- Explore and implement RAG (Retrieval Augmented Generation) pipelines for product search, customer assistance, and support automation.
- Fine-tune LLMs on company-specific product and sales datasets (using QLoRA, PEFT, and Transformers; see the sketch after this list).
- Develop scalable feature engineering pipelines leveraging Delta Lake and Databricks Feature Store.
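As a hedged sketch of the QLoRA/PEFT recipe referenced above: load a 4-bit-quantized base model and attach LoRA adapters. The checkpoint name and target modules are placeholders that depend on the model architecture actually used.

```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization of the frozen base model (the "Q" in QLoRA).
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # placeholder checkpoint
    quantization_config=bnb,
    device_map="auto",
)

# Small trainable LoRA adapters on the attention projections.
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # architecture-dependent
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # typically well under 1% of base weights
# From here, train with transformers.Trainer or trl's SFTTrainer as usual.
```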
Databricks / Data Engineering
- Build end-to-end ML workflows on Databricks using PySpark, MLflow, Unity Catalog, Delta Live Tables.
- Optimize Databricks clusters for cost, speed, and stability.
- Maintain reusable notebooks and parameterized pipelines for model ingestion, validation, and deployment.
- Use MLflow for experiment tracking, model registry, and lifecycle management (a minimal sketch follows this list).
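A minimal sketch of the MLflow tracking-and-registry flow mentioned above; the experiment name, metric value, and model are illustrative.

```python
import mlflow
import mlflow.sklearn
from sklearn.ensemble import RandomForestRegressor

mlflow.set_experiment("demand-forecasting")  # placeholder experiment name

with mlflow.start_run():
    model = RandomForestRegressor(n_estimators=200, max_depth=8)
    # ... model.fit(X_train, y_train) on real features would go here ...
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("val_mape", 0.12)  # illustrative value
    mlflow.sklearn.log_model(
        model,
        artifact_path="model",
        registered_model_name="demand_forecaster",  # adds a registry version
    )
```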
Data Handling & SQL
- Write advanced SQL for multi-source data exploration, aggregation, and anomaly detection.
- Work on large, complex datasets from ERP, POS, CRM, Website, and Supply Chain systems.
- Automate ingestion of streaming and batch data into Databricks pipelines.
Deployment & MLOps
- Deploy ML models using REST APIs, Databricks Model Serving, Docker, or cloud-native endpoints.
- Build CI/CD pipelines for ML using GitHub Actions, Azure DevOps, or Databricks Workflows.
- Implement model monitoring for drift, accuracy decay, and real-time alerts.
- Maintain GPU/CPU environments for training workflows.
Must-Have Technical Skills
Core AI/ML
- Strong fundamentals in machine learning: regression, classification, time-series forecasting, clustering.
- Experience in deep learning using PyTorch or TensorFlow/Keras.
- Expertise in LLMs, embeddings, vector databases, and GenAI architecture.
- Hands-on experience with HuggingFace, embedding models, and RAG.
Databricks & Big Data
- Hands-on experience with Databricks (PySpark, SQL, Delta Lake, MLflow, Feature Store).
- Strong understanding of Spark execution, partitioning, and optimization.
Programming
- Strong proficiency in Python.
- Experience writing high-performance SQL with window functions, CTEs, and analytical queries (a worked example follows this list).
- Knowledge of Git, CI/CD, REST APIs, and Docker.
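A worked example of the kind of query this refers to: a CTE feeding a window function, runnable against an in-memory SQLite database (window functions require SQLite 3.25+, which ships with modern Python).

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE sales (region TEXT, month TEXT, amount REAL);
    INSERT INTO sales VALUES
        ('north', '2024-01', 120), ('north', '2024-02', 90),
        ('south', '2024-01', 70),  ('south', '2024-02', 110);
""")

query = """
WITH monthly AS (                       -- CTE: aggregate to region/month
    SELECT region, month, SUM(amount) AS total
    FROM sales
    GROUP BY region, month
)
SELECT region, month, total,
       RANK() OVER (PARTITION BY region ORDER BY total DESC) AS rank_in_region
FROM monthly;
"""
for row in con.execute(query):
    print(row)
```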
MLOps & Production Engineering
- Experience deploying models to production and monitoring them.
- Familiarity with tools like MLflow, Weights & Biases, or SageMaker equivalents.
- Experience in building automated training pipelines and handling model drift/feedback loops.
Preferred Domain Experience
- Retail/e-commerce analytics
- Demand forecasting
- Inventory optimization
- Customer segmentation & personalization
- Price elasticity and competitive pricing
Job Title: Full-Stack Developer
Location: Remote
Job Type: Full-Time
Experience: 3 years
Company: PGAGI Consultancy Pvt. Ltd.
Job Overview:
PGAGI Consultancy Pvt. Ltd. is seeking a highly skilled Full-Stack Project Manager to lead and manage AI projects. The ideal candidate will be responsible for overseeing the entire project lifecycle, from planning and architecture to development, deployment, and maintenance. This role requires strong leadership abilities, technical expertise in both front-end and back-end development, and experience in managing cross-functional teams.
Key Responsibilities:
Project Management:
• Lead and manage multiple software development projects, ensuring timely delivery within scope and budget.
• Define project requirements, milestones, and deliverables in collaboration with stakeholders.
• Create and maintain project roadmaps, sprint plans, and technical documentation.
• Oversee project risks, dependencies, and resource allocation to optimize workflow.
• Conduct regular status meetings, report progress to senior management, and ensure alignment with business goals.
• Implement and enforce Agile, Scrum, or Kanban methodologies for efficient project execution.
Technical Leadership & Full-Stack Development:
• Lead a team of frontend and backend developers, providing technical guidance and mentorship.
• Design, develop, and maintain scalable, high-performance web applications.
• Write clean, efficient, and maintainable code for both front-end and back-end systems.
• Develop and optimize RESTful APIs, database schemas, and server-side logic.
• Integrate third-party APIs, cloud services, and microservices architecture.
• Ensure application performance, security, and scalability best practices.
• Troubleshoot and resolve technical issues, ensuring minimal downtime and optimal functionality.
Technical Skills Required:
Front-End Technologies:
• React.js, Next.js, Vue.js, Angular
• HTML5, CSS3, TypeScript, JavaScript
Back-End Technologies:
• Python, Node.js, Express.js, Django, Flask, FastAPI
Database Management:
• MongoDB, PostgreSQL, MySQL, Firebase
DevOps & Cloud Technologies:
• AWS, Docker, Kubernetes, CI/CD pipelines
Version Control & Collaboration Tools:
• Git, GitHub/GitLab, Bitbucket
• Jira, Trello, Slack
Preferred Skills
• Experience leading AI/ML projects.
• Knowledge of microservices architecture.
• Previous experience working in a startup environment.
• Strong problem-solving and decision-making skills.
Qualifications & Experience:
• Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a related field.
• Minimum 2 years of experience in full-stack development and project management.
• Proven experience leading and managing development teams.
• Strong understanding of Agile/Scrum methodologies.
Why Join Us?
• Work in a dynamic and innovative environment.
• Opportunity to lead cutting-edge projects.
• Growth-oriented role with leadership opportunities.
If you are passionate about leading software development projects while remaining hands-on with coding, we encourage you to apply!
Job Title: Full-Stack Developer
Experience: 5 to 8+ Years
Start: Immediate (ASAP)
Key Responsibilities
Develop and maintain end-to-end web applications, including frontend interfaces and backend services.
Build responsive and scalable UIs using React.js and Next.js.
Design and implement robust backend APIs using Python, FastAPI, Django, or Node.js.
Work with cloud platforms such as Azure (preferred) or AWS for application deployment and scaling.
Manage DevOps tasks, including containerization with Docker, orchestration with Kubernetes, and infrastructure as code with Terraform.
Set up and maintain CI/CD pipelines using tools like GitHub Actions or Azure DevOps.
Design and optimize database schemas using PostgreSQL, MongoDB, and Redis.
Collaborate with cross-functional teams in an agile environment to deliver high-quality features on time.
Troubleshoot, debug, and improve application performance and security.
Take full ownership of assigned modules/features and contribute to technical planning and architecture discussions.
Must-Have Qualifications
Strong hands-on experience with Python and at least one backend framework such as FastAPI, Django, Flask, or Node.js.
Proficiency in frontend development using React.js and Next.js
Experience in building and consuming RESTful APIs
Solid understanding of database design and queries using PostgreSQL, MongoDB, and Redis
Practical experience with cloud platforms, preferably Azure (or AWS)
Familiarity with containerization and orchestration tools like Docker and Kubernetes
Working knowledge of Infrastructure as Code (IaC) using Terraform
Experience with CI/CD pipelines using GitHub Actions or Azure DevOps
Ability to work in an agile development environment with cross-functional teams
Strong problem-solving, debugging, and communication skills
Start-up experience preferred – ability to manage ambiguity, rapid iterations, and hands-on leadership.
Technical Stack
Frontend: React.js, Next.js
Backend: Python, FastAPI, Django, Spring Boot, Node.js
DevOps & Cloud: Azure (preferred), AWS, Docker, Kubernetes, Terraform
CI/CD: GitHub Actions, Azure DevOps
Databases: PostgreSQL, MongoDB, Redis
Position Overview: The Lead Software Architect - Python & Data Engineering is a senior technical leadership role responsible for designing and owning end-to-end architecture for data-intensive, AI/ML, and analytics platforms, while mentoring developers and ensuring technical excellence across the organization.
Key Responsibilities:
- Design end-to-end software architecture for data-intensive applications, AI/ML pipelines, and analytics platforms
- Evaluate trade-offs between competing technical approaches
- Define data models, API approach, and integration patterns across systems
- Create technical specifications and architecture documentation
- Lead by example through production-grade Python code and mentor developers on engineering fundamentals
- Conduct design and code reviews focused on architectural soundness
- Establish engineering standards, coding practices, and design patterns for the team
- Translate business requirements into technical architecture
- Collaborate with data scientists, analysts, and other teams to design integrated solutions
- Whiteboard and defend system design and architectural choices
- Take responsibility for system performance, reliability, and maintainability
- Identify and resolve architectural bottlenecks proactively
Required Skills:
- 8+ years of experience in software architecture and development
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field
- Strong foundations in data structures, algorithms, and computational complexity
- Experience in system design for scale, including caching strategies, load balancing, and asynchronous processing
- 6+ years of Python development experience
- Deep knowledge of Django, Flask, or FastAPI
- Expert understanding of Python internals including GIL and memory management
- Experience with RESTful API design and event-driven architectures (Kafka, RabbitMQ)
- Proficiency in data processing frameworks such as Pandas, Apache Spark, and Airflow
- Strong SQL optimization and database design experience (PostgreSQL, MySQL, MongoDB)
- Experience with AWS, GCP, or Azure cloud platforms
- Knowledge of containerization (Docker) and orchestration (Kubernetes)
- Hands-on experience designing CI/CD pipelines
Preferred (Bonus) Skills:
- Experience deploying ML models to production (MLOps, model serving, monitoring)
- Understanding of ML system design including feature stores and model versioning
- Familiarity with ML frameworks such as scikit-learn, TensorFlow, and PyTorch
- Open-source contributions or technical blogging demonstrating architectural depth
- Experience with modern front-end frameworks for full-stack perspective
As a Data Quality Engineer at PalTech, you will be responsible for designing and executing comprehensive test strategies for end-to-end data validation. Your role will ensure data completeness, accuracy, and integrity across ETL processes, data warehouses, and reporting environments. You will automate data validation using Python, validate fact and dimension tables, large datasets, file ingestions, and data exports, while ensuring adherence to data security standards, including encryption and authorization. This role requires strong analytical abilities, proficiency in SQL and Python, and the capability to collaborate effectively with cross-functional teams to drive continuous improvements through automation and best practices.
Key Responsibilities
- Create test strategies, test plans, business scenarios, and data validation scripts for end-to-end data validation.
- Verify data completeness, accuracy, and integrity throughout ETL processes, data pipelines, and reports.
- Evaluate and monitor the performance of ETL jobs to ensure adherence to defined SLAs.
- Automate data testing processes using Python or other relevant technologies (see the reconciliation sketch after this list).
- Validate various types of fact and dimension tables within data warehouse environments.
- Apply strong data warehousing (DWH) skills to ensure accurate data modeling and validation.
- Validate large datasets and ensure accuracy across relational databases.
- Validate file ingestions and data exports across different data sources.
- Assess and validate implementation of data security standards (encryption, authorization, anonymization).
- Demonstrate proficiency in SQL, Python, and ETL/ELT validation techniques.
- Validate reports and dashboards built on Power BI, Tableau, or similar platforms.
- Write complex scripts to validate business logic and KPIs across datasets.
- Create test data as required based on business use cases and scenarios.
- Identify, validate, and test corner business cases and edge scenarios.
- Prepare comprehensive test documentation including test cases, test results, and test summary reports.
- Collaborate closely with developers, business analysts, data architects, and other stakeholders.
- Recommend enhancements and implement best practices to strengthen and streamline testing processes.
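As a hedged sketch of the automated validation described above, the following pandas-based reconciliation compares row counts, duplicate keys, missing keys, and null rates between a source and a target layer; the table shapes and the key column are illustrative.

```python
import pandas as pd

def reconcile(source: pd.DataFrame, target: pd.DataFrame, key: str) -> dict:
    """Compare row counts, duplicate keys, and null rates between layers."""
    return {
        "source_rows": len(source),
        "target_rows": len(target),
        "row_count_match": len(source) == len(target),
        "duplicate_keys_in_target": int(target[key].duplicated().sum()),
        "missing_keys": sorted(set(source[key]) - set(target[key])),
        "target_null_rates": target.isna().mean().round(4).to_dict(),
    }

# Illustrative source/target frames standing in for staging vs. warehouse.
src = pd.DataFrame({"order_id": [1, 2, 3], "amount": [10.0, 20.0, 30.0]})
tgt = pd.DataFrame({"order_id": [1, 2], "amount": [10.0, None]})
print(reconcile(src, tgt, key="order_id"))
# Flags the missing order_id 3 and the null amount in the target.
```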
Required Skills and Qualifications
- Education: Bachelor’s degree in Computer Science, Information Technology, or a related discipline.
- Technical Expertise: Strong understanding of ETL processes, data warehousing concepts, SQL, and Python.
- Experience: 4–6 years of experience in ETL testing, data validation, and report/dashboard validation; prior experience in automating data validation processes.
- Tools: Hands-on experience with ETL tools such as ADF, DBT, etc., defect tracking systems like JIRA, and reporting platforms such as Power BI or Tableau.
- Soft Skills: Excellent communication and teamwork abilities, with strong analytical and problem-solving skills.
Why Join PalTech?
- Great Place to Work Certified: We prioritize employee well-being and nurture an inclusive, collaborative environment where everyone can thrive.
- Competitive compensation, strong learning and professional growth opportunities.
As a Data Engineer at PalTech, you will design, develop, and maintain scalable and reliable data pipelines to ensure seamless data flow across systems. You will leverage SQL and leading ETL tools (such as Informatica, ADF, etc.) to support data integration and transformation needs. This role involves building and optimizing data warehouse architectures, performing performance tuning, and ensuring high levels of data quality, accuracy, and consistency throughout the data lifecycle.
You will collaborate closely with cross-functional teams to understand business requirements and translate them into effective data solutions. The ideal candidate should possess strong problem-solving skills, sound knowledge of data architecture principles, and a passion for building clean and efficient data systems.
Key Responsibilities
- Design, develop, and maintain ETL/ELT pipelines using SQL and tools such as Informatica, ADF, etc.
- Build and optimize data warehouse and data lake solutions for reporting, analytics, and operational usage.
- Apply strong understanding of data warehousing concepts to architect scalable data solutions.
- Handle large datasets and design effective load/update strategies.
- Collaborate with data analysts, business users, and data scientists to understand requirements and deliver scalable solutions.
- Implement data quality checks and validation frameworks to ensure data reliability and integrity.
- Perform SQL and ETL performance tuning and optimization.
- Work with structured and semi-structured data from various source systems.
- Monitor, troubleshoot, and resolve issues in data workflows.
- Maintain documentation for data pipelines, data flows, and data definitions.
- Follow best practices in data engineering including security, logging, and error handling.
Required Skills & Qualifications
Education:
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
Technical Skills:
- Strong proficiency in SQL and data manipulation.
- Hands-on experience with ETL tools (e.g., Informatica, Talend, ADF).
- Experience with cloud data warehouse platforms such as BigQuery, Redshift, or Snowflake.
- Strong understanding of data warehousing concepts and data modeling.
- Proficiency in Python or a similar programming language.
- Experience working with RDBMS platforms (e.g., SQL Server, Oracle).
- Familiarity with version control systems and job schedulers.
Experience:
- 4 to 8 years of relevant experience in data engineering and ETL development.
Soft Skills:
- Strong problem-solving skills.
- Excellent communication and collaboration abilities.
- Ability to work effectively in a cross-functional team environment.
A BIT ABOUT US
Appknox is one of the top mobile application security companies recognized by Gartner and G2, and a profitable B2B SaaS startup headquartered in Singapore and working from Bengaluru.
The primary goal of Appknox is to help businesses and mobile developers secure their mobile applications with a focus on delivery speed and high-quality security audits.
Appknox has helped secure mobile apps at Fortune 500 companies and major brands across India, South-East Asia, the Middle East, and the US, and is expanding rapidly. We have secured 300+ enterprises globally.
We are a 60+ strong, incredibly passionate team working to make an impact and help some of the biggest companies globally. We work in a highly collaborative, very fast-paced work environment. If you have what it takes to be part of the team, we'd be excited to speak further.
The Opportunity
Appknox AI is building next-generation AI-powered security analysis tools for mobile applications. We use multi-agent systems and large language models to automate complex security workflows that traditionally require manual expert analysis.
We're looking for an AI/ML Engineer who will focus on improving our AI system quality, optimizing prompts, and building evaluation frameworks. You'll work with our engineering team to make our AI systems more accurate, efficient, and reliable.
This is NOT a data scientist role. We need someone who builds production AI systems with LLMs and agent frameworks.
Key Focus:
Primary Focus: AI System Quality
- Prompt Engineering: Design and optimize prompts for complex reasoning tasks
- Quality Improvement: Reduce false positives and improve accuracy of AI-generated outputs
- Evaluation Frameworks: Build systems to measure and monitor AI quality metrics
- Tool Development: Create utilities and tools that enhance AI capabilities
Secondary Focus: Performance & Optimization
- Cost Optimization: Implement strategies to reduce LLM API costs (caching, batching, model selection)
- Metrics & Monitoring: Track system performance, latency, accuracy, and cost
- Research & Experimentation: Evaluate new models and approaches
- Documentation: Create best practices and guidelines for the team
Requirements:
- 2-4 years of professional software engineering experience with Python as primary language
- 1+ years working with LangChain, LangGraph, or similar agent frameworks (AutoGPT, CrewAI, etc.)
- Production LLM experience: You've shipped products using OpenAI, Anthropic, Google Gemini, or similar APIs
- Prompt engineering skills: You understand how to structure prompts for complex multi-step reasoning
- Strong Python: async/await, type hints, Pydantic, modern Python practices (see the sketch after this list)
- Problem-solving mindset: You debug systematically and iterate based on data
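To make the combination of prompt engineering and Pydantic concrete, here is a hedged sketch: a structured prompt asks for JSON, and Pydantic (v2) validates the reply. `call_llm` is a hypothetical stand-in for whatever provider client is used, and the schema is invented for illustration.

```python
from pydantic import BaseModel, ValidationError

class Finding(BaseModel):
    severity: str    # e.g. "high" | "medium" | "low"
    title: str
    rationale: str

# Doubled braces survive .format() as literal JSON braces.
PROMPT_TEMPLATE = """You are a mobile security analyst.
Classify the finding below and reply with JSON only, matching this schema:
{{"severity": "...", "title": "...", "rationale": "..."}}

Finding:
{finding}"""

def call_llm(prompt: str) -> str:  # hypothetical stand-in for a real client
    return ('{"severity": "high", "title": "Hardcoded API key", '
            '"rationale": "Key is recoverable from the APK."}')

def analyze(finding: str) -> Finding | None:
    raw = call_llm(PROMPT_TEMPLATE.format(finding=finding))
    try:
        return Finding.model_validate_json(raw)  # Pydantic v2 JSON parsing
    except ValidationError:
        return None  # candidate for a retry/repair loop

print(analyze("AWS key found in strings.xml"))
```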
Good to Have skill-set:
- Experience with vector search (LanceDB, Pinecone, Weaviate, Qdrant)
- Knowledge of retrieval-augmented generation (RAG) patterns
- Background in security or mobile application development
- Understanding of static/dynamic analysis tools
What We're NOT Looking For:
- Only academic/tutorial LLM experience (we need production systems)
- Pure ML research focus (we're not training foundation models)
- Data analyst/BI background without engineering depth
- No experience with LLM APIs or agent frameworks
Our Tech Stack:
AI/ML Infrastructure:
- Agent Frameworks: LangChain, LangGraph
- LLMs: Google Gemini (primary), with multi-model support
- Observability: Langfuse, DeepEval
- Vector Search: LanceDB, Tantivy
- Embeddings: Hybrid approach (local + cloud APIs)
Platform & Infrastructure:
- Orchestration: Prefect 3.x, Docker, Kubernetes
- Storage: S3-compatible object storage, PostgreSQL
- Languages: Python 3.11+
- Testing: pytest with parallel execution support
Work Expectations & Success Metrics:
Within 1 Month (Onboarding)
- Understand AI system architecture and workflows
- Review existing prompts and evaluation methods
- Run analyses and identify improvement areas
- Collaborate on initial optimizations
Within 3 Months (Initial Contributions)
- Own prompt engineering for specific components
- Build evaluation datasets and quality metrics
- Implement tools that extend AI capabilities
- Contribute to performance optimization experiments
Within 6 Months (Independent Ownership)
- Lead quality metrics implementation and monitoring
- Drive prompt optimization initiatives
- Improve evaluation frameworks
- Research and prototype new capabilities
Within 1 Year (Expanded Scope)
- Mentor team members on best practices
- Lead optimization projects (caching, batching, cost reduction)
- Influence architectural decisions
- Build reusable libraries and internal frameworks
Interview Process:
- Round 0 Interview - Profile Evaluation (15 min)
- Round 1 Interview - Take Home Assignment
- Round 2 Interview - Technical Deep-Dive (90 min)
- Round 3 Interview - Team Fit (45 min)
- Round 4 Interview - HR Round (30 min)
Why Join Appknox AI?
Impact & Growth
Work on cutting-edge AI agent systems that power real-world enterprise security. You’ll collaborate with experienced engineers across AI, security, and infrastructure while gaining deep expertise in LangGraph, agent systems, and prompt engineering. As we scale, you’ll have clear opportunities to grow into senior and staff-level roles.
Team & Culture
Join a small, focused product-engineering team that values code quality, collaboration, and knowledge sharing. We’re in a hybrid set-up - based out of Bengaluru, flexible, and committed to a sustainable pace - no crunch, no chaos.
Technology
Built with a modern Python stack (3.11+, async, type hints, Pydantic) and the latest AI/ML tools including LangChain, LangGraph, DeepEval, and Langfuse. Ship production-grade features that make a real impact for customers.
Compensation & Benefits:
Competitive Package
We offer strong compensation designed to reward impact.
Flexibility & Lifestyle
Enjoy a hybrid work setup and generous time off. You’ll get top-tier hardware and the tools you need to do your best work.
Learning & Development
Access a substantial learning budget, attend major AI/ML conferences, explore new approaches during dedicated research time, and share your knowledge with the team.
Health & Wellness
Comprehensive health coverage, fitness subscription, and family-friendly policies.
Early-Stage Advantages
Help shape the culture, influence product direction, and work directly with founders. Move fast, ship quickly, and see your impact immediately.
We are seeking a highly skilled Senior Data Engineer with expertise in Databricks, Python, Scala, Azure Synapse, and Azure Data Factory to join our data engineering team. The team is responsible for ingesting data from multiple sources, making it accessible to internal stakeholders, and enabling seamless data exchange across internal and external systems.
You will play a key role in enhancing and scaling our Enterprise Data Platform (EDP) hosted on Azure and built using modern technologies such as Databricks, Synapse, Azure Data Factory (ADF), ADLS Gen2, Azure DevOps, and CI/CD pipelines.
Responsibilities
- Design, develop, optimize, and maintain scalable data architectures and pipelines aligned with ETL principles and business goals.
- Collaborate across teams to build simple, functional, and scalable data solutions.
- Troubleshoot and resolve complex data issues to support business insights and organizational objectives.
- Build and maintain data products to support company-wide usage.
- Advise, mentor, and coach data and analytics professionals on standards and best practices.
- Promote reusability, scalability, operational efficiency, and knowledge-sharing within the team.
- Develop comprehensive documentation for data engineering standards, processes, and capabilities.
- Participate in design and code reviews.
- Partner with business analysts and solution architects on enterprise-level technical architectures.
- Write high-quality, efficient, and maintainable code.
Technical Qualifications
- 5–8 years of progressive data engineering experience.
- Strong expertise in Databricks, Python, Scala, and Microsoft Azure services including Synapse & Azure Data Factory (ADF).
- Hands-on experience with data pipelines across multiple source & target systems (Databricks, Synapse, SQL Server, Data Lake, SQL/NoSQL sources, and file-based systems).
- Experience with design patterns, code refactoring, CI/CD, and building scalable data applications.
- Experience developing batch ETL pipelines; real-time streaming experience is a plus.
- Solid understanding of data warehousing, ETL, dimensional modeling, data governance, and handling both structured and unstructured data.
- Deep understanding of Synapse and SQL Server, including T-SQL and stored procedures.
- Proven experience working effectively with cross-functional teams in dynamic environments.
- Experience extracting, processing, and analyzing large / complex datasets.
- Strong background in root cause analysis for data and process issues.
- Advanced SQL proficiency and working knowledge of a variety of database technologies.
- Knowledge of Boomi is an added advantage.
Core Skills & Competencies
- Excellent analytical and problem-solving abilities.
- Strong communication and cross-team collaboration skills.
- Self-driven with the ability to make decisions independently.
- Innovative mindset and passion for building quality data solutions.
- Ability to understand operational systems, identify gaps, and propose improvements.
- Experience with large-scale data ingestion and engineering.
- Knowledge of CI/CD pipelines (preferred).
- Understanding of Python and parallel processing frameworks (MapReduce, Spark, Scala).
- Familiarity with Agile development methodologies.
Education
- Bachelor’s degree in Computer Science, Information Technology, MIS, or an equivalent field.
As a Data Engineer, you will be an integral part of our team, working on data pipelines, data warehousing, and data integration for various analytics and AI use cases. You will collaborate closely with Delivery Managers, ML Engineers and other stakeholders to ensure seamless data flow and accessibility. Your expertise will be crucial in enabling data-driven decision-making for our clients. To thrive in this role, you need to be a quick learner, get excited about innovation and be on the constant lookout to master new technologies as they come up in the Data, AI & Cloud teams.
Key Responsibilities
- Design, develop, and maintain scalable data pipelines and ETL processes to support downstream analytics and AI applications (a minimal sketch follows this list).
- Collaborate with ML Engineers to integrate data solutions into machine learning models and workflows.
- Work closely with clients to understand their data requirements and deliver tailored data solutions.
- Ensure data quality, integrity, and security across all projects.
- Optimize and manage data storage solutions in cloud environments (AWS, Azure, GCP).
- Utilize Databricks for data processing and analytics tasks, leveraging its capabilities to enhance data workflows.
- Monitor the performance of data pipelines, identify bottlenecks or failures, and implement improvements to enhance efficiency and reliability.
- Implement best practices for data engineering, including documentation, testing, and version control.
- Troubleshoot and resolve data-related issues in a timely manner.
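A minimal sketch of one such batch pipeline step on Databricks-style infrastructure, using PySpark to deduplicate, cast, filter, and write partitioned Parquet; the S3 paths and column names are placeholders.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Read raw CSV (assumed columns: order_id, amount, order_ts).
orders = spark.read.option("header", True).csv("s3://bucket/raw/orders/")

clean = (
    orders
    .dropDuplicates(["order_id"])                       # dedupe on key
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount") > 0)                        # basic quality gate
    .withColumn("order_date", F.to_date("order_ts"))
)

(clean.write
      .mode("overwrite")
      .partitionBy("order_date")                        # partitioned layout
      .parquet("s3://bucket/curated/orders/"))
```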
Qualifications
- Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
- 3 to 5 years of experience as a Data Engineer or in a similar role.
- Strong proficiency in SQL, Python, and other relevant programming languages.
- Hands-on experience with Databricks and its ecosystem.
- Familiarity with major cloud environments (AWS, Azure, GCP) and their data services.
- Experience with data warehousing solutions like Snowflake, Redshift, or BigQuery.
- Comfortable working with a variety of SQL, NoSQL, and graph databases, such as PostgreSQL and MongoDB.
- Knowledge of data integration tools.
- Understanding of data modelling, data architecture, and database design.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills.
Highly Desirable Skills
- Experience with real-time data processing frameworks (e.g., Apache Kafka, Spark Streaming).
- Knowledge of data visualisation tools (e.g., Tableau, Power BI).
- Familiarity with machine learning concepts and frameworks.
- Experience working in a client-facing role.
Why Middleware? 💡
Sick of the endless waiting?
Waiting on code reviews, QA feedback, or that "quick call"?
At Middleware, we’re all about freeing up engineers like you to do what you love—build.
We’ve created a cockpit that gives engineering leaders the insights they need to unblock teams, cut bottlenecks, and let engineers focus on impact.
What You’ll Do 🎨
- Own the Product: Shape a product that engineers rely on daily.
- Build Stunning UIs: Craft data-rich, intuitive designs that solve real problems.
- Shape Middleware’s Architecture: Make our systems robust, seamless, and introduce mechanisms that allow high visibility into our automated pipelines.
What We’re Looking For 🔍
- React + Typescript: You know your way around these tools and have launched awesome projects.
- Python + Postgres: You've built complete backend systems, not just basic CRUD apps.
- Passionate Builder: Hungry to grow, build, and make an impact.
Bonus Points ⭐️
- Eye for Design: You have a sense for clean, user-friendly visuals.
- Understanding of distributed systems: Not everything runs on a single machine, and you know how to make things work across a lot of those.
- DSA Know-how: Familiarity with data structures (graphs, linked lists, etc.) because our product (even frontend) actually uses DSA concepts.
Why You'll Love Working with Us ❤️
We’re engineers at heart.
Middleware was founded by ex-Uber and Maersk engineers who know what it’s like to be stuck in meeting loops and endless waiting. If you're here to build, to make things happen, and to change the game for engineering teams everywhere, let’s chat!
Ready to jump in? Explore Middleware (https://www.middlewarehq.com/) or check out our demo (https://demo.middlewarehq.com/).
About the Role
We are seeking a highly skilled Tech Lead who can drive the development of high-performance, scalable products while contributing hands-on to coding, architecture, and system design. You will work closely with a small engineering team, guiding them in technology decisions, quality delivery, and best practices while also taking ownership of key modules yourself.
Key Responsibilities
- Lead end-to-end product development from design to deployment.
- Act as both a technical contributor and mentor within a small, agile team.
- Architect scalable, robust, and secure backend and frontend systems.
- Participate in and guide:
- System design & architecture decisions
- Algorithmic analysis and performance optimization
- Database modeling and API design
- Write clean, maintainable, high-quality code.
- Own deployment pipelines and ensure reliable production releases.
- Collaborate with cross-functional stakeholders to translate requirements into actionable technical plans.
- Conduct code reviews, enforce coding standards, and ensure engineering excellence.
- Troubleshoot and solve complex technical challenges in production environments.
Required Technical Skills
- Strong expertise in Node.js and/or Python.
- Hands-on experience with TypeScript and React.js for frontend development.
- Knowledge of Go (Golang) is a strong advantage.
- Deep understanding of:
- Data Structures & Algorithms
- System Design
- Distributed systems concepts
- Microservices architecture
- Experience with deployment using:
- Docker, Kubernetes, or similar orchestration tools
- CI/CD pipelines
- Cloud platforms (AWS / GCP / Azure)
- Strong understanding of databases (SQL and NoSQL).
Soft Skills
- Strong leadership and ability to guide a small development team.
- Clear communication with technical and non-technical stakeholders.
- Ownership mindset with focus on delivering quality, on time.
- Problem-solver with the ability to make quick and correct technical decisions.
Preferred Qualifications
- 4–10 years of experience in software development.
- Prior experience in a startup or fast-moving product environment.
- Experience deploying and managing real-world products at scale.
What We Offer
- Opportunity to build and influence real products end-to-end.
- A collaborative, high-ownership environment.
- Freedom to experiment and drive product and engineering decisions.
About Us:
Tradelab Technologies Pvt Ltd is not for those seeking comfort—we are for those hungry to make a mark in the trading and fintech industry.
Key Responsibilities
CI/CD and Infrastructure Automation
- Design, implement, and maintain CI/CD pipelines to support fast and reliable releases
- Automate deployments using tools such as Terraform, Helm, and Kubernetes
- Improve build and release processes to support high-performance and low-latency trading applications
- Work efficiently with Linux/Unix environments
Cloud and On-Prem Infrastructure Management
- Deploy, manage, and optimize infrastructure on AWS, GCP, and on-premises environments
- Ensure system reliability, scalability, and high availability
- Implement Infrastructure as Code (IaC) to standardize and streamline deployments
Performance Monitoring and Optimization
- Monitor system performance and latency using Prometheus, Grafana, and ELK stack
- Implement proactive alerting and fault detection to ensure system stability
- Troubleshoot and optimize system components for maximum efficiency
Security and Compliance
- Apply DevSecOps principles to ensure secure deployment and access management
- Maintain compliance with financial industry regulations such as SEBI
- Conduct vulnerability assessments and maintain logging and audit controls
Required Skills and Qualifications
- 2+ years of experience as a DevOps Engineer in a software or trading environment
- Strong expertise in CI/CD tools (Jenkins, GitLab CI/CD, ArgoCD)
- Proficiency in cloud platforms such as AWS and GCP
- Hands-on experience with Docker and Kubernetes
- Experience with Terraform or CloudFormation for IaC
- Strong Linux administration and networking fundamentals (TCP/IP, DNS, firewalls)
- Familiarity with Prometheus, Grafana, and ELK stack
- Proficiency in scripting using Python, Bash, or Go
- Solid understanding of security best practices including IAM, encryption, and network policies
Good to Have (Optional)
- Experience with low-latency trading infrastructure or real-time market data systems
- Knowledge of high-frequency trading environments
- Exposure to FIX protocol, FPGA, or network optimization techniques
- Familiarity with Redis or Nginx for real-time data handling
Why Join Us?
- Work with a team that expects and delivers excellence.
- A culture where risk-taking is rewarded, and complacency is not.
- Limitless opportunities for growth—if you can handle the pace.
- A place where learning is currency, and outperformance is the only metric that matters.
- The opportunity to build systems that move markets, execute trades in microseconds, and redefine fintech.
This isn’t just a job—it’s a proving ground. Ready to take the leap? Apply now.
Job Position: Lead QA Automation Engineer
Desired Skills: Selenium, Java, Python, Software Testing (QA), Test Automation (QA), Bug Tracking, Functional Testing, Selenium WebDriver
Experience Range: 5 - 7 years
Type: Full-Time, Permanent
Location: India (Remote)
Job Description:
We are looking for a QA Automation Engineer who has the following expertise -
- Experienced in building automation programs from the ground up
- Must have hands-on automation experience in Java/Python + Selenium for testing web applications (see the sketch after this list)
- Strong Functional Testing fundamentals
- Proficient in bug tracking and test management toolsets (JIRA, Git, etc.)
- Hands-on experience in the Karate/Cucumber frameworks; exposure to Gherkin test scripts
- Strong hands-on experience in automation using Java/Python and Selenium
- Must have experience designing and implementing test frameworks (data-driven, keyword-driven, or hybrid, with custom reporting) and defining the strategy for choosing automated testing tools and creating testing standards
- Must have experience in automating different layers (Front end, backend, web-services) of application using different automation approaches.
- Good to have exposure in SQL for database driven testing
- Strong experience in Test Strategy, test plan, Test design, test execution traceability matrix, test report and ensure usage of tools for optimization
- Team player - highly proactive team player eager to support your colleagues when needed
- Team Lead - should be able to review deliverables for quality checks and design test cases using the DRY principle.
- You are prepared to take on responsibility for tasks and work independently.
- Collaborate with cross-functional teams, including developers, product managers, and other QA engineers, to define test requirements and ensure high-quality deliverables.
- You demonstrate excellent organizational and time management skills
- Knowledge or basic experience of Agile development processes (SCRUM) is a great plus
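For reference, a minimal Python + Selenium example of the kind of web test this role automates, using an explicit wait rather than sleeps; the URL, locators, and credentials are placeholders.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Chrome()  # Selenium 4 resolves the driver automatically
try:
    driver.get("https://example.com/login")  # placeholder URL
    driver.find_element(By.ID, "username").send_keys("qa_user")
    driver.find_element(By.ID, "password").send_keys("not-a-real-password")
    driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()

    # Explicit wait: poll until the dashboard element is visible.
    WebDriverWait(driver, 10).until(
        EC.visibility_of_element_located((By.CSS_SELECTOR, ".dashboard"))
    )
    assert "Dashboard" in driver.title
finally:
    driver.quit()
```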
Skills and Experience:
- Good academics
- Good communication skills
- Advanced troubleshooting skills
Ready and immediately available candidates will be preferred.
Job Description:
We are looking for a skilled Backend Developer with 2–5 years of experience in software development, specializing in Python and/or Golang. If you have strong programming skills, enjoy solving problems, and want to work on secure and scalable systems, we'd love to hear from you!
Location - Pune, Baner.
Interview Rounds - In Office
Key Responsibilities:
Design, build, and maintain efficient, reusable, and reliable backend services using Python and/or Golang
Develop and maintain clean and scalable code following best practices
Apply Object-Oriented Programming (OOP) concepts in real-world development
Collaborate with front-end developers, QA, and other team members to deliver high-quality features
Debug, optimize, and improve existing systems and codebase
Participate in code reviews and team discussions
Work in an Agile/Scrum development environment
Required Skills:
Strong experience in Python or Golang (working knowledge of both is a plus)
Good understanding of OOP principles
Familiarity with RESTful APIs and back-end frameworks
Experience with databases (SQL or NoSQL)
Excellent problem-solving and debugging skills
Strong communication and teamwork abilities
Good to Have:
Prior experience in the security industry
Familiarity with cloud platforms like AWS, Azure, or GCP
Knowledge of Docker, Kubernetes, or CI/CD tools
Analyse data requirements and develop data pipelines in IBM DataStage (ETL) to support reporting and analytics.
Should have good knowledge of Unix/Linux shell scripting.
Should have experience in fine-tuning processes and troubleshooting performance issues.
Experience with source code control systems like Git and GitHub.
Experience in deploying and operationalizing code; knowledge of scheduling tools like Control-M is preferred.
Experience with CI/CD tools like Jenkins.
CoinFantasy is looking for an experienced Senior AI Architect to lead both the decentralised protocol development and the design of AI-driven applications on this network. As a visionary in AI and distributed computing, you will play a central role in shaping the protocol’s technical direction, enabling efficient task distribution, and scaling AI use cases across a heterogeneous, decentralised infrastructure.
Job Responsibilities
- Architect and oversee the protocol’s development, focusing on dynamic node orchestration, layer-wise model sharding, and secure, P2P network communication.
- Drive the end-to-end creation of AI applications, ensuring they are optimised for decentralised deployment and include use cases with autonomous agent workflows.
- Architect AI systems capable of running on decentralised networks, ensuring they balance speed, scalability, and resource usage.
- Design data pipelines and governance strategies for securely handling large-scale, decentralised datasets.
- Implement and refine strategies for swarm intelligence-based task distribution and resource allocation across nodes. Identify and incorporate trends in decentralised AI, such as federated learning and swarm intelligence, relevant to various industry applications.
- Lead cross-functional teams in delivering full-precision computing and building a secure, robust decentralised network.
- Represent the organisation’s technical direction, serving as the face of the company at industry events and client meetings.
Requirements
- Bachelor’s/Master’s/Ph.D. in Computer Science, AI, or related field.
- 12+ years of experience in AI/ML, with a track record of building distributed systems and AI solutions at scale.
- Strong proficiency in Python, Golang, and machine learning frameworks (e.g., TensorFlow, PyTorch).
- Expertise in decentralised architecture, P2P networking, and heterogeneous computing environments.
- Excellent leadership skills, with experience in cross-functional team management and strategic decision-making.
- Strong communication skills, adept at presenting complex technical solutions to diverse audiences.
About Us
CoinFantasy is a Play to Invest platform that brings the world of investment to users through engaging games. With multiple categories of games, it aims to make investing fun, intuitive, and enjoyable for users. It features a sandbox environment in which users are exposed to the end-to-end investment journey without risking financial losses.
Building on this foundation, we are now developing a groundbreaking decentralised protocol that will transform the AI landscape.
Website:
Benefits
- Competitive Salary
- An opportunity to be part of the Core team in a fast-growing company
- A fulfilling, challenging and flexible work experience
- Practically unlimited professional and career growth opportunities
The ideal candidate will play a key role in designing, implementing, and maintaining cloud infrastructure and CI/CD pipelines to support scalable, secure, and high-performance data and analytics solutions. This role requires strong expertise in Azure, Databricks, and cloud-native DevOps practices.
Key Responsibilities:
1. Cloud Infrastructure Design & Management
- Architect, deploy, and manage scalable and secure cloud infrastructure on Microsoft Azure.
- Implement best practices across resource groups, virtual networks, storage accounts, etc.
- Ensure cost optimization, high availability, and disaster recovery for business-critical systems.
2. Databricks Platform Management
- Set up, configure, and maintain Databricks workspaces for data engineering, ML, and analytics workloads.
- Automate cluster management, job scheduling, and performance monitoring.
- Integrate Databricks seamlessly with Azure data and analytics services.
3. CI/CD Pipeline Development
- Design and implement CI/CD pipelines for infrastructure, applications, and data workflows.
- Work with Azure DevOps / GitHub Actions (or similar) for automated testing and deployments.
- Drive continuous delivery, versioning, and monitoring best practices.
4. Monitoring & Incident Management
- Implement monitoring and alerting with Dynatrace, Azure Monitor, Log Analytics, and Databricks metrics.
- Diagnose and resolve issues to ensure minimal downtime and smooth operations.
5. Security & Compliance
- Enforce IAM, encryption, network security, and secure development practices.
- Ensure compliance with organizational and regulatory cloud standards.
6. Collaboration & Documentation
- Work closely with data engineers, software developers, architects, and business teams to align infrastructure with business goals.
- Maintain thorough documentation for infrastructure, processes, and configurations.
Required Qualifications
- Bachelor’s degree in Computer Science, Engineering, or a related field.
Must-Have Experience
- 6+ years in DevOps / Cloud Engineering roles.
- Proven expertise in:
- Microsoft Azure (Azure Data Lake, Databricks, ADF, Azure Functions, AKS, Azure AD)
- Databricks for data engineering / analytics workloads.
- Strong experience applying DevOps practices to cloud-based data and analytics platforms.
Technical Skills
- Infrastructure as Code (Terraform, ARM, Bicep).
- Scripting (Python / Bash).
- Containerization & orchestration (Docker, Kubernetes).
- CI/CD & version control (Git, Azure DevOps, GitHub Actions).
Soft Skills
- Strong analytical and problem-solving mindset.
- Excellent communication and collaboration abilities.
- Ability to operate in cross-functional and fast-paced environments.
Responsibilities:
Build and optimize batch and streaming data pipelines using Apache Beam (Dataflow); a minimal batch example follows this list
Design and maintain BigQuery datasets using best practices in partitioning, clustering, and materialized views
Develop and manage Airflow DAGs in Cloud Composer for workflow orchestration
Implement SQL-based transformations using Dataform (or dbt)
Leverage Pub/Sub for event-driven ingestion and Cloud Storage for raw/lake layer data architecture
Drive engineering best practices across CI/CD, testing, monitoring, and pipeline observability
Partner with solution architects and product teams to translate data requirements into technical designs
Mentor junior data engineers and support knowledge-sharing across the team
Contribute to documentation, code reviews, sprint planning, and agile ceremonies
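As a rough illustration of the first responsibility, here is a hedged sketch of a small batch pipeline in Apache Beam's Python SDK; the bucket paths and the "user_id" event field are hypothetical, and a real Dataflow job would pass `--runner=DataflowRunner` plus project options.

```python
# Illustrative Apache Beam batch pipeline: count events per user.
# Paths and the event schema are hypothetical.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def run() -> None:
    options = PipelineOptions()  # add --runner=DataflowRunner etc. for Dataflow
    with beam.Pipeline(options=options) as p:
        (
            p
            | "Read" >> beam.io.ReadFromText("gs://my-bucket/events/*.json")
            | "Parse" >> beam.Map(json.loads)
            | "KeyByUser" >> beam.Map(lambda e: (e["user_id"], 1))
            | "CountPerUser" >> beam.CombinePerKey(sum)
            | "Format" >> beam.MapTuple(lambda user, n: f"{user},{n}")
            | "Write" >> beam.io.WriteToText("gs://my-bucket/output/user_counts")
        )

if __name__ == "__main__":
    run()
```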
Requirements
2+ years of hands-on experience in data engineering, with at least 2 years on GCP
Proven expertise in BigQuery, Dataflow (Apache Beam), Cloud Composer (Airflow)
Strong programming skills in Python and/or Java
Experience with SQL optimization, data modeling, and pipeline orchestration
Familiarity with Git, CI/CD pipelines, and data quality monitoring frameworks
Exposure to Dataform, dbt, or similar tools for ELT workflows
Solid understanding of data architecture, schema design, and performance tuning
Excellent problem-solving and collaboration skills
Bonus Skills:
GCP Professional Data Engineer certification
Experience with Vertex AI, Cloud Functions, Dataproc, or real-time streaming architectures
Familiarity with data governance tools (e.g., Atlan, Collibra, Dataplex)
Exposure to Docker/Kubernetes, API integration, and infrastructure-as-code (Terraform)
Experience: 3 to 5 Years
Location: Ahmedabad (WFO)
What We Are Looking For?
- MERN Stack Developer with 3–5 years of experience
- Strong in Node.js and React.js
- Good experience with MongoDB
- Minimum 1 year hands-on experience in Python
- Good understanding of REST APIs
- Able to write clean and reusable code
- Good communication skills
What You Will Do at Digicorp?
- Develop and maintain applications using Node.js, React.js, and MongoDB
- Work on Python-based tasks or modules whenever required
- Collaborate with the team to deliver high-quality features
- Fix bugs and improve overall application performance
- Share ideas, participate in discussions, and follow best coding practices
At BigThinkCode, our technology solves complex problems. We are looking for a talented engineer to join our technology team at Chennai.
This is an opportunity to join a growing team and make a substantial impact at BigThinkCode. We have a challenging workplace where we welcome innovative ideas and talent, offer growth opportunities, and foster a positive environment.
The job description is below for your reference; if interested, please share your profile to connect and discuss.
Company: BigThinkCode Technologies
URL: https://www.bigthinkcode.com/
Job Role: Software Engineer / Senior Software Engineer
Experience: 2 - 5 years
Work location: Chennai
Work Mode: Hybrid
Joining time: Immediate – 4 weeks
Responsibilities
- Build and enhance backend features as part of the tech team.
- Take ownership of features end-to-end in a fast-paced product/startup environment.
- Collaborate with managers, designers, and engineers to deliver user-facing functionality.
- Design and implement scalable REST APIs and supporting backend systems.
- Write clean, reusable, well-tested code and contribute to internal libraries.
- Participate in requirement discussions and translate business needs into technical tasks.
- Support the technical roadmap through architectural input and continuous improvement.
Required Skills:
- Strong understanding of Algorithms, Data Structures, and OOP principles.
- Integrate with third-party systems (payment/SMS APIs, mapping services, etc.).
- Proficiency in Python and experience with at least one framework (Flask / Django / FastAPI).
- Hands-on experience with design patterns, debugging, and unit testing (pytest/unittest).
- Working knowledge of relational or NoSQL databases and ORMs (SQLAlchemy / Django ORM).
- Familiarity with asynchronous programming (async/await, FastAPI async).
- Experience with caching mechanisms (Redis).
- Ability to perform code reviews and maintain code quality.
- Exposure to cloud platforms (AWS/Azure/GCP) and containerization (Docker).
- Experience with CI/CD pipelines.
- Basic understanding of message brokers (RabbitMQ / Kafka / Redis streams).
Benefits:
· Medical cover for employee and eligible dependents.
· Tax beneficial salary structure.
· Comprehensive leave policy
· Competency development training programs.
Job Description: Python Engineer
Role Summary
We are looking for a talented Python Engineer to design, develop, and maintain high-quality backend applications and automation solutions. The ideal candidate should have strong programming skills, familiarity with modern development practices, and the ability to work in a fast-paced, collaborative environment.
Key Responsibilities:
Python Development & Automation
- Design, develop, and maintain Python scripts, tools, and automation frameworks.
- Build automation for operational tasks such as deployment, monitoring, system checks, and maintenance (a small sketch follows this subsection).
- Write clean, modular, and well-documented Python code following best practices.
- Develop APIs, CLI tools, or microservices when required.
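As a flavour of the automation described above, here is a small, self-contained sketch; the thresholds and service name are assumptions, not part of the posting:

```python
# Hypothetical ops check: disk usage and a systemd service's status.
import logging
import shutil
import subprocess

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")

def disk_usage_ok(path: str = "/", threshold: float = 0.9) -> bool:
    usage = shutil.disk_usage(path)
    return usage.used / usage.total < threshold  # True while below 90% full

def service_active(name: str) -> bool:
    # `systemctl is-active --quiet` exits 0 when the unit is active
    return subprocess.run(["systemctl", "is-active", "--quiet", name]).returncode == 0

if __name__ == "__main__":
    logging.info("disk ok: %s", disk_usage_ok())
    logging.info("nginx active: %s", service_active("nginx"))  # hypothetical unit
```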
Linux Systems Engineering
- Manage, configure, and troubleshoot Linux environments (RHEL, CentOS, Ubuntu).
- Perform system performance tuning, log analysis, and root-cause diagnostics.
- Work with system services, processes, networking, file systems, and security controls.
- Implement shell scripting (bash) alongside Python for system-level automation.
CI/CD & Infrastructure Support
- Support integration of Python automation into CI/CD pipelines (Jenkins).
- Participate in build and release processes for infrastructure components.
- Ensure automation aligns with established infrastructure standards and governance.
- Use Bash scripting together with Python to improve automation efficiency.
Cloud & DevOps Collaboration (if applicable)
- Collaborate with Cloud/DevOps engineers on automation for AWS or other cloud platforms.
- Integrate Python tools with configuration management tools such as Chef or Ansible, or with Terraform modules.
- Contribute to containerization efforts (Docker, Kubernetes) leveraging Python automation.
Strong Software Engineer, Engineering Manager Profiles
Mandatory (Experience) – Must have minimum 4+ years of hands on experience in software development
Mandatory (Tech Skillset) – Must have 3+ years of hands-on experience in backend development using Java / Node.js / Go / Python (any 1) OR 3+ YOE in frontend Development using React / Angular / Vue (any 1)
Mandatory (Mentoring Skillset) – Must have at least 1+ year of experience leading or mentoring a team of software engineers.
Mandatory (Tech Skills) – Must have a solid understanding of microservices architecture, APIs, and cloud platforms (AWS / GCP / Azure).
Mandatory (DevOps Skillset) – Must have hands-on experience working with Docker, Kubernetes, and CI/CD pipelines.
Mandatory (Company) – Top-tier/renowned product-based company (preferred Enterprise B2B SaaS)
Mandatory (Education) – Undergraduate degree from a Tier-1 engineering college (IIT, BITS, IIIT, NSUT, DTU, etc.)
Mandatory (Note): No hire policy from Sprinklr
Role & Responsibilities:
We are seeking a Software Developer with 5–10 years' experience and strong foundations in Python, databases, and AI technologies.
The ideal candidate will support the development of AI-powered solutions, focusing on LLM integration, prompt engineering, and database-driven workflows.
This is a hands-on role with opportunities to learn and grow into advanced AI engineering responsibilities.
Key Responsibilities:
• Develop, test, and maintain Python-based applications and APIs.
• Design and optimize prompts for Large Language Models (LLMs) to improve accuracy and performance.
• Work with JSON-based data structures for request/response handling.
• Integrate and manage PostgreSQL (pgSQL) databases, including writing queries and handling data pipelines (a brief sketch follows this list).
• Collaborate with the product and AI teams to implement new features.
• Debug, troubleshoot, and optimize performance of applications and workflows.
• Stay updated on advancements in LLMs, AI frameworks, and generative AI tools.
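To make the JSON-plus-PostgreSQL bullet concrete, a minimal sketch follows; the table, columns, and connection details are hypothetical, and psycopg2 is one of several reasonable drivers:

```python
# Hedged sketch: parse a JSON LLM response and persist it to PostgreSQL.
import json

import psycopg2  # assumes a reachable PostgreSQL instance and psycopg2 installed

def store_llm_response(raw: str) -> None:
    payload = json.loads(raw)  # JSON-based request/response handling
    conn = psycopg2.connect(
        dbname="appdb", user="app", password="secret", host="localhost"  # placeholders
    )
    try:
        with conn, conn.cursor() as cur:  # commits on success, rolls back on error
            cur.execute(
                "INSERT INTO llm_responses (prompt, answer) VALUES (%s, %s)",  # hypothetical table
                (payload["prompt"], payload["answer"]),
            )
    finally:
        conn.close()

store_llm_response('{"prompt": "Summarise the ticket", "answer": "..."}')
```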
Required Skills & Qualifications
• Strong knowledge of Python (scripting, APIs, data handling).
• Basic understanding of Large Language Models (LLMs) and prompt engineering techniques.
• Experience with JSON data parsing and transformations.
• Familiarity with PostgreSQL or other relational databases.
• Ability to write clean, maintainable, and well-documented code.
• Strong problem-solving skills and eagerness to learn.
• Bachelor’s degree in Computer Science, Engineering, or related field (or equivalent practical experience).
Nice-to-Have (Preferred)
• Exposure to AI/ML frameworks (e.g., LangChain, Hugging Face, OpenAI APIs).
• Experience working in startups or fast-paced environments.
• Familiarity with version control (Git/GitHub) and cloud platforms (AWS, GCP, or Azure).
What We Offer
• The opportunity to define the future of GovTech through AI-powered solutions.
• A strategic leadership role in a fast-scaling startup with direct impact on product direction and market success.
• Collaborative and innovative environment with cross-functional exposure.
• Growth opportunities backed by a strong leadership team.
• Remote flexibility and work-life balance.
🚀 Join GuppShupp: Build Bharat's First AI Lifelong Friend
GuppShupp's mission is nothing short of building Bharat's First AI Lifelong Friend. This is more than just a chatbot—it's about creating a truly personalized, consistently available companion that understands and grows with the user over a lifetime. We are pioneering this deeply personal experience using cutting-edge Generative AI.
We're hiring a Founding AI Engineer (1+ Year Experience) to join our small team of A+ builders and craft the foundational LLM and infrastructure behind this mission.
If you are passionate about:
- Deep personalization and managing complex user state/memory.
- Building high-quality, high-throughput AI tools.
- Next-level infrastructure at an incredible scale (millions of users).
What you'll do (responsibilities)
We're looking for an experienced individual contributor who enjoys working alongside other experienced engineers and iterating on AI.
Prompt Engineering & Testing
- Write, test, and iterate numerous prompt variations.
- Identify and fix failures, biases, or edge cases in AI responses.
Advanced LLM Development
- Engineer solutions for long-term conversational memory and statefulness in LLMs.
- Implement techniques (e.g., retrieval-augmented generation (RAG) or summarization) to effectively manage and extend the context window for complex tasks (a toy retrieval sketch follows this section).
Collaboration & Optimization
- Work with product and growth teams to turn feature goals into effective technical prompts.
- Optimize prompts for diverse use cases (e.g., chat, content, personalization).
LLM Fine-Tuning & Management
- Prepare, clean, and format datasets for training.
- Run fine-tuning jobs on smaller, specialized language models.
- Assist in deploying, monitoring, and maintaining these models.
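To illustrate the retrieval idea mentioned above, here is a toy, self-contained sketch; the stand-in `embed` function is a placeholder for a real embedding model, and the stored memories are invented:

```python
# Toy retrieval over conversational "memories" using cosine similarity.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Stand-in embedding; a production system would call an embedding model.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(64)
    return v / np.linalg.norm(v)

memories = ["User likes cricket", "User lives in Pune", "User prefers Hindi"]
index = np.stack([embed(m) for m in memories])

def retrieve(query: str, k: int = 2) -> list[str]:
    scores = index @ embed(query)          # cosine similarity of unit vectors
    top = np.argsort(scores)[::-1][:k]     # indices of the k best matches
    return [memories[i] for i in top]

print(retrieve("Which language should I reply in?"))
```

A real pipeline would prepend the retrieved memories to the prompt, keeping the context window small while preserving long-term state.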
What we're looking for (qualifications)
You are an AI Engineer who has successfully shipped systems in this domain for over a year—you won't need ramp-up time. We prioritize continuous learning and hands-on skill development over formal qualifications. Crucially, we are looking for a teammate driven by a sense of duty to the user and a passion for taking full ownership of their contributions.
Role Summary
Our CloudOps/DevOps teams are distributed across India, Canada, and Israel.
As a Manager, you will lead teams of Engineers and champion configuration management, cloud technologies, and continuous improvement. The role involves close collaboration with global leaders to ensure our applications, infrastructure, and processes remain scalable, secure, and supportable. You will work closely with Engineers across Dev, DevOps, and DBOps to design and implement solutions that improve customer value, reduce costs, and eliminate toil.
Key Responsibilities
- Guide the professional development of Engineers and support teams in meeting business objectives
- Collaborate with leaders in Israel on priorities, architecture, delivery, and product management
- Build secure, scalable, and self-healing systems
- Manage and optimize deployment pipelines
- Triage and remediate production issues
- Participate in on-call escalations
Key Qualifications
- Bachelor’s in CS or equivalent experience
- 3+ years managing Engineering teams
- 8+ years as a Site Reliability or Platform Engineer
- 5+ years administering Linux and Windows environments
- 3+ years programming/scripting (Python, JavaScript, PowerShell)
- Strong experience with OS internals, virtualization, storage, networking, and firewalls
- Experience maintaining On-Prem (90%) and Cloud (10%) environments (AWS, GCP, Azure)
Role: Azure AI Tech Lead
Experience: 3.5–7 Years
Location: Remote / Noida (NCR)
Notice Period: Immediate to 15 days
Mandatory Skills: Python, Azure AI/ML, PyTorch, TensorFlow, JAX, HuggingFace, LangChain, Kubeflow, MLflow, LLMs, RAG, MLOps, Docker, Kubernetes, Generative AI, Model Deployment, Prometheus, Grafana
JOB DESCRIPTION
As the Azure AI Tech Lead, you will serve as the principal technical expert leading the design, development, and deployment of advanced AI and ML solutions on the Microsoft Azure platform. You will guide a team of engineers, establish robust architectures, and drive end-to-end implementation of AI projects—transforming proof-of-concepts into scalable, production-ready systems.
Key Responsibilities:
- Lead architectural design and development of AI/ML solutions using Azure AI, Azure OpenAI, and Cognitive Services.
- Develop and deploy scalable AI systems with best practices in MLOps across the full model lifecycle (a minimal tracking sketch follows this list).
- Mentor and upskill AI/ML engineers through technical reviews, training, and guidance.
- Implement advanced generative AI techniques including LLM fine-tuning, RAG systems, and diffusion models.
- Collaborate cross-functionally to translate business goals into innovative AI solutions.
- Enforce governance, responsible AI practices, and performance optimization standards.
- Stay ahead of trends in LLMs, agentic AI, and applied research to shape next-gen solutions.
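As a small illustration of the MLOps bullet, a minimal MLflow tracking sketch follows; the experiment name, parameters, and metric values are invented:

```python
# Hedged MLflow sketch: log one run's parameters and metrics.
import mlflow

mlflow.set_experiment("azure-ai-demo")  # hypothetical experiment name

with mlflow.start_run():
    mlflow.log_param("learning_rate", 3e-4)
    mlflow.log_param("base_model", "hypothetical-llm")
    mlflow.log_metric("val_accuracy", 0.91)
```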
Qualifications:
- Bachelor’s or Master’s in Computer Science or related field.
- 3.5–7 years of experience delivering end-to-end AI/ML solutions.
- Strong expertise in Azure AI ecosystem and production-grade model deployment.
- Deep technical understanding of ML, DL, Generative AI, and MLOps pipelines.
- Excellent analytical and problem-solving abilities; applied research or open-source contributions preferred.
About Ven Analytics
At Ven Analytics, we don’t just crunch numbers — we decode them to uncover insights that drive real business impact. We’re a data-driven analytics company that partners with high-growth startups and enterprises to build powerful data products, business intelligence systems, and scalable reporting solutions. With a focus on innovation, collaboration, and continuous learning, we empower our teams to solve real-world business problems using the power of data.
Role Overview
We’re looking for a Power BI Data Analyst who is not just proficient in tools but passionate about building insightful, scalable, and high-performing dashboards. The ideal candidate should have strong fundamentals in data modeling, a flair for storytelling through data, and the technical skills to implement robust data solutions using Power BI, Python, and SQL..
Key Responsibilities
- Technical Expertise: Develop scalable, accurate, and maintainable data models using Power BI, with a clear understanding of Data Modeling, DAX, Power Query, and visualization principles.
- Programming Proficiency: Use SQL and Python for complex data manipulation, automation, and analysis (a short pandas sketch follows this list).
- Business Problem Translation: Collaborate with stakeholders to convert business problems into structured data-centric solutions considering performance, scalability, and commercial goals.
- Hypothesis Development: Break down complex use-cases into testable hypotheses and define relevant datasets required for evaluation.
- Solution Design: Create wireframes, proof-of-concepts (POC), and final dashboards in line with business requirements.
- Dashboard Quality: Ensure dashboards meet high standards of data accuracy, visual clarity, performance, and support SLAs.
- Performance Optimization: Continuously enhance user experience by improving performance, maintainability, and scalability of Power BI solutions.
- Troubleshooting & Support: Quick resolution of access, latency, and data issues as per defined SLAs.
- Power BI Development: Use Power BI Desktop for report building and Power BI Service for distribution.
- Backend development: Develop optimized SQL queries that are easy to consume, maintain and debug.
- Version Control: Maintain strict control of versions by tracking CRs and bug fixes, and ensure the maintenance of Prod and Dev dashboards.
- Client Servicing: Engage with clients to understand their data needs, gather requirements, present insights, and ensure timely, clear communication throughout project cycles.
- Team Management: Lead and mentor a small team by assigning tasks, reviewing work quality, guiding technical problem-solving, and ensuring timely delivery of dashboards and reports.
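To ground the SQL/Python responsibility above, here is a short pandas sketch of pre-aggregating data for a Power BI import; the column names and figures are invented:

```python
# Illustrative pandas pre-aggregation feeding a Power BI dataset.
import pandas as pd

sales = pd.DataFrame({
    "region": ["North", "North", "South"],
    "month":  ["2024-01", "2024-02", "2024-01"],
    "amount": [120.0, 90.5, 200.0],
})

summary = (
    sales.groupby(["region", "month"], as_index=False)["amount"]
         .sum()
         .rename(columns={"amount": "revenue"})
)
summary.to_csv("revenue_by_region.csv", index=False)  # import into Power BI
print(summary)
```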
Must-Have Skills
- Strong experience building robust data models in Power BI
- Hands-on expertise with DAX (complex measures and calculated columns)
- Proficiency in M Language (Power Query) beyond drag-and-drop UI
- Clear understanding of data visualization best practices (less fluff, more insight)
- Solid grasp of SQL and Python for data processing
- Strong analytical thinking and ability to craft compelling data stories
- Client Servicing Background.
Good-to-Have (Bonus Points)
- Experience using DAX Studio and Tabular Editor
- Prior work in a high-volume data processing production environment
- Exposure to modern CI/CD practices or version control with BI tools
Why Join Ven Analytics?
- Be part of a fast-growing startup that puts data at the heart of every decision.
- Opportunity to work on high-impact, real-world business challenges.
- Collaborative, transparent, and learning-oriented work environment.
- Flexible work culture and focus on career development.
The requirements are as follows:
1) Familiarity with the Django REST Framework (a minimal sketch follows this list).
2) Experience with the FastAPI framework will be a plus.
3) Strong grasp of basic Python programming concepts (we do ask a lot of questions on this in our interviews :) ).
4) Experience with databases like MongoDB, Postgres, Elasticsearch, and Redis will be a plus.
5) Experience with any ML library will be a plus.
6) Familiarity with using Git, writing unit test cases for all code written, and CI/CD concepts will be a plus as well.
7) Familiarity with basic code patterns like MVC.
8) Grasp of basic data structures.
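For reference, a minimal Django REST Framework sketch of requirement 1 follows; it assumes a configured Django project, and the Book fields are hypothetical:

```python
# Hedged DRF sketch: a serializer plus a function-based API view.
# This would live inside a configured Django project (settings, urls, etc.).
from rest_framework import serializers
from rest_framework.decorators import api_view
from rest_framework.response import Response

class BookSerializer(serializers.Serializer):
    title = serializers.CharField(max_length=200)
    author = serializers.CharField(max_length=100)

@api_view(["POST"])
def create_book(request):
    serializer = BookSerializer(data=request.data)
    serializer.is_valid(raise_exception=True)   # returns 400 on bad input
    return Response(serializer.validated_data, status=201)
```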
You can contact me on nine three one six one two zero one three two
Location: Mumbai / Remote
Department: AI & Automation
Role Objective
Build intelligent, multilingual AI agents that combine LLM reasoning with live communication channels to assist humans in real-time.
Key Responsibilities
- Design and deploy AI agent workflows using LangChain / Bedrock / OpenAI APIs.
- Develop contextual memory, persona, and tone control (see the sketch after this list).
- Integrate agents into Jodo Online, Jodo QA, Jodo Admin, Jodo C3, and Toolbar Apps.
- Optimize latency and conversation flow for real-time interactions.
- Implement compliance and audit hooks within AI pipelines.
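To make the persona/memory bullet concrete, here is a hedged sketch using the OpenAI chat API; the model name, system prompt, and in-process history list are assumptions for illustration:

```python
# Illustrative persona + short-term memory loop with the OpenAI client.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

history = [{
    "role": "system",
    "content": "You are a courteous multilingual support agent. Reply briefly.",
}]

def ask(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    resp = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    answer = resp.choices[0].message.content
    history.append({"role": "assistant", "content": answer})  # contextual memory
    return answer

print(ask("Mera order kahan hai?"))
```

A production agent would persist this history, trim or summarise it to control latency, and add the compliance/audit hooks mentioned above.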
Required Skills & Experience
- 3–7 years in AI engineering, NLP, or chatbot frameworks.
- Experience with Python/Node.js LLM APIs.
- Familiar with RAG architectures and vector DBs.
- Understanding of multilingual processing and AI ethics.
What Success Looks Like
- <2s response time.
- 95% contextual accuracy.
Why Join Us
Shape the digital workforce of the borderless economy — where humans and AI execute seamlessly together.
Job Summary:
You will build and manage the entire "digital" side of our solution. This role begins with configuring the on-site gateways to model and send data securely via MQTT. You will then build the "brain" on AWS, using the IoT suite to ingest, process, and store this data. Finally, you will turn this raw data into value by creating the powerful, intuitive dashboards our SME clients will use every day to make critical business decisions.
Key Responsibilities:
● Gateway Configuration & Data Modeling: Remotely configure and manage IoT gateways. You will define the JSON data "payload" (data model) and ensure clean, tagged data is sent securely via MQTT to the cloud (a short publishing sketch follows this list).
● AWS IoT Platform Management: Configure and manage AWS IoT Core, including device registration, security (certificates), rules engine, and MQTT message routing.
● Data Pipeline Development: Build a scalable data pipeline using AWS services (e.g., IoT Rules, Kinesis, Lambda functions, S3, and Timestream or DynamoDB) to process and store data.
● Dashboard & Visualization: This is your primary deliverable. Design, build, and maintain client-facing dashboards using tools like AWS Quicksight or Grafana.
● Data Analytics & Alerts: Implement rules and functions to generate real-time alerts (e.g., via SNS, email) and provide business insights.
● Collaboration & Problem Solving: Work seamlessly with the Hardware & OT Engineer to define data requirements and be the lead problem-solver for any data connectivity issues from the gateway to the cloud.
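For a feel of the gateway-to-cloud bullet, here is a hedged sketch of publishing a tagged JSON payload to AWS IoT Core via boto3; the topic name and data model are hypothetical:

```python
# Illustrative publish of a tagged JSON telemetry payload to AWS IoT Core.
import json
import time

import boto3

payload = {
    "site": "plant-01",              # hypothetical tags / data model
    "sensor": "compressor-3/temp",
    "value_c": 71.4,
    "ts": int(time.time()),
}

iot = boto3.client("iot-data")  # assumes AWS credentials and region are configured
iot.publish(
    topic="factory/plant-01/telemetry",
    qos=1,
    payload=json.dumps(payload),
)
```

An IoT Rules Engine rule could then route this topic into Kinesis, Lambda, or Timestream as described above.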
Skills & Qualifications:
● Degree in Computer Science, IT, or a related field.
● Proven experience with the AWS ecosystem, specifically AWS IoT Core.
● Strong experience with data visualization tools, especially Grafana or AWS Quicksight.
● Deep understanding of MQTT protocols and JSON data structures.
● Experience with configuring IoT gateways (e.g., their software, edge logic) is highly desirable.
● Scripting/programming skills (e.g., Python or Node.js) for writing AWS Lambda functions.
● A logical mindset with a passion for turning complex data into simple, actionable insights.
What We Offer You (Our Culture)
● Working Flexibility: We trust you to get your work done. We focus on results, not on rigid 9-to-5 schedules.
● Fair Compensation: We offer a competitive salary and benefits package that respects your skills and the market.
● Real Growth Opportunities: You are joining at the very beginning. You won't just do a job; you will help build the department. Your growth is tied directly to the company's success.
● Work-Life Balance: We are building a marathon, not a sprint. We actively work to create an environment that is more respectful of your personal time compared to traditional companies.
What We Need From You (Our Expectations)
● Enthusiasm & Confidence: We need you to be passionate about what IoT can do for SMEs and confident in your ability to deliver.
● A Standalone Leader: You will have a high degree of autonomy. We need you to take ownership of your domain, make decisions, and drive results without waiting to be told what to do.
● Teamwork is Key: While you'll lead your area, you'll be working in a tight-knit team. You must be able to communicate clearly and "spice things up" with collaboration and shared problem-solving.
● Focus & Clarity: The ability to focus on the most important tasks and communicate with clarity is essential in a fast-moving startup.
● A Challenging Mindset: We want you to respectfully challenge our ideas (and us to challenge yours!) so we always find the best possible solution.
Job Summary:
Deqode is looking for a highly motivated and experienced Python + AWS Developer to join our growing technology team. This role demands hands-on experience in backend development, cloud infrastructure (AWS), containerization, automation, and client communication. The ideal candidate should be a self-starter with a strong technical foundation and a passion for delivering high-quality, scalable solutions in a client-facing environment.
Key Responsibilities:
- Design, develop, and deploy backend services and APIs using Python.
- Build and maintain scalable infrastructure on AWS (EC2, S3, Lambda, RDS, etc.); a minimal boto3 sketch follows this list.
- Automate deployments and infrastructure with Terraform and Jenkins/GitHub Actions.
- Implement containerized environments using Docker and manage orchestration via Kubernetes.
- Write automation and scripting solutions in Bash/Shell to streamline operations.
- Work with relational databases like MySQL and SQL, including query optimization.
- Collaborate directly with clients to understand requirements and provide technical solutions.
- Ensure system reliability, performance, and scalability across environments.
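By way of illustration, a minimal boto3 sketch of the AWS work described above; the bucket, key, and function names are placeholders:

```python
# Hedged boto3 sketch: upload an artifact to S3 and invoke a Lambda function.
import boto3

s3 = boto3.client("s3")  # credentials come from the environment or instance profile
s3.upload_file("report.csv", "my-bucket", "reports/report.csv")  # placeholder names

lam = boto3.client("lambda")
resp = lam.invoke(FunctionName="nightly-etl", Payload=b"{}")     # placeholder function
print(resp["StatusCode"])
```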
Required Skills:
- 3.5+ years of hands-on experience in Python development.
- Strong expertise in AWS services such as EC2, Lambda, S3, RDS, IAM, CloudWatch.
- Good understanding of Terraform or other Infrastructure as Code tools.
- Proficient with Docker and container orchestration using Kubernetes.
- Experience with CI/CD tools like Jenkins or GitHub Actions.
- Strong command of SQL/MySQL and scripting with Bash/Shell.
- Experience working with external clients or in client-facing roles.
Preferred Qualifications:
- AWS Certification (e.g., AWS Certified Developer or DevOps Engineer).
- Familiarity with Agile/Scrum methodologies.
- Strong analytical and problem-solving skills.
- Excellent communication and stakeholder management abilities.
Candidates must know the M365 collaboration environment: SharePoint Online, MS Teams, Exchange Online, Entra, and Purview. We need a developer with a strong understanding of data structures, problem-solving abilities, SQL, PowerShell, MS Teams app development, Python, Visual Basic, C#, JavaScript, Java, HTML, PHP, and C.
A strong understanding of the development lifecycle is needed, along with debugging skills, time management, and business acumen; a positive attitude and openness to continual growth are a must.
The ability to code appropriate solutions will be tested in the interview.
Knowledge of a wide variety of Generative AI models
Conceptual understanding of how large language models work
Proficiency in coding languages for data manipulation (e.g., SQL) and machine learning & AI development (e.g., Python)
Experience with dashboarding tools such as Power BI and Tableau (beneficial but not essential)
We're seeking an AI/ML Engineer to join our team.
As an AI/ML Engineer, you will be responsible for designing, developing, and implementing artificial intelligence (AI) and machine learning (ML) solutions to solve real world business problems. You will work closely with cross-functional teams, including data scientists, software engineers, and product managers, to deploy and integrate Applied AI/ML solutions into the products that are being built at NonStop. Your role will involve researching cutting-edge algorithms, data processing techniques, and implementing scalable solutions to drive innovation and improve the overall user experience.
Responsibilities
- Applied AI/ML engineering: building engineering solutions on top of the AI/ML tooling available in the industry today, e.g., engineering APIs around OpenAI.
- AI/ML Model Development: Design, develop, and implement machine learning models and algorithms that address specific business challenges, such as natural language processing, computer vision, recommendation systems, anomaly detection, etc.
- Data Preprocessing and Feature Engineering: Cleanse, preprocess, and transform raw data into suitable formats for training and testing AI/ML models. Perform feature engineering to extract relevant features from the data
- Model Training and Evaluation: Train and validate AI/ML models using diverse datasets to achieve optimal performance. Employ appropriate evaluation metrics to assess model accuracy, precision, recall, and other relevant metrics (a toy example follows this list)
- Data Visualization: Create clear and insightful data visualizations to aid in understanding data patterns, model behavior, and performance metrics
- Deployment and Integration: Collaborate with software engineers and DevOps teams to deploy AI/ML models into production environments and integrate them into various applications and systems
- Data Security and Privacy: Ensure compliance with data privacy regulations and implement security measures to protect sensitive information used in AI/ML processes
- Continuous Learning: Stay updated with the latest advancements in AI/ML research, tools, and technologies, and apply them to improve existing models and develop novel solutions
- Documentation: Maintain detailed documentation of the AI/ML development process, including code, models, algorithms, and methodologies for easy understanding and future reference
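To anchor the training-and-evaluation bullet, here is a toy scikit-learn example on a public dataset; the model choice and split are arbitrary:

```python
# Toy train/evaluate loop with accuracy, precision, and recall.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42).fit(X_tr, y_tr)
pred = model.predict(X_te)

print("accuracy :", accuracy_score(y_te, pred))
print("precision:", precision_score(y_te, pred, average="macro"))
print("recall   :", recall_score(y_te, pred, average="macro"))
```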
Requirements
- Bachelor's, Master's or PhD in Computer Science, Data Science, Machine Learning, or a related field. Advanced degrees or certifications in AI/ML are a plus
- Proven experience as an AI/ML Engineer, Data Scientist, or related role, ideally with a strong portfolio of AI/ML projects
- Proficiency in programming languages commonly used for AI/ML. Preferably Python
- Familiarity with popular AI/ML libraries and frameworks, such as TensorFlow, PyTorch, scikit-learn, etc.
- Familiarity with popular AI/ML models such as GPT-3, GPT-4, Llama 2, BERT, etc.
- Strong understanding of machine learning algorithms, statistics, and data structures
- Experience with data preprocessing, data wrangling, and feature engineering
- Knowledge of deep learning architectures, neural networks, and transfer learning
- Familiarity with cloud platforms and services (e.g., AWS, Azure, Google Cloud) for scalable AI/ML deployment
- Solid understanding of software engineering principles and best practices for writing maintainable and scalable code
- Excellent analytical and problem-solving skills, with the ability to think critically and propose innovative solutions
- Effective communication skills to collaborate with cross-functional teams and present complex technical concepts to non-technical stakeholders
Description
We are seeking a skilled and detail-oriented Software Developer to automate our internal workflows and develop internal tools used by our development team.
We follow these practices: unit testing, continuous integration (CI), continuous deployment (CD), and DevOps.
We have codebases in Go, Java, Python, Vue.js, and Bash, and we support the development team that develops C code.
You should enjoy challenges, exploring new fields, and finding solutions to problems.
You will be responsible for coordinating, automating, and validating internal workflows and ensuring operational stability and system reliability.
Requirements
- Bachelor’s degree in Computer Science, Engineering, or related field.
- 2+ years in professional software development
- Solid understanding of software design principles and patterns such as SOLID and GoF.
- Experience automating deployments for different kinds of applications.
- Strong understanding of Git version control, merge/rebase strategies, tagging.
- Familiarity with containerization (Docker) and deployment orchestration (e.g., docker compose).
- Solid scripting experience (bash, or similar).
- Understanding of observability, monitoring, and probing tooling (e.g., Prometheus, Grafana, blackbox exporter); a minimal instrumentation sketch follows this list.
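As a minimal instrumentation sketch for the observability item; the metric name and cadence are invented:

```python
# Hedged Prometheus sketch: expose a counter for an internal workflow.
import random
import time

from prometheus_client import Counter, start_http_server

RUNS = Counter("workflow_runs_total", "Internal workflow executions", ["status"])

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at :8000/metrics for Prometheus
    while True:
        RUNS.labels(status=random.choice(["ok", "failed"])).inc()  # simulated runs
        time.sleep(5)
```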
Preferred Skills
- Experience in SRE
- Proficiency in CI/CD tooling (e.g., GitHub Actions, Jenkins, GitLab).
- Familiarity with build tools like Make, CMake, or similar.
- Exposure to artifact management systems (e.g., aptly, Artifactory, Nexus).
- Experience deploying to Linux production systems with service uptime guarantees.
Responsibilities
- Develop new services needed by the SRE, Field, or Development teams, adopting unit testing, agile, and clean-code practices.
- Drive the CI/CD pipeline and maintain the workflows, using tools such as GitLab and Jenkins.
- Deploy the services and implement and refine the automation for different environments.
- Operate the services that the SRE Team developed.
- Automate release pipelines: Build and maintain CI/CD workflows using tools such as Jenkins and GitLab.
- Version control: Manage and enforce Git best practices, branching strategies (e.g., Git Flow), tagging, and release versioning.
- Collaboration: Work closely with developers, QA, and product teams to align on release timelines and feature readiness.
Success Metrics
- Achieve >99% service uptime with minimal rollbacks.
- Deliver on time and hold timelines.
Benefits
Enjoy a great environment, great people, and a great package
- Stock Appreciation Rights - Generous pre series-B stock options
- Generous Gratuity Plan - Long service compensation far exceeding Indian statutory requirements
- Health Insurance - Premium health insurance for employee, spouse and children
- Working Hours - Flexible working hours with sole focus on enabling a great work environment
- Work Environment - Work with top industry experts in an environment that fosters co-operation, learning and developing skills
- Make a Difference - We're here because we want to make an impact on the world - we hope you do too!

is a global digital solutions partner trusted by leading Fortune 500 companies in industries such as pharma & healthcare, retail, and BFSI. Its expertise in data and analytics, data engineering, machine learning, AI, and automation helps companies streamline operations and unlock business value.
Required Skills
• 12+ years of proven experience in designing large-scale enterprise systems and distributed architectures.
• Strong expertise in Azure, AWS, Python, Docker, LangChain, Solution Architecture, C#, .NET
• Frontend technologies like React, Angular, and ASP.NET MVC
• Deep knowledge of architecture frameworks (TOGAF).
• Understanding of security principles, identity management, and data protection.
• Experience with solution architecture methodologies and documentation standards
• Deep understanding of databases (SQL and NoSQL), RESTful APIs, and message brokers.
• Excellent communication, leadership, and stakeholder management skills.
We are seeking enthusiastic and motivated fresh graduates with a strong foundation in programming, primarily in Python, and basic knowledge of Java, C#, or JavaScript. This role offers hands-on experience in developing applications, writing clean code, and collaborating on real-world projects under expert guidance.
Key Responsibilities
• Develop and maintain applications using Python as the primary language.
• Assist in coding, debugging, and testing software modules in Java, C#, or JavaScript as needed.
• Collaborate with senior developers to learn best practices and contribute to project deliverables.
• Write clean, efficient, and well-documented code.
• Participate in code reviews and follow standard development processes.
• Continuously learn and adapt to new technologies and frameworks.
Core Expectations
• Eagerness to Learn: Open to acquiring new programming skills and frameworks.
• Adaptability: Ability to work across multiple languages and environments.
• Problem-Solving: Strong analytical skills to troubleshoot and debug issues.
• Team Collaboration: Work effectively with peers and seniors.
• Professionalism: Good communication skills and a positive attitude.
Qualifications
• Bachelor’s degree in Computer Science, IT, or related field.
• Strong understanding of Python (OOP, data structures, basic frameworks like Flask/Django).
• Basic knowledge of Java, C#, or JavaScript.
• Familiarity with version control systems (Git).
• Understanding of databases (SQL/NoSQL) is a plus.
NOTE: A laptop with high-speed internet is mandatory
AI-based systems design and development: the entire pipeline from image/video ingest, metadata ingest, processing, encoding, and transmitting.
Implementation and testing of advanced computer vision algorithms.
Dataset search, preparation, annotation, training, testing, and fine-tuning of vision CNN models. Multimodal AI, LLMs, hardware deployment, explainability.
Detailed analysis of results. Documentation, version control, client support, upgrades.
Job Description
Location: Mumbai (with short/medium-term travel opportunities within India & foreign location)
Experience: 5–8 years
Job Type: Full-time
About the Role
We are looking for experienced data engineers who can independently build, optimize, and manage scalable data pipelines and platforms. In this role, you’ll work closely with clients and internal teams to deliver robust data solutions that power analytics, AI/ML, and operational systems. You’ll also help mentor junior engineers and bring engineering discipline into our data engagements.
Key Responsibilities
Design, build, and optimize large-scale, distributed data pipelines for both batch and streaming use cases.
Implement scalable data models, data warehouses/lakehouses, and data lakes to support analytics and decision-making.
Collaborate with cross-functional stakeholders to understand business requirements and translate them into technical data solutions.
Drive performance tuning, monitoring, and reliability of data pipelines.
Write clean, modular, and production-ready code with proper documentation and testing.
Contribute to architectural discussions, tool evaluations, and platform setup.
Mentor junior engineers and participate in code/design reviews.
Must-Have Skills
Strong programming skills in Python and advanced SQL expertise.
Deep understanding of data engineering concepts such as ETL/ELT, data modeling (OLTP & OLAP), warehousing, and stream processing.
Experience with distributed data processing frameworks (e.g., Apache Spark, Flink, or similar); a small PySpark sketch follows this list.
Exposure to Java is mandatory.
Experience with building pipelines using orchestration tools like Airflow or similar.
Familiarity with CI/CD pipelines and version control tools like Git.
Ability to debug, optimize, and scale data pipelines in real-world settings.
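For illustration of the distributed-processing item, a small PySpark batch sketch follows; the lake paths and schema are hypothetical, and reading from S3 assumes the appropriate Hadoop connectors:

```python
# Hedged PySpark sketch: daily revenue aggregation over raw order events.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-daily").getOrCreate()

orders = spark.read.json("s3://my-lake/raw/orders/")   # hypothetical path/schema

daily = (
    orders
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date", "country")
    .agg(F.sum("amount").alias("revenue"), F.count("*").alias("orders"))
)

daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://my-lake/curated/orders_daily/"
)
```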
Good to Have
Experience working on any major cloud platform (AWS preferred; GCP or Azure also welcome).
Exposure to Databricks, dbt, or similar platforms is a plus.
Experience with Snowflake is preferred.
Understanding of data governance, data quality frameworks, and observability.
Certification in AWS (e.g., Data Analytics, Solutions Architect) or Databricks is a plus.
Other Expectations
Comfortable working in fast-paced, client-facing environments.
Strong analytical and problem-solving skills with attention to detail.
Ability to adapt across tools, stacks, and business domains.
Willingness to travel within India for short/medium-term client engagements as needed.
Experience: 6 to 8 years
Location: Bangalore
Job Description:
- Extensive experience with machine learning utilizing the latest analytical models in Python. (i.e., experience in generating data-driven insights that play a key role in rapid decision-making and driving business outcomes.)
- Extensive experience using Tableau, table design, PowerApps, Power BI, Power Automate, and cloud environments, or equivalent experience designing/implementing data analysis pipelines and visualization.
- Extensive experience using AI agent platforms for data analysis (a required skill for data analysts).
- A statistics major or an equivalent understanding of interpreting statistical analysis results.
We are looking for a Cloud Security Engineer to join our organization. The ideal candidate will have strong hands-on experience in ensuring robust security controls across both applications and organizational data, and is expected to work closely with multiple stakeholders to architect, implement, and monitor effective safeguards. They will champion secure design, conduct risk assessments, drive vulnerability management, and promote data protection best practices for the organization.
Responsibilities
- Design and implement security measures for website and API applications.
- Conduct security-first code reviews, vulnerability assessments, and posture audits for business-critical applications.
- Conduct security testing activities like SAST & DAST by integrating them within the project’s CI/CD pipelines and development workflows.
- Manage all penetration testing activities including working with external vendors for security certification of business-critical applications.
- Develop and manage data protection policies and RBAC controls for sensitive organizational data like PII, revenue, secrets, etc.
- Oversee encryption, key management, and secure data storage solutions.
- Monitor threats and respond to incidents involving application and data breaches.
- Collaborate with engineering, data, product and compliance teams to achieve security-by-design principles.
- Ensure compliance with regulatory standards (GDPR, HIPAA, etc.) and internal organizational policies.
- Automate recurrent security tasks using scripts and security tools (an illustrative check follows this list).
- Maintain documentation around data flows, application architectures, and security controls.
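As one small example of the automation bullet, a hedged sketch that audits response headers on a list of endpoints; the header set and URLs are illustrative, not a complete posture check:

```python
# Illustrative recurring security check: required response headers.
import requests

REQUIRED = [
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
]

def audit(url: str) -> dict[str, bool]:
    headers = requests.get(url, timeout=10).headers  # case-insensitive mapping
    return {h: h in headers for h in REQUIRED}

for url in ["https://example.com"]:  # replace with the organization's endpoints
    print(url, audit(url))
```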
Requirements
- 10+ years’ experience in application security and/or data security engineering.
- Strong understanding of security concepts including zero trust architecture, threat modeling, security frameworks (like SOC 2, ISO 27001), and best practices in corporate security environments.
- Strong knowledge of modern web/mobile application architectures and common vulnerabilities (e.g., the OWASP Top 10).
- Proficiency in secure coding practices and code reviews for major programming languages including Java, .NET, Python, JavaScript, Typescript, React, etc.
- Hands-on experience with at least two software tools in the areas of vulnerability scanning and static/dynamic analysis, such as Checkmarx, Veracode, SonarQube, Burp Suite, or AppScan.
- Advanced understanding of data encryption, key management, and secure storage (SQL, NoSQL, Cloud) and secure transfer mechanisms.
- Working experience in Cloud Environments like AWS & GCP and familiarity with the recommended security best practices.
- Familiarity with regulatory frameworks such as GDPR, HIPAA, PCI DSS and the controls needed to implement them.
- Experience integrating security into DevOps/CI/CD processes.
- Hands-on Experience with automation in any of the scripting languages (Python, Bash, etc.)
- Ability to conduct incident response and forensic investigations related to application/data breaches.
- Excellent communication and documentation skills.
Good To Have:
- Cloud Security certifications in any one of the below:
- AWS Certified Security – Specialty
- GCP Professional Cloud Security Engineer
- Experience with container security (Docker, Kubernetes) and cloud security tools (AWS, Azure, GCP).
- Experience safeguarding data storage solutions like GCP GCS, BigQuery, etc.
- Hands-on work with any SIEM/SOC platforms for monitoring and alerting.
- Knowledge of data loss prevention (DLP) solutions and IAM (identity and access management) systems.
Perks:
- Day off on the 3rd Friday of every month (one long weekend each month)
- Monthly Wellness Reimbursement Program to promote health and well-being
- Monthly Office Commutation Reimbursement Program
- Paid paternity and maternity leave
Experience Required: 2-5 Years
No. of vacancies: 2
Job Type: Full Time
Vacancy Role: WFO
Job Description
ChicMic Studios is hiring a highly skilled and experienced Sr. Python Developer to join our dynamic team. The ideal candidate will have a robust background in developing web applications using Django and Flask, with expertise in deploying and managing applications on AWS. Proficiency in Django Rest Framework (DRF), a solid understanding of machine learning concepts, and hands-on experience with tools like PyTorch, TensorFlow, and transformer architectures are essential.
Roles & Responsibilities
- Develop, maintain, and scale web applications using Django & DRF.
- Implement and manage payment gateway integrations and ensure secure transaction handling.
- Design and optimize SQL queries, transaction management, and data integrity.
- Work with Redis and Celery for caching, task queues, and background job processing (a minimal sketch follows this list).
- Develop and deploy applications on AWS services (EC2, S3, RDS, Lambda, CloudFormation).
- Implement strong security practices including CSRF token generation, SQL injection prevention, JWT authentication, and other security mechanisms.
- Build and maintain microservices architectures with scalability and modularity in mind.
- Develop WebSocket-based solutions including real-time chat rooms and notifications.
- Ensure robust application testing with unit testing and test automation frameworks.
- Collaborate with cross-functional teams to analyze requirements and deliver effective solutions.
- Monitor, debug, and optimize application performance, scalability, and reliability.
- Stay updated with emerging technologies, frameworks, and industry best practices.
- Create scalable machine learning models using frameworks like PyTorch, TensorFlow, and scikit-learn.
- Implement transformer architectures (e.g., BERT, GPT) for NLP and other advanced AI use cases.
- Optimize machine learning models through advanced techniques such as hyperparameter tuning, pruning, and quantization.
- Deploy and manage machine learning models in production environments using tools like TensorFlow Serving, TorchServe, and AWS SageMaker.
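To illustrate the Redis/Celery responsibility, a minimal background-task sketch; the broker URL and task body are placeholders:

```python
# Hedged Celery + Redis sketch: a background job enqueued from web code.
from celery import Celery

app = Celery("tasks", broker="redis://localhost:6379/0")  # assumes local Redis

@app.task
def send_receipt(order_id: int) -> str:
    # Placeholder for real work (email, payment webhook, etc.)
    return f"receipt sent for order {order_id}"

# Enqueue from application code:  send_receipt.delay(42)
# Run a worker with:              celery -A tasks worker --loglevel=info
```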
Qualifications
- Bachelor’s degree in Computer Science, Engineering, or a related field.
- 2-5 years of professional experience as a Python Developer.
- Proficient in Python with a strong understanding of its ecosystem.
- Extensive experience with Django and Flask frameworks.
- Hands-on experience with AWS services for application deployment and management.
- Strong knowledge of Django Rest Framework (DRF) for building APIs.
- Expertise in machine learning frameworks such as PyTorch, TensorFlow, and scikit-learn.
- Experience with transformer architectures for NLP and advanced AI solutions.
- Solid understanding of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB).
- Familiarity with MLOps practices for managing the machine learning lifecycle.
- Basic knowledge of front-end technologies (e.g., JavaScript, HTML, CSS) is a plus.
- Excellent problem-solving skills and the ability to work independently and as part of a team.
- Strong communication skills and the ability to articulate complex technical concepts to non-technical stakeholders.
Job description
Job Title: Python Trainer (Workshop Model, Freelance / Part-time)
Location: Thrissur & Ernakulam
Program Duration: 30 or 60 Hours (Workshop Model)
Job Type: Freelance / Contract
About the Role:
We are seeking an experienced and passionate Python Trainer to deliver interactive, hands-on training sessions for students under a workshop model in Thrissur and Ernakulam locations. The trainer will be responsible for engaging learners with practical examples and real-time coding exercises.
Key Responsibilities:
Conduct offline workshop-style Python training sessions (30 or 60 hours total).
Deliver interactive lectures and coding exercises focused on Python programming fundamentals and applications.
Customize the curriculum based on learners' skill levels and project needs.
Guide students through mini-projects, assignments, and coding challenges.
Ensure effective knowledge transfer through practical, real-world examples.
Requirements:
Experience: 1–5 years of training or industry experience in Python programming.
Technical Skills: Strong knowledge of Python, including OOPs concepts, file handling, libraries (NumPy, Pandas, etc.), and basic data visualization.
Prior experience in academic or corporate training preferred.
Excellent communication and presentation skills.
Mode: Offline Workshop (Thrissur / Ernakulam)
Duration: Flexible – 30 Hours or 60 Hours Total
Organization: KGiSL Microcollege
Role: Other
Industry Type: Education / Training
Department: Other
Employment Type: Full Time, Permanent
Role Category: Other
Education
UG: Any Graduate
Key Skills
Data Science, Artificial Intelligence