50+ Google Cloud Platform (GCP) Jobs in India
If interested, please share your resume at ayushi.dwivedi at cloudsufi.com.
Note: This role is remote but requires a quarterly visit to the Noida office (one week per quarter). If that works for you, please share your resume.
Data Engineer
Position Type: Full-time
About Us
CLOUDSUFI, a Google Cloud Premier Partner, is a leading global provider of data-driven digital transformation for cloud-based enterprises. With a global presence and a focus on Software & Platforms, Life Sciences and Healthcare, Retail, CPG, Financial Services, and Supply Chain, CLOUDSUFI is positioned to meet customers where they are in their data monetization journey.
Job Summary
We are seeking a highly skilled and motivated Data Engineer to join our Development POD for the Integration Project. The ideal candidate will be responsible for designing, building, and maintaining robust data pipelines to ingest, clean, transform, and integrate diverse public datasets into our knowledge graph. This role requires a strong understanding of Google Cloud Platform (GCP) services, data engineering best practices, and a commitment to data quality and scalability.
Key Responsibilities
ETL Development: Design, develop, and optimize data ingestion, cleaning, and transformation pipelines for various data sources (e.g., CSV, API, XLS, JSON, SDMX) using Google Cloud Platform services (Cloud Run, Dataflow) and Python (see the sketch after this list).
Schema Mapping & Modeling: Work with LLM-based auto-schematization tools to map source data to our schema.org vocabulary, defining appropriate Statistical Variables (SVs) and generating MCF/TMCF files.
Entity Resolution & ID Generation: Implement processes for accurately matching new entities with existing IDs or generating unique, standardized IDs for new entities.
Knowledge Graph Integration: Integrate transformed data into the Knowledge Graph, ensuring proper versioning and adherence to existing standards.
API Development: Develop and enhance REST and SPARQL APIs via Apigee to enable efficient access to integrated data for internal and external stakeholders.
Data Validation & Quality Assurance: Implement comprehensive data validation and quality checks (statistical, schema, anomaly detection) to ensure data integrity, accuracy, and freshness. Troubleshoot and resolve data import errors.
Automation & Optimization: Collaborate with the Automation POD to leverage and integrate intelligent assets for data identification, profiling, cleaning, schema mapping, and validation, aiming for significant reduction in manual effort.
Collaboration: Work closely with cross-functional teams, including Managed Service POD, Automation POD, and relevant stakeholders.
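To make the pipeline work concrete, here is a minimal sketch of the kind of ingestion-and-cleaning job described above, written with the Apache Beam Python SDK named in the competencies below. The bucket path, dataset, and field layout are illustrative assumptions, not details of the actual project.

```python
# Minimal sketch: CSV -> clean -> BigQuery with Apache Beam.
# Bucket, dataset, and field names are illustrative assumptions.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_and_clean(line: str):
    """Parse one CSV line, skipping malformed or empty rows."""
    fields = line.split(",")
    if len(fields) < 3:
        return  # skip malformed rows
    try:
        value = float(fields[2])
    except ValueError:
        return  # skip rows with a non-numeric observation
    yield {
        "entity_id": fields[0].strip(),
        "date": fields[1].strip(),
        "value": value,
    }


def run():
    # DirectRunner by default; pass --runner=DataflowRunner plus GCP
    # options on the command line to run on Dataflow.
    options = PipelineOptions()
    with beam.Pipeline(options=options) as p:
        (
            p
            | "Read" >> beam.io.ReadFromText(
                "gs://example-bucket/source.csv", skip_header_lines=1)
            | "Clean" >> beam.FlatMap(parse_and_clean)
            | "Write" >> beam.io.WriteToBigQuery(
                "example_dataset.observations",  # assumes the table exists
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            )
        )


if __name__ == "__main__":
    run()
```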
Qualifications and Skills
Education: Bachelor's or Master's degree in Computer Science, Data Engineering, Information Technology, or a related quantitative field.
Experience: 3+ years of proven experience as a Data Engineer, with a strong portfolio of successfully implemented data pipelines.
Programming Languages: Proficiency in Python for data manipulation, scripting, and pipeline development.
Cloud Platforms and Tools: Expertise in Google Cloud Platform (GCP) services, including Cloud Storage, Cloud SQL, Cloud Run, Dataflow, Pub/Sub, BigQuery, and Apigee. Proficiency with Git-based version control.
Core Competencies:
Must have: SQL, Python, BigQuery, GCP Dataflow / Apache Beam, Google Cloud Storage (GCS)
Must have: Proven ability in comprehensive data wrangling, cleaning, and transforming complex datasets from various formats (e.g., API, CSV, XLS, JSON)
Secondary skills: SPARQL, Schema.org, Apigee, CI/CD (Cloud Build), GCP, Cloud Data Fusion, Data Modelling
Solid understanding of data modeling, schema design, and knowledge graph concepts (e.g., Schema.org, RDF, SPARQL, JSON-LD).
Experience with data validation techniques and tools.
Familiarity with CI/CD practices and the ability to work in an Agile framework.
Strong problem-solving skills and keen attention to detail.
Preferred Qualifications:
Experience with LLM-based tools or concepts for data automation (e.g., auto-schematization).
Familiarity with similar large-scale public dataset integration initiatives.
Experience with multilingual data integration.
If interested, please send your resume to ayushi.dwivedi at cloudsufi.com.
Note: Candidates must currently be located in Bangalore (client office visits are required) and must be open to a one-week-per-quarter visit to the Noida office.
Our Values
We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.
Equal Opportunity Statement
CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, colour, religion, gender, gender identity or expression, sexual orientation, or national origin. We provide equal opportunities in employment, advancement, and all other areas of our workplace. Please explore more at https://www.cloudsufi.com/
Like us, you'll be deeply committed to delivering impactful outcomes for customers.
- 7+ years of demonstrated ability to develop resilient, high-performance, and scalable code tailored to application usage demands.
- Ability to lead by example with hands-on development while managing project timelines and deliverables. Experience in agile methodologies and practices, including sprint planning and execution, to drive team performance and project success.
- Deep expertise in Node.js, with experience in building and maintaining complex, production-grade RESTful APIs and backend services.
- Experience writing batch/cron jobs using Python and Shell scripting.
- Experience in web application development using JavaScript and JavaScript libraries.
- Have a basic understanding of TypeScript, JavaScript, HTML, CSS, JSON, and REST-based applications.
- Experience/Familiarity with RDBMS and NoSQL Database technologies like MySQL, MongoDB, Redis, ElasticSearch and other similar databases.
- Understanding of code versioning tools such as Git.
- Understanding of building applications deployed on the cloud using Google Cloud Platform (GCP) or Amazon Web Services (AWS).
- Experience with JS-based build/package tools like Grunt, Gulp, Bower, and Webpack.
What You’ll Do:
As a Sr. Data Scientist, you will work closely across DeepIntent Data Science teams located in New York, India, and Bosnia. The role will focus on building predictive models and implementing data-driven solutions to maximize ad effectiveness. You will also lead efforts in generating analyses and insights related to the measurement of campaign outcomes, Rx, and the patient journey, and in supporting the evolution of the DeepIntent product suite. Activities in this position include developing and deploying models in production, reading campaign results, analyzing medical claims, clinical, demographic, and clickstream data, creating actionable insights, and summarizing and presenting results and recommended actions to internal stakeholders and external clients as needed. (A brief, illustrative modeling sketch follows the list below.)
- Explore ways to create better predictive models.
- Analyze medical claims, clinical, demographic and clickstream data to produce and present actionable insights.
- Explore ways of using inference, statistical, and machine learning techniques to improve the performance of existing algorithms and decision heuristics.
- Design and deploy new iterations of production-level code.
- Contribute posts to our upcoming technical blog.
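As a small illustration of the predictive-modeling work described above, here is a minimal sketch using Python and scikit-learn with a boosting classifier; the file name, feature columns, and target are hypothetical placeholders, not DeepIntent's actual data.

```python
# Sketch: fit a campaign-conversion model. Features and target are
# hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("campaign_outcomes.csv")  # hypothetical campaign extract
features = ["impressions", "clicks", "age_band", "prior_rx_count"]  # assumed
X = pd.get_dummies(df[features], columns=["age_band"])
y = df["converted"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = GradientBoostingClassifier()  # a boosting algorithm, per the posting
model.fit(X_train, y_train)

auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Held-out AUC: {auc:.3f}")
```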
Who You Are:
- Bachelor’s degree in a STEM field, such as Statistics, Mathematics, Engineering, Biostatistics, Econometrics, Economics, Finance, or Data Science.
- 5+ years of working experience as a Data Scientist or Researcher in digital marketing, consumer advertisement, telecom, or other areas requiring customer-level predictive analytics.
- Advanced proficiency in performing statistical analysis in Python, including relevant libraries, is required.
- Experience working with data processing, transformation and building model pipelines using tools such as Spark, Airflow, and Docker.
- You have an understanding of the ad-tech ecosystem, digital marketing and advertising data and campaigns or familiarity with the US healthcare patient and provider systems (e.g. medical claims, medications).
- You have varied and hands-on predictive machine learning experience (deep learning, boosting algorithms, inference…).
- You are interested in translating complex quantitative results into meaningful findings and interpretable deliverables, and communicating with less technical audiences orally and in writing.
- You can write production-level code and work with Git repositories.
- Active Kaggle participant.
- Working experience with SQL.
- Familiar with medical and healthcare data (medical claims, Rx, preferred).
- Conversant with cloud technologies such as AWS or Google Cloud.
About Borderless Access
Borderless Access is a company that believes in fostering a culture of innovation and collaboration to build and deliver digital-first products for market research methodologies. This enables our customers to stay ahead of their competition.
We are committed to becoming the global leader in providing innovative digital offerings for consumers backed by advanced analytics, AI, ML, and cutting-edge technological capabilities.
Our Borderless Product Innovation and Operations team is dedicated to creating a top-tier market research platform that will drive our organization's growth. To achieve this, we're embracing modern technologies and a cutting-edge tech stack for faster, higher-quality product development.
The Product Development team is the core of our strategy, fostering collaboration and efficiency. If you're passionate about innovation and eager to contribute to our rapidly evolving market research domain, we invite you to join our team.
Key Responsibilities
- Lead, mentor, and grow a cross-functional team of engineers.
- Foster a culture of collaboration, accountability, and continuous learning.
- Oversee the design and development of robust platform architecture with a focus on scalability, security, and maintainability.
- Establish and enforce engineering best practices including code reviews, unit testing, and CI/CD pipelines.
- Promote clean, maintainable, and well-documented code across the team.
- Lead architectural discussions and technical decision-making, with clear and concise documentation for software components and systems.
- Collaborate with Product, Design, and other stakeholders to define and prioritize platform features.
- Track and report on key performance indicators (KPIs) such as velocity, code quality, deployment frequency, and incident response times.
- Ensure timely delivery of high-quality software aligned with business goals.
- Work closely with DevOps to ensure platform reliability, scalability, and observability.
- Conduct regular 1:1s, performance reviews, and career development planning.
- Conduct code reviews and provide constructive feedback to ensure code quality and maintainability.
- Participate in the entire software development lifecycle, from requirements gathering to deployment and maintenance.
Added Responsibilities
- Defining and adhering to the development process.
- Taking part in regular external audits and maintaining artifacts.
- Identify opportunities for automation to reduce repetitive tasks.
- Mentor and coach team members across teams.
- Continuously optimize application performance and scalability.
- Collaborate with the Marketing team to understand different user journeys.
Growth and Development
The following are some of the growth and development activities that you can look forward to at Borderless Access as an Engineering Manager:
- Develop leadership skills – Enhance your leadership abilities through workshops or coaching from Senior Leadership and Executive Leadership.
- Foster innovation – Become part of a culture of innovation and experimentation within the product development and operations team.
- Drive business objectives – Become part of defining and taking actions to meet the business objectives.
About You
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 8+ years of experience in software development.
- Experience with microservices architecture and container orchestration.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills.
- Solid understanding of data structures, algorithms, and software design patterns.
- Solid understanding of enterprise system architecture patterns.
- Experience in managing a small to medium-sized team with varied experiences.
- Strong proficiency in back-end development, including programming languages like Python, Java, or Node.js, and frameworks like Spring or Express.
- Strong proficiency in front-end development, including HTML, CSS, JavaScript, and popular frameworks like React or Angular.
- Experience with databases (e.g., MySQL, PostgreSQL, MongoDB).
- Experience with cloud platforms such as AWS, Azure, or GCP (Azure preferred).
- Knowledge of containerization technologies such as Docker and Kubernetes.
Hiring for Data Engineer
Exp: 4 - 6 yrs
Edu: BE/B.Tech
Work Location: Noida (WFO)
Notice Period: Immediate
Skills: PySpark, SQL, AWS/GCP, Hadoop
Key Responsibilities
- Administer and optimize PostgreSQL databases on AWS RDS
- Monitor database performance, health, and alerts using CloudWatch
- Manage backups, restores, upgrades, and high availability
- Support CDC pipelines using Debezium with Kafka & Zookeeper (see the monitoring sketch after this list)
- Troubleshoot database and replication/streaming issues
- Ensure database security, access control, and compliance
- Work with developers and DevOps teams for production support
- Use tools like DBeaver for database management and analysis
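As one concrete example of the kind of CDC support described above, here is a minimal sketch (assuming psycopg2 and a hypothetical slot name) that checks how much WAL a Debezium replication slot is retaining on a PostgreSQL instance:

```python
# Sketch: check replication-slot lag for a Debezium CDC slot on PostgreSQL
# (e.g., AWS RDS). Connection details and slot name are illustrative.
import psycopg2

LAG_THRESHOLD_BYTES = 512 * 1024 * 1024  # alert past ~512 MB of retained WAL

conn = psycopg2.connect(
    host="example-instance.rds.amazonaws.com",  # hypothetical RDS endpoint
    dbname="appdb",
    user="dba",
    password="...",  # fetch from a secrets manager in practice
)
with conn, conn.cursor() as cur:
    cur.execute(
        """
        SELECT slot_name,
               pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn) AS lag_bytes
        FROM pg_replication_slots
        WHERE slot_name = %s
        """,
        ("debezium_slot",),  # assumed slot name
    )
    row = cur.fetchone()
    if row is None:
        print("Slot not found: check the Debezium connector")
    elif row[1] > LAG_THRESHOLD_BYTES:
        print(f"WARNING: slot {row[0]} retaining {row[1]} bytes of WAL")
    else:
        print(f"Slot {row[0]} healthy ({row[1]} bytes behind)")
```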
Required Skills
- Strong experience with PostgreSQL DBA activities
- Hands-on experience with AWS RDS
- Knowledge of Debezium, Kafka, and Zookeeper
- Monitoring using AWS CloudWatch
- Proficiency in DBeaver and SQL
- Experience supporting production environments
Hey there budding tech wizard! Are you ready to take on a new challenge?
As a Senior Software Developer 1 (Full Stack) at Techlyticaly, you'll be responsible for solving problems, flexing your tech muscles to build amazing stuff, and mentoring and guiding others. You'll work under the guidance of mentors and be responsible for developing high-quality, maintainable code modules that are extensible and meet the technical guidelines provided.
Responsibilities
We want you to show off your technical skills, but we also want you to be creative and think outside the box. Here are some of the ways you'll be flexing your tech muscles:
- Use your superpowers to solve complex technical problems, combining your excellent abstract reasoning ability with problem-solving skills.
- Be proficient in at least one product or technology of strategic importance to the organisation, and a true tech ninja.
- Stay up-to-date with emerging trends in the field, so that you can keep bringing fresh ideas to the table.
- Implement robust and extensible code modules as per guidelines. We love all code that's functional (Don’t we?)
- Develop good quality, maintainable code modules without any defects, exhibiting attention to detail. Nothing should look sus!
- Manage assigned tasks well and schedule them appropriately for self and team, while providing visibility to the mentor and understanding the mentor's expectations of work. But don't be afraid to add your own twist to the work you're doing.
- Consistently apply and improve team software development processes such as estimations, tracking, testing, code and design reviews, etc., but do it with a funky twist that reflects your personality.
- Clarify requirements and provide end-to-end estimates. We all love it when requirements are clear (Don’t we?)
- Participate in release planning and design complex modules & features.
- Work with product and business teams directly for critical issue ownership. Isn’t it better when one of us understands what they say?
- Feel empowered by managing deployments and assisting in infra management.
- Act as a role model for the team and guide them to brilliance. We all feel secure when we have someone to look up to.
Qualifications
We want to make sure you're a funky, tech-loving person with a passion for learning and growing. Here are some of the things we're looking for:
- You have a Bachelor's or Master’s degree in Computer Science or a related field, but you also have a creative side that you're not afraid to show.
- You have excellent abstract reasoning ability and a strong understanding of core computer science fundamentals.
- You're proficient with web programming languages such as HTML, CSS, and JavaScript, with 5+ years of experience, but you're also open to learning new languages and technologies that might not be as mainstream.
- You’ve 5+ years of experience with backend web framework Django and DRF.
- You’ve 5+ years of experience with frontend web framework React.
- Your knowledge of cloud service providers like AWS, GCP, Azure, etc. will be an added bonus.
- You have experience with testing, code, and design reviews.
- You have strong written and verbal communication skills, but you're also not afraid to show your personality and let your funky side shine through.
- You can work independently and in a team environment, but you're also excited to collaborate with others and share your ideas.
- You've demonstrated your ability to lead a small team of developers.
- And most important, you're also excited to learn about new things and try out new ideas.
Compensation:
We know you're passionate and talented, and we want to reward you for that. That's why we're offering a compensation package of 15 - 17 LPA!
This is a mid-level position where you'll get to flex your coding muscles, work on exciting projects, and grow your skills in a fast-paced, dynamic environment. So, if you're passionate about all things tech and ready to take your skills to the next level, we want YOU to apply! Let's make some magic happen together!
We are located in Delhi. This post may require relocation.
About the Role
Hudson Data is looking for a Senior / Mid-Level SQL Engineer to design, build, optimize, and manage our data platforms. This role requires strong hands-on expertise in SQL, Google Cloud Platform (GCP), and Linux to support high-performance, scalable data solutions.
We are also hiring Python Programmers / Software Developers / Front-End and Back-End Engineers.
Key Responsibilities:
- Develop and optimize complex SQL queries, views, and stored procedures (see the sketch after this list)
- Build and maintain data pipelines and ETL workflows on GCP (e.g., BigQuery, Cloud SQL)
- Manage database performance, monitoring, and troubleshooting
- Work extensively in Linux environments for deployments and automation
- Partner with data, product, and engineering teams on data initiatives
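For illustration, here is a minimal sketch of running a parameterized analytical query on BigQuery from Python using the google-cloud-bigquery client; the dataset, table, and column names are hypothetical:

```python
# Sketch: parameterized BigQuery query from Python. Dataset, table, and
# columns are hypothetical placeholders.
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

query = """
    SELECT customer_id, SUM(amount) AS total_spend
    FROM `example_dataset.transactions`
    WHERE txn_date >= @start_date
    GROUP BY customer_id
    ORDER BY total_spend DESC
    LIMIT 10
"""
job_config = bigquery.QueryJobConfig(
    query_parameters=[
        bigquery.ScalarQueryParameter("start_date", "DATE", "2024-01-01"),
    ]
)

# Run the query and iterate over the result rows.
for row in client.query(query, job_config=job_config).result():
    print(row.customer_id, row.total_spend)
```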
Required Skills & Qualifications
Must-Have Skills (Essential)
- Expert-level GCP skills (mandatory)
- Strong Linux / shell scripting (mandatory)
Nice to Have
- Experience with data warehousing and ETL frameworks
- Python / scripting for automation
- Performance tuning and query optimization experience
Soft Skills
- Strong analytical, problem-solving, and critical-thinking abilities.
- Excellent communication and presentation skills, including data storytelling.
- Curiosity and creativity in exploring and interpreting data.
- Collaborative mindset, capable of working in cross-functional and fast-paced environments.
Education & Certifications
- Bachelor's degree in Data Science, Computer Science, Statistics, Mathematics, or a related field.
- Master's degree in Data Analytics, Machine Learning, or Business Intelligence preferred.
Why Join Hudson Data
At Hudson Data, you'll be part of a dynamic, innovative, and globally connected team that uses cutting-edge tools, from AI and ML frameworks to cloud-based analytics platforms, to solve meaningful problems. You'll have the opportunity to grow, experiment, and make a tangible impact in a culture that values creativity, precision, and collaboration.
The ROLE:
The Enterprise Architect plays a pivotal role in helping organizations leverage cloud technologies to meet their business goals.
Develop enterprise cloud strategies, lead delivery teams in customer workshops, and coordinate with multiple stakeholders across the client hierarchy to engage, understand, and articulate requirements and create solutions. You will have expertise in one or more public cloud provider technologies and be capable of embedding these capabilities into existing or new cloud-based infrastructure and platform solutions.
As an enterprise architect, you will engage with clients to define and design the cloud technology architecture and solution architecture. You will also define application and IT architecture, focusing on mapping business requirements to IT capabilities and aligning the two.
The focus is to define the integrations between applications, data, and technology in the enterprise, and the transitional process necessary for migrating and modernizing to a cloud-based architecture that includes APIs, microservices, and containers.
You will keep yourself updated with the latest technology trends and architectural styles, and suggest leveraging modern digital technologies to cater to business needs. You will be recognized as an expert in your technology and architecture field, and you will guide cloud objectives and technologies.
Mandatory requirement: TOGAF certification and current insurance domain experience.
Key Responsibilities :
- Define end-to-end enterprise and cloud-based architectures catering to client needs and requirements
- Analyze complex application landscapes, anticipate potential problems and future trends, and assess potential solutions, impacts, and risks to propose a cloud roadmap, solution architecture, and associated TCO
- Develop and implement cloud architecture solutions based on AWS, Azure, or GCP when assigned to work on delivery projects
- Analyze client requirements and propose overall application modernization, migration, and greenfield implementations
- Experience in implementing and deploying a DevOps-based, end-to-end cloud application would be a plus
- Lead teams in discovery and architecture workshops, and influence client architects and IT personnel
- Guide other architects working with you in the team; adapt communications and approaches to conclude technical scope discussions with various partners, resulting in common agreements
- Deliver an optimized infrastructure services design leveraging public, private, and hybrid cloud architectures and services
- Act as subject matter and implementation expert for the client as related to the technical architecture and implementation of the proposed solution using cloud services
- Implement solutions using cloud services, open source, and other third-party technologies
- Create and develop process frameworks, documents, SOPs, cookbooks, and automation scripts/tools for cloud discovery, migrations, application modernization, DevSecOps, and service transitions to cloud
- Experience developing Terraform/PowerShell/CloudFormation/ARM scripts and templates
- Lead internal and external initiatives to standardize, automate, and develop competitive assets and accelerators
Technical Experience :
- Knowledge of creating IaaS and PaaS cloud solutions that meet customer needs for scalability, reliability, and performance
- Experience delivering large/medium-scale enterprise solutions using microservices, APIs, containerization, Kubernetes, application modernization, and cloud IaaS & PaaS
- Understanding of WAN, LAN, TCP/IP, VPN, virtual networking, subnets, routing, storage account management, and Infrastructure as a Service
- Experience creating system logical views and network diagrams, with the ability to build automated solutions for server builds and resource monitoring
- Knowledge of and experience in delivering solutions using middleware integration technologies such as SOA, ESB, API platforms, and API security
- Experience in the design, development, and architecture of Java/JEE/.NET-based systems
- Ability to assess multiple types of operating systems (AIX, Linux, Windows, etc.) and databases (Postgres, Sybase, MySQL, MS SQL, NoSQL, DB2, Oracle, Cassandra, and MongoDB) to develop transformation approaches and solutions
Professional Attributes :
- Strategic thinker with the ability to grasp new technologies and to innovate, develop, and nurture new solutions
- Experience driving consulting workshops and creating workshop content for C-level audiences on short deadlines
- Demonstrated thought leadership, positive impact in engagements, good time management, and flexibility in adapting to situations
- An ability to follow processes
- Strong documentation skills
- Good written and verbal communication skills with the client
Certifications
- TOGAF, ITIL, Azure Solution Architect Associate & Expert/AWS Solution Architect Associate & Professional /GCP Professional
Backend Engineer (Python / Django + DevOps)
Company: SurgePV (A product by Heaven Designs Pvt. Ltd.)
About SurgePV
SurgePV is an AI-first solar design software built from more than a decade of hands-on experience designing and engineering thousands of solar installations at Heaven Designs. After working with nearly every solar design tool in the market, we identified major gaps in speed, usability, and intelligence—particularly for rooftop solar EPCs.
Our vision is to build the most powerful and intuitive solar design platform for rooftop installers, covering fast PV layouts, code-compliant engineering, pricing, proposals, and financing in a single workflow. SurgePV enables small and mid-sized solar EPCs to design more systems, close more deals, and accelerate the clean energy transition globally.
As SurgePV scales, we are building a robust backend platform to support complex geometry, pricing logic, compliance rules, and workflow automation at scale.
Role Overview
We are seeking a Backend Engineer (Python / Django + DevOps) to own and scale SurgePV’s core backend systems. You will be responsible for designing, building, and maintaining reliable, secure, and high-performance services that power our solar design platform.
This role requires strong ownership—you will work closely with the founders, frontend engineers, and product team to make architectural decisions and ensure the platform remains fast, observable, and scalable as global usage grows.
Key Responsibilities
- Design, develop, and maintain backend services and REST APIs that power PV design, pricing, and core product workflows (a brief API sketch follows this list).
- Collaborate with the founding team on system architecture, including authentication, authorization, billing, permissions, integrations, and multi-tenant design.
- Build secure, scalable, and observable systems with structured logging, metrics, alerts, and rate limiting.
- Own DevOps responsibilities for backend services, including Docker-based containerization, CI/CD pipelines, and production deployments.
- Optimize PostgreSQL schemas, migrations, indexes, and queries for computation-heavy and geospatial workloads.
- Implement caching strategies and performance optimizations where required.
- Integrate with third-party APIs such as CRMs, financing providers, mapping platforms, and satellite or irradiance data services.
- Write clean, maintainable, well-tested code and actively participate in code reviews to uphold engineering quality.
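As a small illustration of the kind of REST API work described above, here is a minimal Django REST Framework sketch; the Project model, its fields, and the org-based scoping are hypothetical stand-ins, not SurgePV's actual schema:

```python
# Sketch of a DRF endpoint for solar design projects. The app, model, and
# fields are hypothetical placeholders.
from rest_framework import permissions, serializers, viewsets

from designs.models import Project  # hypothetical app and model


class ProjectSerializer(serializers.ModelSerializer):
    class Meta:
        model = Project
        fields = ["id", "name", "roof_area_sqm", "system_size_kw", "created_at"]
        read_only_fields = ["id", "created_at"]


class ProjectViewSet(viewsets.ModelViewSet):
    """CRUD API for solar design projects, scoped to the requesting user."""
    serializer_class = ProjectSerializer
    permission_classes = [permissions.IsAuthenticated]

    def get_queryset(self):
        # Multi-tenant scoping (assumed relation): users only see their
        # own organization's projects.
        return Project.objects.filter(org=self.request.user.org)
```

Registering the viewset with a DRF router (e.g., router.register(r"projects", ProjectViewSet)) would expose the usual list, detail, create, update, and delete routes.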
Required Skills & Qualifications (Must-Have)
- 2–5 years of experience as a Backend Engineer.
- Strong proficiency in Python and Django / Django REST Framework.
- Solid computer science fundamentals, including data structures, algorithms, and basic distributed systems concepts.
- Proven experience designing and maintaining REST APIs in production environments.
- Hands-on DevOps experience, including:
- Docker and containerized services
- CI/CD pipelines (GitHub Actions, GitLab CI, or similar)
- Deployments on cloud platforms such as AWS, GCP, Azure, or DigitalOcean
- Strong working knowledge of PostgreSQL, including schema design, migrations, indexing, and query optimization.
- Strong debugging skills and a habit of instrumenting systems using logs, metrics, and alerts.
- Ownership mindset with the ability to take systems from spec → implementation → production → iteration.
Good-to-Have Skills
- Experience working in early-stage startups or building 0→1 products.
- Familiarity with Kubernetes or other container orchestration tools.
- Experience with Infrastructure as Code (Terraform, Pulumi).
- Exposure to monitoring and observability stacks such as Prometheus, Grafana, ELK, or similar tools.
- Prior exposure to solar, CAD/geometry, geospatial data, or financial/pricing workflows.
What We Offer
- Real-world impact: every feature you ship helps accelerate solar adoption on real rooftops.
- Opportunity to work across backend engineering, DevOps, integrations, and performance optimization.
- A mission-driven, fast-growing product focused on sustainability and clean energy.
Key Responsibilities
- Automation & Reliability: Automate infrastructure and operational processes to ensure high reliability, scalability, and security.
- Cloud Infrastructure Design: Gather GCP infrastructure requirements, evaluate solution options, and implement best-fit cloud architectures.
- Infrastructure as Code (IaC): Design, develop, and maintain infrastructure using Terraform and Ansible.
- CI/CD Ownership: Build, manage, and maintain robust CI/CD pipelines using Jenkins, ensuring system reliability and performance.
- Container Orchestration: Manage Docker containers and self-managed Kubernetes clusters across multiple cloud environments.
- Monitoring & Observability: Implement and manage cloud-native monitoring solutions using Prometheus, Grafana, and the ELK stack.
- Proactive Issue Resolution: Troubleshoot and resolve infrastructure and application issues across development, testing, and production environments.
- Scripting & Automation: Develop efficient automation scripts using Python and one or more of Node.js, Go, or Shell scripting.
- Security Best Practices: Maintain and enhance the security of cloud services, Kubernetes clusters, and deployment pipelines.
- Cross-functional Collaboration: Work closely with engineering, product, and security teams to design and deploy secure, scalable infrastructure.
Job Description
We are seeking a DevOps Engineer with 3+ years of experience and strong expertise in Google Cloud Platform (GCP) to design, automate, and manage scalable cloud infrastructure. The role involves building CI/CD pipelines, implementing Infrastructure as Code, and ensuring high availability, security, and performance of cloud-native applications.
Key Responsibilities
- Design, deploy, and manage GCP infrastructure using best practices
- Implement Infrastructure as Code (IaC) using Terraform
- Build and maintain CI/CD pipelines using Jenkins, GitHub Actions, or GitLab CI
- Manage containerized workloads using Docker and Kubernetes (GKE)
- Configure and manage GCP networking (VPCs, Subnets, VPN, Cloud Interconnect, Load Balancers, Firewall rules)
- Implement monitoring and logging using Cloud Monitoring and Cloud Logging
- Ensure high availability, scalability, and security of applications
- Troubleshoot production issues and perform root cause analysis
- Collaborate with development and product teams to improve deployment and reliability
- Optimize cloud cost, performance, and security
Required Skills & Qualifications
- 3+ years of experience as a DevOps / Cloud Engineer
- Strong hands-on experience with Google Cloud Platform (GCP)
- Experience with Terraform for GCP resource provisioning
- CI/CD experience with Jenkins / GitHub Actions
- Hands-on experience with Docker and Kubernetes (GKE)
- Good understanding of Linux and shell scripting
- Knowledge of cloud networking concepts (TCP/IP, DNS, Load Balancers)
- Experience with monitoring, logging, and alerting
Good to Have
- Experience with Hybrid or Multi-cloud architectures
- Knowledge of DevSecOps practices
- Experience with SRE concepts
- GCP certifications (Associate Cloud Engineer / Professional DevOps Engineer)
Why Join Us
- Work on modern GCP-based cloud infrastructure
- Opportunity to design and own end-to-end DevOps pipelines
- Learning and growth opportunities in cloud and automation
🚀 RECRUITING BOND HIRING
Role: CLOUD OPERATIONS & MONITORING ENGINEER - (THE GUARDIAN OF UPTIME)
⚡ THIS IS NOT A MONITORING ROLE
THIS IS A COMMAND ROLE
You don’t watch dashboards.
You control outcomes.
You don’t react to incidents.
You eliminate them before they escalate.
This role powers an AI-driven SaaS + IoT platform where:
---> Uptime is non-negotiable
---> Latency is hunted
---> Failures are never allowed to repeat
Incidents don’t grow.
Problems don’t hide.
Uptime is enforced.
🧠 WHAT YOU’LL OWN
(Real Work. Real Impact.)
🔍 Total Observability
---> Real-time visibility across cloud, application, database & infrastructure
---> High-signal dashboards (Grafana + cloud-native tools)
---> Performance trends tracked before growth breaks systems
🚨 Smart Alerting (No Noise)
---> Alerts that fire only when action is required
---> Zero false positives. Zero alert fatigue
Right signal → right person → right time
⚙ Automation as a Weapon
---> End-to-end automation of operational tasks
---> Standardized logging, metrics & alerting
---> Systems that scale without human friction
🧯 Incident Command & Reliability
---> First responder for critical incidents (on-call rotation)
---> Root cause analysis across network, app, DB & storage
Fix fast — then harden so it never breaks the same way again
📘 Operational Excellence
---> Battle-tested runbooks
---> Documentation that actually works under pressure
Every incident → a stronger platform
🛠️ TECHNOLOGIES YOU’LL MASTER
☁ Cloud: AWS | Azure | Google Cloud
📊 Monitoring: Grafana | Metrics | Traces | Logs
📡 Alerting: Production-grade alerting systems
🌐 Networking: DNS | Routing | Load Balancers | Security
🗄 Databases: Production systems under real pressure
⚙ DevOps: Automation | Reliability Engineering
🎯 WHO WE’RE LOOKING FOR
Engineers who take uptime personally.
You bring:
---> 3+ years in Cloud Ops / DevOps / SRE
---> Live production SaaS experience
---> Deep AWS / Azure / GCP expertise
---> Strong monitoring & alerting experience
---> Solid networking fundamentals
---> Calm, methodical incident response
---> Bonus (Highly Preferred):
---> B2B SaaS + IoT / hybrid platforms
---> Strong automation mindset
---> Engineers who think in systems, not tickets
💼 JOB DETAILS
📍 Bengaluru
🏢 Hybrid (WFH)
💰 (Final CTC depends on experience & interviews)
🌟 WHY THIS ROLE?
Most cloud teams manage uptime. We weaponize it.
Your work won’t just keep systems running — it will keep customers confident, operations flawless, and competitors wondering how it all works so smoothly.
📩 APPLY / REFER: Know someone who lives for reliability, observability & cloud excellence?
Role: Senior Platform Engineer (GCP Cloud)
Experience Level: 3 to 6 Years
Work location: Mumbai
Mode: Hybrid
Role & Responsibilities:
- Build automation software for cloud platforms and applications
- Drive Infrastructure as Code (IaC) adoption
- Design self-service, self-healing monitoring and alerting tools
- Automate CI/CD pipelines (Git, Jenkins, SonarQube, Docker)
- Build Kubernetes container platforms
- Introduce new cloud technologies for business innovation
Requirements:
- Hands-on experience with GCP Cloud
- Knowledge of cloud services (compute, storage, network, messaging)
- IaC tools experience (Terraform/CloudFormation)
- SQL & NoSQL databases (Postgres, Cassandra)
- Automation tools (Puppet/Chef/Ansible)
- Strong Linux administration skills
- Programming: Bash/Python/Java/Scala
- CI/CD pipeline expertise (Jenkins, Git, Maven)
- Multi-region deployment experience
- Agile/Scrum/DevOps methodology
SENIOR INFORMATION SECURITY ENGINEER (DEVSECOPS)
Key Skills: Software Development Life Cycle (SDLC), CI/CD
About Company: Consumer Internet / E-Commerce
Company Size: Mid-Sized
Experience Required: 6 - 10 years
Working Days: 5 days/week
Office Location: Bengaluru [Karnataka]
Review Criteria:
Mandatory:
- Strong DevSecOps profile
- Must have 5+ years of hands-on experience in Information Security, with a primary focus on cloud security across AWS, Azure, and GCP environments.
- Must have strong practical experience working with Cloud Security Posture Management (CSPM) tools such as Prisma Cloud, Wiz, or Orca along with SIEM / IDS / IPS platforms
- Must have proven experience in securing Kubernetes and containerized environments, including image security, runtime protection, RBAC, and network policies.
- Must have hands-on experience integrating security within CI/CD pipelines using tools such as Snyk, GitHub Advanced Security, or equivalent security scanning solutions.
- Must have a solid understanding of core security domains, including network security, encryption, identity and access management, key management, and security governance, including cloud-native security services like GuardDuty, Azure Security Center, etc.
- Must have practical experience with Application Security Testing tools including SAST, DAST, and SCA in real production environments
- Must have hands-on experience with security monitoring, incident response, alert investigation, root-cause analysis (RCA), and managing VAPT / penetration testing activities
- Must have experience securing infrastructure-as-code and cloud deployments using Terraform, CloudFormation, ARM, Docker, and Kubernetes
- Background in B2B SaaS product companies
- Must have working knowledge of globally recognized security frameworks and standards such as ISO 27001, NIST, and CIS with exposure to SOC2, GDPR, or HIPAA compliance environments
Preferred:
- Experience with DevSecOps automation, security-as-code, and policy-as-code implementations
- Exposure to threat intelligence platforms, cloud security monitoring, and proactive threat detection methodologies, including EDR / DLP or vulnerability management tools
- Must demonstrate strong ownership mindset, proactive security-first thinking, and ability to communicate risks in clear business language
Roles & Responsibilities:
We are looking for a Senior Information Security Engineer who can help protect our cloud infrastructure, applications, and data while enabling teams to move fast and build securely.
This role sits deep within our engineering ecosystem. You’ll embed security into how we design, build, deploy, and operate systems—working closely with Cloud, Platform, and Application Engineering teams. You’ll balance proactive security design with hands-on incident response, and help shape a strong, security-first culture across the organization.
If you enjoy solving real-world security problems, working close to systems and code, and influencing how teams build securely at scale, this role is for you.
What You’ll Do-
Cloud & Infrastructure Security:
- Design, implement, and operate cloud-native security controls across AWS, Azure, GCP, and Oracle.
- Strengthen IAM, network security, and cloud posture using services like GuardDuty, Azure Security Center and others.
- Partner with platform teams to secure VPCs, security groups, and cloud access patterns.
Application & DevSecOps Security:
- Embed security into the SDLC through threat modeling, secure code reviews, and security-by-design practices.
- Integrate SAST, DAST, and SCA tools into CI/CD pipelines (see the sketch after this list).
- Secure infrastructure-as-code and containerized workloads using Terraform, CloudFormation, ARM, Docker, and Kubernetes.
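As one illustrative example of such an integration, here is a minimal sketch of a CI step that runs Bandit (an open-source Python SAST tool) and fails the build on high-severity findings; the scanned path and the severity gate are assumptions:

```python
# Sketch: CI gate that runs a Bandit SAST scan and blocks the merge on
# high-severity findings. Path and threshold are illustrative.
import json
import subprocess
import sys

# Bandit's CLI supports JSON output: bandit -r <path> -f json
result = subprocess.run(
    ["bandit", "-r", "src/", "-f", "json"],
    capture_output=True,
    text=True,
)

report = json.loads(result.stdout or "{}")
high = [
    finding for finding in report.get("results", [])
    if finding.get("issue_severity") == "HIGH"
]

for finding in high:
    print(f"{finding['filename']}:{finding['line_number']} {finding['issue_text']}")

# Non-zero exit fails the CI job so the merge is blocked.
sys.exit(1 if high else 0)
```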
Security Monitoring & Incident Response:
- Monitor security alerts and investigate potential threats across cloud and application layers.
- Lead or support incident response efforts, root-cause analysis, and corrective actions.
- Plan and execute VAPT and penetration testing engagements (internal and external), track remediation, and validate fixes.
- Conduct red teaming activities and tabletop exercises to test detection, response readiness, and cross-team coordination.
- Continuously improve detection, response, and testing maturity.
Security Tools & Platforms:
- Manage and optimize security tooling including firewalls, SIEM, EDR, DLP, IDS/IPS, CSPM, and vulnerability management platforms.
- Ensure tools are well-integrated, actionable, and aligned with operational needs.
Compliance, Governance & Awareness:
- Support compliance with industry standards and frameworks such as SOC2, HIPAA, ISO 27001, NIST, CIS, and GDPR.
- Promote secure engineering practices through training, documentation, and ongoing awareness programs.
- Act as a trusted security advisor to engineering and product teams.
Continuous Improvement:
- Stay ahead of emerging threats, cloud vulnerabilities, and evolving security best practices.
- Continuously raise the bar on the company's security posture through automation and process improvement.
Endpoint Security (Secondary Scope):
- Provide guidance on endpoint security tooling such as SentinelOne and Microsoft Defender when required.
Ideal Candidate:
- Strong hands-on experience in cloud security across AWS and Azure.
- Practical exposure to CSPM tools (e.g., Prisma Cloud, Wiz, Orca) and SIEM / IDS / IPS platforms.
- Experience securing containerized and Kubernetes-based environments.
- Familiarity with CI/CD security integrations (e.g., Snyk, GitHub Advanced Security, or similar).
- Solid understanding of network security, encryption, identity, and access management.
- Experience with application security testing tools (SAST, DAST, SCA).
- Working knowledge of security frameworks and standards such as ISO 27001, NIST, and CIS.
- Strong analytical, troubleshooting, and problem-solving skills.
Nice to Have:
- Experience with DevSecOps automation and security-as-code practices.
- Exposure to threat intelligence and cloud security monitoring solutions.
- Familiarity with incident response frameworks and forensic analysis.
- Security certifications such as CISSP, CISM, CCSP, or CompTIA Security+.
Perks, Benefits and Work Culture:
A wholesome opportunity in a fast-paced environment that will enable you to juggle between concepts while maintaining quality, interact and share your ideas, and learn a great deal at work. Work with a team of highly talented young professionals and enjoy the comprehensive benefits the company offers.
About the Role
We are looking for a Big Data Engineer with 2–5 years of experience in designing, building, and operating large-scale data processing systems, preferably on Google Cloud Platform (GCP). This role is suited for engineers who understand modern data architectures and are comfortable working across multiple stages of the data lifecycle.
We do not expect expertise in every GCP data service. Instead, candidates should have strong hands-on experience in at least one service from each core data area listed below and the ability to learn and adapt to new tools quickly.
Key Responsibilities
● Design, develop, and maintain scalable data pipelines on GCP.
● Build batch and streaming data processing workflows using managed cloud services.
● Develop and maintain data transformation workflows using SQL and Python.
● Create and manage workflow orchestration using DAG-based schedulers.
● Collaborate with analytics, product, and engineering teams to deliver reliable datasets.
● Optimize data pipelines for performance, cost, and reliability.
● Ensure data quality, monitoring, and observability across pipelines.
● Participate in code reviews and contribute to data engineering best practices.
Core Experience Areas (At Least One From Each)
1. Data Warehousing & Analytics
● BigQuery
● Dataproc (Spark / Hadoop)
● Other cloud data warehouse or analytics platforms
2. Data Processing and Pipelines
● Dataflow (Apache Beam)
● Cloud Run Jobs / Cloud Run Services
● Apache Spark (batch or streaming)
● dbt for transformations
3. Databases & Storage
● Bigtable
● Cloud Storage
● Relational databases (PostgreSQL, MySQL, Cloud SQL)
● NoSQL databases
4. Data Preparation & Exploration
● SQL-based data analysis
● Python for data manipulation (Pandas, PySpark)
● Exploratory data analysis on large datasets
5. Workflow Orchestration & Scheduling
● Cloud Composer (Airflow)
● Cloud Scheduler
● Experience creating and maintaining DAGs in Python (a minimal DAG sketch follows this list)
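For illustration, here is a minimal sketch of a daily DAG of the kind described above, runnable on Cloud Composer (managed Airflow); the DAG id, task names, and task bodies are placeholders:

```python
# Sketch: a daily Airflow DAG with two dependent tasks. Names and task
# logic are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull raw data from the source, e.g., GCS or an API")


def transform():
    print("clean and reshape the data with Pandas or SQL")


with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # schedule_interval on older Airflow versions
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    extract_task >> transform_task  # run transform after extract
```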
Required Skills & Qualifications
● 2–5 years of experience in data engineering or big data processing.
● Hands-on experience with Google Cloud Platform (preferred).
● Strong proficiency in Python and SQL.
● Understanding of distributed data processing concepts.
● Experience with CI/CD, Git, and production-grade data systems.
● Ability to work across ambiguous problem statements and evolving requirements.
AI & System Mindset
Experience working with AI-powered systems is a strong plus. Candidates should be
comfortable integrating AI agents, third-party APIs, and automation workflows into
applications, and should demonstrate curiosity and adaptability toward emerging AI technologies.
Good to Have
● Experience with streaming data (Pub/Sub, Kafka).
● Cost optimization experience on cloud data platforms.
● Exposure to AI/ML pipelines or feature engineering.
● Experience working in product-driven or startup environments.
Education
Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field, or equivalent practical experience.
ROLES AND RESPONSIBILITIES:
You will be responsible for architecting, implementing, and optimizing Dremio-based data lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.
- Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
- Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
- Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.
- Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS); a small API sketch follows this list.
- Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
- Support self-service analytics by enabling governed data products and semantic layers.
- Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
- Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.
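As a small illustration of programmatic access to a Dremio environment, here is a hedged sketch that submits SQL through Dremio's v3 REST API and polls the resulting job; the host, token, and table path are hypothetical, and the exact endpoints should be verified against your Dremio version:

```python
# Sketch: submit SQL to Dremio over REST and poll for the result.
# Host, token, and table path are hypothetical placeholders.
import time

import requests

BASE = "https://dremio.example.com"  # hypothetical coordinator URL
HEADERS = {
    "Authorization": "_dremio" + "YOUR_TOKEN",  # token from the login endpoint
    "Content-Type": "application/json",
}

# Submit a query; Dremio returns a job id.
resp = requests.post(
    f"{BASE}/api/v3/sql",
    headers=HEADERS,
    json={"sql": 'SELECT * FROM lake."sales"."orders" LIMIT 10'},
)
resp.raise_for_status()
job_id = resp.json()["id"]

# Poll until the job reaches a terminal state, then fetch rows.
while True:
    job = requests.get(f"{BASE}/api/v3/job/{job_id}", headers=HEADERS).json()
    if job["jobState"] in ("COMPLETED", "FAILED", "CANCELED"):
        break
    time.sleep(1)

if job["jobState"] == "COMPLETED":
    rows = requests.get(
        f"{BASE}/api/v3/job/{job_id}/results", headers=HEADERS
    ).json()["rows"]
    print(rows)
```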
IDEAL CANDIDATE:
- Bachelor’s or Master’s in Computer Science, Information Systems, or related field.
- 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
- Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
- Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
- Knowledge of data integration tools and pipelines (Airflow, dbt, Kafka, Spark, etc.).
- Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
- Excellent problem-solving, documentation, and stakeholder communication skills.
PREFERRED:
- Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) and data catalogs (Collibra, Alation, Purview).
- Exposure to Snowflake, Databricks, or BigQuery environments.
- Experience in high-tech, manufacturing, or enterprise data modernization programs.
Role: Full-Time, Long-Term
Required: Docker, GCP, CI/CD
Preferred: Experience with ML pipelines
OVERVIEW
We are seeking a DevOps engineer to join as a core member of our technical team. This is a long-term position for someone who wants to own infrastructure and deployment for a production machine learning system. You will ensure our prediction pipeline runs reliably, deploys smoothly, and scales as needed.
The ideal candidate thinks about failure modes obsessively, automates everything possible, and builds systems that run without constant attention.
CORE TECHNICAL REQUIREMENTS
Docker (Required): Deep experience with containerization. Efficient Dockerfiles, layer caching, multi-stage builds, debugging container issues. Experience with Docker Compose for local development.
Google Cloud Platform (Required): Strong GCP experience: Cloud Run for serverless containers, Compute Engine for VMs, Artifact Registry for images, Cloud Storage, IAM. You can navigate the console but prefer scripting everything.
CI/CD (Required): Build and maintain deployment pipelines. GitHub Actions required. You automate testing, building, pushing, and deploying. You understand the difference between continuous integration and continuous deployment.
Linux Administration (Required): Comfortable on the command line. SSH, diagnose problems, manage services, read logs, fix things. Bash scripting is second nature.
PostgreSQL (Required): Database administration basics—backups, monitoring, connection management, basic performance tuning. Not a DBA, but comfortable keeping a production database healthy.
Infrastructure as Code (Preferred): Terraform, Pulumi, or similar. Infrastructure should be versioned, reviewed, and reproducible—not clicked together in a console.
WHAT YOU WILL OWN
Deployment Pipeline: Maintaining and improving deployment scripts and CI/CD workflows. Code moves from commit to production reliably with appropriate testing gates.
Cloud Run Services: Managing deployments for model fitting, data cleansing, and signal discovery services. Monitor health, optimize cold starts, handle scaling.
VM Infrastructure: PostgreSQL and Streamlit on GCP VMs. Instance management, updates, backups, security.
Container Registry: Managing images in GitHub Container Registry and Google Artifact Registry. Cleanup policies, versioning, access control.
Monitoring and Alerting: Building observability. Logging, metrics, health checks, alerting. Know when things break before users tell us.
Environment Management: Configuration across local and production. Secrets management. Environment parity where it matters.
WHAT SUCCESS LOOKS LIKE
Deployments are boring—no drama, no surprises. Systems recover automatically from transient failures. Engineers deploy with confidence. Infrastructure changes are versioned and reproducible. Costs are reasonable and resources scale appropriately.
ENGINEERING STANDARDS
Automation First: If you do something twice, automate it. Manual processes are bugs waiting to happen.
Documentation: Runbooks, architecture diagrams, deployment guides. The next person can understand and operate the system.
Security Mindset: Secrets never in code. Least-privilege access. You think about attack surfaces.
Reliability Focus: Design for failure. Backups are tested. Recovery procedures exist and work.
CURRENT ENVIRONMENT
GCP (Cloud Run, Compute Engine, Artifact Registry, Cloud Storage), Docker, Docker Compose, GitHub Actions, PostgreSQL 16, Bash deployment scripts with Python wrapper.
WHAT WE ARE LOOKING FOR
Ownership Mentality: You see a problem, you fix it. You do not wait for assignment.
Calm Under Pressure: When production breaks, you diagnose methodically.
Communication: You explain infrastructure decisions to non-infrastructure people. You document what you build.
Long-Term Thinking: You build systems maintained for years, not quick fixes creating tech debt.
EDUCATION
University degree in Computer Science, Engineering, or related field preferred. Equivalent demonstrated expertise also considered.
TO APPLY
Include: (1) CV/resume, (2) Brief description of infrastructure you built or maintained, (3) Links to relevant work if available, (4) Availability and timezone.
Role: Full-Time, Long-Term
Required: Python, SQL
Preferred: Experience with financial or crypto data
OVERVIEW
We are seeking a data engineer to join as a core member of our technical team. This is a long-term position for someone who wants to build robust, production-grade data infrastructure and grow with a small, focused team. You will own the data layer that feeds our machine learning pipeline—from ingestion and validation through transformation, storage, and delivery.
The ideal candidate is meticulous about data quality, thinks deeply about failure modes, and builds systems that run reliably without constant attention. You understand that downstream ML models are only as good as the data they consume.
CORE TECHNICAL REQUIREMENTS
Python (Required): Professional-level proficiency. You write clean, maintainable code for data pipelines—not throwaway scripts. Comfortable with Pandas, NumPy, and their performance characteristics. You know when to use Python versus push computation to the database.
SQL (Required): Advanced SQL skills. Complex queries, query optimization, schema design, execution plans. PostgreSQL experience strongly preferred. You think about indexing, partitioning, and query performance as second nature.
Data Pipeline Design (Required): You build pipelines that handle real-world messiness gracefully. You understand idempotency, exactly-once semantics, backfill strategies, and incremental versus full recomputation tradeoffs. You design for failure—what happens when an upstream source is late, returns malformed data, or goes down entirely. Experience with workflow orchestration required: Airflow, Prefect, Dagster, or similar.
Data Quality (Required): You treat data quality as a first-class concern. You implement validation checks, anomaly detection, and monitoring. You know the difference between data that is missing versus data that should not exist. You build systems that catch problems before they propagate downstream.
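As a concrete illustration of the pipeline-design and data-quality requirements above, here is a minimal Python sketch of an idempotent, validated load step. It assumes pandas, SQLAlchemy, and a PostgreSQL table keyed on (symbol, ts); every name in it is hypothetical.

    import pandas as pd
    from sqlalchemy import create_engine, text

    engine = create_engine("postgresql://user:pass@localhost/marketdata")  # placeholder DSN

    def validate(df: pd.DataFrame) -> pd.DataFrame:
        # Fail fast on bad data instead of letting it propagate downstream.
        if df.empty:
            raise ValueError("upstream returned no rows")  # missing data
        if df["price"].le(0).any():
            raise ValueError("non-positive prices: values that should not exist")
        if df["ts"].max() < pd.Timestamp.now(tz="UTC") - pd.Timedelta("1h"):
            raise ValueError("stale feed: freshness check failed")
        return df.drop_duplicates(subset=["symbol", "ts"])

    def upsert(df: pd.DataFrame) -> None:
        # ON CONFLICT makes the load idempotent: re-running a backfill over
        # the same window overwrites rows instead of duplicating them.
        stmt = text("""
            INSERT INTO prices (symbol, ts, price)
            VALUES (:symbol, :ts, :price)
            ON CONFLICT (symbol, ts) DO UPDATE SET price = EXCLUDED.price
        """)
        with engine.begin() as conn:
            conn.execute(stmt, validate(df).to_dict("records"))

Because the upsert is safe to repeat, the same code path serves both incremental runs and backfills.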
WHAT YOU WILL BUILD
Data Ingestion: Pipelines pulling from diverse sources—crypto exchanges, traditional market feeds, on-chain data, alternative data. Handling rate limits, API quirks, authentication, and source-specific idiosyncrasies.
Data Validation: Checks ensuring completeness, consistency, and correctness. Schema validation, range checks, freshness monitoring, cross-source reconciliation.
Transformation Layer: Converting raw data into clean, analysis-ready formats. Time series alignment, handling different frequencies and timezones, managing gaps (see the sketch after this section).
Storage and Access: Schema design optimized for both write patterns (ingestion) and read patterns (ML training, feature computation). Data lifecycle and retention management.
Monitoring and Alerting: Observability into pipeline health. Knowing when something breaks before it affects downstream systems.
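To illustrate the transformation-layer work above, a small pandas sketch that aligns two sources of different frequencies onto one UTC hourly grid; the column names, frequencies, and forward-fill limit are assumptions, and both inputs are assumed to carry timezone-aware indexes.

    import pandas as pd

    def align_hourly(trades: pd.DataFrame, quotes: pd.DataFrame) -> pd.DataFrame:
        # Normalize both indexes to UTC so joins are unambiguous.
        trades = trades.tz_convert("UTC")
        quotes = quotes.tz_convert("UTC")
        # Downsample minute-level trades to hourly; last observation wins.
        hourly_price = trades["price"].resample("1h").last()
        # Quotes arrive irregularly: resample, then forward-fill small gaps only,
        # so a dead feed surfaces as NaN rather than silently stale values.
        hourly_mid = quotes["mid"].resample("1h").last().ffill(limit=3)
        return pd.concat({"price": hourly_price, "mid": hourly_mid}, axis=1)

The ffill limit is the kind of deliberate gap policy this role would own: fill too much and outages hide; fill nothing and every late tick becomes a hole.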
DOMAIN EXPERIENCE
Preference for candidates with experience in financial or crypto data—understanding market data conventions, exchange-specific quirks, and point-in-time correctness. You know why look-ahead bias is dangerous and how to prevent it.
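One common guard against look-ahead bias is a point-in-time (as-of) join, where each observation may only see data already published at that moment. A minimal pandas sketch, with hypothetical columns and values:

    import pandas as pd

    # labels: when we need a prediction; features: when data actually became known.
    labels = pd.DataFrame({
        "symbol": ["BTC", "BTC"],
        "ts": pd.to_datetime(["2024-01-02 00:00", "2024-01-03 00:00"], utc=True),
    })
    features = pd.DataFrame({
        "symbol": ["BTC", "BTC"],
        "published_at": pd.to_datetime(["2024-01-01 23:50", "2024-01-02 23:59"], utc=True),
        "onchain_volume": [11.0, 13.5],
    }).sort_values("published_at")

    # direction="backward" joins each label to the most recent feature row
    # published at or before the label time; later rows stay invisible.
    pit = pd.merge_asof(
        labels.sort_values("ts"), features,
        left_on="ts", right_on="published_at",
        by="symbol", direction="backward",
    )
    print(pit)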
Time series data at scale—hundreds of symbols with years of history, multiple frequencies, derived features. You understand temporal joins, windowed computations, and time-aligned data challenges.
High-dimensional feature stores—we work with hundreds of thousands of derived features. Experience managing, versioning, and serving large feature sets is valuable.
ENGINEERING STANDARDS
Reliability: Pipelines run unattended. Failures are graceful with clear errors, not silent corruption. Recovery is straightforward.
Reproducibility: Same inputs and code version produce identical outputs. You version schemas, track lineage, and can reconstruct historical states.
Documentation: Schemas, data dictionaries, pipeline dependencies, operational runbooks. Others can understand and maintain your systems.
Testing: You write tests for pipelines—validation logic, transformation correctness, edge cases. Untested pipelines are broken pipelines waiting to happen.
TECHNICAL ENVIRONMENT
PostgreSQL, Python, workflow orchestration (flexible on tool), cloud infrastructure (GCP preferred but flexible), Git.
WHAT WE ARE LOOKING FOR
Attention to Detail: You notice when something is slightly off and investigate rather than ignore.
Defensive Thinking: You assume sources will send bad data, APIs will fail, schemas will change. You build accordingly.
Self-Direction: You identify problems, propose solutions, and execute without waiting to be told.
Long-Term Orientation: You build systems you will maintain for years.
Communication: You document clearly, explain data issues to non-engineers, and surface problems early.
EDUCATION
University degree in a quantitative/technical field preferred: Computer Science, Mathematics, Statistics, Engineering. Equivalent demonstrated expertise also considered.
TO APPLY
Include: (1) CV/resume, (2) Brief description of a data pipeline you built and maintained, (3) Links to relevant work if available, (4) Availability and timezone.
What You’ll Do:
We are looking for a Staff Software Engineer based in Pune, India who can master both DeepIntent’s data architectures and pharma research and analytics methodologies to make significant contributions to how our clients analyze health media. This role requires an engineer who not only understands DBA functions but also how they affect research objectives, and who can work with researchers and data scientists to achieve impactful results.
This role will be in the Analytics Organization and will require integration and partnership with the Engineering Organization. The ideal candidate is an inquisitive self-starter who is not afraid to take on and learn from challenges and will constantly seek to improve the facets of the business they manage. They will also need to demonstrate the ability to collaborate and partner with others.
- Serve as the Engineering interface between Analytics and Engineering teams.
- Develop and standardize all interface points for analysts to retrieve and analyze data with a focus on research methodologies and data-based decision-making.
- Optimize queries and data access efficiencies, serve as an expert in how to most efficiently attain desired data points.
- Build “mastered” versions of the data for Analytics-specific querying use cases.
- Help with data ETL, table performance optimization.
- Establish a formal data practice for Analytics in conjunction with the rest of DeepIntent.
- Build & operate scalable and robust data architectures.
- Interpret analytics methodology requirements and apply them to data architecture to create standardized queries and operations for use by analytics teams.
- Implement DataOps practices.
- Master existing and new Data Pipelines and develop appropriate queries to meet analytics-specific objectives.
- Collaborate with various business stakeholders, software engineers, machine learning engineers, and analysts.
- Operate between Engineers and Analysts to unify both practices for analytics insight creation.
Who You Are:
- 8+ years of experience in tech support, specialized in monitoring and maintaining data pipelines.
- Adept in market research methodologies and using data to deliver representative insights.
- Inquisitive, curious, understands how to query complicated data sets, move and combine data between databases.
- Deep SQL experience is a must.
- Exceptional communication skills with the ability to collaborate and translate between technical and non-technical needs.
- English Language Fluency and proven success working with teams in the U.S.
- Experience in designing, developing, and operating configurable data pipelines serving high-volume, high-velocity data.
- Experience working with public clouds like GCP/AWS.
- Good understanding of software engineering, DataOps, data architecture, and Agile and DevOps methodologies.
- Experience building Data architectures that optimize performance and cost, whether the components are prepackaged or homegrown.
- Proficient with SQL, Python or JVM-based language, Bash.
- Experience with any of Apache open-source projects such as Spark, Druid, Beam, Airflow etc. and big data databases like BigQuery, Clickhouse, etc.
- Ability to think big, take bets and innovate, dive deep, hire and develop the best talent, learn and be curious.
About the role
Webnyay is looking for an experienced Backend Developer to build and scale reliable backend systems for our legal tech platform. You will work on core product architecture, high-performance APIs, and cloud-native services that support AI-driven workflows and large-scale data processing.
Responsibilities
- Develop backend services using Python, Django, and FastAPI
- Build scalable APIs and microservices for product features
- Implement event-driven and asynchronous workflows using Kafka
- Design and maintain backend integrations and data pipelines
- Deploy and manage services on Google Cloud Platform (GCP)
- Ensure performance, security, and reliability of backend systems
- Collaborate with product and engineering teams to deliver production-ready features
Requirements
- 4+ years of backend development experience
- Strong proficiency in Python
- Hands-on experience with Django and FastAPI
- Experience working with Kafka or similar messaging systems
- Working knowledge of GCP and cloud-based deployments
- Solid understanding of backend architecture and API design
- Experience with databases and production systems
- Experience building SaaS or platform-based products
- Exposure to AI-driven or data-intensive applications
Why Webnyay
- Build technology that improves access to justice
- Work on real-world, high-impact legal tech problems
- Collaborative and ownership-driven work culture
- Opportunity to grow with a fast-scaling startup
Review Criteria
- Strong Data Scientist / Machine Learning / AI Engineer profile
- 2+ years of hands-on experience as a Data Scientist or Machine Learning Engineer building ML models
- Strong expertise in Python with the ability to implement classical ML algorithms including linear regression, logistic regression, decision trees, gradient boosting, etc.
- Hands-on experience in at least two use cases from among recommendation systems, image data, fraud/risk detection, price modelling, and propensity models
- Strong exposure to NLP, including text generation or text classification, embeddings, similarity models, user profiling, and feature extraction from unstructured text
- Experience productionizing ML models through APIs/CI/CD/Docker and working on AWS or GCP environments
- Preferred (Company) – Must be from product companies
Job Specific Criteria
- CV Attachment is mandatory
- What's your current company?
- Which use cases do you have hands-on experience with?
- Are you open to working from the Mumbai location (if you are from outside Mumbai)?
- Reason for change (if candidate has been in current company for less than 1 year)?
- Reason for hike (if greater than 25%)?
Role & Responsibilities
- Partner with Product to spot high-leverage ML opportunities tied to business metrics.
- Wrangle large structured and unstructured datasets; build reliable features and data contracts.
- Build and ship models to:
- Enhance customer experiences and personalization
- Boost revenue via pricing/discount optimization
- Power user-to-user discovery and ranking (matchmaking at scale)
- Detect and block fraud/risk in real time
- Score conversion/churn/acceptance propensity for targeted actions
- Collaborate with Engineering to productionize via APIs/CI/CD/Docker on AWS.
- Design and run A/B tests with guardrails.
- Build monitoring for model/data drift and business KPIs.
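As one hedged illustration of drift monitoring, a population stability index (PSI) check compares a feature's live distribution against its training baseline; the bin count and the usual 0.1/0.25 thresholds are conventions, not specifics of this role.

    import numpy as np

    def psi(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
        # Rule of thumb: < 0.1 stable, 0.1-0.25 drifting, > 0.25 investigate/retrain.
        edges = np.unique(np.quantile(baseline, np.linspace(0, 1, bins + 1)))
        # Bin both samples on the baseline-derived grid (outliers land in end bins).
        b_idx = np.clip(np.searchsorted(edges, baseline, side="right") - 1, 0, len(edges) - 2)
        c_idx = np.clip(np.searchsorted(edges, current, side="right") - 1, 0, len(edges) - 2)
        b = np.bincount(b_idx, minlength=len(edges) - 1) / len(baseline)
        c = np.bincount(c_idx, minlength=len(edges) - 1) / len(current)
        b, c = np.clip(b, 1e-6, None), np.clip(c, 1e-6, None)  # avoid log(0)
        return float(np.sum((c - b) * np.log(c / b)))

    rng = np.random.default_rng(0)
    train = rng.normal(0.0, 1.0, 10_000)   # training baseline
    live = rng.normal(0.5, 1.2, 10_000)    # simulated shifted production data
    print("PSI:", psi(train, live))        # expect a value well above 0.25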
Ideal Candidate
- 2–5 years of DS/ML experience in consumer internet / B2C products, with 7–8 models shipped to production end-to-end.
- Proven, hands-on success in at least two (preferably 3–4) of the following:
- Recommender systems (retrieval + ranking, NDCG/Recall, online lift; bandits a plus)
- Fraud/risk detection (severe class imbalance, PR-AUC; see the sketch after this list)
- Pricing models (elasticity, demand curves, margin vs. win-rate trade-offs, guardrails/simulation)
- Propensity models (payment/churn)
- Programming: strong Python and SQL; solid git, Docker, CI/CD.
- Cloud and data: experience with AWS or GCP; familiarity with warehouses/dashboards (Redshift/BigQuery, Looker/Tableau).
- ML breadth: recommender systems, NLP or user profiling, anomaly detection.
- Communication: clear storytelling with data; can align stakeholders and drive decisions.
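Grounding the fraud/risk bullet above, a small scikit-learn sketch of why PR-AUC rather than ROC-AUC is the honest metric at a roughly 1% positive rate; the data is synthetic and the model choice is illustrative.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import average_precision_score, roc_auc_score
    from sklearn.model_selection import train_test_split

    # ~1% positives, mimicking rare fraud events.
    X, y = make_classification(n_samples=10_000, n_features=20,
                               weights=[0.99, 0.01], random_state=42)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

    clf = GradientBoostingClassifier(random_state=42).fit(X_tr, y_tr)
    scores = clf.predict_proba(X_te)[:, 1]

    # A random ranker scores ~0.5 ROC-AUC but only ~0.01 PR-AUC (the positive
    # rate), so PR-AUC exposes how hard the rare class really is.
    print("ROC-AUC:", roc_auc_score(y_te, scores))
    print("PR-AUC :", average_precision_score(y_te, scores))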

Global digital transformation solutions provider.
Job Description
We are seeking a highly skilled Site Reliability Engineer (SRE) with strong expertise in Google Cloud Platform (GCP) and CI/CD automation to lead cloud infrastructure initiatives. The ideal candidate will design and implement robust CI/CD pipelines, automate deployments, ensure platform reliability, and drive continuous improvement in cloud operations and DevOps practices.
Key Responsibilities:
- Design, develop, and optimize end-to-end CI/CD pipelines using Jenkins, with a strong focus on Declarative Pipeline syntax.
- Automate deployment, scaling, and management of applications across various GCP services including GKE, Cloud Run, Compute Engine, Cloud SQL, Cloud Storage, VPC, and Cloud Functions.
- Collaborate closely with development and DevOps teams to ensure seamless integration of applications into the CI/CD pipeline and GCP environment.
- Implement and manage monitoring, logging, and alerting solutions to maintain visibility, reliability, and performance of cloud infrastructure and applications.
- Ensure compliance with security best practices and organizational policies across GCP environments.
- Document processes, configurations, and architectural decisions to maintain operational transparency.
- Stay updated with the latest GCP services, DevOps, and SRE best practices to enhance infrastructure efficiency and reliability.
Mandatory Skills:
- Google Cloud Platform (GCP) – Hands-on experience with core GCP compute, networking, and storage services.
- Jenkins – Expertise in Declarative Pipeline creation and optimization.
- CI/CD – Strong understanding of automated build, test, and deployment workflows.
- Solid understanding of SRE principles including automation, scalability, observability, and system reliability.
- Familiarity with containerization and orchestration tools (Docker, Kubernetes – GKE).
- Proficiency in scripting languages such as Shell, Python, or Groovy for automation tasks.
Preferred Skills:
- Experience with Terraform, Ansible, or Cloud Deployment Manager for Infrastructure as Code (IaC).
- Exposure to monitoring and observability tools like Stackdriver, Prometheus, or Grafana.
- Knowledge of multi-cloud or hybrid environments (AWS experience is a plus).
- GCP certification (Professional Cloud DevOps Engineer / Cloud Architect) preferred.
Skills
GCP, Jenkins, CI/CD, AWS
******
Notice period - 0 to 15 days only
Location – Pune, Trivandrum, Kochi, Chennai
Marketing Data Engineer (Remote)
Full-Time Contractor
Lightningrowth is a U.S.-based marketing company that specializes in Facebook lead generation for home-remodeling businesses. Although all ads run on Facebook, our clients use many different CRMs — which means we must manage, clean, and sync large volumes of lead data across multiple systems.
We’re hiring a Marketing Data Engineer to maintain and improve the Python scripts and data pipelines that keep everything running smoothly.
This is a remote role ideal for a mid-level engineer with strong Python, API, SQL, and communication skills.
What You’ll Do
- Maintain and improve Python scripts for:
- GoHighLevel (GHL) API
- Facebook/Meta Marketing API
- Build new API integrations for client CRMs and software tools
- Extract, clean, and transform data before loading into BigQuery (see the sketch after this list)
- Write and update SQL used for dashboards and reporting
- Ensure data accuracy and monitor automated pipeline reliability
- Help optimize automation flows (Make.com or similar)
- Document your work clearly and communicate updates to the team
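A hedged sketch of that extract-clean-load step, assuming the google-cloud-bigquery client; the project, table, and DataFrame columns are placeholders, not the real schema.

    import pandas as pd
    from google.cloud import bigquery

    def load_leads(leads: pd.DataFrame) -> None:
        # Basic cleaning before load: normalize emails, drop contactless rows.
        leads["email"] = leads["email"].str.strip().str.lower()
        leads = leads.dropna(subset=["email", "phone"], how="all")
        leads = leads.drop_duplicates(subset=["email"])

        client = bigquery.Client(project="my-marketing-project")  # placeholder
        job = client.load_table_from_dataframe(
            leads,
            "my-marketing-project.crm.leads",  # placeholder table
            job_config=bigquery.LoadJobConfig(write_disposition="WRITE_APPEND"),
        )
        job.result()  # block so failures surface in the pipeline, not silently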
Required Skills
- Strong Python (requests, pandas, JSON handling)
- Hands-on experience with REST APIs (auth, pagination, rate limits; see the sketch after this list)
- Solid SQL skills (BigQuery experience preferred)
- Experience with ETL / data pipelines
- Ability to build API integrations from documentation
- Good spoken and written English communication
- Comfortable working independently in a fully remote setup
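A minimal sketch of those REST patterns (token auth, cursor pagination, 429 backoff); the endpoint and response fields are invented, not a real GoHighLevel or Meta API contract.

    import time
    import requests

    def fetch_all(base_url: str, token: str) -> list[dict]:
        session = requests.Session()
        session.headers["Authorization"] = f"Bearer {token}"  # token auth
        items, cursor = [], None
        while True:
            params = {"limit": 100, **({"cursor": cursor} if cursor else {})}
            resp = session.get(f"{base_url}/leads", params=params, timeout=30)
            if resp.status_code == 429:  # rate limited: honor Retry-After
                time.sleep(int(resp.headers.get("Retry-After", "5")))
                continue
            resp.raise_for_status()
            payload = resp.json()
            items.extend(payload["data"])
            cursor = payload.get("next_cursor")  # cursor pagination
            if not cursor:
                return items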
Nice to Have
- Experience with GoHighLevel or CRM APIs
- Familiarity with:
- Google BigQuery
- Google Cloud Functions / Cloud Run
- Make.com automations
- Looker Studio dashboards
- Experience optimizing large datasets or API usage
Experience Level
3–6 years of hands-on data engineering, backend Python work, or API integrations.
Compensation
- $2,500 – $4,500 USD per month (depending on experience)
How to Apply
Please include:
- Your resume
- Links to any Python/API/SQL samples (GitHub, snippets, etc.)
- A short note on why you’re a good fit
Qualified candidates will complete a short Python + API + SQL test.
What You’ll Do:
- Setting up formal data practices for the company.
- Building and running super stable and scalable data architectures.
- Making it easy for folks to add and use new data with self-service pipelines.
- Getting DataOps practices in place.
- Designing, developing, and running data pipelines to help out Products, Analytics, data scientists and machine learning engineers.
- Creating simple, reliable data storage, ingestion, and transformation solutions that are a breeze to deploy and manage.
- Writing and managing reporting APIs for different products.
- Implementing different methodologies for different reporting needs.
- Teaming up with all sorts of people – business folks, other software engineers, machine learning engineers, and analysts.
Who You Are:
- Bachelor’s degree in engineering (CS / IT) or equivalent degree from a well-known Institute / University.
- 3.5+ years of experience in building and running data pipelines for tons of data.
- Experience with public clouds like GCP or AWS.
- Experience with Apache open-source projects like Spark, Druid, Airflow, and big data databases like BigQuery, Clickhouse.
- Experience making data architectures that are optimised for both performance and cost.
- Good grasp of software engineering, DataOps, data architecture, Agile, and DevOps.
- Proficient in SQL, Java, Spring Boot, Python, and Bash.
- Good communication skills for working with technical and non-technical people.
- Someone who thinks big, takes chances, innovates, dives deep, gets things done, hires and develops the best, and is always learning and curious.

Global digital transformation solutions provider.
Role Proficiency:
Leverage expertise in a technology area (e.g. Java, Microsoft technologies, or Mainframe/legacy) to design system architecture.
Knowledge Examples:
- Domain/Industry Knowledge: Basic knowledge of standard business processes within the relevant industry vertical and customer business domain
- Technology Knowledge: Demonstrates working knowledge of more than one technology area related to own area of work (e.g. Java/JEE 5+, Microsoft technologies, or Mainframe/legacy), the customer technology landscape, and multiple frameworks (Struts, JSF, Hibernate, etc.) within one technology area and their applicability. Considers low-level details such as data structures, algorithms, APIs, and libraries; best practices for one technology stack; configuration parameters for successful deployment; and configuration parameters for high performance within one technology stack
- Technology Trends: Demonstrates working knowledge of technology trends related to one technology stack and awareness of technology trends related to at least two technologies
- Architecture Concepts and Principles: Demonstrates working knowledge of standard architectural principles, models, patterns (e.g. SOA, N-Tier, EDA), and perspectives (e.g. TOGAF, Zachman); integration architecture, including input and output components; existing integration methodologies and topologies; source and external systems; non-functional requirements; data architecture; deployment architecture; and architecture governance
- Design Patterns, Tools and Principles: Applies specialized knowledge of design patterns, design principles, practices, and design tools. Knowledge of documenting designs using tools like EA
- Software Development Process, Tools & Techniques: Demonstrates thorough knowledge of the end-to-end SDLC process (Agile and traditional), SDLC methodology, programming principles, tools, and best practices (refactoring, code packaging, etc.)
- Project Management Tools and Techniques: Demonstrates working knowledge of project management processes (such as project scoping, requirements management, change management, risk management, quality assurance, and disaster management) and tools (MS Excel, MPP, client-specific time sheets, capacity planning tools, etc.)
- Project Management: Demonstrates working knowledge of project governance frameworks and the RACI matrix, and basic knowledge of project metrics like utilization, onsite-to-offshore ratio, span of control, fresher ratio, SLAs, and quality metrics
- Estimation and Resource Planning: Working knowledge of estimation and resource planning techniques (e.g. the TCP estimation model) and company-specific estimation templates
- Working knowledge of industry knowledge management tools (such as portals and wikis) and company and customer knowledge management tools and techniques (such as workshops, classroom training, self-study, application walkthroughs, and reverse KT)
- Technical Standards, Documentation & Templates: Demonstrates working knowledge of various document templates and standards (such as business blueprints, design documents, and test specifications)
- Requirement Gathering and Analysis: Demonstrates working knowledge of requirements gathering for non-functional requirements; analysis for functional and non-functional requirements; analysis tools (such as functional flow diagrams, activity diagrams, blueprints, and storyboards); techniques (business analysis, process mapping, etc.); and requirements management tools (e.g. MS Excel); plus basic knowledge of functional requirements gathering. Specifically, identify architectural concerns and document them as part of IT requirements, including NFRs
- Solution Structuring: Demonstrates working knowledge of service offerings and products
Additional Comments:
Looking for a Senior Java Architect with 12+ years of experience. Key responsibilities include:
• Excellent technical background and end-to-end architecture experience to design and implement scalable, maintainable, and high-performing systems that integrate front-end technologies with back-end services.
• Collaborate with front-end teams to architect React-based user interfaces that are robust, responsive, and aligned with the overall technical architecture.
• Expertise in cloud-based applications on Azure, leveraging key Azure services.
• Lead the adoption of DevOps practices, including CI/CD pipelines, automation, monitoring and logging to ensure reliable and efficient deployment cycles.
• Provide technical leadership to development teams, guiding them in building solutions that adhere to best practices, industry standards and customer requirements.
• Conduct code reviews to maintain high quality code and collaborate with team to ensure code is optimized for performance, scalability and security.
• Collaborate with stakeholders to define requirements and deliver technical solutions aligned with business goals.
• Excellent communication skills
• Mentor team members providing guidance on technical challenges and helping them grow their skill set.
• Good to have experience in GCP and retail domain.
Skills: DevOps, Azure, Java
Must-Haves
Java (12+ years), React, Azure, DevOps, Cloud Architecture
Strong Java architecture and design experience.
Expertise in Azure cloud services.
Hands-on experience with React and front-end integration.
Proven track record in DevOps practices (CI/CD, automation).
Notice period - 0 to 15 days only
Location: Hyderabad, Chennai, Kochi, Bangalore, Trivandrum
Excellent communication and leadership skills.
Required Skills: CI/CD Pipeline, Data Structures, Microservices, Determining overall architectural principles, frameworks and standards, Cloud expertise (AWS, GCP, or Azure), Distributed Systems
Criteria:
- Candidate must have 6+ years of backend engineering experience, with 1–2 years leading engineers or owning major systems.
- Must be strong in one core backend language: Node.js, Go, Java, or Python.
- Deep understanding of distributed systems, caching, high availability, and microservices architecture.
- Hands-on experience with AWS/GCP, Docker, Kubernetes, and CI/CD pipelines.
- Strong command over system design, data structures, performance tuning, and scalable architecture.
- Ability to partner with Product, Data, Infrastructure, and lead end-to-end backend roadmap execution.
Description
What This Role Is All About
We’re looking for a Backend Tech Lead who’s equally obsessed with architecture decisions and clean code, someone who can zoom out to design systems and zoom in to fix that one weird memory leak. You’ll lead a small but sharp team, drive the backend roadmap, and make sure our systems stay fast, lean, and battle-tested.
What You’ll Own
● Architect backend systems that handle India-scale traffic without breaking a sweat.
● Build and evolve microservices, APIs, and internal platforms that our entire app depends on.
● Guide, mentor, and uplevel a team of backend engineers—be the go-to technical brain.
● Partner with Product, Data, and Infra to ship features that are reliable and delightful.
● Set high engineering standards—clean architecture, performance, automation, and testing.
● Lead discussions on system design, performance tuning, and infra choices.
● Keep an eye on production like a hawk: metrics, monitoring, logs, uptime.
● Identify gaps proactively and push for improvements instead of waiting for fires.
What Makes You a Great Fit
● 6+ years of backend experience; 1–2 years leading engineers or owning major systems.
● Strong in one core language (Node.js / Go / Java / Python) — pick your sword.
● Deep understanding of distributed systems, caching, high-availability, and microservices.
● Hands-on with AWS/GCP, Docker, Kubernetes, CI/CD pipelines.
● You think data structures and system design are not interviews — they’re daily tools.
● You write code that future-you won’t hate.
● Strong communication and a let’s figure this out attitude.
Bonus Points If You Have
● Built or scaled consumer apps with millions of DAUs.
● Experimented with event-driven architecture, streaming systems, or real-time pipelines.
● Love startups and don’t mind wearing multiple hats.
● Experience on logging/monitoring tools like Grafana, Prometheus, ELK, OpenTelemetry.
Why company Might Be Your Best Move
● Work on products used by real people every single day.
● Ownership from day one—your decisions will shape our core architecture.
● No unnecessary hierarchy; direct access to founders and senior leadership.
● A team that cares about quality, speed, and impact in equal measure.
● Build for Bharat — complex constraints, huge scale, real impact.

Global digital transformation solutions provider.
Job Description – Senior Technical Business Analyst
Location: Trivandrum (Preferred) | Open to any location in India
Shift Timings - 8-hour window between 7:30 PM IST and 4:30 AM IST
About the Role
We are seeking highly motivated and analytically strong Senior Technical Business Analysts who can work seamlessly with business and technology stakeholders to convert a one-line problem statement into a well-defined project or opportunity. This role suits candidates with a strong foundation in data analytics, data engineering, data visualization, and data science, along with a strong drive to learn, collaborate, and grow in a dynamic, fast-paced environment.
As a Technical Business Analyst, you will be responsible for translating complex business challenges into actionable user stories, analytical models, and executable tasks in Jira. You will work across the entire data lifecycle—from understanding business context to delivering insights, solutions, and measurable outcomes.
Key Responsibilities
Business & Analytical Responsibilities
- Partner with business teams to understand one-line problem statements and translate them into detailed business requirements, opportunities, and project scope.
- Conduct exploratory data analysis (EDA) to uncover trends, patterns, and business insights.
- Create documentation including Business Requirement Documents (BRDs), user stories, process flows, and analytical models.
- Break down business needs into concise, actionable, and development-ready user stories in Jira.
Data & Technical Responsibilities
- Collaborate with data engineering teams to design, review, and validate data pipelines, data models, and ETL/ELT workflows.
- Build dashboards, reports, and data visualizations using leading BI tools to communicate insights effectively.
- Apply foundational data science concepts such as statistical analysis, predictive modeling, and machine learning fundamentals.
- Validate and ensure data quality, consistency, and accuracy across datasets and systems.
Collaboration & Execution
- Work closely with product, engineering, BI, and operations teams to support the end-to-end delivery of analytical solutions.
- Assist in development, testing, and rollout of data-driven solutions.
- Present findings, insights, and recommendations clearly and confidently to both technical and non-technical stakeholders.
Required Skillsets
Core Technical Skills
- 6+ years of Technical Business Analyst experience within an overall professional experience of 8+ years
- Data Analytics: SQL, descriptive analytics, business problem framing.
- Data Engineering (Foundational): Understanding of data warehousing, ETL/ELT processes, cloud data platforms (AWS/GCP/Azure preferred).
- Data Visualization: Experience with Power BI, Tableau, or equivalent tools.
- Data Science (Basic/Intermediate): Python/R, statistical methods, fundamentals of ML algorithms.
Soft Skills
- Strong analytical thinking and structured problem-solving capability.
- Ability to convert business problems into clear technical requirements.
- Excellent communication, documentation, and presentation skills.
- High curiosity, adaptability, and eagerness to learn new tools and techniques.
Educational Qualifications
- BE/B.Tech or equivalent in:
- Computer Science / IT
- Data Science
What We Look For
- Demonstrated passion for data and analytics through projects and certifications.
- Strong commitment to continuous learning and innovation.
- Ability to work both independently and in collaborative team environments.
- Passion for solving business problems using data-driven approaches.
- Proven ability (or aptitude) to convert a one-line business problem into a structured project or opportunity.
Why Join Us?
- Exposure to modern data platforms, analytics tools, and AI technologies.
- A culture that promotes innovation, ownership, and continuous learning.
- Supportive environment to build a strong career in data and analytics.
Skills: Data Analytics, Business Analysis, Sql
Must-Haves
Technical Business Analyst (6+ years), SQL, Data Visualization (Power BI, Tableau), Data Engineering (ETL/ELT, cloud platforms), Python/R
******
Notice period - 0 to 15 days (Max 30 Days)
Educational Qualifications: BE/B.Tech or equivalent in: (Computer Science / IT) /Data Science
Location: Trivandrum (Preferred) | Open to any location in India
Shift Timings - 8-hour window between 7:30 PM IST and 4:30 AM IST
A DevSecOps Staff Engineer integrates security into DevOps practices, designing secure CI/CD pipelines, building and automating secure cloud infrastructure, and ensuring compliance across development, operations, and security teams.
Responsibilities
• Design, build, and maintain secure CI/CD pipelines utilizing DevSecOps principles and practices to increase automation and reduce human involvement in the process
• Integrate SAST, DAST, SCA, and similar tools within pipelines to enable automated application building, testing, securing, and deployment.
• Implement security controls for cloud platforms (AWS, GCP), including IAM, container security (EKS/ECS), and data encryption for services like S3 or BigQuery.
• Automate vulnerability scanning, monitoring, and compliance processes by collaborating with DevOps and development teams to minimize risks in deployment pipelines.
• Suggest architecture improvements and recommend process improvements.
• Review cloud deployment architectures and implement required security controls.
• Mentor other engineers on security practices and processes.
Requirements
• Bachelor's degree, preferably in CS or a related field, or equivalent experience
• 10+ years of overall industry experience, with the AWS Certified Security – Specialty certification.
• Must have implementation experience using security tools and processes related to SAST, DAST, and pen testing
• AWS-specific: 5+ years’ experience using a broad range of AWS technologies (e.g. EC2, RDS, ELB, S3, VPC, CloudWatch) to develop and maintain an Amazon AWS based cloud solution, with an emphasis on best-practice cloud security.
• Experienced with the CI/CD tool chain (GitHub Actions, Packages, Jenkins, etc.)
• Passionate about solving security challenges and staying informed about available and emerging security threats and various security technologies.
• Must be familiar with the OWASP Top 10 Security Risks and Controls
• Good skills in at least one scripting language: Python, Bash
• Good knowledge of Kubernetes, Docker Swarm, or other cluster management software.
• Willing to work in shifts as required
Good to Have
• AWS Certified DevOps Engineer
• Observability: Experience with system monitoring tools (e.g. CloudWatch, New Relic, etc.)
• Experience with Terraform/Ansible/Chef/Puppet
• Operating Systems: Windows and Linux system administration.
Perks:
● Day off on the 3rd Friday of every month (one long weekend each month)
● Monthly Wellness Reimbursement Program to promote health and well-being
● Monthly Office Commutation Reimbursement Program
● Paid paternity and maternity leaves
We're seeking an experienced Engineer to join our engineering team, handling massive-scale data processing and analytics infrastructure that supports over 1B daily events, 3M+ DAU, and 50k+ hours of content. The ideal candidate will bridge the gap between raw data collection and actionable insights, while supporting our ML initiatives.
Key Responsibilities
- Lead and scale the Infrastructure Pod, setting technical direction for data, platform, and DevOps initiatives.
- Architect and evolve our cloud infrastructure to support 1B+ daily events — ensuring reliability, scalability, and cost efficiency.
- Collaborate with Data Engineering and ML pods to build high-performance pipelines and real-time analytics systems.
- Define and implement SLOs, observability standards, and best practices for uptime, latency, and data reliability.
- Mentor and grow engineers, fostering a culture of technical excellence, ownership, and continuous learning.
- Partner with leadership on long-term architecture and scaling strategy — from infrastructure cost optimization to multi-region availability.
- Lead initiatives on infrastructure automation, deployment pipelines, and platform abstractions to improve developer velocity.
- Own security, compliance, and governance across infrastructure and data systems.
Who You Are
- Previously a Tech Co-founder / Founding Engineer / First Infra Hire who scaled a product from early MVP to significant user or data scale.
- 5–12 years of total experience, with at least 2+ years in leadership or team-building roles.
- Deep experience with cloud infrastructure (AWS/GCP).
- Experience with containers (Docker, Kubernetes), and IaC tools (Terraform, Pulumi, or CDK).
- Hands-on expertise in data-intensive systems, streaming (Kafka, RabbitMQ, Spark Streaming), and distributed architecture design.
- Proven experience building scalable CI/CD pipelines, observability stacks (Prometheus, Grafana, ELK), and infrastructure for data and ML workloads (see the sketch after this list).
- Comfortable being hands-on when needed — reviewing design docs, debugging issues, or optimizing infrastructure.
- Strong system design and problem-solving skills; understands trade-offs between speed, cost, and scalability.
- Passionate about building teams, not just systems — can recruit, mentor, and inspire engineers.
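As a small, hedged illustration of that observability point, a Python worker instrumented with prometheus_client so a Prometheus/Grafana stack can scrape it; the metric names and handler body are invented.

    import random
    import time
    from prometheus_client import Counter, Histogram, start_http_server

    EVENTS = Counter("events_processed_total", "Events processed", ["status"])
    LATENCY = Histogram("event_processing_seconds", "Per-event processing time")

    @LATENCY.time()
    def handle(event: dict) -> None:
        time.sleep(random.random() / 100)  # stand-in for real work

    if __name__ == "__main__":
        start_http_server(9100)  # exposes /metrics for Prometheus to scrape
        while True:
            try:
                handle({})
                EVENTS.labels(status="ok").inc()
            except Exception:
                EVENTS.labels(status="error").inc()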
Preferred Skills
- Experience managing infra-heavy or data-focused teams.
- Familiarity with real-time streaming architectures.
- Exposure to ML infrastructure, data governance, or feature stores.
- Prior experience in the OTT / streaming / consumer platform domain is a plus.
- Contributions to open-source infra/data tools or strong engineering community presence.
What We Offer
- Opportunity to build and scale infrastructure from the ground up, with full ownership and autonomy.
- High-impact leadership role shaping our data and platform backbone.
- Competitive compensation + ESOPs.
- Continuous learning budget and certification support.
- A team that values velocity, clarity, and craftsmanship.
Success Metrics
- Reduction in infra cost per active user and event processed.
- Increase in developer velocity (faster pipeline deployments, reduced MTTR).
- High system availability and data reliability SLAs met.
- Successful rollout of infra automation and observability frameworks.
- Team growth, retention, and technical quality.
Job Summary
We are seeking a highly skilled Full Stack Engineer with 2+ years of hands-on experience to join our high-impact engineering team. You will work across the full stack—building scalable, high-performance frontends using TypeScript & Next.js and developing robust backend services using Python (FastAPI/Django).
This role is crucial in shaping product experiences and driving innovation at scale.
Mandatory Candidate Background
- Experience working in product-based companies only
- Strong academic background
- Stable work history
- Excellent coding skills and hands-on development experience
- Strong foundation in Data Structures & Algorithms (DSA)
- Strong problem-solving mindset
- Understanding of clean architecture and code quality best practices
Key Responsibilities
- Design, develop, and maintain scalable full-stack applications
- Build responsive, performant, user-friendly UIs using TypeScript & Next.js
- Develop APIs and backend services using Python (FastAPI/Django)
- Collaborate with product, design, and business teams to translate requirements into technical solutions
- Ensure code quality, security, and performance across the stack
- Own features end-to-end: architecture, development, deployment, and monitoring
- Contribute to system design, best practices, and the overall technical roadmap
Requirements
Must-Have:
- 2+ years of professional full-stack engineering experience
- Strong expertise in TypeScript/Next.js or Python (FastAPI, Django), with working familiarity across both areas
- Experience building RESTful APIs and microservices
- Hands-on experience with Git, CI/CD pipelines, and cloud platforms (AWS/GCP/Azure)
- Strong debugging, optimization, and problem-solving abilities
- Comfortable working in fast-paced startup environments
Good-to-Have:
- Experience with containerization (Docker/Kubernetes)
- Exposure to message queues or event-driven architectures
- Familiarity with modern DevOps and observability tooling
Job Description – Full Stack Developer (React + Node.js)
Experience: 5–8 Years
Location: Pune
Work Mode: WFO
Employment Type: Full-time
About the Role
We are looking for an experienced Full Stack Developer with strong hands-on expertise in React and Node.js to join our engineering team. The ideal candidate should have solid experience building scalable applications, working with production systems, and collaborating in high-performance tech environments.
Key Responsibilities
- Design, develop, and maintain scalable full-stack applications using React and Node.js.
- Collaborate with cross-functional teams to define, design, and deliver new features.
- Write clean, maintainable, and efficient code following OOP/FP and SOLID principles.
- Work with relational databases such as PostgreSQL or MySQL.
- Deploy and manage applications in cloud environments (preferably GCP or AWS).
- Optimize application performance, troubleshoot issues, and ensure high availability in production systems.
- Utilize containerization tools like Docker for efficient development and deployment workflows.
- Integrate third-party services and APIs, including AI APIs and tools.
- Contribute to improving development processes, documentation, and best practices.
Required Skills
- Strong experience with React.js (frontend).
- Solid hands-on experience with Node.js (backend).
- Good understanding of relational databases: PostgreSQL / MySQL.
- Experience working in production environments and debugging live systems.
- Strong understanding of OOP or Functional Programming, and clean coding standards.
- Knowledge of Docker or other containerization tools.
- Experience with cloud platforms (GCP or AWS).
- Excellent written and verbal communication skills.
Good to Have
- Experience with Golang or Elixir.
- Familiarity with Kubernetes, RabbitMQ, Redis, etc.
- Contributions to open-source projects.
- Previous experience working with AI APIs or machine learning tools.
Senior Software Engineer
Challenge convention and work on cutting edge technology that is transforming the way our customers manage their physical, virtual and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise scale solution for providing visibility into performance, health and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.
We are seeking an individual with expert knowledge in Systems Management and/or Systems Monitoring Software, Observability platforms and/or Performance Management Software and Solutions with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp and public cloud platforms like Google Cloud and AWS to expand the depth and breadth of Virtana Products.
Work Location: Pune/ Chennai
Job Type: Hybrid
Role Responsibilities:
- The engineer will be primarily responsible for architecture, design and development of software solutions for the Virtana Platform
- Partner and work closely with cross functional teams and with other engineers and product managers to architect, design and implement new features and solutions for the Virtana Platform.
- Communicate effectively across the departments and R&D organization having differing levels of technical knowledge.
- Work closely with UX Design, Quality Assurance, DevOps and Documentation teams. Assist with functional and system test design and deployment automation
- Provide customers with complex and end-to-end application support, problem diagnosis and problem resolution
- Learn new technologies quickly and leverage 3rd party libraries and tools as necessary to expedite delivery
Required Qualifications:
- 7+ years of progressive experience with back-end development in a client-server application development environment focused on Systems Management, Systems Monitoring, and Performance Management Software.
- Deep experience in public cloud environments (Google Cloud and/or AWS) using Kubernetes and other distributed managed services such as Kafka.
- Experience with CI/CD and cloud-based software development and delivery
- Deep experience with integrated infrastructure platforms and experience working with one or more data collection technologies like SNMP, REST, OTEL, WMI, WBEM.
- Minimum of 6 years of development experience with one or more high-level languages such as Go, Python, or Java. Deep experience with at least one of these languages is required.
- Bachelor’s or Master’s degree in computer science, Computer Engineering or equivalent
- Highly effective verbal and written communication skills and ability to lead and participate in multiple projects
- Well versed with identifying opportunities and risks in a fast-paced environment and ability to adjust to changing business priorities
- Must be results-focused, team-oriented and with a strong work ethic
Desired Qualifications:
- Prior experience with other virtualization platforms like OpenShift is a plus
- Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus
- Demonstrated ability as a lead engineer who can architect, design and code with strong communication and teaming skills
- Deep development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus
About Virtana: Virtana delivers the industry’s broadest and deepest Observability Platform, allowing organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more.
Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana’s software solutions for over a decade.
Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (Fin Ops), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) Software market is ripe for disruption, and Virtana is uniquely positioned for success.
Required Skills: TypeScript, MVC, Cloud experience (Azure, AWS, etc.), MongoDB, Express.js, Nest.js
Criteria:
Need candidates from Growing startups or Product based companies only
1. 4–8 years’ experience in backend engineering
2. Minimum 2+ years hands-on experience with:
- TypeScript
- Express.js / Nest.js
3. Strong experience with MongoDB (or MySQL / PostgreSQL / DynamoDB)
4. Strong understanding of system design & scalable architecture
5. Hands-on experience in:
- Event-driven architecture / Domain-driven design
- MVC / Microservices
6. Strong in automated testing (especially integration tests)
7. Experience with CI/CD pipelines (GitHub Actions or similar)
8. Experience managing production systems
9. Solid understanding of performance, reliability, observability
10. Cloud experience (AWS preferred; GCP/Azure acceptable)
11. Strong coding standards — Clean Code, code reviews, refactoring
Description
About the opportunity
We are looking for an exceptional Senior Software Engineer to join our Backend team. This is a unique opportunity to join a fast-growing company where you will get to solve real customer and business problems, shape the future of a product built for Bharat and build the engineering culture of the team. You will have immense responsibility and autonomy to push the boundaries of engineering to deliver scalable and resilient systems.
As a Senior Software Engineer, you will be responsible for shipping innovative features at breakneck speed, designing the architecture, mentoring other engineers on the team and pushing for a high bar of engineering standards like code quality, automated testing, performance, CI/CD, etc. If you are someone who loves solving problems for customers, technology, the craft of software engineering, and the thrill of building startups, we would like to talk to you.
What you will be doing
- Build and ship features in our Node.js (and now migrating to TypeScript) codebase that directly impact user experience and help move the top and bottom line of the business.
- Collaborate closely with our product, design and data team to build innovative features to deliver a world class product to our customers. At company, product managers don’t “tell” what to build. In fact, we all collaborate on how to solve a problem for our customers and the business. Engineering plays a big part in it.
- Design scalable platforms that empower our product and marketing teams to rapidly experiment.
- Own the quality of our products by writing automated tests, reviewing code, making systems observable and resilient to failures.
- Drive code quality and pay down architectural debt by continuous analysis of our codebases and systems, and continuous refactoring.
- Architect our systems for faster iterations, releasability, scalability and high availability using practices like Domain Driven Design, Event Driven Architecture, Cloud Native Architecture and Observability.
- Set the engineering culture with the rest of the team by defining how we should work as a team, set standards for quality, and improve the speed of engineering execution.
The role could be ideal for you if you
- Experience of 4-8 years of working in backend engineering with at least 2 years of production experience in TypeScript, Express.js (or another popular framework like Nest.js) and MongoDB (or any popular database like MySQL, PostgreSQL, DynamoDB, etc.).
- Well versed with one or more architectures and design patterns such as MVC, Domain Driven Design, CQRS, Event Driven Architecture, Cloud Native Architecture, etc.
- Experienced in writing automated tests (especially integration tests) and Continuous Integration. At company, engineers own quality and hence, writing automated tests is crucial to the role.
- Experience with managing production infrastructure using technologies like public cloud providers (AWS, GCP, Azure, etc.). Bonus: if you have experience in using Kubernetes.
- Experience in observability techniques like code instrumentation for metrics, tracing and logging.
- Care deeply about code quality, code reviews, software architecture (think about Object Oriented Programming, Clean Code, etc.), scalability and reliability. Bonus: if you have experience in this from your past roles.
- Understand the importance of shipping fast in a startup environment and constantly try to find ingenious ways to achieve the same.
- Collaborate well with everyone on the team. We communicate a lot and don’t hesitate to get quick feedback from other members on the team sooner than later.
- Can take ownership of goals and deliver them with high accountability.
Don’t hesitate to try out new technologies. At company, nobody is limited to a role. Every engineer on our team is an expert in at least one technology but often ventures into adjacent technologies like React.js, Flutter, Data Platforms, AWS, and Kubernetes. If you are not excited by this, you will not like working at company. Bonus: if you have experience in adjacent technologies like AWS (or any public cloud provider), GitHub Actions (or CircleCI), Kubernetes, or Infrastructure as Code (Terraform, Pulumi, etc.).
Company Overview:
Virtana delivers the industry’s only unified platform for Hybrid Cloud Performance, Capacity and Cost Management. Our platform provides unparalleled, real-time visibility into the performance, utilization, and cost of infrastructure across the hybrid cloud – empowering customers to manage their mission critical applications across physical, virtual, and cloud computing environments. Our SaaS platform allows organizations to easily manage and optimize their spend in the public cloud, assure resources are performing properly through real-time monitoring, and provide the unique ability to plan migrations across the hybrid cloud.
As we continue to expand our portfolio, we are seeking a highly skilled and hands-on Staff Software Engineer in backend technologies to contribute to the futuristic development of our sophisticated monitoring products.
Position Overview:
As a Staff Software Engineer specializing in backend technologies for Storage and Network monitoring in AI-enabled data centers as well as the cloud, you will play a critical role in designing, developing, and delivering high-quality features within aggressive timelines. Your expertise in microservices-based streaming architectures and strong hands-on development skills are essential for solving complex problems related to large-scale data processing. Proficiency in backend technologies such as Java and Python is crucial.
Work Location: Pune
Job Type: Hybrid
Key Responsibilities:
- Hands-on Development: Actively participate in the design, development, and delivery of high-quality features, demonstrating strong hands-on expertise in backend technologies like Java, Python, Go or related languages.
- Microservices and Streaming Architectures: Design and implement microservices-based streaming architectures to efficiently process and analyze large volumes of data, ensuring real-time insights and optimal performance.
- Agile Development: Collaborate within an agile development environment to deliver features on aggressive schedules, maintaining a high standard of quality in code, design, and architecture.
- Feature Ownership: Take ownership of features from inception to deployment, ensuring they meet product requirements and align with the overall product vision.
- Problem Solving and Optimization: Tackle complex technical challenges related to data processing, storage, and real-time monitoring, and optimize backend systems for high throughput and low latency.
- Code Reviews and Best Practices: Conduct code reviews, provide constructive feedback, and promote best practices to maintain a high-quality and maintainable codebase.
- Collaboration and Communication: Work closely with cross-functional teams, including UI/UX designers, product managers, and QA engineers, to ensure smooth integration and alignment with product goals.
- Documentation: Create and maintain technical documentation, including system architecture, design decisions, and API documentation, to facilitate knowledge sharing and onboarding.
Qualifications:
- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
- 8+ years of hands-on experience in backend development, demonstrating expertise in Java, Python or related technologies.
- Strong domain knowledge in Storage and Networking, with exposure to monitoring technologies and practices.
- Experience in handling large data lakes with purpose-built data stores (vector, NoSQL, graph, and time-series databases).
- Practical knowledge of OO design patterns and frameworks like Spring and Hibernate.
- Extensive experience with cloud platforms such as AWS, Azure or GCP and development expertise on Kubernetes, Docker, etc.
- Solid experience designing and delivering features with high quality on aggressive schedules.
- Proven experience in microservices-based streaming architectures, particularly in handling large amounts of data for storage and networking monitoring.
- Familiarity with performance optimization techniques and principles for backend systems.
- Excellent problem-solving and critical-thinking abilities.
- Outstanding communication and collaboration skills.
Why Join Us:
- Opportunity to be a key contributor in the development of a leading performance monitoring company specializing in AI-powered Storage and Network monitoring.
- Collaborative and innovative work environment.
- Competitive salary and benefits package.
- Professional growth and development opportunities.
- Chance to work on cutting-edge technology and products that make a real impact.
If you are a hands-on technologist with a proven track record of designing and delivering high-quality features on aggressive schedules and possess strong expertise in microservices-based streaming architectures, we invite you to apply and help us redefine the future of performance monitoring.
Summary:
We are seeking a highly skilled Python Backend Developer with proven expertise in FastAPI to join our team as a full-time contractor for 12 months. The ideal candidate will have 5+ years of experience in backend development, a strong understanding of API design, and the ability to deliver scalable, secure solutions. Knowledge of front-end technologies is an added advantage. Immediate joiners are preferred. This role requires full-time commitment—please apply only if you are not engaged in other projects.
Job Type:
Full-Time Contractor (12 months)
Location:
Remote / On-site (Jaipur preferred, as per project needs)
Experience:
5+ years in backend development
Key Responsibilities:
- Design, develop, and maintain robust backend services using Python and FastAPI (see the minimal sketch after this list).
- Implement and manage Prisma ORM for database operations.
- Build scalable APIs and integrate with SQL databases and third-party services.
- Deploy and manage backend services using Azure Function Apps and Microsoft Azure Cloud.
- Collaborate with front-end developers and other team members to deliver high-quality web applications.
- Ensure application performance, security, and reliability.
- Participate in code reviews, testing, and deployment processes.
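As a hedged illustration of the FastAPI work described above, a minimal service sketch; the endpoint, model, and in-memory store are hypothetical and not part of the actual project:

```python
# Minimal FastAPI sketch: one resource with a Pydantic model.
# Assumes `pip install fastapi uvicorn`; run with `uvicorn main:app --reload`.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float

# In-memory store stands in for real database access (e.g., via an ORM).
items: dict[int, Item] = {}

@app.post("/items/{item_id}", status_code=201)
def create_item(item_id: int, item: Item) -> Item:
    if item_id in items:
        raise HTTPException(status_code=409, detail="Item already exists")
    items[item_id] = item
    return item

@app.get("/items/{item_id}")
def read_item(item_id: int) -> Item:
    if item_id not in items:
        raise HTTPException(status_code=404, detail="Item not found")
    return items[item_id]
```

In the real role, the dict would be replaced by Prisma-backed SQL access and the service deployed on Azure (e.g., as a Function App or container), per the responsibilities above.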
Required Skills:
- Expertise in Python backend development with strong experience in FastAPI.
- Solid understanding of RESTful API design and implementation.
- Proficiency in SQL databases and ORM tools (preferably Prisma)
- Hands-on experience with Microsoft Azure Cloud and Azure Function Apps.
- Familiarity with CI/CD pipelines and containerization (Docker).
- Knowledge of cloud architecture best practices.
Added Advantage:
- Front-end development knowledge (React, Angular, or similar frameworks).
- Exposure to AWS/GCP cloud platforms.
- Experience with NoSQL databases.
Eligibility:
- Minimum 5 years of professional experience in backend development.
- Available for full-time engagement.
- Please refrain from applying if you are currently engaged in other projects; we require dedicated availability.
We are seeking an experienced Cloud Penetration Tester to assess and secure our AWS, Azure, and GCP environments. You will simulate real-world cyber-attacks on cloud infrastructure, SaaS platforms, and APIs to identify security weaknesses and help engineering teams strengthen cloud defenses.
🔐 Key Responsibilities
- Conduct cloud penetration testing across AWS, Azure, and GCP
- Identify misconfigurations, exposed services, IAM weaknesses, and attack paths
- Perform web, API, and SaaS application security testing
- Test IAM roles, policies, authentication, and authorization controls
- Perform SSRF, token theft, and cloud metadata attacks
- Assess S3, Azure Blob, and Cloud Storage security (a small defensive sketch follows this list)
- Test Kubernetes clusters, containers, and CI/CD pipelines
- Simulate real-world attacker techniques and lateral movement
- Produce clear, actionable vulnerability reports with remediation guidance
- Work with DevOps and Engineering teams to improve cloud security
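By way of one small, defensive example of the storage-assessment work above, a boto3 sketch that flags S3 buckets lacking a complete Public Access Block. This is an illustration only, not a substitute for dedicated tooling such as ScoutSuite or Pacu:

```python
# Flags S3 buckets whose Public Access Block is missing or incomplete.
# Assumes `pip install boto3` and credentials with s3:ListAllMyBuckets
# and s3:GetBucketPublicAccessBlock permissions.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)[
            "PublicAccessBlockConfiguration"
        ]
        if not all(config.values()):
            print(f"[!] {name}: public access block incomplete: {config}")
    except ClientError as e:
        if e.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"[!] {name}: no public access block configured")
        else:
            raise
```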
🛠️ Required Skills
☁️ Cloud Platforms
- AWS (IAM, EC2, S3, RDS, Lambda)
- Azure (Azure AD, VMs, Storage Accounts)
- GCP (IAM, Compute Engine, Cloud Storage)
🔓 Offensive Security
- Cloud tools: Pacu, ScoutSuite, CloudMapper, Stormspotter
- Web & API testing: Burp Suite, Nuclei, OWASP ZAP, SQLmap
- Exploitation: Metasploit, Sliver, Cobalt Strike (or equivalent)
- Privilege escalation: LinPEAS, WinPEAS, GTFOBins
🧪 DevSecOps & Containers
- CI/CD security (GitHub, GitLab, Jenkins)
- Secrets scanning (Gitleaks, TruffleHog)
- Container & Kubernetes security (Trivy, Kube-Hunter)
🎓 Preferred Qualifications
- Certifications: OSCP, CEH, CRTO, AWS Security Specialty
- Experience in Red Team, Bug Bounty, or Cloud Security Operations
- Knowledge of MITRE ATT&CK for Cloud
- Experience with SOC2, ISO 27001, or PCI-DSS environments
3-5 years of experience as a full stack developer, with essential experience in the following technologies: FastAPI, JavaScript, React.js-Redux, Node.js, Next.js, MongoDB, Python, Microservices, Docker, and MLOps.
Experience in Cloud Architecture using Kubernetes (K8s), Google Kubernetes Engine, Authentication and Authorisation Tools, DevOps Tools and Scalable and Secure Cloud Hosting is a significant plus.
Ability to manage a hosting environment, scale applications to handle load changes, and apply knowledge of accessibility and security compliance.
Testing of API endpoints.
Ability to code and create functional web applications and optimize them for improved response time and efficiency. Skilled in performance tuning, query plan/explain plan analysis, indexing, and table partitioning.
Expert knowledge of Python and corresponding frameworks with their best practices, expert knowledge of relational databases, NoSQL.
Ability to create acceptance criteria, write test cases and scripts, and perform integrated QA techniques.
Must be conversant with Agile software development methodology. Must be able to write technical documents, coordinate with test teams. Proficiency using Git version control.
About Us :
CLOUDSUFI, a Google Cloud Premier Partner, is a data science and product engineering organization building products and solutions for technology and enterprise industries. We firmly believe in the power of data to transform businesses and drive better decisions. We combine unmatched experience in business processes with cutting-edge infrastructure and cloud services, and we partner with our customers to monetize their data and make enterprise data dance.
Our Values :
We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.
Equal Opportunity Statement :
CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, and national origin status. We provide equal opportunities in employment, advancement, and all other areas of our workplace.
About the Role
Job Title: Lead Java Developer
Location: Noida(Hybrid)
Experience: 7-12 years
Education: BTech / BE / ME /MTech/ MCA / MSc Computer Science
Primary Skills - Java 8-17+, Core Java, design patterns (more than Singleton & Factory), web services development, REST/SOAP, XML & JSON manipulation, OAuth 2.0, CI/CD, SQL/NoSQL
Secondary Skills - Kafka, Jenkins, Kubernetes, Google Cloud Platform (GCP), SAP JCo library, Terraform
Certifications (Optional): OCPJP (the Oracle Certified Professional Java Programmer) / Google Professional Cloud
Required Experience:
● Must have integration component development experience using Java 8/9 technologies and service-oriented architecture (SOA)
● Must have in-depth knowledge of design patterns and integration architecture
● Must have experience in system scalability and maintenance for complex enterprise applications and integration solutions
● Experience with developing solutions on Google Cloud Platform will be an added advantage.
● Should have good hands-on experience with software engineering tools, viz. Eclipse, NetBeans, JIRA, Confluence, Bitbucket, SVN, etc.
● Should be well versed with current technology trends in IT solutions, e.g., Cloud Platform Development, DevOps, Low Code solutions, and Intelligent Automation
Good to Have:
● Experience of developing 3-4 integration adapters/connectors for enterprise applications (ERP, CRM, HCM, SCM, Billing, etc.) using industry-standard frameworks and methodologies following Agile/Scrum
Behavioral competencies required:
● Must have worked with US/Europe based clients in onsite/offshore delivery model
● Should have very good verbal and written communication, technical articulation, listening and presentation skills
● Should have proven analytical and problem solving skills
● Should have demonstrated effective task prioritization, time management and internal/external stakeholder management skills
● Should be a quick learner and team player
● Should have experience of working under stringent deadlines in a Matrix organization structure
● Should have demonstrated appreciable Organizational Citizenship Behavior (OCB) in past organizations
Job Responsibilities:
● Writing the design specifications and user stories for the functionalities assigned.
● Develop assigned components / classes and assist QA team in writing the test cases
● Create and maintain coding best practices and do peer code / solution reviews
● Participate in Daily Scrum calls, Scrum Planning, Retro and Demos meetings
● Bring out technical/design/architectural challenges and risks during execution, and develop action plans to mitigate and avert identified risks
● Comply with development processes, documentation templates, and tools prescribed by CLOUDSUFI or its clients
● Work with other teams and Architects in the organization and assist them on technical Issues/Demos/POCs and proposal writing for prospective clients
● Contribute towards the creation of knowledge repository, reusable assets/solution accelerators and IPs
● Provide feedback to junior developers and be a coach and mentor for them
● Provide training sessions on the latest technologies and topics to other employees in the organization
● Participate in organization development activities from time to time - interviews, CSR/employee engagement activities, participation in business events/conferences, and implementation of new policies, systems, and procedures as decided by the management team
Please note that salary will be based on experience.
Job Title: Full Stack Engineer
Location: Bengaluru (Indiranagar) – Work From Office (5 Days)
Job Summary
We are seeking a skilled Full Stack Engineer with solid hands-on experience across frontend and backend development. You will work on mission-critical features, ensuring seamless performance, scalability, and reliability across our products.
Responsibilities
- Design, develop, and maintain scalable full-stack applications.
- Build responsive, high-performance UIs using TypeScript & Next.js.
- Develop backend services and APIs using Python (FastAPI/Django).
- Work closely with product, design, and business teams to translate requirements into intuitive solutions.
- Contribute to architecture discussions and drive technical best practices.
- Own features end-to-end — design, development, testing, deployment, and monitoring.
- Ensure robust security, code quality, and performance optimization.
Tech Stack
Frontend: TypeScript, Next.js, React, Tailwind CSS
Backend: Python, FastAPI, Django
Databases: PostgreSQL, MongoDB, Redis
Cloud & Infra: AWS/GCP, Docker, Kubernetes, CI/CD
Other Tools: Git, GitHub, Elasticsearch, Observability tools
Requirements
Must-Have:
- 2+ years of professional full-stack engineering experience.
- Strong expertise in either frontend (TypeScript/Next.js) or backend (Python/FastAPI/Django), with familiarity in both.
- Experience building RESTful services and microservices.
- Hands-on experience with Git, CI/CD, and cloud platforms (AWS/GCP/Azure).
- Strong debugging, problem-solving, and optimization skills.
- Ability to thrive in fast-paced, high-ownership startup environments.
Good-to-Have:
- Exposure to Docker, Kubernetes, and observability tools.
- Experience with message queues or event-driven architecture.
Perks & Benefits
- Upskilling support – courses, tools & learning resources.
- Fun team outings, hackathons, demos & engagement initiatives.
- Flexible Work-from-Home: 12 WFH days every 6 months.
- Menstrual WFH: up to 3 days per month.
- Mobility benefits: relocation support & travel allowance.
- Parental support: maternity, paternity & adoption leave.
About Us:
Tradelab Technologies Pvt Ltd is not for those seeking comfort—we are for those hungry to make a mark in the trading and fintech industry.
Key Responsibilities
CI/CD and Infrastructure Automation
- Design, implement, and maintain CI/CD pipelines to support fast and reliable releases
- Automate deployments using tools such as Terraform, Helm, and Kubernetes
- Improve build and release processes to support high-performance and low-latency trading applications
- Work efficiently with Linux/Unix environments
Cloud and On-Prem Infrastructure Management
- Deploy, manage, and optimize infrastructure on AWS, GCP, and on-premises environments
- Ensure system reliability, scalability, and high availability
- Implement Infrastructure as Code (IaC) to standardize and streamline deployments
Performance Monitoring and Optimization
- Monitor system performance and latency using Prometheus, Grafana, and ELK stack
- Implement proactive alerting and fault detection to ensure system stability
- Troubleshoot and optimize system components for maximum efficiency
Security and Compliance
- Apply DevSecOps principles to ensure secure deployment and access management
- Maintain compliance with financial industry regulations, such as those mandated by SEBI
- Conduct vulnerability assessments and maintain logging and audit controls
Required Skills and Qualifications
- 2+ years of experience as a DevOps Engineer in a software or trading environment
- Strong expertise in CI/CD tools (Jenkins, GitLab CI/CD, ArgoCD)
- Proficiency in cloud platforms such as AWS and GCP
- Hands-on experience with Docker and Kubernetes
- Experience with Terraform or CloudFormation for IaC
- Strong Linux administration and networking fundamentals (TCP/IP, DNS, firewalls)
- Familiarity with Prometheus, Grafana, and ELK stack
- Proficiency in scripting using Python, Bash, or Go
- Solid understanding of security best practices including IAM, encryption, and network policies
Good to Have (Optional)
- Experience with low-latency trading infrastructure or real-time market data systems
- Knowledge of high-frequency trading environments
- Exposure to FIX protocol, FPGA, or network optimization techniques
- Familiarity with Redis or Nginx for real-time data handling
Why Join Us?
- Work with a team that expects and delivers excellence.
- A culture where risk-taking is rewarded, and complacency is not.
- Limitless opportunities for growth—if you can handle the pace.
- A place where learning is currency, and outperformance is the only metric that matters.
- The opportunity to build systems that move markets, execute trades in microseconds, and redefine fintech.
This isn’t just a job—it’s a proving ground. Ready to take the leap? Apply now.
Responsibilities:
- Build and optimize batch and streaming data pipelines using Apache Beam (Dataflow); a minimal sketch follows this list
- Design and maintain BigQuery datasets using best practices in partitioning, clustering, and materialized views
- Develop and manage Airflow DAGs in Cloud Composer for workflow orchestration
- Implement SQL-based transformations using Dataform (or dbt)
- Leverage Pub/Sub for event-driven ingestion and Cloud Storage for raw/lake layer data architecture
- Drive engineering best practices across CI/CD, testing, monitoring, and pipeline observability
- Partner with solution architects and product teams to translate data requirements into technical designs
- Mentor junior data engineers and support knowledge-sharing across the team
- Contribute to documentation, code reviews, sprint planning, and agile ceremonies
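As a minimal sketch of the Beam/Dataflow work described in the first bullet above (the project, topic, table, and schema names are hypothetical):

```python
# Minimal streaming pipeline: read JSON events from Pub/Sub, write to BigQuery.
# Assumes `pip install apache-beam[gcp]`; runs on Dataflow with
# --runner=DataflowRunner, or locally with the DirectRunner for testing.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
            topic="projects/my-project/topics/events")   # hypothetical topic
        | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
        | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            "my-project:analytics.events",               # hypothetical table
            schema="event_id:STRING,ts:TIMESTAMP,payload:STRING",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```

Because the same pipeline code runs under the DirectRunner, it slots naturally into the CI/CD and testing practices listed above.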
Requirements
- 5+ years of hands-on experience in data engineering, with at least 2 years on GCP
- Proven expertise in BigQuery, Dataflow (Apache Beam), Cloud Composer (Airflow)
- Strong programming skills in Python and/or Java
- Experience with SQL optimization, data modeling, and pipeline orchestration
- Familiarity with Git, CI/CD pipelines, and data quality monitoring frameworks
- Exposure to Dataform, dbt, or similar tools for ELT workflows
- Solid understanding of data architecture, schema design, and performance tuning
- Excellent problem-solving and collaboration skills
Bonus Skills:
- GCP Professional Data Engineer certification
- Experience with Vertex AI, Cloud Functions, Dataproc, or real-time streaming architectures
- Familiarity with data governance tools (e.g., Atlan, Collibra, Dataplex)
- Exposure to Docker/Kubernetes, API integration, and infrastructure-as-code (Terraform)
Senior Software Engineer
Location: Hyderabad, India
Who We Are:
Since our inception back in 2006, Navitas has grown to be an industry leader in the digital transformation space, and we’ve served as trusted advisors supporting our client base within the commercial, federal, and state and local markets.
What We Do:
At our very core, we’re a group of problem solvers providing our award-winning technology solutions to drive digital acceleration for our customers! With proven solutions, award-winning technologies, and a team of expert problem solvers, Navitas has consistently empowered customers to use technology as a competitive advantage and deliver cutting-edge transformative solutions.
What You’ll Do:
Build, Innovate, and Own:
- Design, develop, and maintain high-performance microservices in a modern .NET/C# environment.
- Architect and optimize data pipelines and storage solutions that power our AI-driven products.
- Collaborate closely with AI and data teams to bring machine learning models into production systems.
- Build integrations with external services and APIs to enable scalable, interoperable solutions.
- Ensure robust security, scalability, and observability across distributed systems.
- Stay ahead of the curve — evaluating emerging technologies and contributing to architectural decisions for our next-gen platform.
Responsibilities will include but are not limited to:
- Provide technical guidance and code reviews that raise the bar for quality and performance.
- Help create a growth-minded engineering culture that encourages experimentation, learning, and accountability.
What You’ll Need:
- Bachelor’s degree in Computer Science or equivalent practical experience.
- 8+ years of professional experience, including 5+ years designing and maintaining scalable backend systems using C#/.NET and microservices architecture.
- Strong experience with SQL and NoSQL data stores.
- Solid hands-on knowledge of cloud platforms (AWS, GCP, or Azure).
- Proven ability to design for performance, reliability, and security in data-intensive systems.
- Excellent communication skills and ability to work effectively in a global, cross-functional environment.
Set Yourself Apart With:
- Startup experience, specifically in building a product from 0 to 1
- Exposure to AI/ML-powered systems, data engineering, or large-scale data processing.
- Experience in healthcare or fintech domains.
- Familiarity with modern DevOps practices, CI/CD pipelines, and containerization (Docker/Kubernetes).
Equal Employer/Veterans/Disabled
Navitas Business Consulting is an affirmative action and equal opportunity employer. If reasonable accommodation is needed to participate in the job application or interview process, to perform essential job functions, and/or to receive other benefits and privileges of employment, please contact Navitas Human Resources.
Navitas is an equal opportunity employer. We provide employment and opportunities for advancement, compensation, training, and growth according to individual merit, without regard to race, color, religion, sex (including pregnancy), national origin, sexual orientation, gender identity or expression, marital status, age, genetic information, disability, veteran or military status, or any other characteristic protected under applicable federal, state, or local law. Our goal is for each staff member to have the opportunity to grow to the limits of their abilities and to achieve personal and organizational objectives. We will support positive programs for equal treatment of all staff and full utilization of all qualified employees at all levels within Navitas.
Job Summary:
We are looking for a highly skilled and experienced DevOps Engineer who will be responsible for the deployment, configuration, and troubleshooting of various infrastructure and application environments. The candidate must have a proficient understanding of CI/CD pipelines, container orchestration, and cloud services, with experience in AWS services like EKS, EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc., in a highly available and scalable production environment. The DevOps Engineer will be responsible for monitoring, automation, troubleshooting, security, user management, reporting, migrations, upgrades, disaster recovery, and infrastructure restoration, among other tasks. They will also work with application teams on infrastructure design and issues, and architect solutions to optimally meet business needs.
Responsibilities:
- Deploy, configure, and troubleshoot various infrastructure and application environments
- Work with AWS services like EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc., in a highly available and scalable production environment
- Monitor, automate, troubleshoot, secure, maintain users, and report on infrastructure and applications
- Collaborate with application teams on infrastructure design and issues
- Architect solutions that optimally meet business needs
- Implement CI/CD pipelines and automate deployment processes
- Disaster recovery and infrastructure restoration
- Restore/Recovery operations from backups
- Automate routine tasks (see the sketch after this list)
- Execute company initiatives in the infrastructure space
- Expertise with observability tools like ELK, Prometheus, Grafana, Loki
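As one hedged example of the routine-task automation mentioned above, a boto3 sketch that snapshots tagged EBS volumes; the backup tag convention is invented for the example:

```python
# Routine-task automation example: snapshot every EBS volume tagged backup=true.
# Assumes `pip install boto3` and credentials with ec2:DescribeVolumes
# and ec2:CreateSnapshot permissions.
import boto3

ec2 = boto3.client("ec2")

volumes = ec2.describe_volumes(
    Filters=[{"Name": "tag:backup", "Values": ["true"]}]
)["Volumes"]

for vol in volumes:
    snap = ec2.create_snapshot(
        VolumeId=vol["VolumeId"],
        Description="Automated nightly backup",
    )
    print(f"Started snapshot {snap['SnapshotId']} for {vol['VolumeId']}")
```

In practice a script like this would run on a schedule (cron, EventBridge, or a CI job) with snapshot retention handled separately.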
Qualifications:
- Proficient understanding of CI/CD pipelines, container orchestration, and various cloud services
- Experience with AWS services like EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc.
- Experience in monitoring, automation, troubleshooting, security, user management, reporting, migrations, upgrades, disaster recovery, and infrastructure restoration
- Experience in architecting solutions that optimally meet business needs
- Experience with scripting languages (e.g., Shell, Python) and infrastructure as code (IaC) tools (e.g., Terraform, CloudFormation)
- Strong understanding of system concepts like high availability, scalability, and redundancy
- Ability to work with application teams on infrastructure design and issues
- Excellent problem-solving and troubleshooting skills
- Experience with automation of routine tasks
- Good communication and interpersonal skills
Education and Experience:
- Bachelor's degree in Computer Science or a related field
- 5 to 10 years of experience as a DevOps Engineer or in a related role
- Experience with observability tools like ELK, Prometheus, Grafana
Working Conditions:
The DevOps Engineer will work in a fast-paced environment, collaborating with various application teams, stakeholders, and management. They will work both independently and in teams, and they may need to work extended hours or be on call to handle infrastructure emergencies.
Note: This is a remote role. The team member is expected to be in the Bangalore office for one week each quarter.
Job Title: Sr. DevOps Engineer
Experience Required: 2 to 4 years in DevOps or related fields
Employment Type: Full-time
About the Role:
We are seeking a highly skilled and experienced Lead DevOps Engineer. This role will focus on driving the design, implementation, and optimization of our CI/CD pipelines, cloud infrastructure, and operational processes. As a Lead DevOps Engineer, you will play a pivotal role in enhancing the scalability, reliability, and security of our systems while mentoring a team of DevOps engineers to achieve operational excellence.
Key Responsibilities:
Infrastructure Management: Architect, deploy, and maintain scalable, secure, and resilient cloud infrastructure (e.g., AWS, Azure, or GCP).
CI/CD Pipelines: Design and optimize CI/CD pipelines to improve development velocity and deployment quality.
Automation: Automate repetitive tasks and workflows, such as provisioning cloud resources, configuring servers, managing deployments, and implementing infrastructure as code (IaC) using tools like Terraform, CloudFormation, or Ansible.
Monitoring & Logging: Implement robust monitoring, alerting, and logging systems for enterprise and cloud-native environments using tools like Prometheus, Grafana, ELK Stack, NewRelic or Datadog.
Security: Ensure the infrastructure adheres to security best practices, including vulnerability assessments and incident response processes.
Collaboration: Work closely with development, QA, and IT teams to align DevOps strategies with project goals.
Mentorship: Lead, mentor, and train a team of DevOps engineers to foster growth and technical expertise.
Incident Management: Oversee production system reliability, including root cause analysis and performance tuning.
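A minimal instrumentation sketch of the monitoring work described above, using the Python prometheus_client library; the metric names and workload are made up for the example:

```python
# Exposes a /metrics endpoint with a request counter and latency histogram
# for Prometheus to scrape. Assumes `pip install prometheus-client`.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled")
LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")

@LATENCY.time()
def handle_request() -> None:
    REQUESTS.inc()
    time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work

if __name__ == "__main__":
    start_http_server(8000)  # Prometheus scrapes http://localhost:8000/metrics
    while True:
        handle_request()
```

Grafana dashboards and alert rules would then be built on top of the scraped series.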
Required Skills & Qualifications:
Technical Expertise:
Strong proficiency in cloud platforms like AWS, Azure, or GCP.
Advanced knowledge of containerization technologies (e.g., Docker, Kubernetes).
Expertise in IaC tools such as Terraform, CloudFormation, or Pulumi.
Hands-on experience with CI/CD tools, particularly Bitbucket Pipelines, Jenkins, GitLab CI/CD, Github Actions or CircleCI.
Proficiency in scripting languages (e.g., Python, Bash, PowerShell).
Soft Skills:
Excellent communication and leadership skills.
Strong analytical and problem-solving abilities.
Proven ability to manage and lead a team effectively.
Experience:
4+ years of experience in DevOps or Site Reliability Engineering (SRE).
4+ years in a leadership or team lead role, with proven experience managing distributed teams, mentoring team members, and driving cross-functional collaboration.
Strong understanding of microservices, APIs, and serverless architectures.
Nice to Have:
Certifications like AWS Certified Solutions Architect, Kubernetes Administrator, or similar.
Experience with GitOps tools such as ArgoCD or Flux.
Knowledge of compliance standards (e.g., GDPR, SOC 2, ISO 27001).
Perks & Benefits:
Competitive salary and performance bonuses.
Comprehensive health insurance for you and your family.
Professional development opportunities and certifications, including sponsored certifications and access to training programs to help you grow your skills and expertise.
Flexible working hours and remote work options.
Collaborative and inclusive work culture.
Join us to build and scale world-class systems that empower innovation and deliver exceptional user experiences.
You can directly contact us: Nine three one six one two zero one three two
We are looking for highly experienced Senior Java Developers who can architect, design, and deliver high-performance enterprise applications using Spring Boot and microservices. The role requires a strong understanding of distributed systems, scalability, and data consistency.
Job Description:
Infilect is a GenAI company pioneering the use of Image Recognition in Consumer Packaged Goods retail.
We are looking for a Senior DevOps Engineer to be responsible and accountable for the smooth running of our cloud, AI workflows, and AI-based computer systems. The candidate will also supervise the implementation and maintenance of the company’s computing infrastructure, including the in-house GPU and AI servers and the AI workloads that run on them.
Responsibilities
- Understanding and automating AI-based deployments and AI-based workflows
- Implementing various development, testing, automation tools, and IT infrastructure
- Manage Cloud, computer systems and other IT assets.
- Strive for continuous improvement and build continuous integration, continuous delivery, and continuous deployment pipelines (CI/CD)
- Design, develop, implement, and coordinate systems, policies, and procedures for Cloud and on-premise systems
- Ensure the security of data, network access, and backup systems
- Act in alignment with user needs and system functionality to contribute to organizational policy
- Identify problematic areas, perform root cause analysis (RCA), and implement strategic solutions in a timely manner
- Preserve assets, information security, and control structures
- Handle monthly/annual cloud budget and ensure cost effectiveness
Requirements and skills
- Well versed in automation tools such as Docker, Kubernetes, Puppet, Ansible etc.
- Working Knowledge of Python, SQL database stack or any full-stack with relevant tools.
- Understanding agile development, CI/CD, sprints, code reviews, Git and GitHub/Bitbucket workflows
- Well versed with ELK stack or any other logging, monitoring and analysis tools
- Proven working experience of 2+ years as a DevOps/Tech Lead/IT Manager or in relevant positions
- Excellent knowledge of technical management, information analysis, and computer hardware/software systems
- Hands-on experience with computer networks, network administration, and network installation
- Knowledge of ISO/SOC Type II implementation will be a plus
- BE/B.Tech/ME/M.Tech in Computer Science, IT, Electronics or a similar field
Google Data Engineer - SSE
Position Description
Google Cloud Data Engineer
Notice Period: Immediate joiners, or candidates serving notice of up to 30 days
Job Description:
We are seeking a highly skilled Data Engineer with extensive experience in Google Cloud Platform (GCP) data services and big data technologies. The ideal candidate will be responsible for designing, implementing, and optimizing scalable data solutions while ensuring high performance, reliability, and security.
Key Responsibilities:
• Design, develop, and maintain scalable data pipelines and architectures using GCP data services.
• Implement and optimize solutions using BigQuery, Dataproc, Composer, Pub/Sub, Dataflow, GCS, and Bigtable (see the partitioning sketch after this list).
• Work with GCP databases such as Bigtable, Spanner, CloudSQL, AlloyDB, ensuring performance, security, and availability.
• Develop and manage data processing workflows using Apache Spark, Hadoop, Hive, Kafka, and other Big Data technologies.
• Ensure data governance and security using Dataplex, Data Catalog, and other GCP governance tooling.
• Collaborate with DevOps teams to build CI/CD pipelines for data workloads using Cloud Build, Artifact Registry, and Terraform.
• Optimize query performance and data storage across structured and unstructured datasets.
• Design and implement streaming data solutions using Pub/Sub, Kafka, or equivalent technologies.
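To illustrate the BigQuery optimization work above, a minimal google-cloud-bigquery sketch that creates a partitioned, clustered table; the project, dataset, and field names are hypothetical:

```python
# Creates a day-partitioned, clustered BigQuery table.
# Assumes `pip install google-cloud-bigquery` and application default credentials.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project

schema = [
    bigquery.SchemaField("event_id", "STRING"),
    bigquery.SchemaField("event_ts", "TIMESTAMP"),
    bigquery.SchemaField("customer_id", "STRING"),
]

table = bigquery.Table("my-project.analytics.events", schema=schema)
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY, field="event_ts")
table.clustering_fields = ["customer_id"]  # prunes scans on common filters

client.create_table(table)
```

Partitioning on the timestamp plus clustering on a frequently filtered column lets BigQuery prune both partitions and blocks, which is the main lever for query cost and performance in this kind of work.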
Required Skills & Qualifications:
• 8-15 years of experience
• Strong expertise in GCP Dataflow, Pub/Sub, Cloud Composer, Cloud Workflow, BigQuery, Cloud Run, Cloud Build.
• Proficiency in Python and Java, with hands-on experience in data processing and ETL pipelines.
• In-depth knowledge of relational databases (SQL, MySQL, PostgreSQL, Oracle) and NoSQL databases (MongoDB, Scylla, Cassandra, DynamoDB).
• Experience with Big Data platforms such as Cloudera, Hortonworks, MapR, Azure HDInsight, IBM Open Platform.
• Strong understanding of AWS Data services such as Redshift, RDS, Athena, SQS/Kinesis.
• Familiarity with data formats such as Avro, ORC, Parquet.
• Experience handling large-scale data migrations and implementing data lake architectures.
• Expertise in data modeling, data warehousing, and distributed data processing frameworks.
• GCP Professional Data Engineer certification or equivalent.
Good to Have:
• Experience in BigQuery, Presto, or equivalent.
• Exposure to Hadoop, Spark, Oozie, HBase.
• Understanding of cloud database migration strategies.
• Knowledge of GCP data governance and security best practices.