
As a Google Cloud Infrastructure / DevOps Engineer, you will design, implement, and maintain cloud infrastructure while enabling efficient development operations. This role bridges development and operations, with a strong focus on automation, scalability, reliability, and collaboration. You will work closely with cross-functional teams to optimize systems and enhance CI/CD pipelines.
Key Responsibilities:
Cloud Infrastructure Management
- Manage and monitor Google Cloud Platform (GCP) services and components.
- Ensure high availability, scalability, and security of cloud resources.
CI/CD Pipeline Implementation
- Design and implement automated pipelines for application releases.
- Build and maintain CI/CD workflows.
- Collaborate with developers to streamline deployment processes.
- Automate testing, deployment, and rollback procedures.
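The rollback automation mentioned above is often a small scripted decision on canary health. A minimal sketch in Python; the 2x-baseline rule and the 5% hard limit are illustrative assumptions, not part of this role's actual stack:

```python
#!/usr/bin/env python3
"""Sketch of an automated rollback decision for a canary deployment.

Assumes error rates are already collected (e.g. from a monitoring
system); the thresholds below are invented for illustration.
"""

def should_roll_back(baseline_error_rate: float,
                     canary_error_rate: float,
                     max_ratio: float = 2.0,
                     hard_limit: float = 0.05) -> bool:
    """Roll back if the canary errors more than `max_ratio` times the
    baseline, or exceeds an absolute `hard_limit` (5% by default)."""
    if canary_error_rate > hard_limit:
        return True
    if baseline_error_rate == 0:
        return canary_error_rate > 0
    return canary_error_rate / baseline_error_rate > max_ratio

if __name__ == "__main__":
    # Healthy canary: 1.1% errors vs a 1.0% baseline -> keep it.
    print(should_roll_back(0.010, 0.011))   # False
    # Unhealthy canary: 6% errors breaches the hard limit -> roll back.
    print(should_roll_back(0.010, 0.060))   # True
```

In a real pipeline a helper like this would gate the promote/rollback step after the metrics window closes.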
Infrastructure as Code (IaC)
- Use Terraform (or similar tools) to define and manage infrastructure.
- Maintain version-controlled infrastructure code.
- Ensure environment consistency across dev, staging, and production.
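One way to check the environment-consistency point above is to diff the variables each environment actually defines. A minimal sketch; in practice the dicts would be loaded from version-controlled tfvars/YAML files, and the keys shown are made up for illustration:

```python
"""Sketch: detect configuration drift between environments."""

def config_drift(envs: dict[str, dict]) -> dict[str, set]:
    """Return, per environment, the keys missing relative to the
    union of keys across all environments."""
    all_keys = set().union(*(cfg.keys() for cfg in envs.values()))
    return {name: all_keys - cfg.keys() for name, cfg in envs.items()}

if __name__ == "__main__":
    envs = {
        "dev":     {"machine_type": "e2-small", "min_nodes": 1},
        "staging": {"machine_type": "e2-medium", "min_nodes": 1},
        "prod":    {"machine_type": "e2-standard-4"},  # min_nodes absent
    }
    for env, missing in config_drift(envs).items():
        if missing:
            print(f"{env} is missing: {sorted(missing)}")  # prod is missing: ['min_nodes']
```

This complements, rather than replaces, `terraform plan` as a drift check.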
Monitoring & Troubleshooting
- Monitor system performance, resource usage, and application health.
- Troubleshoot cloud infrastructure and deployment pipeline issues.
- Implement proactive monitoring and alerting.
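Proactive alerting as described above usually includes a debounce so a single spike does not page anyone. A minimal sketch of a consecutive-breach rule; the 3-sample window and the CPU numbers are invented, and a real setup would use Cloud Monitoring alert policies or Prometheus/Alertmanager instead:

```python
"""Sketch: threshold alerting with a consecutive-breach rule."""

def alert_points(samples: list[float], threshold: float,
                 consecutive: int = 3) -> list[int]:
    """Return indices at which an alert should fire: the metric has
    been above `threshold` for `consecutive` samples in a row."""
    fired, streak = [], 0
    for i, value in enumerate(samples):
        streak = streak + 1 if value > threshold else 0
        if streak == consecutive:
            fired.append(i)
    return fired

if __name__ == "__main__":
    cpu = [0.60, 0.91, 0.95, 0.93, 0.40, 0.96, 0.97]
    # Fires once, at index 3: the third consecutive sample above 0.90.
    print(alert_points(cpu, threshold=0.90))  # [3]
```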
Security & Compliance
- Apply cloud security best practices.
- Ensure compliance with industry standards and internal policies.
- Collaborate with security teams to address vulnerabilities.
Collaboration & Documentation
- Work closely with development, operations, and QA teams.
- Document architecture, processes, and configurations.
- Share knowledge and best practices with the team.
Qualifications:
Education
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
Experience
- Minimum 3 years of industry experience.
- At least 1 year designing and managing production systems on GCP.
- Familiarity with GCP services (Compute Engine, GKE, Cloud Storage, etc.).
- Exposure to Docker, Kubernetes, and microservices architecture.
Skills
- Proficiency in Python or Bash for automation.
- Strong understanding of DevOps principles.
- Knowledge of Jenkins or other CI/CD tools.
- Experience with GKE for container orchestration.
- Familiarity with event streaming platforms (Kafka, Google Cloud Pub/Sub).
About the Company: Bits In Glass – India
Industry Leader
- Established for 20+ years with global operations in the US, Canada, UK, and India.
- In 2021, Bits In Glass partnered with Crochet Technologies, strengthening its global delivery capabilities.
- Offices in Pune, Hyderabad, and Chandigarh.
- Specialized Pega Partner since 2017, ranked among the top 30 Pega partners globally.
- Long-standing sponsor of the annual PegaWorld event.
- Elite Appian partner since 2008 with deep industry expertise.
- Dedicated global Pega Center of Excellence (CoE) supporting customers and development teams worldwide.
Employee Benefits
- Career Growth: Clear pathways for advancement and professional development.
- Challenging Projects: Work on innovative, high-impact global projects.
- Global Exposure: Collaborate with international teams and clients.
- Flexible Work Arrangements: Supporting work-life balance.
- Comprehensive Benefits: Competitive compensation, health insurance, paid time off.
- Learning Opportunities: Upskill on AI-enabled Pega solutions, data engineering, integrations, cloud migration, and more.
Company Culture
- Collaborative Environment: Strong focus on teamwork, innovation, and knowledge sharing.
- Inclusive Workplace: Diverse and respectful workplace culture.
- Continuous Learning: Encourages certifications, learning programs, and internal knowledge sessions.
Core Values
- Integrity: Ethical practices and transparency.
- Excellence: Commitment to high-quality work.
- Client-Centric Approach: Delivering solutions tailored to client needs.

About Bits In Glass
Bits In Glass (BIG) is an award-winning software consulting firm that helps organizations improve operations and drive better customer experiences. They specialize in business process automation consulting, helping clients unlock the potential of their people, processes, and data.
Bits In Glass (BIG) is an award-winning software consulting firm established in 2002 that specializes in business process automation. The company helps organizations improve their operations and customer experiences by implementing and managing automation solutions using industry-leading platforms like Appian, Pega, MuleSoft, and Blue Prism. Their primary focus is on helping clients across financial services, insurance, logistics, real estate, and healthcare sectors to modernize their operations by unlocking the potential of their people, processes, and data through technological solutions. As a global consulting firm with 500+ employees, they work with market leaders to drive digital transformation and provide innovative solutions for complex business challenges.
Location: Bangalore, India
Experience: 3 Years
Company: Tradelab Technologies
About Tradelab Technologies:
Tradelab Technologies is a leading fintech solutions provider building high-performance trading platforms, brokerage infrastructure, and financial technology products. Our systems handle real-time market data, order management, and analytics for clients across the trading ecosystem.
Role Overview:
We are looking for a skilled DevOps Engineer to manage, optimize, and scale our trading infrastructure. The ideal candidate should have strong experience with CI/CD pipelines, cloud infrastructure, containerization, and system automation, with an emphasis on reliability and performance in production environments.
Key Responsibilities:
- Design, implement, and maintain CI/CD pipelines for automated deployment and monitoring.
- Manage and scale cloud infrastructure (AWS, GCP, or Azure) for high-availability trading systems.
- Work closely with development and QA teams to ensure smooth integration and release processes.
- Automate provisioning, configuration, and monitoring using tools like Ansible, Terraform, or similar.
- Implement logging, alerting, and monitoring systems for proactive issue detection.
- Ensure system reliability, security, and performance in production environments.
- Manage version control and containerized environments (Git, Docker, Kubernetes).
- Troubleshoot infrastructure issues and optimize deployment performance.
Required Skills & Qualifications:
- Bachelor’s degree in Computer Science, Engineering, or equivalent.
- Minimum 3 years of experience in DevOps, SRE, or Infrastructure Engineering roles.
- Strong hands-on experience with AWS / GCP / Azure.
- Proficiency in CI/CD tools like Jenkins, GitLab CI, or GitHub Actions.
- Expertise in Docker, Kubernetes, and container orchestration.
- Experience with infrastructure-as-code tools like Terraform, Ansible, or CloudFormation.
- Proficient with Linux administration, shell scripting, and Python or Go for automation.
- Knowledge of monitoring tools like Prometheus, Grafana, ELK Stack, or Datadog.
- Familiarity with networking, security, and load balancing concepts.
Nice-to-Have Skills:
- Experience working with trading or low-latency systems.
- Knowledge of message queues (Kafka, RabbitMQ).
- Exposure to microservices architecture and API management.
- Experience with incident management and disaster recovery planning.
Why Join Tradelab Technologies:
- Be part of a fast-paced fintech environment working on scalable trading infrastructure.
- Collaborate with talented teams solving real-world financial technology challenges.
- Competitive pay, flexible work culture, and opportunities for growth.
About GradRight
Our vision is to be the world’s leading Ed-Fin Tech company dedicated to making higher education accessible and affordable to all. Our mission is to drive transparency and accountability in the global higher education sector and create significant impact using the power of technology, data science and collaboration.
GradRight is the world’s first SaaS ecosystem that brings together students, universities and financial institutions in an integrated manner. It enables students to find and fund high return college education, universities to engage and select the best-fit students and banks to lend in an effective and efficient manner.
In the last three years, we have enabled students to get the best deals on $2.8+ billion of loan requests and facilitated disbursement of $350+ million in loans. GradRight won the HSBC Fintech Innovation Challenge supported by the Ministry of Electronics & IT, Government of India, and was among the top 7 global finalists in the PIEoneer Awards, UK.
GradRight’s team possesses extensive domestic and international experience in the launch and scale-up of premier higher education institutions. It is led by alumni of IIT Delhi, BITS Pilani, IIT Roorkee, ISB Hyderabad and University of Pennsylvania. GradRight is a Delaware, USA registered company with a wholly owned subsidiary in India.
About the Role
We are looking for a passionate DevOps Engineer with hands-on experience in AWS cloud infrastructure, containerization, and orchestration. The ideal candidate will be responsible for building, automating, and maintaining scalable cloud solutions, ensuring smooth CI/CD pipelines, and supporting development and operations teams.
Core Responsibilities
Design, implement, and manage scalable, secure, and highly available infrastructure on AWS.
Build and maintain CI/CD pipelines using tools like Jenkins, GitLab CI/CD, or GitHub Actions.
Containerize applications using Docker and manage deployments with Kubernetes (EKS, self-managed, or other distributions).
Monitor system performance, availability, and security using tools like CloudWatch, Prometheus, Grafana, ELK/EFK stack.
Collaborate with development teams to optimize application performance and deployment processes.
Required Skills & Experience
3–4 years of professional experience as a DevOps Engineer or similar role.
Strong expertise in AWS services (EC2, S3, RDS, Lambda, VPC, IAM, CloudWatch, EKS, etc.).
Hands-on experience with Docker and Kubernetes (EKS or self-hosted clusters).
Proficiency in CI/CD pipeline design and automation.
Experience with Infrastructure as Code (Terraform / AWS CloudFormation).
Solid understanding of Linux/Unix systems and shell scripting.
Knowledge of monitoring, logging, and alerting tools.
Familiarity with networking concepts (DNS, Load Balancing, Security Groups, Firewalls).
Basic programming/scripting experience in Python, Bash, or Go.
Nice to Have
Exposure to microservices architecture and service mesh (Istio/Linkerd).
Knowledge of serverless (AWS Lambda, API Gateway).
Note: This is a contractual role for a period of 3–6 months.
Responsibilities:
● Set up and maintain CI/CD pipelines across services and environments
● Monitor system health and set up alerts/logs for performance & errors
● Work closely with backend/frontend teams to improve deployment velocity
● Manage cloud environments (staging, production) with cost and reliability in mind
● Ensure secure access, role policies, and audit logging
● Contribute to internal tooling, CLI automation, and dev workflow improvements
Must-Haves:
● 2–3 years of hands-on experience in DevOps, SRE, or Platform Engineering
● Experience with Docker, CI/CD (especially GitHub Actions), and cloud providers (AWS/GCP)
● Proficiency in writing scripts (Bash, Python) for automation
● Good understanding of system monitoring, logs, and alerting
● Strong debugging skills, ownership mindset, and clear documentation habits
● Experience with infra monitoring tools such as Grafana dashboards
3-5 years of experience in DevOps, systems administration, or software engineering roles.
B. Tech. in computer science or related field from top tier engineering colleges.
Strong technical skills in software development, systems administration, and cloud infrastructure management.
Experience with infrastructure-as-code tools such as Terraform or CloudFormation.
Experience with containerization technologies such as Docker and Kubernetes.
Experience with cloud providers such as AWS or Azure.
Experience with scripting languages such as Bash or Python.
Strong problem-solving and analytical skills.
Strong communication and collaboration skills.
The expectation is to set up complete automation of the CI/CD pipeline and monitoring, and to ensure high availability of the pipeline. The automated deployment environment can be on-prem or cloud (virtual instances, containerized, or serverless). The role also covers complete test automation and ensuring security of both the application and the infrastructure.
ROLES & RESPONSIBILITIES
- Configure Jenkins with load distribution between master/agent nodes.
- Set up the CI pipeline with Jenkins and cloud (AWS or Azure) code build.
- Set up static testing (quality & security).
- Set up dynamic test configuration with Selenium and other tools.
- Set up application and infrastructure scanning for security, plus a post-deployment security plan including PEN testing and usage of a RASP tool.
- Configure and ensure HA of the pipeline and monitoring.
- Set up composition analysis in the pipeline.
- Set up the SCM and artifact repository, and manage branching, merging, and archiving.
- Must work in an Agile environment using an ALM tool like Jira.
DESIRED SKILLS
Extensive hands-on Continuous Integration and Continuous Delivery experience with .NET, Node, Java, and C++ based projects (web, mobile, and standalone). Experience configuring and managing:
- ALM tools like Jira, TFS, etc.
- SCM such as GitHub, GitLab, CodeCommit
- Automation tools such as Terraform, Chef, or Ansible
- Package repo configuration (Artifactory/Nexus), package managers like NuGet & Chocolatey
- Database configuration (SQL & NoSQL), web/proxy setup (IIS, Nginx, Varnish, Apache).
- Deep knowledge of multiple monitoring tools and how to mine them for advanced data.
- Prior work with Helm, Postgres, MySQL, Redis, Elasticsearch, microservices, message queues, and related technologies.
- Test automation with Selenium/Cucumber; setting up test simulators.
- AWS Certified Architect and/or Developer; Associate considered, Professional preferred.
- Proficient in Bash, PowerShell, Groovy, YAML, Python, and Node.js; web concepts such as REST APIs; aware of MVC and SPA application design.
- TDD experience and quality control with SonarQube or Checkmarx, TIOBE TiCS, and Coverity.
- Thorough with Linux (Ubuntu, Debian, CentOS), Docker (Dockerfile/Compose/volumes), and Kubernetes cluster setup.
- Expert in workflow tools: Jenkins (declarative pipelines, plugins)/TeamCity and build server configuration.
- Experience with AWS CloudFormation/CDK and delivery automation.
- Ensure end-to-end deployments succeed and resources come up in an automated fashion.
- Good to have: ServiceNow configuration experience for collaboration.
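Ensuring that end-to-end deployments succeed is often scripted against Jenkins' standard JSON API (`/job/<name>/lastBuild/api/json`). A minimal sketch; the server URL and job name are placeholders, and the offline demo uses a payload shaped like Jenkins' reply:

```python
"""Sketch: gate a downstream step on a Jenkins build result."""
import json
from urllib.request import urlopen

def build_status(payload: dict) -> str:
    """Map Jenkins' lastBuild JSON to a simple status string.

    Jenkins reports "building": true while a run is in progress and
    sets "result" (SUCCESS/FAILURE/UNSTABLE/...) once it finishes.
    """
    if payload.get("building"):
        return "RUNNING"
    return payload.get("result") or "UNKNOWN"

def fetch_status(base_url: str, job: str) -> str:
    """Query a live Jenkins server for the last build's status."""
    with urlopen(f"{base_url}/job/{job}/lastBuild/api/json") as resp:
        return build_status(json.load(resp))

if __name__ == "__main__":
    # Offline demonstration; a real gate would call fetch_status().
    print(build_status({"building": False, "result": "SUCCESS"}))  # SUCCESS
    print(build_status({"building": True, "result": None}))        # RUNNING
```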
What you will get:
- To be a part of the Core-Team 💪
- A Chunk of ESOPs 🚀
- Creating High Impact by Solving a Problem at Large (No one in the World has a similar product) 💥
- High Growth Work Environment ⚙️
What we are looking for:
- An 'Exceptional Executioner' -> Leader -> Create an Impact & Value 💰
- Ability to take Ownership of your work
- Past experience in leading a team
- Expertise in Infrastructure & Application design & architecture
- Expertise in AWS, OS & networking
- Having good exposure on Infra & Application security
- Expertise in Python, Shell scripting
- Proficient with DevOps tools: Terraform, Jenkins, Ansible, Docker, Git
- Solid background in systems engineering and operations
- Strong in DevOps methodologies and processes
- Strong in CI/CD pipelines & the SDLC.
• Develop and maintain CI/CD tools to build and deploy scalable web and responsive applications in production environment
• Design and implement monitoring solutions that identify both system bottlenecks and production issues
• Design and implement workflows for continuous integration, including provisioning, deployment, testing, and version control of the software.
• Develop self-service solutions for the engineering team in order to deliver sites/software with great speed and quality
o Automating Infra creation
o Provide easy to use solutions to engineering team
• Conduct research on, test, and implement new metrics collection systems that can be reused and applied as engineering best practices
o Update our processes and design new processes as needed.
o Establish DevOps Engineer team best practices.
o Stay current with industry trends and source new ways for our business to improve.
• Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
• Manage timely resolution of all critical and/or complex problems
• Maintain, monitor, and establish best practices for containerized environments.
• Mentor new DevOps engineers
What you will bring
• The desire to work in fast-paced environment.
• 5+ years’ experience building, maintaining, and deploying production infrastructures in AWS or other cloud providers
• Containerization experience with applications deployed on Docker and Kubernetes
• Understanding of NoSQL and Relational Database with respect to deployment and horizontal scalability
• Demonstrated knowledge of distributed and scalable systems
• Experience with maintenance and deployment of critical infrastructure components through Infrastructure-as-Code and configuration management tooling across multiple environments (Ansible, Terraform, etc.)
• Strong knowledge of DevOps and CI/CD pipeline (GitHub, BitBucket, Artifactory etc)
• Strong understanding of cloud and infrastructure components (server, storage, network, data, and applications) to deliver end-to-end cloud infrastructure architectures, designs, and recommendations
o Use AWS services like S3, CloudFront, Kubernetes, RDS, and data warehouses to come up with architectures/suggestions for new use cases.
• Test system integrity, implemented designs, application developments, and other infrastructure-related processes, making improvements as needed
Good to have
• Experience with code quality tools and static or dynamic code analysis, including resolving issues identified by vulnerability and compliance scans of our infrastructure
• Good knowledge of REST/SOAP/JSON web service API implementation
Job Description :
- The engineer should be highly motivated, able to work independently, work and guide other engineers within and outside the team.
- The engineer should possess varied software skills in shell scripting, Linux, Oracle Database, WebLogic, Git, Ant, Hudson, Jenkins, Docker, and Maven
- Work is super fun, non-routine, and challenging, involving the application of advanced skills in the area of specialization.
Key responsibilities :
- Design, develop, troubleshoot and debug software programs for Hudson monitoring, installing Cloud Software, Infra and cloud application usage monitoring.
Required Knowledge :
- Source configuration management, Docker, Puppet, Ansible, application and server monitoring, AWS, database administration, Kubernetes, log monitoring, and CI/CD; designing and implementing build, deployment, and configuration management; improving infrastructure development; scripting languages.
- Good written and verbal communication skills
Qualification :
Education and Experience : Bachelors/Masters in Computer Science
Open to 24x7 shifts











