Google Cloud Platform (GCP) Jobs in Bangalore (Bengaluru)


Apply to 50+ Google Cloud Platform (GCP) Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest Google Cloud Platform (GCP) Job opportunities across top companies like Google, Amazon & Adobe.

CyberWarFare Labs
Posted by Yash Bharadwaj
Bengaluru (Bangalore)
1 - 3 yrs
₹4L - ₹6L / yr
Amazon Web Services (AWS)
Microsoft Windows Azure
Google Cloud Platform (GCP)
Docker
CI/CD
+4 more


Job Overview:

We are looking for a full-time Infrastructure & DevOps Engineer to support and enhance our cloud, server, and network operations. The role involves managing virtualization platforms, container environments, automation tools, and CI/CD workflows while ensuring smooth, secure, and reliable infrastructure performance. The ideal candidate should be proactive, technically strong, and capable of working collaboratively across teams.


Qualifications and Requirements

  • Bachelor’s/Master’s degree in Computer Science, Engineering, or related field (B.E/B.Tech/BCA/MCA/M.Tech).
  • Strong understanding of cloud platforms (AWS, Azure, GCP), including core services and IT infrastructure concepts.
  • Hands-on experience with virtualization tools and concepts, including vCenter, hypervisors, nested virtualization, and bare-metal servers.
  • Practical knowledge of Linux and Windows servers, including cron jobs and essential Linux commands.
  • Experience working with Docker, Kubernetes, and CI/CD pipelines.
  • Strong understanding of Terraform and Ansible for infrastructure automation.
  • Scripting proficiency in Python and Bash (PowerShell optional).
  • Networking fundamentals (IP, routing, subnetting, LAN/WAN/WLAN).
  • Experience with firewalls, basic security concepts, and tools like pfSense.
  • Familiarity with Git/GitHub for version control and team collaboration.
  • Ability to perform API testing using cURL and Postman.
  • Strong understanding of the application deployment lifecycle and basic application deployment processes.
  • Good problem-solving, analytical thinking, and documentation skills.


Roles and Responsibilities

  • Manage and maintain Linux/Windows servers, virtualization environments, and cloud infrastructure across AWS/Azure/GCP.
  • Use Terraform and Ansible to provision, automate, and manage infrastructure components.
  • Support application deployment lifecycle—from build and testing to release and rollout.
  • Deploy and maintain Kubernetes clusters and containerized workloads using Docker.
  • Develop, enhance, and troubleshoot CI/CD pipelines and integrate DevSecOps practices.
  • Write automation scripts using Python/Bash to optimize recurring tasks.
  • Conduct API testing using curl and Postman to validate integrations and service functionality (a minimal sketch follows this list).
  • Configure and monitor firewalls including pfSense for secure access control.
  • Troubleshoot network, server, and application issues using tools like Wireshark, ping, traceroute, and SNMP.
  • Maintain Git/GitHub repos, manage branching strategies, and participate in code reviews.
  • Prepare clear, detailed documentation including infrastructure diagrams, workflows, SOPs, and configuration records.
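
For illustration only, here is a minimal Python sketch of the kind of recurring automation and API check referred to above: a health check that could run from cron. The endpoint URL, timeout, and log path are placeholder assumptions, not part of the role description.

```python
#!/usr/bin/env python3
"""Minimal sketch: an API health check intended to run from cron.
The URL, timeout, and log path below are illustrative placeholders."""

import logging
from datetime import datetime, timezone

import requests  # assumes the 'requests' package is installed

HEALTH_URL = "https://internal.example.com/api/health"  # hypothetical endpoint
LOG_FILE = "/var/log/api_healthcheck.log"               # hypothetical path

logging.basicConfig(filename=LOG_FILE, level=logging.INFO)


def check_health() -> bool:
    """Return True if the endpoint answers HTTP 200 within 5 seconds."""
    try:
        response = requests.get(HEALTH_URL, timeout=5)
        return response.status_code == 200
    except requests.RequestException as exc:
        logging.error("request failed: %s", exc)
        return False


if __name__ == "__main__":
    status = "OK" if check_health() else "DOWN"
    logging.info("%s health=%s", datetime.now(timezone.utc).isoformat(), status)
```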


E-Commerce Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
6 - 10 yrs
₹30L - ₹50L / yr
Security Information and Event Management (SIEM)
Information security governance
ISO/IEC 27001:2005
Systems Development Life Cycle (SDLC)
Software Development
+67 more

SENIOR INFORMATION SECURITY ENGINEER (DEVSECOPS)

Key Skills: Software Development Life Cycle (SDLC), CI/CD

About Company: Consumer Internet / E-Commerce

Company Size: Mid-Sized

Experience Required: 6 - 10 years

Working Days: 5 days/week

Office Location: Bengaluru [Karnataka]


Review Criteria:

Mandatory:

  • Strong DevSecOps profile
  • Must have 5+ years of hands-on experience in Information Security, with a primary focus on cloud security across AWS, Azure, and GCP environments.
  • Must have strong practical experience working with Cloud Security Posture Management (CSPM) tools such as Prisma Cloud, Wiz, or Orca, along with SIEM/IDS/IPS platforms
  • Must have proven experience in securing Kubernetes and containerized environments, including image security, runtime protection, RBAC, and network policies.
  • Must have hands-on experience integrating security within CI/CD pipelines using tools such as Snyk, GitHub Advanced Security, or equivalent security scanning solutions.
  • Must have a solid understanding of core security domains including network security, encryption, identity and access management, key management, and security governance, including cloud-native security services such as GuardDuty and Azure Security Center
  • Must have practical experience with Application Security Testing tools including SAST, DAST, and SCA in real production environments
  • Must have hands-on experience with security monitoring, incident response, alert investigation, root-cause analysis (RCA), and managing VAPT / penetration testing activities
  • Must have experience securing infrastructure-as-code and cloud deployments using Terraform, CloudFormation, ARM, Docker, and Kubernetes
  • Must have experience working in B2B SaaS product companies
  • Must have working knowledge of globally recognized security frameworks and standards such as ISO 27001, NIST, and CIS with exposure to SOC2, GDPR, or HIPAA compliance environments


Preferred:

  • Experience with DevSecOps automation, security-as-code, and policy-as-code implementations
  • Exposure to threat intelligence platforms, cloud security monitoring, and proactive threat detection methodologies, including EDR / DLP or vulnerability management tools
  • Must demonstrate strong ownership mindset, proactive security-first thinking, and ability to communicate risks in clear business language


Roles & Responsibilities:

We are looking for a Senior Information Security Engineer who can help protect our cloud infrastructure, applications, and data while enabling teams to move fast and build securely.


This role sits deep within our engineering ecosystem. You’ll embed security into how we design, build, deploy, and operate systems—working closely with Cloud, Platform, and Application Engineering teams. You’ll balance proactive security design with hands-on incident response, and help shape a strong, security-first culture across the organization.


If you enjoy solving real-world security problems, working close to systems and code, and influencing how teams build securely at scale, this role is for you.


What You’ll Do-

Cloud & Infrastructure Security:

  • Design, implement, and operate cloud-native security controls across AWS, Azure, GCP, and Oracle.
  • Strengthen IAM, network security, and cloud posture using services like GuardDuty, Azure Security Center and others.
  • Partner with platform teams to secure VPCs, security groups, and cloud access patterns.


Application & DevSecOps Security:

  • Embed security into the SDLC through threat modeling, secure code reviews, and security-by-design practices.
  • Integrate SAST, DAST, and SCA tools into CI/CD pipelines (a minimal sketch follows this list).
  • Secure infrastructure-as-code and containerized workloads using Terraform, CloudFormation, ARM, Docker, and Kubernetes.
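
As a rough illustration of wiring scanners into a pipeline step (not this company's actual toolchain), below is a Python sketch that runs Bandit (SAST for Python code) and pip-audit (dependency SCA) and fails the build on findings. The tool choice and the "src" path are assumptions.

```python
#!/usr/bin/env python3
"""Sketch of a CI security gate: run a SAST and an SCA scan, fail on findings.
Assumes 'bandit' and 'pip-audit' are installed; the 'src' path is a placeholder."""

import subprocess
import sys


def run(cmd: list) -> int:
    """Run a scanner and return its exit code (non-zero signals findings or errors)."""
    print(f"[security-gate] running: {' '.join(cmd)}")
    return subprocess.run(cmd).returncode


def main() -> None:
    findings = 0
    findings += run(["bandit", "-r", "src"]) != 0   # static analysis of application code
    findings += run(["pip-audit"]) != 0             # known-vulnerable dependencies
    if findings:
        sys.exit("security gate failed: review scanner output before merging")


if __name__ == "__main__":
    main()
```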


Security Monitoring & Incident Response:

  • Monitor security alerts and investigate potential threats across cloud and application layers.
  • Lead or support incident response efforts, root-cause analysis, and corrective actions.
  • Plan and execute VAPT and penetration testing engagements (internal and external), track remediation, and validate fixes.
  • Conduct red teaming activities and tabletop exercises to test detection, response readiness, and cross-team coordination.
  • Continuously improve detection, response, and testing maturity.


Security Tools & Platforms:

  • Manage and optimize security tooling including firewalls, SIEM, EDR, DLP, IDS/IPS, CSPM, and vulnerability management platforms.
  • Ensure tools are well-integrated, actionable, and aligned with operational needs.


Compliance, Governance & Awareness:

  • Support compliance with industry standards and frameworks such as SOC2, HIPAA, ISO 27001, NIST, CIS, and GDPR.
  • Promote secure engineering practices through training, documentation, and ongoing awareness programs.
  • Act as a trusted security advisor to engineering and product teams.


Continuous Improvement:

  • Stay ahead of emerging threats, cloud vulnerabilities, and evolving security best practices.
  • Continuously raise the bar on the company's security posture through automation and process improvement.


Endpoint Security (Secondary Scope):

  • Provide guidance on endpoint security tooling such as SentinelOne and Microsoft Defender when required.


Ideal Candidate:

  • Strong hands-on experience in cloud security across AWS and Azure.
  • Practical exposure to CSPM tools (e.g., Prisma Cloud, Wiz, Orca) and SIEM / IDS / IPS platforms.
  • Experience securing containerized and Kubernetes-based environments.
  • Familiarity with CI/CD security integrations (e.g., Snyk, GitHub Advanced Security, or similar).
  • Solid understanding of network security, encryption, identity, and access management.
  • Experience with application security testing tools (SAST, DAST, SCA).
  • Working knowledge of security frameworks and standards such as ISO 27001, NIST, and CIS.
  • Strong analytical, troubleshooting, and problem-solving skills.


Nice to Have:

  • Experience with DevSecOps automation and security-as-code practices.
  • Exposure to threat intelligence and cloud security monitoring solutions.
  • Familiarity with incident response frameworks and forensic analysis.
  • Security certifications such as CISSP, CISM, CCSP, or CompTIA Security+.


Perks, Benefits and Work Culture:

A wholesome opportunity in a fast-paced environment where you will juggle multiple concepts while maintaining quality, interact and share your ideas freely, and learn continuously on the job. Work with a team of highly talented young professionals and enjoy the comprehensive benefits the company offers.

Codemonk
Posted by Reshika Mendiratta
Bengaluru (Bangalore)
7yrs+
Up to ₹42L / yr (varies)
NodeJS (Node.js)
Python
Google Cloud Platform (GCP)
RESTful APIs
SQL
+4 more

Like us, you'll be deeply committed to delivering impactful outcomes for customers.

  • 7+ years of demonstrated ability to develop resilient, high-performance, and scalable code tailored to application usage demands.
  • Ability to lead by example with hands-on development while managing project timelines and deliverables. Experience in agile methodologies and practices, including sprint planning and execution, to drive team performance and project success.
  • Deep expertise in Node.js, with experience in building and maintaining complex, production-grade RESTful APIs and backend services.
  • Experience writing batch/cron jobs using Python and Shell scripting.
  • Experience in web application development using JavaScript and JavaScript libraries.
  • Have a basic understanding of TypeScript, JavaScript, HTML, CSS, JSON, and REST-based applications.
  • Experience/Familiarity with RDBMS and NoSQL Database technologies like MySQL, MongoDB, Redis, ElasticSearch and other similar databases.
  • Understanding of code versioning tools such as Git.
  • Understanding of building applications deployed on the cloud using Google Cloud Platform (GCP) or Amazon Web Services (AWS).
  • Experienced in JS-based build/package tools like Grunt, Gulp, Bower, and Webpack.

AI-First Company

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Mumbai, Hyderabad, Gurugram
5 - 17 yrs
₹30L - ₹45L / yr
Data engineering
Data architecture
SQL
Data modeling
GCS
+47 more

ROLES AND RESPONSIBILITIES:

You will be responsible for architecting, implementing, and optimizing Dremio-based data Lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.


  • Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
  • Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
  • Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.
  • Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS).
  • Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
  • Support self-service analytics by enabling governed data products and semantic layers.
  • Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
  • Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.


IDEAL CANDIDATE:

  • Bachelor’s or Master’s in Computer Science, Information Systems, or related field.
  • 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
  • Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
  • Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
  • Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.); a minimal orchestration sketch follows this list.
  • Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
  • Excellent problem-solving, documentation, and stakeholder communication skills.
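
As a small, hedged illustration of pipeline orchestration with Airflow (one of the tools named above), here is a sketch of a daily DAG. The DAG id, schedule, and task body are hypothetical and assume Airflow 2.x.

```python
"""Sketch of a daily Airflow DAG; the DAG id, schedule, and task body are placeholders.
Assumes Airflow 2.x is installed (for example in Cloud Composer or a local environment)."""

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def refresh_curated_layer() -> None:
    # Placeholder task body; in practice this might trigger dataset or reflection refreshes.
    print("refreshing curated datasets")


with DAG(
    dag_id="curated_layer_refresh",      # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(
        task_id="refresh_curated_layer",
        python_callable=refresh_curated_layer,
    )
```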


PREFERRED:

  • Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) and data catalogs (Collibra, Alation, Purview).
  • Exposure to Snowflake, Databricks, or BigQuery environments.
  • Experience in high-tech, manufacturing, or enterprise data modernization programs.

Global digital transformation solutions provider.

Agency job
via Peak Hire Solutions by Dhara Thakkar
Hyderabad, Chennai, Kochi (Cochin), Bengaluru (Bangalore), Trivandrum, Thiruvananthapuram
12 - 15 yrs
₹20L - ₹40L / yr
Java
DevOps
CI/CD
ReAct (Reason + Act)
React.js
+6 more

Role Proficiency:

Leverage expertise in a technology area (e.g. Java, Microsoft technologies, or Mainframe/legacy) to design system architecture.


Knowledge Examples:

  • Domain/ Industry Knowledge: Basic knowledge of standard business processes within the relevant industry vertical and customer business domain
  1. Technology Knowledge: Demonstrates working knowledge of more than one technology area related to own area of work (e.g. Java/JEE 5+, Microsoft technologies, or Mainframe/legacy), the customer technology landscape, and multiple frameworks (Struts, JSF, Hibernate, etc.) within one technology area and their applicability. Considers low-level details such as data structures, algorithms, APIs, libraries, and best practices for one technology stack, as well as configuration parameters for successful deployment and for high performance within that stack
  2. Technology Trends: Demonstrates working knowledge of technology trends related to one technology stack and awareness of technology trends related to at least two technologies
  3. Architecture Concepts and Principles: Demonstrates working knowledge of standard architectural principles, models, patterns (e.g. SOA, N-Tier, EDA), and perspectives (e.g. TOGAF, Zachman); integration architecture, including input and output components, existing integration methodologies and topologies, and source and external systems; non-functional requirements; data architecture; deployment architecture; and architecture governance
  4. Design Patterns, Tools and Principles: Applies specialized knowledge of design patterns, design principles, practices, and design tools. Knowledge of documenting designs using tools like EA
  5. Software Development Process, Tools & Techniques: Demonstrates thorough knowledge of the end-to-end SDLC process (Agile and traditional), SDLC methodology, programming principles, tools, and best practices (refactoring, code packaging, etc.)
  6. Project Management Tools and Techniques: Demonstrates working knowledge of project management processes (such as project scoping, requirements management, change management, risk management, quality assurance, and disaster management) and tools (MS Excel, MPP, client-specific time sheets, capacity planning tools, etc.)
  7. Project Management: Demonstrates working knowledge of the project governance framework and RACI matrix, and basic knowledge of project metrics such as utilization, onsite-to-offshore ratio, span of control, fresher ratio, SLAs, and quality metrics
  8. Estimation and Resource Planning: Working knowledge of estimation and resource planning techniques (e.g. TCP estimation model) and company-specific estimation templates
  9. Working knowledge of industry knowledge management tools (such as portals and wikis), and of company and customer knowledge management tools and techniques (such as workshops, classroom training, self-study, application walkthroughs, and reverse KT)
  10. Technical Standards, Documentation & Templates: Demonstrates working knowledge of various document templates and standards (such as business blueprints, design documents, and test specifications)
  11. Requirement Gathering and Analysis: Demonstrates working knowledge of gathering (non-functional) requirements, analysing functional and non-functional requirements, analysis tools (such as functional flow diagrams, activity diagrams, blueprints, and storyboards), techniques (business analysis, process mapping, etc.), and requirements management tools (e.g. MS Excel), plus basic knowledge of functional requirements gathering. Specifically, identifies architectural concerns and documents them as part of IT requirements, including NFRs
  12. Solution Structuring: Demonstrates working knowledge of service offerings and products


Additional Comments:

Looking for a Senior Java Architect with 12+ years of experience. Key responsibilities include:

• Excellent technical background and end-to-end architecture skills to design and implement scalable, maintainable, and high-performing systems integrating front-end technologies with back-end services.

• Collaborate with front-end teams to architect React-based user interfaces that are robust, responsive, and aligned with the overall technical architecture.

• Expertise in cloud-based applications on Azure, leveraging key Azure services.

• Lead the adoption of DevOps practices, including CI/CD pipelines, automation, monitoring and logging to ensure reliable and efficient deployment cycles.

• Provide technical leadership to development teams, guiding them in building solutions that adhere to best practices, industry standards and customer requirements.

• Conduct code reviews to maintain high quality code and collaborate with team to ensure code is optimized for performance, scalability and security.

• Collaborate with stakeholders to define requirements and deliver technical solutions aligned with business goals.

• Excellent communication skills

• Mentor team members providing guidance on technical challenges and helping them grow their skill set.

• Good to have experience in GCP and retail domain.

 

Skills: DevOps, Azure, Java


Must-Haves

Java (12+ years), React, Azure, DevOps, Cloud Architecture

Strong Java architecture and design experience.

Expertise in Azure cloud services.

Hands-on experience with React and front-end integration.

Proven track record in DevOps practices (CI/CD, automation).

Notice period - 0 to 15 days only

Location: Hyderabad, Chennai, Kochi, Bangalore, Trivandrum

Excellent communication and leadership skills.

Technology, Information and Internet Company

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
6 - 10 yrs
₹20L - ₹65L / yr
Data Structures
CI/CD
Microservices
Architecture
Cloud Computing
+19 more

Required Skills: CI/CD Pipeline, Data Structures, Microservices, Determining overall architectural principles, frameworks and standards, Cloud expertise (AWS, GCP, or Azure), Distributed Systems


Criteria:

  • Candidate must have 6+ years of backend engineering experience, with 1–2 years leading engineers or owning major systems.
  • Must be strong in one core backend language: Node.js, Go, Java, or Python.
  • Deep understanding of distributed systems, caching, high availability, and microservices architecture.
  • Hands-on experience with AWS/GCP, Docker, Kubernetes, and CI/CD pipelines.
  • Strong command over system design, data structures, performance tuning, and scalable architecture
  • Ability to partner with Product, Data, Infrastructure, and lead end-to-end backend roadmap execution.


Description

What This Role Is All About

We’re looking for a Backend Tech Lead who’s equally obsessed with architecture decisions and clean code, someone who can zoom out to design systems and zoom in to fix that one weird memory leak. You’ll lead a small but sharp team, drive the backend roadmap, and make sure our systems stay fast, lean, and battle-tested.

 

What You’ll Own

● Architect backend systems that handle India-scale traffic without breaking a sweat.

● Build and evolve microservices, APIs, and internal platforms that our entire app depends on.

● Guide, mentor, and uplevel a team of backend engineers—be the go-to technical brain.

● Partner with Product, Data, and Infra to ship features that are reliable and delightful.

● Set high engineering standards—clean architecture, performance, automation, and testing.

● Lead discussions on system design, performance tuning, and infra choices.

● Keep an eye on production like a hawk: metrics, monitoring, logs, uptime.

● Identify gaps proactively and push for improvements instead of waiting for fires.

 

What Makes You a Great Fit

● 6+ years of backend experience; 1–2 years leading engineers or owning major systems.

● Strong in one core language (Node.js / Go / Java / Python) — pick your sword.

● Deep understanding of distributed systems, caching, high-availability, and microservices.

● Hands-on with AWS/GCP, Docker, Kubernetes, CI/CD pipelines.

● You think data structures and system design are not interviews — they’re daily tools.

● You write code that future-you won’t hate.

● Strong communication and a let’s figure this out attitude.

 

Bonus Points If You Have

● Built or scaled consumer apps with millions of DAUs.

● Experimented with event-driven architecture, streaming systems, or real-time pipelines.

● Love startups and don’t mind wearing multiple hats.

● Experience on logging/monitoring tools like Grafana, Prometheus, ELK, OpenTelemetry.

 

Why company Might Be Your Best Move

● Work on products used by real people every single day.

● Ownership from day one—your decisions will shape our core architecture.

● No unnecessary hierarchy; direct access to founders and senior leadership.

● A team that cares about quality, speed, and impact in equal measure.

● Build for Bharat — complex constraints, huge scale, real impact.


Global digital transformation solutions provider.

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Chennai, Kochi (Cochin), Trivandrum, Hyderabad, Thiruvananthapuram
8 - 10 yrs
₹10L - ₹25L / yr
Business Analysis
Data Visualization
PowerBI
SQL
Tableau
+18 more

Job Description – Senior Technical Business Analyst

Location: Trivandrum (Preferred) | Open to any location in India

Shift Timings - an 8-hour window between 7:30 PM IST and 4:30 AM IST

 

About the Role

We are seeking highly motivated and analytically strong Senior Technical Business Analysts who can work seamlessly with business and technology stakeholders to convert a one-line problem statement into a well-defined project or opportunity. This role is ideal for candidates who have a strong foundation in data analytics, data engineering, data visualization, and data science, along with a strong drive to learn, collaborate, and grow in a dynamic, fast-paced environment.

As a Technical Business Analyst, you will be responsible for translating complex business challenges into actionable user stories, analytical models, and executable tasks in Jira. You will work across the entire data lifecycle—from understanding business context to delivering insights, solutions, and measurable outcomes.

 

Key Responsibilities

Business & Analytical Responsibilities

  • Partner with business teams to understand one-line problem statements and translate them into detailed business requirements, opportunities, and project scope.
  • Conduct exploratory data analysis (EDA) to uncover trends, patterns, and business insights.
  • Create documentation including Business Requirement Documents (BRDs), user stories, process flows, and analytical models.
  • Break down business needs into concise, actionable, and development-ready user stories in Jira.

Data & Technical Responsibilities

  • Collaborate with data engineering teams to design, review, and validate data pipelines, data models, and ETL/ELT workflows.
  • Build dashboards, reports, and data visualizations using leading BI tools to communicate insights effectively.
  • Apply foundational data science concepts such as statistical analysis, predictive modeling, and machine learning fundamentals.
  • Validate and ensure data quality, consistency, and accuracy across datasets and systems.

Collaboration & Execution

  • Work closely with product, engineering, BI, and operations teams to support the end-to-end delivery of analytical solutions.
  • Assist in development, testing, and rollout of data-driven solutions.
  • Present findings, insights, and recommendations clearly and confidently to both technical and non-technical stakeholders.

 

Required Skillsets

Core Technical Skills

  • 6+ years of Technical Business Analyst experience within an overall professional experience of 8+ years
  • Data Analytics: SQL, descriptive analytics, business problem framing.
  • Data Engineering (Foundational): Understanding of data warehousing, ETL/ELT processes, cloud data platforms (AWS/GCP/Azure preferred).
  • Data Visualization: Experience with Power BI, Tableau, or equivalent tools.
  • Data Science (Basic/Intermediate): Python/R, statistical methods, fundamentals of ML algorithms.

 

Soft Skills

  • Strong analytical thinking and structured problem-solving capability.
  • Ability to convert business problems into clear technical requirements.
  • Excellent communication, documentation, and presentation skills.
  • High curiosity, adaptability, and eagerness to learn new tools and techniques.

 

Educational Qualifications

  • BE/B.Tech or equivalent in:
  • Computer Science / IT
  • Data Science

 

What We Look For

  • Demonstrated passion for data and analytics through projects and certifications.
  • Strong commitment to continuous learning and innovation.
  • Ability to work both independently and in collaborative team environments.
  • Passion for solving business problems using data-driven approaches.
  • Proven ability (or aptitude) to convert a one-line business problem into a structured project or opportunity.

 

Why Join Us?

  • Exposure to modern data platforms, analytics tools, and AI technologies.
  • A culture that promotes innovation, ownership, and continuous learning.
  • Supportive environment to build a strong career in data and analytics.

 

Skills: Data Analytics, Business Analysis, SQL


Must-Haves

Technical Business Analyst (6+ years), SQL, Data Visualization (Power BI, Tableau), Data Engineering (ETL/ELT, cloud platforms), Python/R

 


Notice period - 0 to 15 days (Max 30 Days)

Educational Qualifications: BE/B.Tech or equivalent in: (Computer Science / IT) /Data Science

Location: Trivandrum (Preferred) | Open to any location in India

Shift Timings - 8 hours window between the 7:30 PM IST - 4:30 AM IST

Kuku FM
Bengaluru (Bangalore)
5 - 12 yrs
₹30L - ₹60L / yr
Amazon Web Services (AWS)
Google Cloud Platform (GCP)

We're seeking an experienced Engineer to join our engineering team, handling massive-scale data processing and analytics infrastructure that supports over 1B daily events, 3M+ DAU, and 50k+ hours of content. The ideal candidate will bridge the gap between raw data collection and actionable insights, while supporting our ML initiatives.

Key Responsibilities

  • Lead and scale the Infrastructure Pod, setting technical direction for data, platform, and DevOps initiatives.
  • Architect and evolve our cloud infrastructure to support 1B+ daily events — ensuring reliability, scalability, and cost efficiency.
  • Collaborate with Data Engineering and ML pods to build high-performance pipelines and real-time analytics systems.
  • Define and implement SLOs, observability standards, and best practices for uptime, latency, and data reliability.
  • Mentor and grow engineers, fostering a culture of technical excellence, ownership, and continuous learning.
  • Partner with leadership on long-term architecture and scaling strategy — from infrastructure cost optimization to multi-region availability.
  • Lead initiatives on infrastructure automation, deployment pipelines, and platform abstractions to improve developer velocity.
  • Own security, compliance, and governance across infrastructure and data systems.

 

Who You Are

  • Previously a Tech Co-founder / Founding Engineer / First Infra Hire who scaled a product from early MVP to significant user or data scale.
  • 5–12 years of total experience, with at least 2+ years in leadership or team-building roles.
  • Deep experience with cloud infrastructure (AWS/GCP), containers (Docker, Kubernetes), and IaC tools (Terraform, Pulumi, or CDK).
  • Hands-on expertise in data-intensive systems, streaming (Kafka, RabbitMQ, Spark Streaming), and distributed architecture design.
  • Proven experience building scalable CI/CD pipelines, observability stacks (Prometheus, Grafana, ELK), and infrastructure for data and ML workloads.
  • Comfortable being hands-on when needed — reviewing design docs, debugging issues, or optimizing infrastructure.
  • Strong system design and problem-solving skills; understands trade-offs between speed, cost, and scalability.
  • Passionate about building teams, not just systems — can recruit, mentor, and inspire engineers.

 

Preferred Skills

  • Experience managing infra-heavy or data-focused teams.
  • Familiarity with real-time streaming architectures.
  • Exposure to ML infrastructure, data governance, or feature stores.
  • Prior experience in the OTT / streaming / consumer platform domain is a plus.
  • Contributions to open-source infra/data tools or strong engineering community presence.

 

What We Offer

  • Opportunity to build and scale infrastructure from the ground up, with full ownership and autonomy.
  • High-impact leadership role shaping our data and platform backbone.
  • Competitive compensation + ESOPs.
  • Continuous learning budget and certification support.
  • A team that values velocity, clarity, and craftsmanship.

 

Success Metrics

  • Reduction in infra cost per active user and event processed.
  • Increase in developer velocity (faster pipeline deployments, reduced MTTR).
  • High system availability and data reliability SLAs met.
  • Successful rollout of infra automation and observability frameworks.
  • Team growth, retention, and technical quality.


NeoGenCode Technologies Pvt Ltd
Posted by Shivank Bhardwaj
Bengaluru (Bangalore)
1 - 8 yrs
₹5L - ₹30L / yr
Python
React.js
PostgreSQL
TypeScript
NextJs (Next.js)
+11 more


Job Summary

We are seeking a highly skilled Full Stack Engineer with 2+ years of hands-on experience to join our high-impact engineering team. You will work across the full stack—building scalable, high-performance frontends using Typescript & Next.js and developing robust backend services using Python (FastAPI/Django).

This role is crucial in shaping product experiences and driving innovation at scale.


Mandatory Candidate Background

  • Experience working in product-based companies only
  • Strong academic background
  • Stable work history
  • Excellent coding skills and hands-on development experience
  • Strong foundation in Data Structures & Algorithms (DSA)
  • Strong problem-solving mindset
  • Understanding of clean architecture and code quality best practices


Key Responsibilities

  • Design, develop, and maintain scalable full-stack applications
  • Build responsive, performant, user-friendly UIs using Typescript & Next.js
  • Develop APIs and backend services using Python (FastAPI/Django)
  • Collaborate with product, design, and business teams to translate requirements into technical solutions
  • Ensure code quality, security, and performance across the stack
  • Own features end-to-end: architecture, development, deployment, and monitoring
  • Contribute to system design, best practices, and the overall technical roadmap


Requirements

Must-Have:

  • 2+ years of professional full-stack engineering experience
  • Strong expertise in Typescript / Next.js OR Python (FastAPI, Django) — must be familiar with both areas
  • Experience building RESTful APIs and microservices
  • Hands-on experience with Git, CI/CD pipelines, and cloud platforms (AWS/GCP/Azure)
  • Strong debugging, optimization, and problem-solving abilities
  • Comfortable working in fast-paced startup environments


Good-to-Have:

  • Experience with containerization (Docker/Kubernetes)
  • Exposure to message queues or event-driven architectures
  • Familiarity with modern DevOps and observability tooling


Inferigence Quotient
Posted by Neeta Trivedi
Bengaluru (Bangalore)
3 - 5 yrs
₹12L - ₹15L / yr
Python
NodeJS (Node.js)
FastAPI
Docker
JavaScript
+16 more

3-5 years of experience as a full stack developer, with essential requirements in the following technologies: FastAPI, JavaScript, React.js-Redux, Node.js, Next.js, MongoDB, Python, Microservices, Docker, and MLOps.


Experience in Cloud Architecture using Kubernetes (K8s), Google Kubernetes Engine, Authentication and Authorisation Tools, DevOps Tools and Scalable and Secure Cloud Hosting is a significant plus.


Ability to manage a hosting environment, scale applications to handle load changes, and apply knowledge of accessibility and security compliance.

 

Testing of API endpoints.

 

Ability to code and create functional web applications and optimise them for improved response time and efficiency. Skilled in performance tuning, query plan/explain plan analysis, indexing, and table partitioning.

 

Expert knowledge of Python and corresponding frameworks with their best practices, expert knowledge of relational databases, NoSQL.


Ability to create acceptance criteria, write test cases and scripts, and perform integrated QA techniques.

 

Must be conversant with Agile software development methodology. Must be able to write technical documents, coordinate with test teams. Proficiency using Git version control.

NeoGenCode Technologies Pvt Ltd
Posted by Ritika Verma
Bengaluru (Bangalore)
1 - 8 yrs
₹12L - ₹34L / yr
Python
React.js
Django
FastAPI
TypeScript
+7 more

Please note that salary will be based on experience.


Job Title: Full Stack Engineer

Location: Bengaluru (Indiranagar) – Work From Office (5 Days)

Job Summary

We are seeking a skilled Full Stack Engineer with solid hands-on experience across frontend and backend development. You will work on mission-critical features, ensuring seamless performance, scalability, and reliability across our products.

Responsibilities

  • Design, develop, and maintain scalable full-stack applications.
  • Build responsive, high-performance UIs using Typescript & Next.js.
  • Develop backend services and APIs using Python (FastAPI/Django).
  • Work closely with product, design, and business teams to translate requirements into intuitive solutions.
  • Contribute to architecture discussions and drive technical best practices.
  • Own features end-to-end — design, development, testing, deployment, and monitoring.
  • Ensure robust security, code quality, and performance optimization.

Tech Stack

Frontend: Typescript, Next.js, React, Tailwind CSS

Backend: Python, FastAPI, Django

Databases: PostgreSQL, MongoDB, Redis

Cloud & Infra: AWS/GCP, Docker, Kubernetes, CI/CD

Other Tools: Git, GitHub, Elasticsearch, Observability tools

Requirements

Must-Have:

  • 2+ years of professional full-stack engineering experience.
  • Strong expertise in either frontend (Typescript/Next.js) or backend (Python/FastAPI/Django) with familiarity in both.
  • Experience building RESTful services and microservices.
  • Hands-on experience with Git, CI/CD, and cloud platforms (AWS/GCP/Azure).
  • Strong debugging, problem-solving, and optimization skills.
  • Ability to thrive in fast-paced, high-ownership startup environments.

Good-to-Have:

  • Exposure to Docker, Kubernetes, and observability tools.
  • Experience with message queues or event-driven architecture.


Perks & Benefits

  • Upskilling support – courses, tools & learning resources.
  • Fun team outings, hackathons, demos & engagement initiatives.
  • Flexible Work-from-Home: 12 WFH days every 6 months.
  • Menstrual WFH: up to 3 days per month.
  • Mobility benefits: relocation support & travel allowance.
  • Parental support: maternity, paternity & adoption leave.

Tradelab Technologies
Posted by Aakanksha Yadav
Bengaluru (Bangalore)
2 - 4 yrs
₹7L - ₹18L / yr
CI/CD
Jenkins
GitLab
ArgoCD
Amazon Web Services (AWS)
+8 more

About Us:

Tradelab Technologies Pvt Ltd is not for those seeking comfort—we are for those hungry to make a mark in the trading and fintech industry.


Key Responsibilities

CI/CD and Infrastructure Automation

  • Design, implement, and maintain CI/CD pipelines to support fast and reliable releases
  • Automate deployments using tools such as Terraform, Helm, and Kubernetes
  • Improve build and release processes to support high-performance and low-latency trading applications
  • Work efficiently with Linux/Unix environments

Cloud and On-Prem Infrastructure Management

  • Deploy, manage, and optimize infrastructure on AWS, GCP, and on-premises environments
  • Ensure system reliability, scalability, and high availability
  • Implement Infrastructure as Code (IaC) to standardize and streamline deployments

Performance Monitoring and Optimization

  • Monitor system performance and latency using Prometheus, Grafana, and ELK stack
  • Implement proactive alerting and fault detection to ensure system stability
  • Troubleshoot and optimize system components for maximum efficiency

Security and Compliance

  • Apply DevSecOps principles to ensure secure deployment and access management
  • Maintain compliance with financial industry regulations such as SEBI
  • Conduct vulnerability assessments and maintain logging and audit controls


Required Skills and Qualifications

  • 2+ years of experience as a DevOps Engineer in a software or trading environment
  • Strong expertise in CI/CD tools (Jenkins, GitLab CI/CD, ArgoCD)
  • Proficiency in cloud platforms such as AWS and GCP
  • Hands-on experience with Docker and Kubernetes
  • Experience with Terraform or CloudFormation for IaC
  • Strong Linux administration and networking fundamentals (TCP/IP, DNS, firewalls)
  • Familiarity with Prometheus, Grafana, and ELK stack
  • Proficiency in scripting using Python, Bash, or Go
  • Solid understanding of security best practices including IAM, encryption, and network policies


Good to Have (Optional)

  • Experience with low-latency trading infrastructure or real-time market data systems
  • Knowledge of high-frequency trading environments
  • Exposure to FIX protocol, FPGA, or network optimization techniques
  • Familiarity with Redis or Nginx for real-time data handling


Why Join Us?

  • Work with a team that expects and delivers excellence.
  • A culture where risk-taking is rewarded, and complacency is not.
  • Limitless opportunities for growth—if you can handle the pace.
  • A place where learning is currency, and outperformance is the only metric that matters.
  • The opportunity to build systems that move markets, execute trades in microseconds, and redefine fintech.


This isn’t just a job—it’s a proving ground. Ready to take the leap? Apply now.


Auxo AI
Posted by kusuma Gullamajji
Bengaluru (Bangalore), Hyderabad, Mumbai, Gurugram
5 - 10 yrs
₹10L - ₹40L / yr
Python
SQL
Google Cloud Platform (GCP)
Dataform

Responsibilities:

  • Build and optimize batch and streaming data pipelines using Apache Beam (Dataflow); a minimal sketch follows this list
  • Design and maintain BigQuery datasets using best practices in partitioning, clustering, and materialized views
  • Develop and manage Airflow DAGs in Cloud Composer for workflow orchestration
  • Implement SQL-based transformations using Dataform (or dbt)
  • Leverage Pub/Sub for event-driven ingestion and Cloud Storage for raw/lake layer data architecture
  • Drive engineering best practices across CI/CD, testing, monitoring, and pipeline observability
  • Partner with solution architects and product teams to translate data requirements into technical designs
  • Mentor junior data engineers and support knowledge-sharing across the team
  • Contribute to documentation, code reviews, sprint planning, and agile ceremonies
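
For illustration, a minimal Apache Beam (Python SDK) streaming sketch that reads events from Pub/Sub, parses them, and appends rows to BigQuery. The project, topic, and table names are placeholders, and the target table is assumed to already exist.

```python
"""Minimal Beam streaming sketch: Pub/Sub -> parse JSON -> BigQuery append.
Assumes apache-beam[gcp] is installed; the names below are placeholders."""

import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

TOPIC = "projects/my-project/topics/events"   # hypothetical topic
TABLE = "my-project:analytics.events"         # hypothetical, pre-created table


def run() -> None:
    options = PipelineOptions(streaming=True)
    with beam.Pipeline(options=options) as pipeline:
        (
            pipeline
            | "ReadFromPubSub" >> beam.io.ReadFromPubSub(topic=TOPIC)
            | "ParseJSON" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                TABLE,
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
            )
        )


if __name__ == "__main__":
    run()
```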



Requirements


  • 5+ years of hands-on experience in data engineering, with at least 2 years on GCP
  • Proven expertise in BigQuery, Dataflow (Apache Beam), and Cloud Composer (Airflow)
  • Strong programming skills in Python and/or Java
  • Experience with SQL optimization, data modeling, and pipeline orchestration
  • Familiarity with Git, CI/CD pipelines, and data quality monitoring frameworks
  • Exposure to Dataform, dbt, or similar tools for ELT workflows
  • Solid understanding of data architecture, schema design, and performance tuning
  • Excellent problem-solving and collaboration skills

Bonus Skills:

  • GCP Professional Data Engineer certification
  • Experience with Vertex AI, Cloud Functions, Dataproc, or real-time streaming architectures
  • Familiarity with data governance tools (e.g., Atlan, Collibra, Dataplex)
  • Exposure to Docker/Kubernetes, API integration, and infrastructure-as-code (Terraform)


Hyderabad, Bengaluru (Bangalore)
5 - 12 yrs
₹25L - ₹35L / yr
C#
SQL
Amazon Web Services (AWS)
.NET
Java
+3 more

Senior Software Engineer

Location: Hyderabad, India


Who We Are:

Since our inception back in 2006, Navitas has grown to be an industry leader in the digital transformation space, and we’ve served as trusted advisors supporting our client base within the commercial, federal, and state and local markets.


What We Do:

At our very core, we’re a group of problem solvers providing our award-winning technology solutions to drive digital acceleration for our customers! With proven solutions, award-winning technologies, and a team of expert problem solvers, Navitas has consistently empowered customers to use technology as a competitive advantage and deliver cutting-edge transformative solutions.


What You’ll Do:

Build, Innovate, and Own:

  • Design, develop, and maintain high-performance microservices in a modern .NET/C# environment.
  • Architect and optimize data pipelines and storage solutions that power our AI-driven products.
  • Collaborate closely with AI and data teams to bring machine learning models into production systems.
  • Build integrations with external services and APIs to enable scalable, interoperable solutions.
  • Ensure robust security, scalability, and observability across distributed systems.
  • Stay ahead of the curve — evaluating emerging technologies and contributing to architectural decisions for our next-gen platform.

Responsibilities will include but are not limited to:

  • Provide technical guidance and code reviews that raise the bar for quality and performance.
  • Help create a growth-minded engineering culture that encourages experimentation, learning, and accountability.

What You’ll Need:

  • Bachelor’s degree in Computer Science or equivalent practical experience.
  • 8+ years of professional experience, including 5+ years designing and maintaining scalable backend systems using C#/.NET and microservices architecture.
  • Strong experience with SQL and NoSQL data stores.
  • Solid hands-on knowledge of cloud platforms (AWS, GCP, or Azure).
  • Proven ability to design for performance, reliability, and security in data-intensive systems.
  • Excellent communication skills and ability to work effectively in a global, cross-functional environment.

Set Yourself Apart With:

  • Startup experience - specifically in building product from 0-1
  • Exposure to AI/ML-powered systems, data engineering, or large-scale data processing.
  • Experience in healthcare or fintech domains.
  • Familiarity with modern DevOps practices, CI/CD pipelines, and containerization (Docker/Kubernetes).

Equal Employer/Veterans/Disabled

Navitas Business Consulting is an affirmative action and equal opportunity employer. If reasonable accommodation is needed to participate in the job application or interview process, to perform essential job functions, and/or to receive other benefits and privileges of employment, please contact Navitas Human Resources.

Navitas is an equal opportunity employer. We provide employment and opportunities for advancement, compensation, training, and growth according to individual merit, without regard to race, color, religion, sex (including pregnancy), national origin, sexual orientation, gender identity or expression, marital status, age, genetic information, disability, veteran or military status, or any other characteristic protected under applicable Federal, state, or local law. Our goal is for each staff member to have the opportunity to grow to the limits of their abilities and to achieve personal and organizational objectives. We will support positive programs for equal treatment of all staff and full utilization of all qualified employees at all levels within Navitas.

Biofourmis
Posted by Roopa Ramalingamurthy
Remote only
5 - 10 yrs
Best in industry
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+1 more

Job Summary:

We are looking for a highly skilled and experienced DevOps Engineer who will be responsible for the deployment, configuration, and troubleshooting of various infrastructure and application environments. The candidate must have a proficient understanding of CI/CD pipelines, container orchestration, and cloud services, with experience in AWS services like EKS, EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc., in a highly available and scalable production environment. The DevOps Engineer will be responsible for monitoring, automation, troubleshooting, security, user management, reporting, migrations, upgrades, disaster recovery, and infrastructure restoration, among other tasks. They will also work with application teams on infrastructure design and issues, and architect solutions to optimally meet business needs.


Responsibilities:

  • Deploy, configure, and troubleshoot various infrastructure and application environments
  • Work with AWS services like EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc., in a highly available and scalable production environment
  • Monitor, automate, troubleshoot, secure, maintain users, and report on infrastructure and applications
  • Collaborate with application teams on infrastructure design and issues
  • Architect solutions that optimally meet business needs
  • Implement CI/CD pipelines and automate deployment processes
  • Disaster recovery and infrastructure restoration
  • Restore/Recovery operations from backups
  • Automate routine tasks (a minimal sketch follows this list)
  • Execute company initiatives in the infrastructure space
  • Expertise with observability tools like ELK, Prometheus, Grafana, Loki
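
As an illustrative sketch of the kind of routine-task automation mentioned above (not a prescribed tool or script), here is a boto3 example that summarizes EC2 instances and their states; the region is an assumption.

```python
"""Sketch of a routine automation task: summarize EC2 instances with boto3.
Assumes AWS credentials are configured; the region is a placeholder."""

import boto3


def list_instances(region: str = "ap-south-1") -> list:
    """Return id, type, and state for every EC2 instance in the region."""
    ec2 = boto3.client("ec2", region_name=region)
    summary = []
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                summary.append({
                    "id": instance["InstanceId"],
                    "type": instance["InstanceType"],
                    "state": instance["State"]["Name"],
                })
    return summary


if __name__ == "__main__":
    for item in list_instances():
        print(f'{item["id"]}\t{item["type"]}\t{item["state"]}')
```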


Qualifications:

  • Proficient understanding of CI/CD pipelines, container orchestration, and various cloud services
  • Experience with AWS services like EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc.
  • Experience in monitoring, automation, troubleshooting, security, user management, reporting, migrations, upgrades, disaster recovery, and infrastructure restoration
  • Experience in architecting solutions that optimally meet business needs
  • Experience with scripting languages (e.g., Shell, Python) and infrastructure as code (IaC) tools (e.g., Terraform, CloudFormation)
  • Strong understanding of system concepts like high availability, scalability, and redundancy
  • Ability to work with application teams on infrastructure design and issues
  • Excellent problem-solving and troubleshooting skills
  • Experience with automation of routine tasks
  • Good communication and interpersonal skills


Education and Experience:

  • Bachelor's degree in Computer Science or a related field
  • 5 to 10 years of experience as a DevOps Engineer or in a related role
  • Experience with observability tools like ELK, Prometheus, Grafana


Working Conditions:

The DevOps Engineer will work in a fast-paced environment, collaborating with various application teams, stakeholders, and management. They will work both independently and in teams, and they may need to work extended hours or be on call to handle infrastructure emergencies.


Note: This is a remote role. The team member is expected to be in the Bangalore office for one week each quarter.

appscrip
Posted by Kanika Gaur
Bengaluru (Bangalore)
1 - 3 yrs
₹4L - ₹10L / yr
DevOps
Windows Azure
Amazon Web Services (AWS)
Google Cloud Platform (GCP)

Job Title: Sr. DevOps Engineer

Experience Required: 2 to 4 years in DevOps or related fields

Employment Type: Full-time


About the Role:

We are seeking a highly skilled and experienced Lead DevOps Engineer. This role will focus on driving the design, implementation, and optimization of our CI/CD pipelines, cloud infrastructure, and operational processes. As a Lead DevOps Engineer, you will play a pivotal role in enhancing the scalability, reliability, and security of our systems while mentoring a team of DevOps engineers to achieve operational excellence.


Key Responsibilities:

Infrastructure Management: Architect, deploy, and maintain scalable, secure, and resilient cloud infrastructure (e.g., AWS, Azure, or GCP).

CI/CD Pipelines: Design and optimize CI/CD pipelines to improve development velocity and deployment quality.

Automation: Automate repetitive tasks and workflows, such as provisioning cloud resources, configuring servers, managing deployments, and implementing infrastructure as code (IaC) using tools like Terraform, CloudFormation, or Ansible.

Monitoring & Logging: Implement robust monitoring, alerting, and logging systems for enterprise and cloud-native environments using tools like Prometheus, Grafana, ELK Stack, NewRelic or Datadog.

Security: Ensure the infrastructure adheres to security best practices, including vulnerability assessments and incident response processes.

Collaboration: Work closely with development, QA, and IT teams to align DevOps strategies with project goals.

Mentorship: Lead, mentor, and train a team of DevOps engineers to foster growth and technical expertise.

Incident Management: Oversee production system reliability, including root cause analysis and performance tuning.


Required Skills & Qualifications:

Technical Expertise:

Strong proficiency in cloud platforms like AWS, Azure, or GCP.

Advanced knowledge of containerization technologies (e.g., Docker, Kubernetes).

Expertise in IaC tools such as Terraform, CloudFormation, or Pulumi.

Hands-on experience with CI/CD tools, particularly Bitbucket Pipelines, Jenkins, GitLab CI/CD, Github Actions or CircleCI.

Proficiency in scripting languages (e.g., Python, Bash, PowerShell).

Soft Skills:

Excellent communication and leadership skills.

Strong analytical and problem-solving abilities.

Proven ability to manage and lead a team effectively.

Experience:

4+ years of experience in DevOps or Site Reliability Engineering (SRE).

4+ years in a leadership or team lead role, with proven experience managing distributed teams, mentoring team members, and driving cross-functional collaboration.

Strong understanding of microservices, APIs, and serverless architectures.


Nice to Have:

Certifications like AWS Certified Solutions Architect, Kubernetes Administrator, or similar.

Experience with GitOps tools such as ArgoCD or Flux.

Knowledge of compliance standards (e.g., GDPR, SOC 2, ISO 27001).


Perks & Benefits:

Competitive salary and performance bonuses.

Comprehensive health insurance for you and your family.

Professional development opportunities and certifications, including sponsored certifications and access to training programs to help you grow your skills and expertise.

Flexible working hours and remote work options.

Collaborative and inclusive work culture.


Join us to build and scale world-class systems that empower innovation and deliver exceptional user experiences.


You can directly contact us: Nine three one six one two zero one three two

Infilect
Posted by Indira Ashrit
Bengaluru (Bangalore)
2 - 3 yrs
₹12L - ₹15L / yr
Kubernetes
Docker
CI/CD
Google Cloud Platform (GCP)

Job Description:


Infilect is a GenAI company pioneering the use of Image Recognition in Consumer Packaged Goods retail.


We are looking for a Senior DevOps Engineer to be responsible and accountable for the smooth running of our Cloud, AI workflows, and AI-based Computer Systems. Furthermore, the candidate will supervise the implementation and maintenance of the company’s computing needs including the in-house GPU & AI servers along with AI workloads.



Responsibilities

  • Understanding and automating AI-based deployments and AI-based workflows
  • Implementing various development, testing, automation tools, and IT infrastructure
  • Manage Cloud, computer systems and other IT assets.
  • Strive for continuous improvement and build continuous integration, continuous delivery, and continuous deployment pipelines (CI/CD pipeline)
  • Design, develop, implement, and coordinate systems, policies, and procedures for Cloud and on-premise systems
  • Ensure the security of data, network access, and backup systems
  • Act in alignment with user needs and system functionality to contribute to organizational policy
  • Identify problematic areas, perform RCA and implement strategic solutions in time
  • Preserve assets, information security, and control structures
  • Handle monthly/annual cloud budget and ensure cost effectiveness


Requirements and skills

  • Well versed in automation tools such as Docker, Kubernetes, Puppet, Ansible etc.
  • Working Knowledge of Python, SQL database stack or any full-stack with relevant tools.
  • Understanding agile development, CI/CD, sprints, code reviews, Git and GitHub/Bitbucket workflows
  • Well versed with ELK stack or any other logging, monitoring and analysis tools
  • Proven working experience of 2+ years as a DevOps/Tech Lead/IT Manager or in relevant positions
  • Excellent knowledge of technical management, information analysis, and of computer hardware/software systems
  • Hands-on experience with computer networks, network administration, and network installation
  • Knowledge of ISO/SOC Type II implementation will be a plus
  • BE/B.Tech/ME/M.Tech in Computer Science, IT, Electronics or a similar field


CGI Inc
Posted by Shruthi BT
Bengaluru (Bangalore), Mumbai, Pune, Hyderabad, Chennai
8 - 15 yrs
₹15L - ₹25L / yr
Google Cloud Platform (GCP)
Data engineering
Big query

Google Data Engineer - SSE


Position Description

Google Cloud Data Engineer

Notice Period: Immediate joiners or candidates serving up to 30 days’ notice

Job Description:

We are seeking a highly skilled Data Engineer with extensive experience in Google Cloud Platform (GCP) data services and big data technologies. The ideal candidate will be responsible for designing, implementing, and optimizing scalable data solutions while ensuring high performance, reliability, and security.

Key Responsibilities:


• Design, develop, and maintain scalable data pipelines and architectures using GCP data services.

• Implement and optimize solutions using BigQuery, Dataproc, Composer, Pub/Sub, Dataflow, GCS, and BigTable.

• Work with GCP databases such as Bigtable, Spanner, CloudSQL, AlloyDB, ensuring performance, security, and availability.

• Develop and manage data processing workflows using Apache Spark, Hadoop, Hive, Kafka, and other Big Data technologies.

• Ensure data governance and security using Dataplex, Data Catalog, and other GCP governance tooling.

• Collaborate with DevOps teams to build CI/CD pipelines for data workloads using Cloud Build, Artifact Registry, and Terraform.

• Optimize query performance and data storage across structured and unstructured datasets.

• Design and implement streaming data solutions using Pub/Sub, Kafka, or equivalent technologies.
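
For illustration only (project, subscription, and table names are hypothetical, not an actual pipeline): a minimal Python sketch that consumes messages from a Pub/Sub subscription and streams them into BigQuery using the client libraries named above.

# Assumes google-cloud-pubsub and google-cloud-bigquery are installed and
# application-default credentials are configured.
import json
from concurrent.futures import TimeoutError

from google.cloud import bigquery, pubsub_v1

PROJECT_ID = "my-project"                     # hypothetical
SUBSCRIPTION = "orders-sub"                   # hypothetical
TABLE_ID = "my-project.analytics.orders_raw"  # hypothetical

bq_client = bigquery.Client(project=PROJECT_ID)
subscriber = pubsub_v1.SubscriberClient()
sub_path = subscriber.subscription_path(PROJECT_ID, SUBSCRIPTION)


def callback(message) -> None:
    """Decode one Pub/Sub message and stream it into BigQuery."""
    row = json.loads(message.data.decode("utf-8"))
    errors = bq_client.insert_rows_json(TABLE_ID, [row])  # streaming insert
    if errors:
        print("BigQuery insert errors:", errors)
        message.nack()
    else:
        message.ack()


streaming_pull = subscriber.subscribe(sub_path, callback=callback)
print(f"Listening on {sub_path} ...")
with subscriber:
    try:
        streaming_pull.result(timeout=60)  # run for a minute in this sketch
    except TimeoutError:
        streaming_pull.cancel()
        streaming_pull.result()  # block until the stream shuts down cleanly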


Required Skills & Qualifications:


• 8-15 years of experience

• Strong expertise in GCP Dataflow, Pub/Sub, Cloud Composer, Cloud Workflow, BigQuery, Cloud Run, Cloud Build.

• Proficiency in Python and Java, with hands-on experience in data processing and ETL pipelines.

• In-depth knowledge of relational databases (SQL, MySQL, PostgreSQL, Oracle) and NoSQL databases (MongoDB, Scylla, Cassandra, DynamoDB).

• Experience with Big Data platforms such as Cloudera, Hortonworks, MapR, Azure HDInsight, IBM Open Platform.

• Strong understanding of AWS Data services such as Redshift, RDS, Athena, SQS/Kinesis.

• Familiarity with data formats such as Avro, ORC, Parquet.

• Experience handling large-scale data migrations and implementing data lake architectures.

• Expertise in data modeling, data warehousing, and distributed data processing frameworks.

• GCP Data Engineering certification or equivalent.


Good to Have:


• Experience in BigQuery, Presto, or equivalent.

• Exposure to Hadoop, Spark, Oozie, HBase.

• Understanding of cloud database migration strategies.

• Knowledge of GCP data governance and security best practices.

Read more
Tradelab Technologies
Aakanksha Yadav
Posted by Aakanksha Yadav
Mumbai, Bengaluru (Bangalore)
10 - 18 yrs
₹25L - ₹50L / yr
skill iconKubernetes
skill iconDocker
skill iconAmazon Web Services (AWS)
Microsoft Windows Azure
Google Cloud Platform (GCP)
+7 more

Type: Client-Facing Technical Architecture, Infrastructure Solutioning & Domain Consulting (India + International Markets)


Role Overview

Tradelab is seeking a senior Solution Architect who can interact with both Indian and international clients (Dubai, Singapore, London, US), helping them understand our trading systems, OMS/RMS/CMS stack, HFT platforms, feed systems, and Matching Engine. The architect will design scalable, secure, and ultra-low-latency deployments tailored to global forex markets, brokers, prop firms, liquidity providers, and market makers.


Key Responsibilities

1. Client Engagement (India + International Markets)

  • Engage with brokers, prop trading firms, liquidity providers, and financial institutions across India, Dubai, Singapore, and global hubs.
  • Explain Tradelab’s capabilities, architecture, and deployment options.
  • Understand region-specific latency expectations, connectivity options, and regulatory constraints.

2. Requirement Gathering & Solutioning

  • Capture client needs, throughput, order concurrency, tick volumes, and market data handling.
  • Assess infra readiness (cloud/on-prem/colo).
  • Propose architecture aligned with forex markets.

3. Global Architecture & Deployment Design

  • Design multi-region infrastructure using AWS/Azure/GCP.
  • Architect low-latency routing between India–Singapore–Dubai.
  • Support deployments in DCs like Equinix SG1/DX1.

4. Networking & Security Architecture

  • Architect multicast/unicast feeds, VPNs, IPSec tunnels, BGP routes.
  • Implement network hardening, segmentation, WAF/firewall rules.

5. DevOps, Cloud Engineering & Scalability

  • Build CI/CD pipelines, Kubernetes autoscaling, cost-optimized AWS multi-region deployments.
  • Design global failover models.

6. BFSI & Trading Domain Expertise

  • Indian broking, international forex, LP aggregation, HFT.
  • OMS/RMS, risk engines, LP connectivity, and matching engines.

7. Latency, Performance & Capacity Planning

  • Benchmark and optimize cross-region latency.
  • Tune performance for high tick volumes and volatility bursts.

8. Documentation & Consulting

  • Prepare HLDs, LLDs, SOWs, cost sheets, and deployment playbooks.

Required Skills

  • AWS: EC2, VPC, EKS, NLB, MSK/Kafka, IAM, Global Accelerator.
  • DevOps: Kubernetes, Docker, Helm, Terraform.
  • Networking: IPSec, GRE, VPN, BGP, multicast (PIM/IGMP).
  • Message buses: Kafka, RabbitMQ, Redis Streams.

Domain Skills

  • Deep Broking Domain Understanding.
  • Indian broking + global forex/CFD.
  • FIX protocol, LP integration, market data feeds.
  • Regulations: SEBI, DFSA, MAS, ESMA.

Soft Skills

  • Excellent communication and client-facing ability.
  • Strong presales and solutioning mindset.

Preferred Qualifications

  • B.Tech/BE/M.Tech in CS or equivalent.
  • AWS Architect Professional, CCNP, CKA.

Why Join Us?

  • Experience in colocation/global trading infra.
  • Work with a team that expects and delivers excellence.
  • A culture where risk-taking is rewarded, and complacency is not.
  • Limitless opportunities for growth—if you can handle the pace.
  • A place where learning is currency, and outperformance is the only metric that matters.
  • The opportunity to build systems that move markets, execute trades in microseconds, and redefine fintech.

This isn’t just a job—it’s a proving ground. Ready to take the leap? Apply now.


Read more
Wissen Technology

at Wissen Technology

4 recruiters
Praffull Shinde
Posted by Praffull Shinde
Pune, Mumbai, Bengaluru (Bangalore)
8 - 14 yrs
Best in industry
Google Cloud Platform (GCP)
Terraform
skill iconKubernetes
DevOps
skill iconPython

JD for Cloud engineer

 

Job Summary:


We are looking for an experienced GCP Cloud Engineer to design, implement, and manage cloud-based solutions on Google Cloud Platform (GCP). The ideal candidate should have expertise in GKE (Google Kubernetes Engine), Cloud Run, Cloud Load Balancer, Cloud Functions, Azure DevOps, and Terraform, with a strong focus on automation, security, and scalability.


You will work closely with development, operations, and security teams to ensure robust cloud infrastructure and CI/CD pipelines while optimizing performance and cost.

 

Key Responsibilities:

1. Cloud Infrastructure Design & Management

  • Architect, deploy, and maintain GCP cloud resources via terraform/other automation.
  • Implement Google Cloud Storage, Cloud SQL, and Filestore for data storage and processing needs.
  • Manage and configure Cloud Load Balancers (HTTP(S), TCP/UDP, and SSL Proxy) for high availability and scalability.
  • Optimize resource allocation, monitoring, and cost efficiency across GCP environments.
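
As a hedged illustration of the cost-efficiency point above (not a prescribed tool): a small Python sketch using google-cloud-storage to flag buckets with no lifecycle rules, a common source of avoidable storage spend. The project ID is a placeholder.

from google.cloud import storage


def buckets_without_lifecycle(project_id: str) -> list[str]:
    """Return names of buckets in the project that have no lifecycle rules configured."""
    client = storage.Client(project=project_id)
    flagged = []
    for bucket in client.list_buckets():
        if not list(bucket.lifecycle_rules):  # empty iterator means no rules configured
            flagged.append(bucket.name)
    return flagged


if __name__ == "__main__":
    for name in buckets_without_lifecycle("my-project"):  # hypothetical project ID
        print(f"Bucket gs://{name} has no lifecycle rules configured")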


2. Kubernetes & Container Orchestration

  • Deploy, manage, and optimize workloads on Google Kubernetes Engine (GKE).
  • Work with Helm charts for microservices deployments.
  • Automate scaling, rolling updates, and zero-downtime deployments.

 

3. Serverless & Compute Services

  • Deploy and manage applications on Cloud Run and Cloud Functions for scalable, serverless workloads.
  • Optimize containerized applications running on Cloud Run for cost efficiency and performance.

 

4. CI/CD & DevOps Automation

  • Design, implement, and manage CI/CD pipelines using Azure DevOps.
  • Automate infrastructure deployment using Terraform, Bash, and PowerShell scripting
  • Integrate security and compliance checks into the DevOps workflow (DevSecOps).

 

 

Required Skills & Qualifications:

Experience: 8+ years in Cloud Engineering, with a focus on GCP.

Cloud Expertise: Strong knowledge of GCP services (GKE, Compute Engine, IAM, VPC, Cloud Storage, Cloud SQL, Cloud Functions).

Kubernetes & Containers: Experience with GKE, Docker, GKE Networking, Helm.

DevOps Tools: Hands-on experience with Azure DevOps for CI/CD pipeline automation.

Infrastructure-as-Code (IaC): Expertise in Terraform for provisioning cloud resources.

Scripting & Automation: Proficiency in Python, Bash, or PowerShell for automation.

Security & Compliance: Knowledge of cloud security principles, IAM, and compliance standards.

Read more
Wissen Technology

at Wissen Technology

4 recruiters
Janane Mohanasankaran
Posted by Janane Mohanasankaran
Bengaluru (Bangalore), Mumbai, Pune
7 - 12 yrs
₹1L - ₹45L / yr
Google Cloud Platform (GCP)
skill iconKubernetes
skill iconDocker
google kubernetes engineer
azure devops
+2 more

Required Skills & Qualifications:

✔ Experience: 4+ years in Cloud Engineering, with a focus on GCP.

✔ Cloud Expertise: Strong knowledge of GCP services (GKE, Compute Engine, IAM, VPC, Cloud Storage, Cloud SQL, Cloud Functions).

✔ Kubernetes & Containers: Experience with GKE, Docker, GKE Networking, Helm.

✔ DevOps Tools: Hands-on experience with Azure DevOps for CI/CD pipeline automation.

✔ Infrastructure-as-Code (IaC): Expertise in Terraform for provisioning cloud resources.

✔ Scripting & Automation: Proficiency in Python, Bash, or PowerShell for automation.

Read more
Wissen Technology

at Wissen Technology

4 recruiters
Janane Mohanasankaran
Posted by Janane Mohanasankaran
Bengaluru (Bangalore), Mumbai, Pune
3 - 7 yrs
Best in industry
Google Cloud Platform (GCP)
Shell Scripting
skill iconJava
skill iconRuby
Product Management

Job Title: Site Reliability Engineer (SRE) / Application Support Engineer

Experience: 3–7 Years

Location: Bangalore / Mumbai / Pune

About the Role

The successful candidate will join the S&C Site Reliability Engineering (SRE) Team, responsible for providing Tier 2/3 support to S&C business applications and environments. This role requires close collaboration with client-facing teams (Client Services, Product, and Research) as well as Infrastructure, Technology, and Application Development teams to maintain and support production and non-production environments.

Key Responsibilities

  • Provide Tier 2/3 product technical support and issue resolution.
  • Develop and maintain software tools to improve operations and support efficiency.
  • Manage system and software configurations; troubleshoot environment-related issues.
  • Identify opportunities to optimize system performance through configuration improvements or development suggestions.
  • Plan, document, and deploy software applications across Unix/Linux, Azure, and GCP environments.
  • Collaborate with Development and QA teams throughout the software release lifecycle.
  • Analyze and improve release and deployment processes to drive automation and efficiency.
  • Coordinate with infrastructure teams for maintenance, planned downtimes, and resource management across production and non-production environments.
  • Participate in on-call support (minimum one week per month) for off-hour emergencies and maintenance activities.
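
Purely as an illustration of the kind of support tooling described above (not a prescribed stack): a small Python health-check script using requests; the endpoint URLs are placeholders.

import sys

import requests

ENDPOINTS = {
    "api": "https://api.example.internal/healthz",       # placeholder
    "reports": "https://reports.example.internal/ping",  # placeholder
}


def main() -> int:
    """Check each endpoint and return a non-zero exit code if any check fails."""
    failures = 0
    for name, url in ENDPOINTS.items():
        try:
            resp = requests.get(url, timeout=5)
            if resp.status_code == 200:
                print(f"[ OK ] {name}")
                continue
            print(f"[FAIL] {name}: HTTP {resp.status_code}")
        except requests.RequestException as exc:
            print(f"[FAIL] {name}: {exc}")
        failures += 1
    return 1 if failures else 0


if __name__ == "__main__":
    sys.exit(main())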

Required Skills & Qualifications

  • Education:
  • Bachelor’s degree in Computer Science, Engineering, or a related field (BE/MCA).
  • Master’s degree is a plus.
  • Experience:
  • 3–7 years in Production Support, Application Management, or Application Development (support/maintenance).
  • Technical Skills:
  • Strong Unix/Linux administration skills.
  • Excellent scripting skills — Shell, Python, Batch (mandatory).
  • Database expertise — Oracle (must have).
  • Understanding of Software Development Life Cycle (SDLC).
  • PowerShell knowledge is a plus.
  • Experience in Java or Ruby development is desirable.
  • Exposure to cloud platforms (GCP, Azure, or AWS) is an added advantage.
  • Soft Skills:
  • Excellent problem-solving and troubleshooting abilities.
  • Strong collaboration and communication skills.
  • Ability to work in a fast-paced, cross-functional environment.


Read more
Wissen Technology

at Wissen Technology

4 recruiters
Janane Mohanasankaran
Posted by Janane Mohanasankaran
Bengaluru (Bangalore), Mumbai, Pune
3 - 7 yrs
Best in industry
Google Cloud Platform (GCP)
AZURE
skill iconJava
skill iconRuby
Oracle NoSQL Database
+5 more

Required skills and experience

• Bachelor's degree, Computer Science, Engineering or other similar concentration (BE/MCA)

• Master’s degree a plus

• 3-8 years’ experience in Production Support/ Application Management/ Application Development (support/ maintenance) role.

• Excellent problem-solving/troubleshooting skills, fast learner

• Strong knowledge of Unix Administration.

• Strong scripting skills in Shell, Python, Batch is must.

• Strong Database experience – Oracle

• Strong knowledge of Software Development Life Cycle

• Power shell is nice to have

• Software development skillsets in Java or Ruby.

• Worked upon any of the cloud platforms – GCP/Azure/AWS is nice to have

Read more
Wissen Technology

at Wissen Technology

4 recruiters
Moulina Dey
Posted by Moulina Dey
Pune, Bengaluru (Bangalore), Mumbai
3 - 6 yrs
₹2L - ₹14L / yr
technical product support
Linux/Unix
Google Cloud Platform (GCP)
SRE
Reliability engineering

Department: S&C – Site Reliability Engineering (SRE)  

Experience Required: 4–8 Years  

Location: Bangalore / Pune /Mumbai 

Employment Type: Full-time


  • Provide Tier 2/3 technical product support to internal and external stakeholders. 
  • Develop automation tools and scripts to improve operational efficiency and support processes. 
  • Manage and maintain system and software configurations; troubleshoot environment/application-related issues. 
  • Optimize system performance through configuration tuning or development enhancements. 
  • Plan, document, and deploy applications in Unix/Linux, Azure, and GCP environments
  • Collaborate with Development, QA, and Infrastructure teams throughout the release and deployment lifecycle
  • Drive automation initiatives for release and deployment processes. 
  • Coordinate with infrastructure teams to manage hardware/software resources, maintenance, and scheduled downtimes across production and non-production environments. 
  • Participate in on-call rotations (minimum one week per month) to address critical incidents and off-hour maintenance tasks. 

 

Key Competencies 

  • Strong analytical, troubleshooting, and critical thinking abilities. 
  • Excellent cross-functional collaboration skills. 
  • Strong focus on documentation, process improvement, and system reliability
  • Proactive, detail-oriented, and adaptable in a fast-paced work environment. 


Read more
Payal
Payal Sangoi
Posted by Payal Sangoi
Bengaluru (Bangalore)
2 - 3 yrs
₹8L - ₹10L / yr
Linux/Unix
skill iconDocker
skill iconKubernetes
skill iconAmazon Web Services (AWS)
Google Cloud Platform (GCP)
+2 more

Junior DevOps Engineer

Experience: 2–3 years


About Us

We are a fast-growing fintech/trading company focused on building scalable, high-performance systems for financial markets. Our technology stack powers real-time trading, risk management, and analytics platforms. We are looking for a motivated Junior DevOps Engineer to join our dynamic team and help us maintain and improve our infrastructure.

Key Responsibilities

  • Support deployment, monitoring, and maintenance of trading and fintech applications.
  • Automate infrastructure provisioning and deployment pipelines using tools like Ansible, Terraform, or similar.
  • Collaborate with development and operations teams to ensure high availability, reliability, and security of systems.
  • Troubleshoot and resolve production issues in a fast-paced environment.
  • Implement and maintain CI/CD pipelines for continuous integration and delivery.
  • Monitor system performance and optimize infrastructure for scalability and cost-efficiency.
  • Assist in maintaining compliance with financial industry standards and security best practices.
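
As a hedged illustration of the monitoring responsibility above: a minimal Python exporter using prometheus_client, so Prometheus/Grafana can scrape and alert on a custom metric. The metric name, port, and mount path are assumptions.

import shutil
import time

from prometheus_client import Gauge, start_http_server

DISK_FREE = Gauge("node_data_disk_free_bytes", "Free bytes on the data volume")  # hypothetical metric

if __name__ == "__main__":
    start_http_server(9101)  # metrics served at http://localhost:9101/metrics
    while True:
        DISK_FREE.set(shutil.disk_usage("/data").free)  # '/data' is a placeholder path
        time.sleep(15)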

Required Skills

  • 2–3 years of hands-on experience in DevOps or related roles.
  • Proficiency in Linux/Unix environments.
  • Experience with containerization (Docker) and orchestration (Kubernetes).
  • Familiarity with cloud platforms (AWS, GCP, or Azure).
  • Working knowledge of scripting languages (Bash, Python).
  • Experience with configuration management tools (Ansible, Puppet, Chef).
  • Understanding of networking concepts and security practices.
  • Exposure to monitoring tools (Prometheus, Grafana, ELK stack).
  • Basic understanding of CI/CD tools (Jenkins, GitLab CI, GitHub Actions).

Preferred Skills

  • Experience in fintech, trading, or financial services.
  • Knowledge of high-frequency trading systems or low-latency environments.
  • Familiarity with financial data protocols and APIs.
  • Understanding of regulatory requirements in financial technology.

What We Offer

  • Opportunity to work on cutting-edge fintech/trading platforms.
  • Collaborative and learning-focused environment.
  • Competitive salary and benefits.
  • Career growth in a rapidly expanding domain.



Read more
Cspar Enterprises Private Limited
Bhopal, Bengaluru (Bangalore)
4 - 10 yrs
₹3L - ₹8L / yr
skill iconDjango
RESTful APIs
deployment tools
RabbitMQ
Apache Kafka
+11 more

Designation: Senior Python Django Developer 

Position: Senior Python Developer

Job Types: Full-time, Permanent

Pay: Up to ₹800,000.00 per year

Schedule: Day shift

Ability to commute/relocate: Bhopal (Indrapuri, MP) and Bangalore (JP Nagar)

 

Experience: Back-end development: 4 years (Required)

 

Job Description:

We are looking for a highly skilled Senior Python Django Developer with extensive experience in building and scaling financial or payments-based applications. The ideal candidate has a deep understanding of system design, architecture patterns, and testing best practices, along with a strong grasp of the startup environment.

This role requires a balance of hands-on coding, architectural design, and collaboration across teams to deliver robust and scalable financial products.

 

Responsibilities:

  • Design and develop scalable, secure, and high-performance applications using Python (Django framework).
  • Architect system components, define database schemas, and optimize backend services for speed and efficiency.
  • Lead and implement design patterns and software architecture best practices.
  • Ensure code quality through comprehensive unit testing, integration testing, and participation in code reviews.
  • Collaborate closely with Product, DevOps, QA, and Frontend teams to build seamless end-to-end solutions.
  • Drive performance improvements, monitor system health, and troubleshoot production issues.
  • Apply domain knowledge in payments and finance, including transaction processing, reconciliation, settlements, wallets, UPI, etc.
  • Contribute to technical decision-making and mentor junior developers.
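
As a hedged illustration of the payments domain work above (model and field names are hypothetical, not this company’s schema): an idempotent wallet-credit operation in Django, the kind of pattern reconciliation and settlement flows rely on.

from django.db import models, transaction


class Wallet(models.Model):
    user_id = models.CharField(max_length=64, unique=True)
    balance = models.DecimalField(max_digits=18, decimal_places=2, default=0)


class PaymentEvent(models.Model):
    reference = models.CharField(max_length=64, unique=True)  # e.g. UPI/gateway transaction id
    wallet = models.ForeignKey(Wallet, on_delete=models.PROTECT)
    amount = models.DecimalField(max_digits=18, decimal_places=2)
    created_at = models.DateTimeField(auto_now_add=True)


def credit_wallet(wallet_id: int, reference: str, amount) -> bool:
    """Apply a credit exactly once; replays of the same reference are ignored."""
    with transaction.atomic():
        wallet = Wallet.objects.select_for_update().get(pk=wallet_id)
        event, created = PaymentEvent.objects.get_or_create(
            reference=reference, defaults={"wallet": wallet, "amount": amount}
        )
        if created:
            wallet.balance = models.F("balance") + amount
            wallet.save(update_fields=["balance"])
        return created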

 

Requirements:

  • 4 to 10 years of professional backend development experience with Python and Django.
  • Strong background in payments/financial systems or FinTech applications.
  • Proven experience in designing software architecture in a microservices or modular monolith environment.
  • Experience working in fast-paced startup environments with agile practices.
  • Proficiency in RESTful APIs, SQL (PostgreSQL/MySQL), NoSQL (MongoDB/Redis).
  • Solid understanding of Docker, CI/CD pipelines, and cloud platforms (AWS/GCP/Azure).
  • Hands-on experience with test-driven development (TDD) and frameworks like pytest, unit test, or factory boy.
  • Familiarity with security best practices in financial applications (PCI compliance, data encryption, etc.).

 

Preferred Skills:

  • Exposure to event-driven architecture (Celery, Kafka, RabbitMQ).
  • Experience integrating with third-party payment gateways, banking APIs, or financial instruments.
  • Understanding of DevOps and monitoring tools (Prometheus, ELK, Grafana).
  • Contributions to open-source or personal finance-related projects.


Read more
Payal
Bengaluru (Bangalore)
2 - 4 yrs
₹10L - ₹15L / yr
Phoenix
Ecto,
Google Cloud Platform (GCP)
skill iconPostgreSQL
skill iconRedis
+8 more

JD: Elixir Developer- Trading & Fintech

About Us:

Tradelab Technologies Pvt Ltd is not for those seeking comfort—we are for those hungry to make a mark in the trading and fintech industry. If you are looking for just another backend role, this isn’t it. We want risk-takers, relentless learners, and those who find joy in pushing their limits every day. If you thrive in high-stakes environments and have a deep passion for performance-driven backend systems, we want you.

About the Role

We’re looking for an Elixir Developer who is passionate about building scalable, high-performance backend systems. You’ll work closely with our engineering team to design, develop, and maintain reliable applications that power mission-critical systems.

Key Responsibilities

• Develop and maintain backend services using Elixir and Phoenix framework.

• Build scalable, fault-tolerant, and distributed systems.

• Integrate APIs, databases, and message queues for real-time applications.

• Optimize system performance and ensure low latency and high throughput.

• Collaborate with frontend, DevOps, and product teams to deliver seamless solutions.

• Write clean, maintainable, and testable code with proper documentation.

• Participate in code reviews, architectural discussions, and deployment automation.

Required Skills & Experience

• 2–4 years of hands-on experience in Elixir (or strong functional programming background).

• Experience with Phoenix, Ecto, and RESTful API development.

• Solid understanding of OTP (Open Telecom Platform) concepts like GenServer, Supervisors, etc.

• Proficiency in PostgreSQL, Redis, or similar databases.

• Familiarity with Docker, Kubernetes, or cloud platforms (AWS/GCP/Azure).

• Understanding of CI/CD pipelines, version control (Git), and agile development.

Good to Have

• Experience with microservices architecture or real-time data systems.

• Knowledge of GraphQL, LiveView, or PubSub.

• Exposure to performance profiling, observability, or monitoring tools.

Why Join Us?

• Work with a team that expects and delivers excellence.

• A culture where risk-taking is rewarded, and complacency is not.

• Limitless opportunities for growth—if you can handle the pace.

• A place where learning is currency, and outperformance is the only metric that matters.

• The opportunity to build systems that move markets, execute trades in microseconds, and redefine fintech.

This isn’t just a job—it’s a proving ground. Ready to take the leap? Apply now.

Read more
Deqode

at Deqode

1 recruiter
purvisha Bhavsar
Posted by purvisha Bhavsar
Bengaluru (Bangalore)
3 - 5 yrs
₹5L - ₹20L / yr
Automation
Manual testing
skill iconAmazon Web Services (AWS)
Google Cloud Platform (GCP)
SQL
+4 more

🚀 Hiring: QA Engineer (Manual + Automation)

⭐ Experience: 3+ Years

📍 Location: Bangalore

⭐ Work Mode:- Hybrid

⏱️ Notice Period: Immediate Joiners

(Only immediate joiners & candidates serving notice period)


💫 About the Role:

We’re looking for a skilled QA Engineer. You’ll ensure product quality through manual and automated testing across web, mobile, and APIs, working with tools and technologies like Postman, Playwright, Appium, Rest Assured, GCP/AWS, and React/Next.js.


Key Responsibilities:

✅ Develop & maintain automated tests using Cucumber, Playwright, Pytest, etc.

✅ Perform API testing using Postman.

✅ Work on cloud platforms (GCP/AWS) and CI/CD (Jenkins).

✅ Test web & mobile apps (Appium, BrowserStack, LambdaTest).

✅ Collaborate with developers to ensure seamless releases.
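
As an illustration of the automated API testing mentioned above (the base URL and payload are assumptions, not the actual product API): a minimal pytest + requests test.

import requests

BASE_URL = "https://staging.example.com/api"  # placeholder


def test_create_and_fetch_order():
    """Create an order via the API, then verify it can be fetched back."""
    payload = {"sku": "ABC-123", "qty": 2}
    created = requests.post(f"{BASE_URL}/orders", json=payload, timeout=10)
    assert created.status_code == 201
    order_id = created.json()["id"]

    fetched = requests.get(f"{BASE_URL}/orders/{order_id}", timeout=10)
    assert fetched.status_code == 200
    body = fetched.json()
    assert body["sku"] == payload["sku"]
    assert body["qty"] == payload["qty"]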


Must-Have Skills:

✅ API Testing (Postman)

✅ Cloud (GCP / AWS)

✅ Frontend understanding (React / Next.js)

✅ Strong SQL & Git skills

✅ Familiarity with OpenAI APIs


Read more
Wissen Technology
Pune, Mumbai, Bengaluru (Bangalore)
4 - 10 yrs
Best in industry
Google Cloud Platform (GCP)
skill iconPython
skill iconKubernetes
Shell Scripting
SRE Engineer
+1 more

Dear Candidate,


Greetings from Wissen Technology. 

We have an exciting Job opportunity for GCP SRE Engineer Professionals. Please refer to the Job Description below and share your profile if interested.   

 About Wissen Technology:

  • The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015.
  • Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains. We help clients build world class products.
  • Our workforce consists of 1000+ highly skilled professionals, with leadership and senior management executives who have graduated from Ivy League Universities like Wharton, MIT, IITs, IIMs, and NITs and with rich work experience in some of the biggest companies in the world.
  • Wissen Technology has grown its revenues by 400% in these five years without any external funding or investments.
  • Globally present with offices US, India, UK, Australia, Mexico, and Canada.
  • We offer an array of services including Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud, Mobility, Agile & DevOps, Quality Assurance & Test Automation.
  • Wissen Technology has been certified as a Great Place to Work®.
  • Wissen Technology has been voted as the Top 20 AI/ML vendor by CIO Insider in 2020.
  • Over the years, Wissen Group has successfully delivered $650 million worth of projects for more than 20 of the Fortune 500 companies.
  • The technology and thought leadership that the company commands in the industry is the direct result of the kind of people Wissen has been able to attract. Wissen is committed to providing them the best possible opportunities and careers, which extends to providing the best possible experience and value to our clients.

We have served clients across sectors like Banking, Telecom, Healthcare, Manufacturing, and Energy. They include the likes of Morgan Stanley, MSCI, State Street Corporation, Flipkart, Swiggy, Trafigura, and GE, to name a few.



Job Description: 

Please find below details:


Experience - 4+ Years

Location- Bangalore/Mumbai/Pune


Team Responsibilities

The successful candidate shall be part of the S&C – SRE Team. Our team provides Tier 2/3 support to the S&C business. This position involves collaboration with client-facing teams such as Client Services, Product, and Research, as well as Infrastructure/Technology and Application Development teams, to perform environment and application maintenance and support.

 

Resource's key Responsibilities


• Provide Tier 2/3 product technical support.

• Building software to help operations and support activities.

• Manage system/software configurations and troubleshoot environment issues.

• Identify opportunities for optimizing system performance through changes in configuration or suggestions for development.

• Plan, document and deploy software applications on our Unix/Linux/Azure and GCP based systems.

• Collaborate with development and software testing teams throughout the release process.

• Analyze release and deployment processes to identify key areas for automation and optimization.

• Manage hardware and software resources and coordinate maintenance and planned downtimes with the infrastructure group across all environments (production/non-production).

• Must spend a minimum of one week per month on call to help with off-hour emergencies and maintenance activities.

 

Required skills and experience

• Bachelor's degree, Computer Science, Engineering or other similar concentration (BE/MCA)

• Master’s degree a plus

• 6-8 years’ experience in Production Support/ Application Management/ Application Development (support/ maintenance) role.

• Excellent problem-solving/troubleshooting skills, fast learner

• Strong knowledge of Unix Administration.

• Strong scripting skills in Shell, Python, Batch is must.

• Strong Database experience – Oracle

• Strong knowledge of Software Development Life Cycle

• Power shell is nice to have

• Software development skillsets in Java or Ruby.

• Worked upon any of the cloud platforms – GCP/Azure/AWS is nice to have




Read more
Wissen Technology

at Wissen Technology

4 recruiters
Sonali RajeshKumar
Posted by Sonali RajeshKumar
Bengaluru (Bangalore), Pune, Mumbai
4 - 9 yrs
Best in industry
Google Cloud Platform (GCP)
Reliability engineering
skill iconPython
Shell Scripting

Dear Candidate,


Greetings from Wissen Technology. 

We have an exciting Job opportunity for GCP SRE Engineer Professionals. Please refer to the Job Description below and share your profile if interested.   

 About Wissen Technology:

  • The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015.
  • Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains. We help clients build world class products.
  • Our workforce consists of 1000+ highly skilled professionals, with leadership and senior management executives who have graduated from Ivy League Universities like Wharton, MIT, IITs, IIMs, and NITs and with rich work experience in some of the biggest companies in the world.
  • Wissen Technology has grown its revenues by 400% in these five years without any external funding or investments.
  • Globally present with offices US, India, UK, Australia, Mexico, and Canada.
  • We offer an array of services including Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud, Mobility, Agile & DevOps, Quality Assurance & Test Automation.
  • Wissen Technology has been certified as a Great Place to Work®.
  • Wissen Technology has been voted as the Top 20 AI/ML vendor by CIO Insider in 2020.
  • Over the years, Wissen Group has successfully delivered $650 million worth of projects for more than 20 of the Fortune 500 companies.
  • The technology and thought leadership that the company commands in the industry is the direct result of the kind of people Wissen has been able to attract. Wissen is committed to providing them the best possible opportunities and careers, which extends to providing the best possible experience and value to our clients.

We have served clients across sectors like Banking, Telecom, Healthcare, Manufacturing, and Energy. They include the likes of Morgan Stanley, MSCI, State Street Corporation, Flipkart, Swiggy, Trafigura, and GE, to name a few.


Job Description: 

Please find below details:


Experience - 4+ Years

Location- Bangalore/Mumbai/Pune


Team Responsibilities

The successful candidate shall be part of the S&C – SRE Team. Our team provides Tier 2/3 support to the S&C business. This position involves collaboration with client-facing teams such as Client Services, Product, and Research, as well as Infrastructure/Technology and Application Development teams, to perform environment and application maintenance and support.

 

Resource's key Responsibilities


• Provide Tier 2/3 product technical support.

• Building software to help operations and support activities.

• Manage system/software configurations and troubleshoot environment issues.

• Identify opportunities for optimizing system performance through changes in configuration or suggestions for development.

• Plan, document and deploy software applications on our Unix/Linux/Azure and GCP based systems.

• Collaborate with development and software testing teams throughout the release process.

• Analyze release and deployment processes to identify key areas for automation and optimization.

• Manage hardware and software resources and coordinate maintenance and planned downtimes with the infrastructure group across all environments (production/non-production).

• Must spend a minimum of one week per month on call to help with off-hour emergencies and maintenance activities.

 

Required skills and experience

• Bachelor's degree, Computer Science, Engineering or other similar concentration (BE/MCA)

• Master’s degree a plus

• 6-8 years’ experience in Production Support/ Application Management/ Application Development (support/ maintenance) role.

• Excellent problem-solving/troubleshooting skills, fast learner

• Strong knowledge of Unix Administration.

• Strong scripting skills in Shell, Python, Batch is must.

• Strong Database experience – Oracle

• Strong knowledge of Software Development Life Cycle

• Power shell is nice to have

• Software development skillsets in Java or Ruby.

• Worked upon any of the cloud platforms – GCP/Azure/AWS is nice to have


Read more
Bengaluru (Bangalore), Mumbai, Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Pune, Hyderabad, Mohali, Dehradun, Panchkula, Chennai
6 - 14 yrs
₹12L - ₹28L / yr
Test Automation (QA)
skill iconKubernetes
helm
skill iconDocker
skill iconAmazon Web Services (AWS)
+13 more

Job Title : Senior QA Automation Architect (Cloud & Kubernetes)

Experience : 6+ Years

Location : India (Multiple Offices)

Shift Timings : 12 PM to 9 PM (Noon Shift)

Working Days : 5 Days WFO (NO Hybrid)


About the Role :

We’re looking for a Senior QA Automation Architect with deep expertise in cloud-native systems, Kubernetes, and automation frameworks.

You’ll design scalable test architectures, enhance automation coverage, and ensure product reliability across hybrid-cloud and distributed environments.


Key Responsibilities :

  • Architect and maintain test automation frameworks for microservices.
  • Integrate automated tests into CI/CD pipelines (Jenkins, GitHub Actions).
  • Ensure reliability, scalability, and observability of test systems.
  • Work closely with DevOps and Cloud teams to streamline automation infrastructure.

Mandatory Skills :

  • Kubernetes, Helm, Docker, Linux
  • Cloud Platforms : AWS / Azure / GCP
  • CI/CD Tools : Jenkins, GitHub Actions
  • Scripting : Python, Pytest, Bash
  • Monitoring & Performance : Prometheus, Grafana, Jaeger, K6
  • IaC Practices : Terraform / Ansible

Good to Have :

  • Experience with Service Mesh (Istio/Linkerd).
  • Container Security or DevSecOps exposure.
Read more
Financial Services

Financial Services

Agency job
via Jobdost by Saida Pathan
Bengaluru (Bangalore), Hyderabad
8 - 12 yrs
₹40L - ₹45L / yr
Systems architecture
SOW
System integration
Solution architecture
skill iconPython
+3 more

Position Overview


We are seeking an experienced Solutions Architect to lead the technical design and implementation strategy for our finance automation platform. This role sits at the intersection of business requirements, technical architecture, and implementation excellence. You will be responsible for translating complex Statement of Work (SOW) requirements into comprehensive technical designs while mentoring implementation engineers and driving platform evolution.

 

Key Responsibilities

Solution Design & Architecture

1. Translate SOW requirements into detailed C4 architecture models and Business Process Canvas documentation

2. Design end-to-end solutions for complex finance automation workflows including reconciliations, book closure, and financial reporting

3. Create comprehensive technical specifications for custom development initiatives

4. Establish architectural standards and best practices for finance domain solutions
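
To make the reconciliation work above concrete, here is a minimal, hypothetical pandas sketch; column names and data are assumptions, not the platform’s actual model.

import pandas as pd

# Toy ledger and bank-statement extracts keyed by a transaction reference.
ledger = pd.DataFrame([{"ref": "TXN-1", "amount": 100.0}, {"ref": "TXN-2", "amount": 250.0}])
bank = pd.DataFrame([{"ref": "TXN-1", "amount": 100.0}, {"ref": "TXN-3", "amount": 75.0}])

# Outer join on the reference and keep the merge indicator to surface breaks.
recon = ledger.merge(bank, on="ref", how="outer", suffixes=("_ledger", "_bank"), indicator=True)
unmatched = recon[recon["_merge"] != "both"]
amount_breaks = recon[(recon["_merge"] == "both") & (recon["amount_ledger"] != recon["amount_bank"])]

print("Unmatched items:")
print(unmatched)
print("Amount mismatches:")
print(amount_breaks)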

Technical Leadership & Mentorship

1. Mentor Implementation Engineers on solution design, technical approaches, and best practices

2. Lead technical reviews and ensure solution quality across all implementations

3. Provide guidance on complex technical challenges and architectural decisions

4. Foster knowledge sharing and technical excellence within the solutions team

Platform Strategy & Development

1. Make strategic decisions on when to push feature development to the Platform Team vs. custom implementation

2. Interface with Implementation Support team to assess platform gaps and enhancement opportunities

3. Collaborate with Program Managers to track and prioritize new platform feature development

4. Contribute to product roadmap discussions based on client requirements and market trends

Client Engagement & Delivery

1. Lead technical discussions with enterprise clients during pre-sales and implementation phases

2. Design scalable solutions that align with client's existing technology stack and future roadmap

3. Ensure solutions comply with financial regulations (Ind AS/IFRS/GAAP) and industry standards

4. Drive technical aspects of complex implementations from design through go-live

 

Required Qualifications

Technical Expertise

● 8+ years of experience in solution architecture, preferably in fintech or enterprise software

● Strong expertise in system integration, API design, and microservices architecture

● Proficiency in C4 modeling and architectural documentation standards

● Experience with Business Process Management (BPM) and workflow design

● Advanced knowledge of data architecture, ETL pipelines, and real-time data processing

● Strong programming skills in Python, Java, or similar languages

● Experience with cloud platforms (AWS, Azure, GCP) and containerization technologies.


Financial Domain Knowledge

● Deep understanding of finance and accounting principles (Ind AS/IFRS/GAAP)

● Experience with financial systems integration (ERP, GL, AP/AR systems)

● Knowledge of financial reconciliation processes and automation strategies

● Understanding of regulatory compliance requirements in financial services

Leadership & Communication

● Proven experience mentoring technical teams and driving technical excellence

● Strong stakeholder management skills with ability to communicate with C-level executives

● Experience working in agile environments with cross-functional teams

● Excellent technical documentation and presentation skills

 

Preferred Qualifications

● Master's degree in Computer Science, Engineering, or related technical field

● Experience with finance automation platforms (Blackline, Trintech, Anaplan, etc.)

● Certification in enterprise architecture frameworks (TOGAF, Zachman)

● Experience with data visualization tools (Power BI, Tableau, Looker)

● Background in SaaS platform development and multi-tenant architectures

● Experience with DevOps practices and CI/CD pipeline design

● Knowledge of machine learning applications in finance automation.

 

Skills & Competencies


Technical Skills

● Solution architecture and system design

● C4 modeling and architectural documentation

● API design and integration patterns

● Cloud-native architecture and microservices

● Data architecture and pipeline design

● Programming and scripting languages

Financial & Business Skills

● Financial process automation

● Business process modeling and optimization

● Regulatory compliance and risk management

● Enterprise software implementation

● Change management and digital transformation

Leadership Skills

● Technical mentorship and team development

● Strategic thinking and decision making

● Cross-functional collaboration

● Client relationship management

● Project and program management

Soft Skills

● Critical thinking and problem-solving

● Cross-functional collaboration

● Task and project management

● Stakeholder management

● Team leadership

● Technical documentation

● Communication with technical and non-technical stakeholders


Mandatory Criteria:  

● Looking for candidates who are Solution Architects in Finance from Product Companies.

● The candidate should have worked in Fintech for at least 4–5 years.

● Candidate should have Strong Technical and Architecture skills with Finance Exposure.

● Candidate should be from Product companies.

● Candidate should have 8+ years’ experience in solution architecture, preferably in fintech or enterprise software.

● Candidate should have Proficiency in Python, Java (or similar languages) and hands-on with cloud platforms (AWS/Azure/GCP) & containerization (Docker/Kubernetes).

● Candidate should have Deep knowledge of finance & accounting principles (Ind AS/IFRS/GAAP) and financial system integrations (ERP, GL, AP/AR).

● Candidate should have Expertise in system integration, API design, microservices, and C4 modeling.

● Candidate should have Experience in financial reconciliations, automation strategies, and regulatory compliance.

● Candidate should be Strong in problem-solving, cross-functional collaboration, project management, documentation, and communication.

● Candidate should have Proven experience in mentoring technical teams and driving excellence.

Read more
The Alter Office

at The Alter Office

2 candid answers
Harsha Ravindran
Posted by Harsha Ravindran
Bengaluru (Bangalore)
2 - 4 yrs
₹8L - ₹12L / yr
Architecture
WebSocket
Authentication
skill iconRedis
RESTful APIs
+22 more

Job Summary:

We are looking for a skilled and motivated Backend Engineer with 2 to 4 years of professional experience to join our dynamic engineering team. You will play a key role in designing, building, and maintaining the backend systems that power our products. You’ll work closely with cross-functional teams to deliver scalable, secure, and high-performance solutions that align with business and user needs.

This role is ideal for engineers ready to take ownership of systems, contribute to architectural decisions, and solve complex backend challenges.


Website: https://www.thealteroffice.com/about


Key Responsibilities:

  • Design, build, and maintain robust backend systems and APIs that are scalable and maintainable.
  • Collaborate with product, frontend, and DevOps teams to deliver seamless, end-to-end solutions.
  • Model and manage data using SQL (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB, Redis), incorporating caching where needed.
  • Implement and manage authentication, authorization, and data security practices.
  • Write clean, well-documented, and well-tested code following best practices.
  • Work with cloud platforms (AWS, GCP, or Azure) to deploy, monitor, and scale services effectively.
  • Use tools like Docker (and optionally Kubernetes) for containerization and orchestration of backend services.
  • Maintain and improve CI/CD pipelines for faster and safer deployments.
  • Monitor and debug production issues, using observability tools (e.g., Prometheus, Grafana, ELK) for root cause analysis.
  • Participate in code reviews, contribute to improving development standards, and provide support to less experienced engineers.
  • Work with event-driven or microservices-based architecture, and optionally use technologies like GraphQL, WebSockets, or message brokers such as Kafka or RabbitMQ when suitable for the solution.
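
As a hedged sketch of the caching guidance above (framework choice, host, and route names are assumptions, not this team’s stack): a read-through Redis cache in front of a slower lookup, using FastAPI.

import json

import redis
from fastapi import FastAPI

app = FastAPI()
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)  # placeholder connection


def load_profile_from_db(user_id: str) -> dict:
    # Placeholder for a real database query.
    return {"id": user_id, "name": "Jane Doe"}


@app.get("/users/{user_id}")
def get_user(user_id: str) -> dict:
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached:
        return json.loads(cached)          # cache hit
    profile = load_profile_from_db(user_id)
    cache.setex(key, 300, json.dumps(profile))  # cache for 5 minutes
    return profile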


Requirements:

  • 2 to 4 years of professional experience as a Backend Engineer or similar role.
  • Proficiency in at least one backend programming language (e.g., Python, Java, Go, Ruby, etc.).
  • Strong understanding of RESTful API design, asynchronous programming, and scalable architecture patterns.
  • Solid experience with both relational and NoSQL databases, including designing and optimizing data models.
  • Familiarity with Docker, Git, and modern CI/CD workflows.
  • Hands-on experience with cloud infrastructure and deployment processes (AWS, GCP, or Azure).
  • Exposure to monitoring, logging, and performance profiling practices in production environments.
  • A good understanding of security best practices in backend systems.
  • Strong problem-solving, debugging, and communication skills.
  • Comfortable working in a fast-paced, agile environment with evolving priorities.


Read more
The Alter Office

at The Alter Office

2 candid answers
Harsha Ravindran
Posted by Harsha Ravindran
Bengaluru (Bangalore)
1 - 2 yrs
₹5L - ₹8L / yr
skill iconRedis
NOSQL Databases
RabbitMQ
skill iconMongoDB
cicd
+12 more

Job Title: Associate Backend Engineer


Job Summary:

We are looking for an enthusiastic and motivated Associate Backend Engineer with 1 to 2 years of experience to join our growing engineering team. Whether you're a recent graduate or have some industry experience, this role offers a strong foundation to grow your skills in real-world backend development. You’ll work closely with experienced engineers and contribute to the design, development, and maintenance of scalable backend systems that power our products.

This position is ideal for individuals who are eager to learn, write production-grade code, and grow into a high-performing backend engineer.


Website: https://www.thealteroffice.com/about


Key Responsibilities:

  • Assist in building and maintaining backend services and APIs for web and mobile applications.
  • Work with both relational (MySQL, PostgreSQL) and NoSQL (MongoDB, Redis) databases for data modeling and storage.
  • Write clean, maintainable, and well-documented code under guidance.
  • Contribute to authentication, authorization, and other core backend features.
  • Collaborate with cross-functional teams including product, frontend, and QA to deliver complete features.
  • Participate in code reviews and incorporate feedback to improve code quality.
  • Debug issues, write unit/integration tests, and help maintain service reliability and performance.
  • Learn to work with CI/CD pipelines, version control (Git), and deployment workflows.
  • Use tools like Docker, basic cloud services (AWS/GCP/Azure), and optionally explore monitoring/logging tools.
  • Explore new technologies such as GraphQL, WebSockets, or message queues (e.g., Kafka, RabbitMQ) when relevant.


Requirements:

  • 1 to 2 years of backend development experience, including internships, academic projects, freelance, or open-source work.
  • Familiarity with at least one backend programming language (e.g., Python, Java, Go, JavaScript, etc.).
  • Basic understanding of RESTful APIs, HTTP, databases, and server-side logic.
  • Exposure to SQL and NoSQL databases and understanding of CRUD operations.
  • Familiarity with Git and fundamental development workflows.
  • Willingness to learn and apply best practices in scalability, security, and asynchronous programming.
  • Strong problem-solving mindset and eagerness to take feedback and grow.
  • Good communication and collaboration skills in a team environment.


Read more
Publicis Sapient

at Publicis Sapient

10 recruiters
Dipika
Posted by Dipika
Bengaluru (Bangalore), Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Hyderabad, Pune
5 - 7 yrs
₹5L - ₹20L / yr
skill iconJava
Microservices
Apache Kafka
Apache ActiveMQ
+3 more

Senior Associate Technology L1 – Java Microservices


Company Description

Publicis Sapient is a digital transformation partner helping established organizations get to their future, digitally-enabled state, both in the way they work and the way they serve their customers. We help unlock value through a start-up mindset and modern methods, fusing strategy, consulting and customer experience with agile engineering and problem-solving creativity. United by our core values and our purpose of helping people thrive in the brave pursuit of next, our 20,000+ people in 53 offices around the world combine experience across technology, data sciences, consulting and customer obsession to accelerate our clients’ businesses through designing the products and services their customers truly value.


Job Description

We are looking for a Senior Associate Technology Level 1 - Java Microservices Developer to join our team of bright thinkers and doers. You’ll use your problem-solving creativity to design, architect, and develop high-end technology solutions that solve our clients’ most complex and challenging problems across different industries.

We are on a mission to transform the world, and you will be instrumental in shaping how we do it with your ideas, thoughts, and solutions.


Your Impact:

• Drive the design, planning, and implementation of multifaceted applications, giving you breadth and depth of knowledge across the entire project lifecycle.

• Combine your technical expertise and problem-solving passion to work closely with clients, turning complex ideas into end-to-end solutions that transform our clients’ business.

• Constantly innovate and evaluate emerging technologies and methods to provide scalable and elegant solutions that help clients achieve their business goals.


Qualifications

➢ 5 to 7 Years of software development experience

➢ Strong development skills in Java JDK 1.8 or above

➢ Java fundamentals like Exceptional handling, Serialization/Deserialization and Immutability concepts

➢ Good fundamental knowledge in Enums, Collections, Annotations, Generics, Auto boxing and Data Structure

➢ Database RDBMS/No SQL (SQL, Joins, Indexing)

➢ Multithreading (Re-entrant Lock, Fork & Join, Sync, Executor Framework)

➢ Spring Core & Spring Boot, security, transactions

➢ Hands-on experience with JMS (ActiveMQ, RabbitMQ, Kafka, etc.)

➢ Memory management (JVM configuration, profiling, GC), performance tuning, and testing (JMeter or similar tools)

➢ DevOps (CI/CD: Maven/Gradle, Jenkins, quality plugins, Docker, and containerization)

➢ Logical/analytical skills. Thorough understanding of OOP concepts, design principles, and implementation of different types of design patterns.

➢ Hands-on experience with any of the logging frameworks (SLF4J/Logback/Log4j)

➢ Experience writing JUnit test cases using Mockito/PowerMock frameworks.

➢ Should have practical experience with Maven/Gradle and knowledge of version control systems like Git/SVN etc.

➢ Good communication skills and ability to work with global teams to define and deliver on projects.

➢ Sound understanding/experience in software development process, test-driven development.

➢ Cloud – AWS / AZURE / GCP / PCF or any private cloud would also be fine

➢ Experience in Microservices

Read more
Matilda cloud

Matilda cloud

Agency job
via Employee Hub by PREETI DUA
Hyderabad, Bengaluru (Bangalore)
6 - 7 yrs
₹22L - ₹26L / yr
skill iconFlask
API
Google Cloud Platform (GCP)
AWS CloudFormation
AWS Lambda
+5 more

Job Summary:


We are seeking an experienced and highly motivated Senior Python Developer to join our dynamic and growing engineering team. This role is ideal for a seasoned Python expert who thrives in a fast-paced, collaborative environment and has deep experience building scalable applications, working with cloud platforms, and automating infrastructure.



Key Responsibilities:


Develop and maintain scalable backend services and APIs using Python, with a strong emphasis on clean architecture and maintainable code.


Design and implement RESTful APIs using frameworks such as Flask or FastAPI, and integrate with relational databases using ORM tools like SQLAlchemy.


Work with major cloud platforms (AWS, GCP, or Oracle Cloud Infrastructure) using Python SDKs to build and deploy cloud-native applications.


Automate system and infrastructure tasks using tools like Ansible, Chef, or other configuration management solutions.


Implement and support Infrastructure as Code (IaC) using Terraform or cloud-native templating tools to manage resources effectively.





Work across both Linux and Windows environments, ensuring compatibility and stability across platforms.
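
A minimal sketch, assuming Flask and SQLAlchemy (both named in this posting), of a REST endpoint backed by an ORM model; the table, routes, and database URL are hypothetical.

from flask import Flask, jsonify, request
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

app = Flask(__name__)
engine = create_engine("sqlite:///demo.db")  # placeholder; a managed database would be used in practice
Base = declarative_base()
Session = sessionmaker(bind=engine)


class Device(Base):
    __tablename__ = "devices"
    id = Column(Integer, primary_key=True)
    name = Column(String(120), nullable=False)


Base.metadata.create_all(engine)


@app.route("/devices", methods=["POST"])
def create_device():
    session = Session()
    device = Device(name=request.get_json()["name"])
    session.add(device)
    session.commit()
    return jsonify({"id": device.id, "name": device.name}), 201


@app.route("/devices/<int:device_id>", methods=["GET"])
def get_device(device_id: int):
    session = Session()
    device = session.get(Device, device_id)
    if device is None:
        return jsonify({"error": "not found"}), 404
    return jsonify({"id": device.id, "name": device.name})


if __name__ == "__main__":
    app.run(debug=True)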


Required Qualifications:


5+ years of professional experience in Python development, with a strong portfolio of backend/API projects.


Strong expertise in Flask, SQLAlchemy, and other Python-based frameworks and libraries.


Proficient in asynchronous programming and event-driven architecture using tools such as asyncio, Celery, or similar.


Solid understanding and hands-on experience with cloud platforms – AWS, Google Cloud Platform, or Oracle Cloud Infrastructure.


Experience using Python SDKs for cloud services to automate provisioning, deployment, or data workflows.


Practical knowledge of Linux and Windows environments, including system-level scripting and debugging.


Automation experience using tools such as Ansible, Chef, or equivalent configuration management systems.


Experience implementing and maintaining CI/CD pipelines with industry-standard tools.


Familiarity with Docker and container orchestration concepts (e.g., Kubernetes is a plus).


Hands-on experience with Terraform or equivalent infrastructure-as-code tools for managing cloud environments.


Excellent problem-solving skills, attention to detail, and a proactive mindset.


Strong communication skills and the ability to collaborate with diverse technical teams.


Preferred Qualifications (Nice to Have):


Experience with other Python frameworks (FastAPI, Django)


Knowledge of container orchestration tools like Kubernetes


Familiarity with monitoring tools like Prometheus, Grafana, or Datadog


Prior experience working in an Agile/Scrum environment


Contributions to open-source projects or technical blogs


Read more
Bluecopa

Bluecopa

Agency job
Bengaluru (Bangalore)
4 - 7 yrs
₹10L - ₹15L / yr
DevOps
skill iconPython
skill iconKubernetes
skill iconAmazon Web Services (AWS)
Windows Azure
+2 more

Role: DevOps Engineer


Exp: 4 - 7 Years

CTC: up to 28 LPA


Key Responsibilities

•   Design, build, and manage scalable infrastructure on cloud platforms (GCP, AWS, Azure, or OCI)

•   Administer and optimize Kubernetes clusters and container runtimes (Docker, containerd)

•   Develop and maintain CI/CD pipelines for multiple services and environments

•   Manage infrastructure as code using tools like Terraform and/or Pulumi

•   Automate operations with Python and shell scripting for deployment, monitoring, and maintenance

•   Ensure high availability and performance of production systems and troubleshoot incidents effectively

•   Monitor system metrics and implement observability best practices using tools like Prometheus, Grafana, ELK, etc.

•   Collaborate with development, security, and product teams to align infrastructure with business needs

•   Apply best practices in cloud networking, Linux administration, and configuration management

•   Support compliance and security audits; assist with implementation of cloud security measures (e.g., firewalls, IDS/IPS, IAM hardening)

•   Participate in on-call rotations and incident response activities

Read more
Wissen Technology

at Wissen Technology

4 recruiters
Amita Soni
Posted by Amita Soni
Pune, Bengaluru (Bangalore), Mumbai
4 - 7 yrs
Best in industry
Google Cloud Platform (GCP)
GKE
Microsoft Windows Azure
Terraform

Job Description


We are looking for an experienced GCP Cloud Engineer to design, implement, and manage cloud-based solutions on Google Cloud Platform (GCP). The ideal candidate should have expertise in GKE (Google Kubernetes Engine), Cloud Run, Cloud Load Balancer, Cloud Functions, Azure DevOps, and Terraform, with a strong focus on automation, security, and scalability.


Work location: Pune/Mumbai/Bangalore


Experience: 4-7 Years 


Joining: Mid of October


You will work closely with development, operations, and security teams to ensure robust cloud infrastructure and CI/CD pipelines while optimizing performance and cost.


Key Responsibilities:

1. Cloud Infrastructure Design & Management

· Architect, deploy, and maintain GCP cloud resources via terraform/other automation.

· Implement Google Cloud Storage, Cloud SQL, and Filestore for data storage and processing needs.

· Manage and configure Cloud Load Balancers (HTTP(S), TCP/UDP, and SSL Proxy) for high availability and scalability.

· Optimize resource allocation, monitoring, and cost efficiency across GCP environments.

2. Kubernetes & Container Orchestration

· Deploy, manage, and optimize workloads on Google Kubernetes Engine (GKE).

· Work with Helm charts, Istio, and service meshes for microservices deployments.

· Automate scaling, rolling updates, and zero-downtime deployments.


3. Serverless & Compute Services

· Deploy and manage applications on Cloud Run and Cloud Functions for scalable, serverless workloads (see the sketch after this section).

· Optimize containerized applications running on Cloud Run for cost efficiency and performance.
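
To illustrate the Cloud Functions work referenced above, here is a minimal HTTP-triggered function sketch using the Python Functions Framework; the function name and response shape are assumptions made purely for illustration.

# Minimal HTTP-triggered Cloud Function sketch (illustrative only).
import functions_framework

@functions_framework.http
def handle_request(request):
    """Respond to an HTTP request; `request` is a Flask Request object."""
    name = request.args.get("name", "world")
    return {"message": f"Hello, {name}"}, 200

Such a function can be exercised locally with the functions-framework CLI and then deployed with gcloud functions deploy; the exact flags depend on the chosen runtime and trigger.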


4. CI/CD & DevOps Automation

· Design, implement, and manage CI/CD pipelines using Azure DevOps.

· Automate infrastructure deployment using Terraform, Bash, and PowerShell scripting.

· Integrate security and compliance checks into the DevOps workflow (DevSecOps).


Required Skills & Qualifications:

✔ Experience: 4+ years in Cloud Engineering, with a focus on GCP.

✔ Cloud Expertise: Strong knowledge of GCP services (GKE, Compute Engine, IAM, VPC, Cloud Storage, Cloud SQL, Cloud Functions).

✔ Kubernetes & Containers: Experience with GKE, Docker, GKE Networking, Helm.

✔ DevOps Tools: Hands-on experience with Azure DevOps for CI/CD pipeline automation.

✔ Infrastructure-as-Code (IaC): Expertise in Terraform for provisioning cloud resources.

✔ Scripting & Automation: Proficiency in Python, Bash, or PowerShell for automation.

✔ Security & Compliance: Knowledge of cloud security principles, IAM, and compliance standards.


About Wissen Technology

Wissen Technology, established in 2015 and part of the Wissen Group (founded in 2000), is a specialized technology consulting company. We pride ourselves on delivering high-quality solutions for global organizations across Banking & Finance, Telecom, and Healthcare domains.

 

Here’s why Wissen Technology stands out:

 

Global Presence: Offices in US, India, UK, Australia, Mexico, and Canada.

Expert Team: Wissen Group comprises over 4000 highly skilled professionals worldwide, with Wissen Technology contributing 1400 of these experts. Our team includes graduates from prestigious institutions such as Wharton, MIT, IITs, IIMs, and NITs.

Recognitions: Great Place to Work® Certified.

Featured as a Top 20 AI/ML Vendor by CIO Insider (2020).

Impressive Growth: Achieved 400% revenue growth in 5 years without external funding.

Successful Projects: Delivered $650 million worth of projects to 20+ Fortune 500 companies.

 

For more details:

 

Website: www.wissen.com 

Wissen Thought Leadership: https://www.wissen.com/articles/

 

LinkedIn: Wissen Technology

Bluecopa

Agency job
via TIGI HR Solution Pvt. Ltd. by Vaidehi Sarkar
Bengaluru (Bangalore)
4 - 7 yrs
₹10L - ₹15L / yr
DevOps
Python
Kubernetes
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
+1 more

Salary (Lacs): Up to 22 LPA


Required Qualifications

•   4–7 years of total experience, with a minimum of 4 years in a full-time DevOps role

•   Hands-on experience with a major cloud platform (GCP, AWS, Azure, or OCI); experience with more than one is a plus

•   Proficient in Kubernetes administration and container technologies such as Docker and containerd (see the sketch after this list)

•   Strong Linux fundamentals

•   Scripting skills in Python and shell scripting

•   Knowledge of infrastructure as code with hands-on experience in Terraform and/or Pulumi (mandatory)

•   Experience in maintaining and troubleshooting production environments

•   Solid understanding of CI/CD concepts with hands-on experience in tools like Jenkins, GitLab CI, GitHub Actions, ArgoCD, Devtron, GCP Cloud Build, or Bitbucket Pipelines
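
As a small illustration of the Kubernetes administration scripting implied above, the sketch below lists Deployments and their ready replica counts with the official Python client; the namespace and kubeconfig assumptions are placeholders.

# Sketch: report Deployment readiness using the Kubernetes Python client.
from kubernetes import client, config

def report_deployments(namespace: str = "default") -> None:
    config.load_kube_config()  # use config.load_incluster_config() when running inside a pod
    apps = client.AppsV1Api()
    for deploy in apps.list_namespaced_deployment(namespace).items:
        ready = deploy.status.ready_replicas or 0
        desired = deploy.spec.replicas or 0
        print(f"{deploy.metadata.name}: {ready}/{desired} replicas ready")

if __name__ == "__main__":
    report_deployments()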


If interested, kindly share your updated resume on 82008 31681

Bengaluru (Bangalore)
5 - 10 yrs
₹20L - ₹25L / yr
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
Microsoft Windows Azure
Terraform
Python
+2 more

Job Type : Contract


Location : Bangalore


Experience : 5+ yrs


The role focuses on cloud security engineering with a strong emphasis on GCP, while also covering AWS and Azure.


Required Skills:


  • 5+ years of experience in software and/or cloud platform engineering, particularly focused on GCP environments.
  • Knowledge of the Shared Responsibility Model; keen understanding of the security risks inherent in hosting cloud-based applications and data.
  • Experience developing across the security assurance lifecycle (including prevent, detect, respond, and remediate controls).
  • Experience configuring public cloud native security tooling and capabilities, with a focus on Google Cloud organizational policies/constraints, VPC Service Controls (VPC SC), IAM policies, and GCP APIs.
  • Experience with Cloud Security Posture Management (CSPM) 3rd Party tools such as Wiz, Prisma, Check Point CloudGuard, etc.
  • Experience in Policy-as-code (Rego) and OPA platform.
  • Experience solutioning and configuring event-driven, serverless security controls in Azure and other clouds, including but not limited to Azure Functions, Automation Runbooks, AWS Lambda, and Google Cloud Functions (see the sketch after this list).
  • Deep understanding of DevOps processes and workflows.
  • Working knowledge of the Secure SDLC process
  • Experience with Infrastructure as Code (IaC) tooling, preferably Terraform.
  • Familiarity with Logging and data pipeline concepts and architectures in cloud.
  • Strong in scripting languages such as PowerShell or Python or Bash or Go.
  • Knowledge of Agile best practices and methodologies
  • Experience creating technical architecture documentation.
  • Excellent communication, written, and interpersonal skills.
  • Practical experience designing and configuring CI/CD pipelines, with hands-on work in GitHub Actions and Jenkins.
  • Experience in ITSM.
  • Ability to articulate complex technical concepts to non-technical stakeholders.
  • Experience with risk control frameworks and engagements with risk and regulatory functions
  • Experience in the financial industry would be a plus.
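
As an illustration of the event-driven, serverless security controls described above, here is a hedged sketch that flags Cloud Storage buckets whose IAM policy grants public access; the project ID is a placeholder, and a real control would typically run as a scheduled or event-driven function and publish findings to a SIEM or ticketing system rather than print them.

# Illustrative detective control: flag publicly accessible Cloud Storage buckets.
from google.cloud import storage

PUBLIC_MEMBERS = {"allUsers", "allAuthenticatedUsers"}

def find_public_buckets(project_id: str = "my-project") -> list[str]:
    gcs = storage.Client(project=project_id)
    flagged = []
    for bucket in gcs.list_buckets():
        policy = bucket.get_iam_policy(requested_policy_version=3)
        for binding in policy.bindings:
            if PUBLIC_MEMBERS & set(binding.get("members", [])):
                flagged.append(f"{bucket.name} ({binding['role']})")
                break
    return flagged

if __name__ == "__main__":
    for finding in find_public_buckets():
        print("Public access detected:", finding)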


Lalitech

at Lalitech

1 recruiter
Govind Varshney
Posted by Govind Varshney
Remote, Bengaluru (Bangalore), Noida
5 - 10 yrs
₹10L - ₹20L / yr
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
Azure
JavaScript
React.js
+5 more

Location: Hybrid/Remote

Openings: 2

Experience: 5–12 Years

Qualification: Bachelor’s or Master’s in Computer Science or a related technical field


Key Responsibilities

Architect & Design:

  • Provide technical and architectural direction for complex frontend solutions, ensuring alignment with enterprise standards and best practices.
  • Conduct design and code reviews to maintain high-quality, reusable, and scalable frontend interfaces for enterprise applications.
  • Collaborate with cross-functional teams to define and enforce UI/UX design guidelines, accessibility standards, and performance benchmarks.
  • Identify and address potential security vulnerabilities in frontend implementations, ensuring compliance with security and data privacy requirements.

Development & Debugging:

  • Write clean, maintainable, and efficient frontend code.
  • Debug and troubleshoot code to ensure robust, high-performing applications.
  • Develop reusable frontend libraries that can be leveraged across multiple projects.

AI Awareness (Preferred):

  • Understand AI/ML fundamentals and how they can enhance frontend applications.
  • Collaborate with teams integrating AI-based features into chat applications.

Collaboration & Reporting:

  • Work closely with cross-functional teams to align on architecture and deliverables.
  • Regularly report progress, identify risks, and propose mitigation strategies.

Quality Assurance:

  • Implement unit tests and end-to-end tests to ensure code quality.
  • Participate in code reviews and enforce best practices.


Required Skills 

  • 5-10 years of experience architecting and developing cloud-based global applications in a public cloud environment (AWS, Azure, or GCP).
  • Strong hands-on expertise in frontend technologies: JavaScript, HTML5, CSS3
  • Proficiency with modern frameworks like React, Angular, or Node.js
  • Backend familiarity with Java, Spring Boot (or similar technologies).
  • Experience developing real-world, at-scale products.
  • General knowledge of cloud platforms (AWS, Azure, or GCP) and their structure, use, and capabilities.
  • Strong problem-solving, debugging, and performance optimization skills.
Lalitech

at Lalitech

1 recruiter
Govind Varshney
Posted by Govind Varshney
Remote, Bengaluru (Bangalore), Noida
5 - 10 yrs
₹7L - ₹20L / yr
Fullstack Developer
JavaScript
HTML/CSS
React.js
Spring Boot
+9 more

Location: Hybrid/Remote

Openings: 2

Experience: 5+ Years

Qualification: Bachelor’s or Master’s in Computer Science or related field


Job Responsibilities


Problem Solving & Optimization:

  • Analyze and resolve complex technical and application issues.
  • Optimize application performance, scalability, and reliability.

Design & Develop:

  • Build, test, and deploy scalable full-stack applications with high performance and security.
  • Develop clean, reusable, and maintainable code for both frontend and backend.

AI Integration (Preferred):

  • Collaborate with the team to integrate AI/ML models into applications where applicable.
  • Explore Generative AI, NLP, or machine learning solutions that enhance product capabilities.

Technical Leadership & Mentorship:

  • Provide guidance, mentorship, and code reviews for junior developers.
  • Foster a culture of technical excellence and knowledge sharing.

Agile & Delivery Management:

  • Participate in Agile ceremonies (sprint planning, stand-ups, retrospectives).
  • Define and scope backlog items, track progress, and ensure timely delivery.

Collaboration:

  • Work closely with cross-functional teams (product managers, designers, QA) to deliver high-quality solutions.
  • Coordinate with geographically distributed teams.

Quality Assurance & Security:

  • Conduct peer reviews of designs and code to ensure best practices.
  • Implement security measures and ensure compliance with industry standards.

Innovation & Continuous Improvement:

  • Identify areas for improvement in the software development lifecycle.
  • Stay updated with the latest tech trends, especially in AI and cloud technologies, and recommend new tools or frameworks.

Required Skills

  • Strong proficiency in JavaScript, HTML5, CSS3
  • Hands-on expertise with frontend frameworks like React, Angular, or Vue.js
  • Backend development experience with Java, Spring Boot (Node.js is a plus)
  • Knowledge of REST APIs, microservices, and scalable architectures
  • Familiarity with cloud platforms (AWS, Azure, or GCP)
  • Experience with Agile/Scrum methodologies and JIRA for project tracking
  • Proficiency in Git and version control best practices
  • Strong debugging, performance optimization, and problem-solving skills
  • Ability to analyze customer requirements and translate them into technical specifications
Lalitech

at Lalitech

1 recruiter
Govind Varshney
Posted by Govind Varshney
Remote, Bengaluru (Bangalore), Noida
0 - 2 yrs
₹3.5L - ₹4.5L / yr
Fullstack Developer
JavaScript
React.js
NodeJS (Node.js)
RESTful APIs
+6 more

Location: Hybrid/Remote

Openings: 5

Experience: 0 - 2 Years

Qualification: Bachelor’s or Master’s in Computer Science or a related technical field


Key Responsibilities:

Backend Development & APIs

  • Build microservices that provide REST APIs to power web frontends.
  • Design clean, reusable, and scalable backend code meeting enterprise security standards.
  • Conceptualize and implement optimized data storage solutions for high-performance systems.

Deployment & Cloud

  • Deploy microservices using a common deployment framework on AWS and GCP.
  • Inspect and optimize server code for speed, security, and scalability.

Frontend Integration

  • Work on modern front-end frameworks to ensure seamless integration with back-end services.
  • Develop reusable libraries for both frontend and backend codebases.


AI Awareness (Preferred)

  • Understand how AI/ML or Generative AI can enhance enterprise software workflows.
  • Collaborate with AI specialists to integrate AI-driven features where applicable.

Quality & Collaboration

  • Participate in code reviews to maintain high code quality.
  • Collaborate with teams using Agile/Scrum methodologies for rapid and structured delivery.


Required Skills:

  • Proficiency in JavaScript (ES6+), Webpack, Mocha, Jest
  • Experience with recent frontend frameworks – React.js, Redux.js, Node.js (or similar)
  • Deep understanding of HTML5, CSS3, SASS/LESS, and Content Management Systems
  • Ability to design and implement RESTful APIs and understand their impact on client-side applications
  • Familiarity with cloud platforms (AWS, Azure, or GCP) – deployment, storage, and scalability
  • Experience working with Agile and Scrum methodologies
  • Strong backend expertise in Java, J2EE, Spring Boot is a plus but not mandatory
NeoGenCode Technologies Pvt Ltd
Bengaluru (Bangalore)
8 - 12 yrs
₹15L - ₹22L / yr
Data engineering
Google Cloud Platform (GCP)
Data Transformation Tool (DBT)
Google Dataform
BigQuery
+6 more

Job Title : Data Engineer – GCP + Spark + DBT

Location : Bengaluru (On-site at Client Location | 3 Days WFO)

Experience : 8 to 12 Years

Level : Associate Architect

Type : Full-time


Job Overview :

We are looking for a seasoned Data Engineer to join the Data Platform Engineering team supporting a Unified Data Platform (UDP). This role requires hands-on expertise in DBT, GCP, BigQuery, and PySpark, with a solid foundation in CI/CD, data pipeline optimization, and agile delivery.


Mandatory Skills : GCP, DBT, Google Dataform, BigQuery, PySpark/Spark SQL, Advanced SQL, CI/CD, Git, Agile Methodologies.


Key Responsibilities :

  • Design, build, and optimize scalable data pipelines using BigQuery, DBT, and PySpark (see the sketch after this list).
  • Leverage GCP-native services like Cloud Storage, Pub/Sub, Dataproc, Cloud Functions, and Composer for ETL/ELT workflows.
  • Implement and maintain CI/CD for data engineering projects with Git-based version control.
  • Collaborate with cross-functional teams including Infra, Security, and DataOps for reliable, secure, and high-quality data delivery.
  • Lead code reviews, mentor junior engineers, and enforce best practices in data engineering.
  • Participate in Agile sprints, backlog grooming, and Jira-based project tracking.
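
For illustration, a minimal PySpark aggregation of the kind such pipelines are built from; the bucket paths, column names, and business logic are placeholders rather than the client's actual pipeline.

# Minimal PySpark/Spark SQL pipeline sketch (illustrative only).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-daily-aggregate").getOrCreate()

orders = spark.read.parquet("gs://example-bucket/raw/orders/")  # hypothetical GCS path

daily_revenue = (
    orders
    .where(F.col("status") == "COMPLETED")
    .groupBy(F.to_date("created_at").alias("order_date"))
    .agg(F.sum("amount").alias("revenue"), F.count("*").alias("order_count"))
)

# Write back as partitioned Parquet; in a BigQuery-centric stack this step is often
# replaced by the Spark-BigQuery connector or a downstream DBT model.
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
    "gs://example-bucket/curated/daily_revenue/"
)
spark.stop()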

Must-Have Skills :

  • Strong experience with DBT, Google Dataform, and BigQuery
  • Hands-on expertise with PySpark/Spark SQL
  • Proficient in GCP for data engineering workflows
  • Solid knowledge of SQL optimization, Git, and CI/CD pipelines
  • Agile team experience and strong problem-solving abilities

Nice-to-Have Skills :

  • Familiarity with Databricks, Delta Lake, or Kafka
  • Exposure to data observability and quality frameworks (e.g., Great Expectations, Soda)
  • Knowledge of MDM patterns, Terraform, or IaC is a plus
appscrip

at appscrip

2 recruiters
Kanika Gaur
Posted by Kanika Gaur
Bengaluru (Bangalore)
2 - 4 yrs
₹4L - ₹10L / yr
Amazon Web Services (AWS)
Windows Azure
DevOps
Google Cloud Platform (GCP)

Job Title: Sr. DevOps Engineer

Experience Required: 2 to 4 years in DevOps or related fields

Employment Type: Full-time


About the Role:

We are seeking a highly skilled and experienced Lead DevOps Engineer. This role will focus on driving the design, implementation, and optimization of our CI/CD pipelines, cloud infrastructure, and operational processes. As a Lead DevOps Engineer, you will play a pivotal role in enhancing the scalability, reliability, and security of our systems while mentoring a team of DevOps engineers to achieve operational excellence.


Key Responsibilities:

Infrastructure Management: Architect, deploy, and maintain scalable, secure, and resilient cloud infrastructure (e.g., AWS, Azure, or GCP).

CI/CD Pipelines: Design and optimize CI/CD pipelines to improve development velocity and deployment quality.

Automation: Automate repetitive tasks and workflows, such as provisioning cloud resources, configuring servers, managing deployments, and implementing infrastructure as code (IaC) using tools like Terraform, CloudFormation, or Ansible.

Monitoring & Logging: Implement robust monitoring, alerting, and logging systems for enterprise and cloud-native environments using tools like Prometheus, Grafana, ELK Stack, New Relic, or Datadog (see the sketch after this list).

Security: Ensure the infrastructure adheres to security best practices, including vulnerability assessments and incident response processes.

Collaboration: Work closely with development, QA, and IT teams to align DevOps strategies with project goals.

Mentorship: Lead, mentor, and train a team of DevOps engineers to foster growth and technical expertise.

Incident Management: Oversee production system reliability, including root cause analysis and performance tuning.
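
A minimal sketch of the observability side of this role: exposing custom metrics for Prometheus to scrape with the prometheus_client library. The metric names and the simulated workload are made up for illustration.

# Expose custom application metrics on /metrics for Prometheus to scrape.
import random
import time

from prometheus_client import Counter, Gauge, start_http_server

QUEUE_DEPTH = Gauge("worker_queue_depth", "Jobs currently waiting in the queue")
JOBS_PROCESSED = Counter("worker_jobs_processed_total", "Jobs processed since start")

if __name__ == "__main__":
    start_http_server(8000)  # metrics become available at http://localhost:8000/metrics
    while True:
        QUEUE_DEPTH.set(random.randint(0, 50))  # stand-in for a real queue lookup
        JOBS_PROCESSED.inc()
        time.sleep(5)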


Required Skills & Qualifications:

Technical Expertise:

Strong proficiency in cloud platforms like AWS, Azure, or GCP.

Advanced knowledge of containerization technologies (e.g., Docker, Kubernetes).

Expertise in IaC tools such as Terraform, CloudFormation, or Pulumi.

Hands-on experience with CI/CD tools, particularly Bitbucket Pipelines, Jenkins, GitLab CI/CD, GitHub Actions, or CircleCI.

Proficiency in scripting languages (e.g., Python, Bash, PowerShell).

Soft Skills:

Excellent communication and leadership skills.

Strong analytical and problem-solving abilities.

Proven ability to manage and lead a team effectively.

Experience:

4+ years of experience in DevOps or Site Reliability Engineering (SRE).

4+ years in a leadership or team lead role, with proven experience managing distributed teams, mentoring team members, and driving cross-functional collaboration.

Strong understanding of microservices, APIs, and serverless architectures.


Nice to Have:

Certifications like AWS Certified Solutions Architect, Kubernetes Administrator, or similar.

Experience with GitOps tools such as ArgoCD or Flux.

Knowledge of compliance standards (e.g., GDPR, SOC 2, ISO 27001).


Perks & Benefits:

Competitive salary and performance bonuses.

Comprehensive health insurance for you and your family.

Professional development opportunities and certifications, including sponsored certifications and access to training programs to help you grow your skills and expertise.

Flexible working hours and remote work options.

Collaborative and inclusive work culture.


Join us to build and scale world-class systems that empower innovation and deliver exceptional user experiences.


You can directly contact us: Nine three one six one two zero one three two

NeoGenCode Technologies Pvt Ltd
Bengaluru (Bangalore)
10 - 18 yrs
₹35L - ₹54L / yr
React.js
JavaScript
TypeScript
Micro-Frontend Architecture
webpack
+7 more

Job Title : Lead Web Developer / Frontend Engineer

Experience Required : 10+ Years

Location : Bangalore (Hybrid – 3 Days Work From Office)

Work Timings : 11:00 AM to 8:00 PM IST

Notice Period : Immediate or Up to 30 Days (Preferred)

Work Mode : Hybrid

Interview Mode : Face-to-Face mandatory (for Round 2)


Role Overview :

We are hiring a Lead Frontend Engineer with 10+ years of experience to drive the development of scalable, modern, and high-performance web applications.

This is a hands-on technical leadership role focused on React.js, micro-frontends, and Backend for Frontend (BFF) architecture, requiring both coding expertise and team leadership skills.


Mandatory Skills :

React.js, JavaScript/TypeScript, HTML, CSS, micro-frontend architecture, Backend for Frontend (BFF), Webpack, Jenkins (CI/CD), GCP, RDBMS/SQL, Git, and team leadership.


Core Responsibilities :

  • Design and develop cloud-based web applications using React.js, HTML, CSS.
  • Collaborate with UX/UI designers and backend engineers to implement seamless user experiences.
  • Lead and mentor a team of frontend developers.
  • Write clean, well-documented, scalable code using modern JavaScript/TypeScript practices.
  • Implement CI/CD pipelines using Jenkins, deploy applications to CDNs.
  • Integrate with GCP services, optimize front-end performance.
  • Stay updated with modern frontend technologies and design patterns.
  • Use Git for version control and collaborative workflows.
  • Implement JavaScript libraries for web analytics and performance monitoring.


Key Requirements :

  • 10+ Years of experience as a frontend/web developer.
  • Strong proficiency in React.js, JavaScript/TypeScript, HTML, CSS.
  • Experience with micro-frontend architecture and Backend for Frontend (BFF) patterns.
  • Proficiency in frontend design frameworks and libraries (jQuery, Node.js).
  • Strong understanding of build tools like Webpack, CI/CD using Jenkins.
  • Experience with GCP and deploying to CDNs.
  • Solid experience in RDBMS, SQL.
  • Familiarity with Git and agile development practices.
  • Excellent debugging, problem-solving, and communication skills.
  • Bachelor’s/Master’s in Computer Science or a related field.


Nice to Have :

  • Experience with Node.js.
  • Previous experience working with web analytics frameworks.
  • Exposure to JavaScript observability tools.


Interview Process :

1. Round 1 : Online Technical Interview (via Geektrust – 1 Hour)

2. Round 2 : Face-to-Face Interview with the Indian team in Bangalore (3 Hours – Mandatory)

3. Round 3 : Online Interview with CEO (30 Minutes)


Important Notes :

  • Face-to-face interview in Bangalore is mandatory for Round 2.
  • Preference given to candidates currently in Bangalore or willing to travel for interviews.
  • Remote applicants who cannot attend the in-person round will not be considered.
YOptima Media Solutions Pvt Ltd
Bengaluru (Bangalore)
8 - 12 yrs
₹40L - ₹60L / yr
React.js
NodeJS (Node.js)
Google Cloud Platform (GCP)
LangChain
Generative AI

Why This Role Matters

We’re looking for a Principal Engineer to lead the architecture and execution of our GenAI-powered, self-serve marketing platforms. You will work directly with the CEO to shape, build, and scale products that change how marketers interact with data and AI. This is intrapreneurship in action — not a sandbox innovation lab, but a real-world product with traction, velocity, and high stakes.


What You'll Do

  • Co-own product architecture and direction alongside the CEO.
  • Build GenAI-native, full-stack platforms from MVP to scale — powered by LLMs, agents, and predictive AI.
  • Own the full stack: React (frontend), Node.js/Python (backend), GCP (infra), BigQuery (data), and vector databases (AI).
  • Lead a lean, high-caliber team with a hands-on, unblock-and-coach mindset.
  • Drive rapid iteration with rigor, balancing short-term delivery with long-term resilience.
  • Ensure scalability, observability, and fault tolerance in multi-tenant, cloud-native environments.
  • Bridge business and tech — aligning execution with evolving user and market insights.


What You Bring

  • 8–12 years of experience building and scaling full-stack, data-heavy or AI-driven products.
  • Fluency in React, Node.js, and Google Cloud (Functions, BigQuery, Cloud SQL, Airflow, etc.).
  • Hands-on experience with GenAI tools (LangChain, OpenAI APIs, LlamaIndex) is a bonus.
  • Track record of shipping products from ambiguity to impact.
  • Strong product mindset — your goal is user value, not just elegant code.
  • Architectural leadership with ownership of engineering rigor and scaling best practices.
  • Startup or founder DNA — you’ve built things from scratch and know how to move fast without breaking things.


Who You Are

  • A former founder, senior IC, or tech lead who’s done zero-to-one and 1-to-n scaling.
  • Hungry for ownership and velocity — frustrated by bureaucracy or stagnation.
  • You code because you care about solving real problems for real users.
  • You’re pragmatic, hands-on, and grounded in first principles.
  • You understand that great software isn't just shipped — it's hardened, maintained, and evolves with minimal manual effort.
  • You’re open to evolving into a founding engineer role with influence over the tech vision and culture.


What You Get

  • Equity in a high-growth product-led startup.
  • A chance to build global products out of India with full-stack and GenAI innovation.
  • Access to high-context decision-making and direct collaboration with the CEO.
  • A tight, ego-free team and a culture that values clarity, ownership, learning, and candor.


Why YOptima?

YOptima is redefining how leading marketers unlock growth through full-funnel, AI-powered media solutions. As part of our growth journey, this is your opportunity to own the growth charter for leading brands and agencies globally and shape the narrative of a next-generation marketing platform.


Ready to lead, build, and scale?

We’d love to hear from you.


Wissen Technology

at Wissen Technology

4 recruiters
Praffull Shinde
Posted by Praffull Shinde
Pune, Mumbai, Bengaluru (Bangalore)
4 - 8 yrs
₹14L - ₹26L / yr
Python
PySpark
Django
Flask
RESTful APIs
+3 more

Job title - Python developer

Exp – 4 to 6 years

Location – Pune/Mumbai/Bangalore

 

Please find the job description below.

Requirements:

  • Proven experience as a Python Developer
  • Strong knowledge of core Python and PySpark concepts
  • Experience with web frameworks such as Django or Flask
  • Good exposure to any cloud platform (GCP Preferred)
  • CI/CD exposure required
  • Solid understanding of RESTful APIs and how to build them (see the sketch after this list)
  • Experience working with databases like Oracle DB and MySQL
  • Ability to write efficient SQL queries and optimize database performance
  • Strong problem-solving skills and attention to detail
  • Strong SQL programming (stored procedures, functions)
  • Excellent communication and interpersonal skills
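
As a small illustration of the RESTful API skills listed above, here is a minimal Flask sketch; the resource model and in-memory store are placeholders for a real database-backed service.

# Minimal Flask REST endpoint sketch (illustrative only).
from flask import Flask, jsonify, request

app = Flask(__name__)
ORDERS = {1: {"id": 1, "item": "keyboard", "qty": 2}}  # in-memory stand-in for a database

@app.get("/orders/<int:order_id>")
def get_order(order_id: int):
    order = ORDERS.get(order_id)
    return (jsonify(order), 200) if order else (jsonify(error="not found"), 404)

@app.post("/orders")
def create_order():
    payload = request.get_json(force=True)
    order_id = max(ORDERS, default=0) + 1
    ORDERS[order_id] = {"id": order_id, **payload}
    return jsonify(ORDERS[order_id]), 201

if __name__ == "__main__":
    app.run(debug=True)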

Roles and Responsibilities

  • Design, develop, and maintain data pipelines and ETL processes using PySpark
  • Work closely with data scientists and analysts to provide them with clean, structured data.
  • Optimize data storage and retrieval for performance and scalability.
  • Collaborate with cross-functional teams to gather data requirements.
  • Ensure data quality and integrity through data validation and cleansing processes.
  • Monitor and troubleshoot data-related issues to ensure data pipeline reliability.
  • Stay up to date with industry best practices and emerging technologies in data engineering.
Wissen Technology

at Wissen Technology

4 recruiters
Shikha Nagar
Posted by Shikha Nagar
Pune, Mumbai, Bengaluru (Bangalore)
8 - 10 yrs
Best in industry
Terraform
Google Cloud Platform (GCP)
Kubernetes
DevOps
SQL Azure

We are looking for an experienced GCP Cloud Engineer to design, implement, and manage cloud-based solutions on Google Cloud Platform (GCP). The ideal candidate should have expertise in GKE (Google Kubernetes Engine), Cloud Run, Cloud Load Balancer, Cloud Functions, Azure DevOps, and Terraform, with a strong focus on automation, security, and scalability.

You will work closely with development, operations, and security teams to ensure robust cloud infrastructure and CI/CD pipelines while optimizing performance and cost.


Key Responsibilities:

1. Cloud Infrastructure Design & Management

· Architect, deploy, and maintain GCP cloud resources via Terraform and other automation tools.

· Implement Google Cloud Storage, Cloud SQL, and Filestore for data storage and processing needs.

· Manage and configure Cloud Load Balancers (HTTP(S), TCP/UDP, and SSL Proxy) for high availability and scalability.

· Optimize resource allocation, monitoring, and cost efficiency across GCP environments.

2. Kubernetes & Container Orchestration

· Deploy, manage, and optimize workloads on Google Kubernetes Engine (GKE).

· Work with Helm charts, Istio, and service meshes for microservices deployments.

· Automate scaling, rolling updates, and zero-downtime deployments.


3. Serverless & Compute Services

· Deploy and manage applications on Cloud Run and Cloud Functions for scalable, serverless workloads.

· Optimize containerized applications running on Cloud Run for cost efficiency and performance.


4. CI/CD & DevOps Automation

· Design, implement, and manage CI/CD pipelines using Azure DevOps.

· Automate infrastructure deployment using Terraform, Bash, and PowerShell scripting (see the sketch after this section).

· Integrate security and compliance checks into the DevOps workflow (DevSecOps).
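
A hedged sketch of driving Terraform from a pipeline step (for example, an Azure DevOps job) with Python's subprocess module; the module path is an assumption, and a production pipeline would add remote state configuration, plan review, and approval gates.

# Run Terraform non-interactively from a CI/CD step.
import subprocess

TF_DIR = "infra/gcp/prod"  # hypothetical Terraform root module

def run(*args: str) -> None:
    print("+ terraform", " ".join(args))
    subprocess.run(["terraform", *args], cwd=TF_DIR, check=True)

if __name__ == "__main__":
    run("init", "-input=false")
    run("plan", "-input=false", "-out=tfplan")
    # Applying the saved plan keeps the run non-interactive and reproducible.
    run("apply", "-input=false", "tfplan")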


Required Skills & Qualifications:

✔ Experience: 8+ years in Cloud Engineering, with a focus on GCP.

✔ Cloud Expertise: Strong knowledge of GCP services (GKE, Compute Engine, IAM, VPC, Cloud Storage, Cloud SQL, Cloud Functions).

✔ Kubernetes & Containers: Experience with GKE, Docker, GKE Networking, Helm.

✔ DevOps Tools: Hands-on experience with Azure DevOps for CI/CD pipeline automation.

✔ Infrastructure-as-Code (IaC): Expertise in Terraform for provisioning cloud resources.

✔ Scripting & Automation: Proficiency in Python, Bash, or PowerShell for automation.

✔ Security & Compliance: Knowledge of cloud security principles, IAM, and compliance standards.


appscrip

at appscrip

2 recruiters
Nilam Surti
Posted by Nilam Surti
Bengaluru (Bangalore)
0 - 0 yrs
₹3L - ₹4L / yr
DevOps
Kubernetes
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
Terraform

We are looking for fresher developers.


Responsibilities:

  • Implement integrations requested by customers
  • Deploy updates and fixes
  • Provide Level 2 technical support
  • Build tools to reduce occurrences of errors and improve customer experience
  • Develop software to integrate with internal back-end systems
  • Perform root cause analysis for production errors
  • Investigate and resolve technical issues
  • Develop scripts to automate visualization
  • Design procedures for system troubleshooting and maintenance


Requirements and skills:

Knowledge of DevOps engineering or a similar software engineering role

Good knowledge of Terraform and Kubernetes

Working knowledge of AWS and Google Cloud



You can directly contact me on nine three one six one two zero one three two
