Google Cloud Platform (GCP) Jobs in Pune

50+ Google Cloud Platform (GCP) Jobs in Pune | Google Cloud Platform (GCP) Job openings in Pune

Apply to 50+ Google Cloud Platform (GCP) Jobs in Pune on CutShort.io. Explore the latest Google Cloud Platform (GCP) Job opportunities across top companies like Google, Amazon & Adobe.

Wissen Technology

at Wissen Technology

4 recruiters
Sonali RajeshKumar
Posted by Sonali RajeshKumar
Bengaluru (Bangalore), Pune, Mumbai
4 - 9 yrs
Best in industry
Google Cloud Platform (GCP)
Reliability engineering
Python
Shell Scripting

Dear Candidate,


Greetings from Wissen Technology. 

We have an exciting job opportunity for GCP SRE professionals. Please refer to the job description below and share your profile if interested.

 About Wissen Technology:

  • The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015.
  • Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains. We help clients build world class products.
  • Our workforce consists of 1000+ highly skilled professionals, with leadership and senior management executives who have graduated from Ivy League Universities like Wharton, MIT, IITs, IIMs, and NITs and with rich work experience in some of the biggest companies in the world.
  • Wissen Technology has grown its revenues by 400% in these five years without any external funding or investments.
  • Globally present with offices in the US, India, UK, Australia, Mexico, and Canada.
  • We offer an array of services including Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud, Mobility, Agile & DevOps, Quality Assurance & Test Automation.
  • Wissen Technology has been certified as a Great Place to Work®.
  • Wissen Technology has been voted as the Top 20 AI/ML vendor by CIO Insider in 2020.
  • Over the years, Wissen Group has successfully delivered $650 million worth of projects for more than 20 of the Fortune 500 companies.
  • The technology and thought leadership that the company commands in the industry is the direct result of the kind of people Wissen has been able to attract. Wissen is committed to providing them the best possible opportunities and careers, which extends to providing the best possible experience and value to our clients.

We have served clients across sectors like Banking, Telecom, Healthcare, Manufacturing, and Energy, including the likes of Morgan Stanley, MSCI, State Street Corporation, Flipkart, Swiggy, Trafigura, and GE, to name a few.


Job Description: 

Please find the details below:


Experience - 4+ Years

Location- Bangalore/Mumbai/Pune


Team Responsibilities

The successful candidate will be part of the S&C – SRE team, which provides tier 2/3 support to the S&C business. The position involves collaborating with client-facing teams such as Client Services, Product, and Research, as well as Infrastructure/Technology and Application Development teams, to perform environment and application maintenance and support.

 

Key Responsibilities


• Provide Tier 2/3 product technical support.

• Build software to support operations and maintenance activities.

• Manage system/software configurations and troubleshoot environment issues.

• Identify opportunities for optimizing system performance through configuration changes or suggestions for development.

• Plan, document, and deploy software applications on our Unix/Linux/Azure and GCP based systems.

• Collaborate with development and software testing teams throughout the release process.

• Analyze release and deployment processes to identify key areas for automation and optimization.

• Manage hardware and software resources and coordinate maintenance and planned downtimes with the infrastructure group across all environments (Production/Non-Production).

• Must spend a minimum of one week per month on call to help with off-hour emergencies and maintenance activities.

 

Required skills and experience

• Bachelor's degree in Computer Science, Engineering, or a similar concentration (BE/MCA)

• Master's degree is a plus

• 6-8 years' experience in Production Support/Application Management/Application Development (support/maintenance) roles.

• Excellent problem-solving/troubleshooting skills, fast learner

• Strong knowledge of Unix Administration.

• Strong scripting skills in Shell, Python, and Batch are a must.

• Strong Database experience – Oracle

• Strong knowledge of Software Development Life Cycle

• PowerShell is nice to have

• Software development skillsets in Java or Ruby.

• Experience with any of the cloud platforms (GCP/Azure/AWS) is nice to have
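To give a flavour of the Shell/Python support tooling this role calls for, below is a minimal, illustrative Python sketch of an environment health check an SRE might automate. The paths and thresholds are hypothetical, not taken from the job description.

    #!/usr/bin/env python3
    """Tiny environment health check: disk usage plus recent errors in an app log."""
    import shutil
    import sys
    from pathlib import Path

    DISK_THRESHOLD_PCT = 85                       # hypothetical alert threshold
    LOG_FILE = Path("/var/log/myapp/app.log")     # hypothetical application log

    def disk_usage_pct(path: str = "/") -> float:
        usage = shutil.disk_usage(path)
        return usage.used / usage.total * 100

    def recent_errors(log_file: Path, lines: int = 200) -> list[str]:
        if not log_file.exists():
            return []
        tail = log_file.read_text(errors="ignore").splitlines()[-lines:]
        return [line for line in tail if "ERROR" in line]

    if __name__ == "__main__":
        pct = disk_usage_pct()
        errors = recent_errors(LOG_FILE)
        print(f"Disk usage: {pct:.1f}% (threshold {DISK_THRESHOLD_PCT}%)")
        print(f"Recent ERROR lines: {len(errors)}")
        # Non-zero exit code so a scheduler or monitor can flag the failure.
        sys.exit(1 if pct > DISK_THRESHOLD_PCT or errors else 0)

In practice such a script would run from cron or a monitoring agent; the point here is only the style of lightweight automation, not a prescribed tool.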


Intineri infosol Pvt Ltd

at Intineri infosol Pvt Ltd

2 candid answers
Bhaskar Mishra
Posted by Bhaskar Mishra
Pune
5 - 7 yrs
₹8L - ₹14L / yr
RESTful APIs
Microservices
Docker
Kubernetes
Amazon Web Services (AWS)
+4 more

 We’re Hiring | Full Stack Developer – Golang

Are you a skilled Full Stack Developer with hands-on experience in Golang?

Join our growing team and work on cutting-edge projects in a collaborative environment!

🔹 Role: Full Stack Developer – Golang

🔹 Experience: 5+ Years

🔹 Budget: ₹1,00,000 per month

🔹 Location: Yerwada, Pune (Hybrid – 2 days a week)

Key Responsibilities:

Develop and maintain scalable web applications using Golang for backend and modern frontend frameworks.

Design clean, efficient, and reusable code for both client and server sides.

Collaborate with cross-functional teams to define and deliver new features.

Ensure application performance, security, and scalability.

Troubleshoot and debug production issues effectively.

Requirements:

Strong proficiency in Golang (backend) and JavaScript/TypeScript (frontend).

Experience with React.js, RESTful APIs, and microservices architecture.

Familiarity with Docker, Kubernetes, and cloud platforms (AWS/GCP/Azure) is a plus.

Excellent problem-solving and communication skills.

Bengaluru (Bangalore), Mumbai, Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Pune, Hyderabad, Mohali, Dehradun, Panchkula, Chennai
6 - 14 yrs
₹12L - ₹28L / yr
Test Automation (QA)
Kubernetes
Helm
Docker
Amazon Web Services (AWS)
+13 more

Job Title : Senior QA Automation Architect (Cloud & Kubernetes)

Experience : 6+ Years

Location : India (Multiple Offices)

Shift Timings : 12 PM to 9 PM (Noon Shift)

Working Days : 5 Days WFO (NO Hybrid)


About the Role :

We’re looking for a Senior QA Automation Architect with deep expertise in cloud-native systems, Kubernetes, and automation frameworks.

You’ll design scalable test architectures, enhance automation coverage, and ensure product reliability across hybrid-cloud and distributed environments.


Key Responsibilities :

  • Architect and maintain test automation frameworks for microservices.
  • Integrate automated tests into CI/CD pipelines (Jenkins, GitHub Actions).
  • Ensure reliability, scalability, and observability of test systems.
  • Work closely with DevOps and Cloud teams to streamline automation infrastructure.

Mandatory Skills :

  • Kubernetes, Helm, Docker, Linux
  • Cloud Platforms : AWS / Azure / GCP
  • CI/CD Tools : Jenkins, GitHub Actions
  • Scripting : Python, Pytest, Bash
  • Monitoring & Performance : Prometheus, Grafana, Jaeger, K6
  • IaC Practices : Terraform / Ansible

Good to Have :

  • Experience with Service Mesh (Istio/Linkerd).
  • Container Security or DevSecOps exposure.
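Since the mandatory skills above name Python and Pytest for automation against microservices, here is a minimal, hedged sketch of what an automated API smoke test might look like. The service URL, endpoint, and payload shape are assumptions, not part of the role description.

    """Minimal API smoke test sketch using pytest + requests (hypothetical endpoint)."""
    import os

    import pytest
    import requests

    # The base URL would normally come from the CI/CD environment (e.g. a Jenkins or
    # GitHub Actions variable); the default here is purely illustrative.
    BASE_URL = os.environ.get("SERVICE_BASE_URL", "http://localhost:8080")

    @pytest.fixture(scope="session")
    def http():
        with requests.Session() as session:
            yield session

    def test_health_endpoint_returns_ok(http):
        resp = http.get(f"{BASE_URL}/health", timeout=5)
        assert resp.status_code == 200

    def test_health_payload_reports_up(http):
        resp = http.get(f"{BASE_URL}/health", timeout=5)
        assert resp.json().get("status") == "UP"

Tests like these would typically be wired into the Jenkins/GitHub Actions pipelines mentioned above as a post-deployment smoke stage.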
Virtana

at Virtana

2 candid answers
Eman Khan
Posted by Eman Khan
Pune
8 - 13 yrs
₹35L - ₹60L / yr
Java
Spring
Go Programming (Golang)
Python
Amazon Web Services (AWS)
+21 more

Company Overview:

Virtana delivers the industry’s only unified platform for Hybrid Cloud Performance, Capacity and Cost Management. Our platform provides unparalleled, real-time visibility into the performance, utilization, and cost of infrastructure across the hybrid cloud – empowering customers to manage their mission critical applications across physical, virtual, and cloud computing environments. Our SaaS platform allows organizations to easily manage and optimize their spend in the public cloud, assure resources are performing properly through real-time monitoring, and provide the unique ability to plan migrations across the hybrid cloud. 

As we continue to expand our portfolio, we are seeking a highly skilled and hands-on Staff Software Engineer in backend technologies to contribute to the futuristic development of our sophisticated monitoring products.

 

Position Overview:

As a Staff Software Engineer specializing in backend technologies for Storage and Network monitoring in AI-enabled data centers as well as the cloud, you will play a critical role in designing, developing, and delivering high-quality features within aggressive timelines. Your expertise in microservices-based streaming architectures and strong hands-on development skills are essential to solve complex problems related to large-scale data processing. Proficiency in backend technologies such as Java and Python is crucial.

 

Key Responsibilities:

  • Hands-on Development: Actively participate in the design, development, and delivery of high-quality features, demonstrating strong hands-on expertise in backend technologies like Java, Python, Go or related languages.
  • Microservices and Streaming Architectures: Design and implement microservices-based streaming architectures to efficiently process and analyze large volumes of data, ensuring real-time insights and optimal performance.
  • Agile Development: Collaborate within an agile development environment to deliver features on aggressive schedules, maintaining a high standard of quality in code, design, and architecture.
  • Feature Ownership: Take ownership of features from inception to deployment, ensuring they meet product requirements and align with the overall product vision.
  • Problem Solving and Optimization: Tackle complex technical challenges related to data processing, storage, and real-time monitoring, and optimize backend systems for high throughput and low latency.
  • Code Reviews and Best Practices: Conduct code reviews, provide constructive feedback, and promote best practices to maintain a high-quality and maintainable codebase.
  • Collaboration and Communication: Work closely with cross-functional teams, including UI/UX designers, product managers, and QA engineers, to ensure smooth integration and alignment with product goals.
  • Documentation: Create and maintain technical documentation, including system architecture, design decisions, and API documentation, to facilitate knowledge sharing and onboarding.
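Purely as a rough illustration of the microservices-based streaming theme above, the sketch below shows a minimal Python consumer reading metric events from a Kafka topic with the kafka-python library. The topic name, broker address, message shape, and threshold are all assumptions, not details of Virtana's platform.

    """Minimal streaming-consumer sketch (kafka-python); topic, broker, and schema are hypothetical."""
    import json

    from kafka import KafkaConsumer  # pip install kafka-python

    consumer = KafkaConsumer(
        "storage-metrics",                        # hypothetical topic
        bootstrap_servers="localhost:9092",       # hypothetical broker
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
        auto_offset_reset="earliest",
        enable_auto_commit=True,
    )

    for message in consumer:
        event = message.value
        # A real service would feed aggregation, alerting, or storage from here;
        # this only flags events over an assumed latency threshold.
        if event.get("latency_ms", 0) > 100:
            print(f"High latency on {event.get('device', 'unknown')}: {event['latency_ms']} ms")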


Qualifications:

  • Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
  • 8+ years of hands-on experience in backend development, demonstrating expertise in Java, Python or related technologies.
  • Strong domain knowledge in Storage and Networking, with exposure to monitoring technologies and practices.
  • Experience in handling large data lakes with purpose-built data stores (vector databases, NoSQL, graph, time-series).
  • Practical knowledge of OO design patterns and frameworks like Spring and Hibernate.
  • Extensive experience with cloud platforms such as AWS, Azure or GCP and development expertise on Kubernetes, Docker, etc.
  • Solid experience designing and delivering features with high quality on aggressive schedules.
  • Proven experience in microservices-based streaming architectures, particularly in handling large amounts of data for storage and networking monitoring.
  • Familiarity with performance optimization techniques and principles for backend systems.
  • Excellent problem-solving and critical-thinking abilities.
  • Outstanding communication and collaboration skills.


Why Join Us:

  • Opportunity to be a key contributor in the development of a leading performance monitoring company specializing in AI-powered Storage and Network monitoring.
  • Collaborative and innovative work environment.
  • Competitive salary and benefits package.
  • Professional growth and development opportunities.
  • Chance to work on cutting-edge technology and products that make a real impact.


If you are a hands-on technologist with a proven track record of designing and delivering high-quality features on aggressive schedules and possess strong expertise in microservices-based streaming architectures, we invite you to apply and help us redefine the future of performance monitoring.

Wissen Technology

at Wissen Technology

4 recruiters
Janane Mohanasankaran
Posted by Janane Mohanasankaran
Pune, Mumbai
4 - 7 yrs
Best in industry
Java
Windows Azure
Amazon Web Services (AWS)
Google Cloud Platform (GCP)

Job Description

      The ideal candidate will possess expertise in Core Java (at least Java 8), Spring framework, JDBC, threading, database management, and cloud platforms such as Azure and GCP. The candidate should also have strong debugging skills, the ability to understand multi-service flow, experience with large data processing, and excellent problem-solving abilities.

 

JD:

  1. Develop and maintain Java applications using Core Java, the Spring framework, JDBC, and threading concepts.
  2. Strong understanding of the Spring framework and its various modules.
  3. Experience with JDBC for database connectivity and manipulation.
  4. Utilize database management systems to store and retrieve data efficiently.
  5. Proficiency in Core Java 8 and a thorough understanding of threading concepts and concurrent programming.
  6. Experience working with relational and NoSQL databases.
  7. Basic understanding of cloud platforms such as Azure and GCP; experience with DevOps practices is an added advantage.
  8. Knowledge of containerization technologies (e.g., Docker, Kubernetes).
  9. Perform debugging and troubleshooting of applications using log analysis techniques.
  10. Understand multi-service flow and integration between components.
  11. Handle large-scale data processing tasks efficiently and effectively.
  12. Hands-on experience using Spark is an added advantage.
  13. Good problem-solving and analytical abilities.
  14. Collaborate with cross-functional teams to identify and solve complex technical problems.
  15. Knowledge of Agile methodologies such as Scrum or Kanban.
  16. Stay updated with the latest technologies and industry trends to continuously improve development processes and methodologies.

 

Pune
3 - 8 yrs
₹5L - ₹30L / yr
Vue.js
Google Cloud Platform (GCP)

Lead Frontend Architect (Vue.js & Firebase)

Amplifai transforms AI potential into measurable business value, guiding organizations from strategic planning to execution. With deep expertise in AI product development, technical architecture, regulatory compliance, and commercialization, we deliver secure, ethical, and high-performing solutions. Having co-founded one of Europe’s most innovative AI companies, our team drives unparalleled growth for clients through cutting-edge technologies like GPT tools, AI agents, and modern frameworks. Join our new Pune office to shape the future of AI-driven innovation!

One of our partners is transforming how the construction industry measures and manages carbon emissions, helping organizations meet their sustainability goals with accurate, scalable, and actionable insights. Their SaaS platform enables carbon footprint calculations, Life Cycle Assessment (LCA) data management, and complex environmental reporting — and we’re ready to take it from 70 customers to 700+ enterprise clients.

We’re seeking a Senior Cloud Architect & Tech Lead to spearhead the next phase of our platform’s growth. You’ll lead architectural decisions for a complex sustainability and carbon accounting platform built on Firebase/Google Cloud with a Vue.js frontend, driving scalability, enterprise readiness, and technical excellence. This is a hands-on leadership role where you’ll guide the engineering team, optimize system performance, and shape a long-term technical roadmap to support 10x growth — all while leveraging cutting-edge GenAI developer tools like Cursor, Claude, Lovable, and GitHub Copilot to accelerate delivery and innovation.


Key Responsibilities:

· Lead architecture design for a highly scalable, enterprise-ready SaaS platform built with Vue.js, Firebase Functions (Node.js), Firestore, Redis, and GenKit AI.

· Design and optimize complex hierarchical data models and computational workloads for high performance at scale.

· Evaluate platform evolution options — from deep Firebase optimizations to potential migration strategies — balancing technical debt, scalability, and enterprise needs.

· Implement SOC2/ISO27001-ready security controls including audit logging, data encryption, and enterprise-grade access management.

· Drive performance engineering to address Firestore fan-out queries, function cold starts, and database scaling bottlenecks.

· Oversee CI/CD automation and deployment pipelines for multi-environment enterprise releases.

· Design APIs and integration strategies to meet enterprise customer requirements and enable global scaling.

· Mentor and guide the development team, ensuring technical quality, scalability, and adoption of best practices.

· Collaborate cross-functionally with product managers, sustainability experts, and customer success teams to deliver impactful features and integrations.

· Plan and execute disaster recovery strategies, business continuity procedures, and cost-optimized infrastructure scaling.

· Maintain comprehensive technical documentation for architecture, processes, and security controls.
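The platform itself is Vue.js/Node on Firebase, so purely as an illustration of the hierarchical data model and Firestore fan-out concerns listed above, here is a small Python sketch using the google-cloud-firestore client. The collection and field names are invented for the example.

    """Illustrative fan-out read over a hierarchical Firestore model (hypothetical schema)."""
    from google.cloud import firestore  # pip install google-cloud-firestore

    db = firestore.Client()  # uses application-default credentials

    def project_total_emissions(project_id: str) -> float:
        """Sum emissions across all materials of all assemblies in one project.

        Each nested .stream() call is a separate round trip, which is exactly the
        kind of fan-out that needs denormalisation or aggregation at scale.
        """
        total = 0.0
        assemblies = db.collection("projects").document(project_id).collection("assemblies")
        for assembly in assemblies.stream():
            materials = assemblies.document(assembly.id).collection("materials")
            for material in materials.stream():
                total += material.to_dict().get("kg_co2e", 0.0)
        return total

    if __name__ == "__main__":
        print(project_total_emissions("demo-project"))  # hypothetical document id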


Required Skills & Experience:

· 5+ years of Google Cloud Platform experience with deep expertise in the Firebase ecosystem.

· Proven ability to scale SaaS platforms through 5–10x growth phases, ideally in an enterprise B2B environment.

· Strong background in serverless architecture, event-driven systems, and scaling NoSQL databases (Firestore, MongoDB, DynamoDB).

· Expertise in Vue.js for large-scale application performance and maintainability.

· Hands-on experience implementing enterprise security frameworks (SOC2, ISO27001) and compliance requirements.

· Demonstrated daily use of GenAI developer tools such as Cursor, Claude, Lovable, and GitHub Copilot to accelerate coding, documentation, and architecture work.

· Track record of performance optimization for high-traffic production systems.

· 3+ years leading engineering teams through architectural transitions and complex technical challenges.

· Strong communication skills to work with both technical and non-technical stakeholders.

Preferred Qualifications

· Domain knowledge in construction industry workflows or sustainability technology (LCA, carbon accounting).

· Experience with numerical computing, scientific applications, or computationally intensive workloads.

· Familiarity with multi-region deployments and advanced analytics architectures.

· Knowledge of data residency and privacy regulations

· Knowledge of BIM (Building Information Modeling), IFC standards for construction and engineering data interoperability.

Ideal Candidate

You’re a Senior Software Engineer who thrives on scaling complex systems for enterprise customers. You embrace GenAI tools as an integral part of your development workflow, using platforms like Cursor, Claude, Lovable, and GitHub Copilot to deliver faster and smarter. Experience with BIM, IFC, or Speckle is a strong plus, enabling you to bridge sustainability tech with real-world construction data standards. You balance deep technical execution with strategic thinking and can communicate effectively across teams. While direct sustainability or construction tech experience is a plus, your ability to quickly master complex domains is what will set you apart.

DeepIntent

at DeepIntent

2 candid answers
17 recruiters
Shruti Wankhade
Posted by Shruti Wankhade
Pune
3.5 - 7 yrs
Best in industry
RESTful APIs
Java
Spring Boot
Amazon Web Services (AWS)
Google Cloud Platform (GCP)

Who we are:

DeepIntent is leading the healthcare advertising industry with data-driven solutions built for the future. From day one, our mission has been to improve patient outcomes through the artful use of advertising, data science, and real-world clinical data. For more information, visit www.DeepIntent.com.


Who you are:

  • 5+ years of software engineering experience with 2+ years in senior technical roles
  • Proven track record designing and implementing large-scale, distributed backend systems
  • Experience leading technical initiatives across multiple teams
  • Strong background in mentoring engineers and driving technical excellence
  • Programming Languages: Expert-level proficiency in Java and Spring Boot framework
  • Framework Expertise: Deep experience with Spring ecosystem (Spring Boot, Spring Security, Spring Data, Spring Cloud)
  • API Development: Strong experience building RESTful APIs, GraphQL endpoints, and microservices architectures
  • Cloud Platforms: Advanced knowledge of AWS, GCP, Azure and cloud-native development patterns
  • Databases: Proficiency with both SQL (PostgreSQL, MySQL, Oracle) and NoSQL (MongoDB, Redis, Cassandra) databases, including design and optimization
  • Bachelor's or Master's degree in Computer Science, Engineering, Software Engineering, or related field (or equivalent industry experience)
  • Excellent technical communication skills for both technical and non-technical stakeholders
  • Strong mentorship abilities with experience coaching junior and mid-level engineers
  • Proven ability to drive consensus on technical decisions across teams
  • Comfortable with ambiguous problems and breaking down complex challenges

 

What You'll Do: 

  • Lead design and implementation of complex backend systems and micro-services serving multiple product teams
  • Drive architectural decisions ensuring scalability, reliability, and performance
  • Create technical design documents, system architecture diagrams, and API specifications
  • Champion engineering best practices including code quality, testing strategies, and security
  • Partner with Tech Leads, Engineering Managers, and Product Managers to align solutions with business objectives
  • Lead technical initiatives requiring coordination between backend, frontend, and data teams
  • Participate in architecture review boards and provide guidance for organisation-wide initiatives
  • Serve as technical consultant for complex system design problems across product areas
  • Mentor and coach engineers at various levels with technical guidance and career development
  • Conduct code reviews and design reviews, sharing knowledge and raising technical standards
  • Lead technical discussions and knowledge-sharing sessions
  • Help establish coding standards and engineering processes
  • Design and develop robust, scalable backend services and APIs using Java and Spring Boot
  • Implement comprehensive testing strategies and optimise application performance
  • Ensure security best practices across all applications
  • Research and prototype new approaches to improve system architecture and developer productivity


Publicis Sapient

at Publicis Sapient

10 recruiters
Dipika
Posted by Dipika
Bengaluru (Bangalore), Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Hyderabad, Pune
5 - 7 yrs
₹5L - ₹20L / yr
Java
Microservices
Apache Kafka
Apache ActiveMQ
+3 more

Senior Associate Technology L1 – Java Microservices


Company Description

Publicis Sapient is a digital transformation partner helping established organizations get to their future, digitally-enabled state, both in the way they work and the way they serve their customers. We help unlock value through a start-up mindset and modern methods, fusing strategy, consulting and customer experience with agile engineering and problem-solving creativity. United by our core values and our purpose of helping people thrive in the brave pursuit of next, our 20,000+ people in 53 offices around the world combine experience across technology, data sciences, consulting and customer obsession to accelerate our clients’ businesses through designing the products and services their customers truly value.


Job Description

We are looking for a Senior Associate Technology Level 1 - Java Microservices Developer to join our team of bright thinkers and doers. You’ll use your problem-solving creativity to design, architect, and develop high-end technology solutions that solve our clients’ most complex and challenging problems across different industries.

We are on a mission to transform the world, and you will be instrumental in shaping how we do it with your ideas, thoughts, and solutions.


Your Impact:

• Drive the design, planning, and implementation of multifaceted applications, giving you breadth and depth of knowledge across the entire project lifecycle.

• Combine your technical expertise and problem-solving passion to work closely with clients, turning complex ideas into end-to-end solutions that transform our clients’ business.

• Constantly innovate and evaluate emerging technologies and methods to provide scalable and elegant solutions that help clients achieve their business goals.


Qualifications

➢ 5 to 7 Years of software development experience

➢ Strong development skills in Java JDK 1.8 or above

➢ Java fundamentals like exception handling, serialization/deserialization, and immutability concepts

➢ Good fundamental knowledge of Enums, Collections, Annotations, Generics, autoboxing, and data structures

➢ Database: RDBMS/NoSQL (SQL, joins, indexing)

➢ Multithreading (Re-entrant Lock, Fork & Join, Sync, Executor Framework)

➢ Spring Core & Spring Boot, security, transactions

➢ Hands-on experience with JMS (ActiveMQ, RabbitMQ, Kafka, etc.)

➢ Memory management (JVM configuration, profiling, GC), performance tuning, and testing (JMeter or similar tools)

➢ DevOps (CI/CD: Maven/Gradle, Jenkins, quality plugins, Docker, and containerization)

➢ Logical/analytical skills. Thorough understanding of OOPS concepts, design principles, and implementation of different types of design patterns.

➢ Hands-on experience with any of the logging frameworks (SLF4J/LogBack/Log4j)

➢ Experience writing JUnit test cases using Mockito/PowerMock frameworks.

➢ Should have practical experience with Maven/Gradle and knowledge of version control systems like Git/SVN etc.

➢ Good communication skills and ability to work with global teams to define and deliver on projects.

➢ Sound understanding/experience in software development process, test-driven development.

➢ Cloud – AWS / AZURE / GCP / PCF or any private cloud would also be fine

➢ Experience in Microservices

Wissen Technology

at Wissen Technology

4 recruiters
Anurag Sinha
Posted by Anurag Sinha
Pune, Mumbai, Navi Mumbai
5 - 8 yrs
Best in industry
Google Cloud Platform (GCP)
Azure Cloud
Terraform
DevOps
  • Looking for someone to manage IaC modules
  • Terraform experience is a must
  • Work on Terraform modules as part of the central platform team
  • Azure/GCP experience is a must
  • C#/Python/Java coding is good to have

 

Virtana

at Virtana

2 candid answers
Bimla Dhirayan
Posted by Bimla Dhirayan
Pune, Chennai
4 - 10 yrs
Best in industry
Java
Kubernetes
Google Cloud Platform (GCP)
OpenShift
Python
+11 more

Software Engineer 

Challenge convention and work on cutting edge technology that is transforming the way our customers manage their physical, virtual and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise scale solution for providing visibility into performance, health and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.  

We are seeking an individual with knowledge in Systems Management and/or Systems Monitoring Software and/or Performance Management Software and Solutions with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp and public cloud platforms like Google Cloud and AWS to expand the depth and breadth of Virtana Products. 


Work Location- Pune/ Chennai


Job Type- Hybrid

 

Role Responsibilities: 

  • The engineer will be primarily responsible for design and development of software solutions for the Virtana Platform 
  • Partner and work closely with team leads, architects and engineering managers to design and implement new integrations and solutions for the Virtana Platform. 
  • Communicate effectively with people having differing levels of technical knowledge.  
  • Work closely with Quality Assurance and DevOps teams assisting with functional and system testing design and deployment 
  • Provide customers with complex application support, problem diagnosis and problem resolution 

Required Qualifications:    

  • Minimum of 4+ years of experience in a Web Application centric Client Server Application development environment focused on Systems Management, Systems Monitoring and Performance Management Software. 
  • Able to understand integrated infrastructure platforms, with experience working with one or more data collection technologies such as SNMP, REST, OTEL, WMI, and WBEM.
  • Minimum of 4 years of development experience with one of these high-level languages: Python, Java, or Go.
  • Bachelor’s (B.E, B.Tech) or Master’s degree (M.E, M.Tech. MCA) in computer science, Computer Engineering or equivalent 
  • 2 years of development experience in public cloud environment using Kubernetes etc (Google Cloud and/or AWS) 

Desired Qualifications: 

  • Prior experience with other virtualization platforms like OpenShift is a plus 
  • Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus 
  • Demonstrated ability as a strong technical engineer who can design and code with strong communication skills 
  • Firsthand development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus 
  • Ability to use a variety of debugging tools, simulators and test harnesses is a plus 

  

About Virtana:  Virtana delivers the industry’s broadest and deepest Observability Platform that allows organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more.

  

Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana’s software solutions for over a decade. 

  

Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (Fin Ops), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) Software market is ripe for disruption, and Virtana is uniquely positioned for success. 

 

 

Virtana

at Virtana

2 candid answers
Bimla Dhirayan
Posted by Bimla Dhirayan
Pune, Chennai
4 - 10 yrs
Best in industry
Java
Go Programming (Golang)
Kubernetes
Python
Apache Kafka
+13 more

Senior Software Engineer 

Challenge convention and work on cutting edge technology that is transforming the way our customers manage their physical, virtual and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise scale solution for providing visibility into performance, health and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.  

We are seeking an individual with expert knowledge in Systems Management and/or Systems Monitoring Software, Observability platforms and/or Performance Management Software and Solutions with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp and public cloud platforms like Google Cloud and AWS to expand the depth and breadth of Virtana Products. 


Work Location: Pune/ Chennai


Job Type: Hybrid

 

Role Responsibilities: 

  • The engineer will be primarily responsible for architecture, design and development of software solutions for the Virtana Platform 
  • Partner and work closely with cross functional teams and with other engineers and product managers to architect, design and implement new features and solutions for the Virtana Platform. 
  • Communicate effectively across the departments and R&D organization having differing levels of technical knowledge.  
  • Work closely with UX Design, Quality Assurance, DevOps and Documentation teams. Assist with functional and system test design and deployment automation 
  • Provide customers with complex and end-to-end application support, problem diagnosis and problem resolution 
  • Learn new technologies quickly and leverage 3rd party libraries and tools as necessary to expedite delivery 

 

Required Qualifications:    

  • Minimum of 7+ years of progressive experience with back-end development in a Client Server Application development environment focused on Systems Management, Systems Monitoring and Performance Management Software. 
  • Deep experience in public cloud environment using Kubernetes and other distributed managed services like Kafka etc (Google Cloud and/or AWS) 
  • Experience with CI/CD and cloud-based software development and delivery 
  • Deep experience with integrated infrastructure platforms and experience working with one or more data collection technologies like SNMP, REST, OTEL, WMI, WBEM. 
  • Minimum of 6 years of development experience with one or more high-level languages such as Go, Python, or Java; deep experience with at least one of these languages is required.
  • Bachelor’s or Master’s degree in computer science, Computer Engineering or equivalent 
  • Highly effective verbal and written communication skills and ability to lead and participate in multiple projects 
  • Well versed with identifying opportunities and risks in a fast-paced environment and ability to adjust to changing business priorities 
  • Must be results-focused, team-oriented and with a strong work ethic 

 

Desired Qualifications: 

  • Prior experience with other virtualization platforms like OpenShift is a plus 
  • Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus 
  • Demonstrated ability as a lead engineer who can architect, design and code with strong communication and teaming skills 
  • Deep development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus 

  

About Virtana:  Virtana delivers the industry’s broadest and deepest Observability Platform that allows organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more.

  

Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana’s software solutions for over a decade. 

  

Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (Fin Ops), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) Software market is ripe for disruption, and Virtana is uniquely positioned for success. 

 

Edutech Platform

Edutech Platform

Agency job
via Scaling Theory by Keerthana Prabkharan
Pune
2 - 5 yrs
₹25L - ₹30L / yr
NodeJS (Node.js)
Express
Amazon Web Services (AWS)
Google Cloud Platform (GCP)

Responsibilities:

● Design and build scalable APIs and microservices in Node.js (or equivalent backend frameworks).

● Develop and optimize high-performance systems handling large-scale data and concurrent users.

● Ensure system security, reliability, and fault tolerance.

● Collaborate closely with product managers, designers, and frontend engineers for seamless delivery.

● Write clean, maintainable, and well-documented code with a focus on best practices.

● Contribute to architectural decisions, technology choices, and overall system design.

● Monitor, debug, and continuously improve backend performance.

● Stay updated with modern backend technologies and bring innovation into the product.

Desired Qualifications & Skillset:

● 2+ years of professional backend development experience.

● Proficiency with Node.js, Express.js, or similar frameworks.

● Strong knowledge of web application architecture, databases (SQL/NoSQL), and caching strategies.

● Experience with cloud platforms (AWS/GCP/Azure), CI/CD pipelines, and containerization (Docker/Kubernetes) is a plus.

● Ability to break down complex problems into scalable solutions.

● Strong logical aptitude, quick learning ability, and a proactive mindset

Wissen Technology

at Wissen Technology

4 recruiters
Amita Soni
Posted by Amita Soni
Pune, Bengaluru (Bangalore), Mumbai
4 - 7 yrs
Best in industry
Google Cloud Platform (GCP)
GKE
Microsoft Windows Azure
Terraform

Job Description


We are looking for an experienced GCP Cloud Engineer to design, implement, and manage cloud-based solutions on Google Cloud Platform (GCP). The ideal candidate should have expertise in GKE (Google Kubernetes Engine), Cloud Run, Cloud Load Balancer, Cloud Functions, Azure DevOps, and Terraform, with a strong focus on automation, security, and scalability.


Work location: Pune/Mumbai/Bangalore


Experience: 4-7 Years 


Joining: Mid-October


You will work closely with development, operations, and security teams to ensure robust cloud infrastructure and CI/CD pipelines while optimizing performance and cost.


Key Responsibilities:

1. Cloud Infrastructure Design & Management

· Architect, deploy, and maintain GCP cloud resources via Terraform or other automation.

· Implement Google Cloud Storage, Cloud SQL, and Filestore for data storage and processing needs.

· Manage and configure Cloud Load Balancers (HTTP(S), TCP/UDP, and SSL Proxy) for high availability and scalability.

· Optimize resource allocation, monitoring, and cost efficiency across GCP environments.

2. Kubernetes & Container Orchestration

· Deploy, manage, and optimize workloads on Google Kubernetes Engine (GKE).

· Work with Helm charts, Istio, and service meshes for microservices deployments.

· Automate scaling, rolling updates, and zero-downtime deployments.


3. Serverless & Compute Services

· Deploy and manage applications on Cloud Run and Cloud Functions for scalable, serverless workloads.

· Optimize containerized applications running on Cloud Run for cost efficiency and performance.


4. CI/CD & DevOps Automation

· Design, implement, and manage CI/CD pipelines using Azure DevOps.

· Automate infrastructure deployment using Terraform, Bash, and PowerShell scripting

· Integrate security and compliance checks into the DevOps workflow (DevSecOps).


Required Skills & Qualifications:

✔ Experience: 4+ years in Cloud Engineering, with a focus on GCP.

✔ Cloud Expertise: Strong knowledge of GCP services (GKE, Compute Engine, IAM, VPC, Cloud Storage, Cloud SQL, Cloud Functions).

✔ Kubernetes & Containers: Experience with GKE, Docker, GKE Networking, Helm.

✔ DevOps Tools: Hands-on experience with Azure DevOps for CI/CD pipeline automation.

✔ Infrastructure-as-Code (IaC): Expertise in Terraform for provisioning cloud resources.

✔ Scripting & Automation: Proficiency in Python, Bash, or PowerShell for automation.

✔ Security & Compliance: Knowledge of cloud security principles, IAM, and compliance standards.
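As a hedged illustration of the IaC automation described above (Terraform-driven provisioning wired into CI/CD), the sketch below wraps terraform init/plan/apply from Python. The directory layout and workspace name are assumptions, and in practice this would run inside an Azure DevOps pipeline stage rather than by hand.

    """Minimal Terraform wrapper sketch for a GCP environment (paths/workspace are hypothetical)."""
    import subprocess
    import sys
    from pathlib import Path

    TF_DIR = Path("infra/gcp")          # hypothetical Terraform root module
    WORKSPACE = "staging"               # hypothetical workspace, assumed to already exist

    def tf(*args: str) -> None:
        """Run a terraform command in the module directory and fail fast on errors."""
        subprocess.run(["terraform", *args], cwd=TF_DIR, check=True)

    def deploy(apply: bool = False) -> None:
        tf("init", "-input=false")
        tf("workspace", "select", WORKSPACE)
        tf("plan", "-input=false", "-out=tfplan")
        if apply:
            tf("apply", "-input=false", "tfplan")

    if __name__ == "__main__":
        # Pass --apply to actually apply the saved plan (e.g. from a pipeline stage).
        deploy(apply="--apply" in sys.argv)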


About Wissen Technology

Wissen Technology, established in 2015 and part of the Wissen Group (founded in 2000), is a specialized technology consulting company. We pride ourselves on delivering high-quality solutions for global organizations across Banking & Finance, Telecom, and Healthcare domains.

 

Here’s why Wissen Technology stands out:

 

Global Presence: Offices in US, India, UK, Australia, Mexico, and Canada.

Expert Team: Wissen Group comprises over 4000 highly skilled professionals worldwide, with Wissen Technology contributing 1400 of these experts. Our team includes graduates from prestigious institutions such as Wharton, MIT, IITs, IIMs, and NITs.

Recognitions: Great Place to Work® Certified.

Featured as a Top 20 AI/ML Vendor by CIO Insider (2020).

Impressive Growth: Achieved 400% revenue growth in 5 years without external funding.

Successful Projects: Delivered $650 million worth of projects to 20+ Fortune 500 companies.

 

For more details:

 

Website: www.wissen.com 

Wissen Thought leadership : https://www.wissen.com/articles/ 

 

LinkedIn: Wissen Technology

Virtana

at Virtana

2 candid answers
Bimla Dhirayan
Posted by Bimla Dhirayan
Pune, Chennai
4 - 10 yrs
Best in industry
Java
Go Programming (Golang)
Docker
OpenShift
network performance
+13 more

Senior Software Engineer 

Challenge convention and work on cutting edge technology that is transforming the way our customers manage their physical, virtual and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise scale solution for providing visibility into performance, health and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.  

We are seeking an individual with expert knowledge in Systems Management and/or Systems Monitoring Software, Observability platforms and/or Performance Management Software and Solutions with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp and public cloud platforms like Google Cloud and AWS to expand the depth and breadth of Virtana Products. 

 

Role Responsibilities: 

  • The engineer will be primarily responsible for architecture, design and development of software solutions for the Virtana Platform 
  • Partner and work closely with cross functional teams and with other engineers and product managers to architect, design and implement new features and solutions for the Virtana Platform. 
  • Communicate effectively across the departments and R&D organization having differing levels of technical knowledge.  
  • Work closely with UX Design, Quality Assurance, DevOps and Documentation teams. Assist with functional and system test design and deployment automation 
  • Provide customers with complex and end-to-end application support, problem diagnosis and problem resolution 
  • Learn new technologies quickly and leverage 3rd party libraries and tools as necessary to expedite delivery 

 

Required Qualifications:    

  • Minimum of 4-10 years of progressive experience with back-end development in a Client Server Application development environment focused on Systems Management, Systems Monitoring and Performance Management Software. 
  • Deep experience in public cloud environment using Kubernetes and other distributed managed services like Kafka etc (Google Cloud and/or AWS) 
  • Experience with CI/CD and cloud-based software development and delivery 
  • Deep experience with integrated infrastructure platforms and experience working with one or more data collection technologies like SNMP, REST, OTEL, WMI, WBEM. 
  • Minimum of 6 years of development experience with one or more high-level languages such as Go, Python, or Java; deep experience with at least one of these languages is required.
  • Bachelor’s or Master’s degree in computer science, Computer Engineering or equivalent 
  • Highly effective verbal and written communication skills and ability to lead and participate in multiple projects 
  • Well versed with identifying opportunities and risks in a fast-paced environment and ability to adjust to changing business priorities 
  • Must be results-focused, team-oriented and with a strong work ethic 

 

Desired Qualifications: 

  • Prior experience with other virtualization platforms like OpenShift is a plus 
  • Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus 
  • Demonstrated ability as a lead engineer who can architect, design and code with strong communication and teaming skills 
  • Deep development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus 

  

About Virtana: 

Virtana delivers the industry’s broadest and deepest Observability Platform that allows organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more.

  

Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana’s software solutions for over a decade. 

  

Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (Fin Ops), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) Software market is ripe for disruption, and Virtana is uniquely positioned for success. 

 

 

NeoGenCode Technologies Pvt Ltd
Akshay Patil
Posted by Akshay Patil
Pune
6 - 10 yrs
₹12L - ₹23L / yr
Machine Learning (ML)
Deep Learning
Natural Language Processing (NLP)
Computer Vision
Data engineering
+8 more

Job Title : AI Architect

Location : Pune (On-site | 3 Days WFO)

Experience : 6+ Years

Shift : US or flexible shifts


Job Summary :

We are looking for an experienced AI Architect to design and deploy AI/ML solutions that align with business goals.

The role involves leading end-to-end architecture, model development, deployment, and integration using modern AI/ML tools and cloud platforms (AWS/Azure/GCP).


Key Responsibilities :

  • Define AI strategy and identify business use cases
  • Design scalable AI/ML architectures
  • Collaborate on data preparation, model development & deployment
  • Ensure data quality, governance, and ethical AI practices
  • Integrate AI into existing systems and monitor performance

Must-Have Skills :

  • Machine Learning, Deep Learning, NLP, Computer Vision
  • Data Engineering, Model Deployment (CI/CD, MLOps)
  • Python Programming, Cloud (AWS/Azure/GCP)
  • Distributed Systems, Data Governance
  • Strong communication & stakeholder collaboration

Good to Have :

  • AI certifications (Azure/GCP/AWS)
  • Experience in big data and analytics
Blitzy

at Blitzy

2 candid answers
1 product
Eman Khan
Posted by Eman Khan
Pune
6 - 10 yrs
₹40L - ₹70L / yr
Python
Django
Flask
FastAPI
Google Cloud Platform (GCP)
+1 more

Requirements

  • 7+ years of experience with Python
  • Strong expertise in Python frameworks (Django, Flask, or FastAPI)
  • Experience with GCP, Terraform, and Kubernetes
  • Deep understanding of REST API development and GraphQL
  • Strong knowledge of SQL and NoSQL databases
  • Experience with microservices architecture
  • Proficiency with CI/CD tools (Jenkins, CircleCI, GitLab)
  • Experience with container orchestration using Kubernetes
  • Understanding of cloud architecture and serverless computing
  • Experience with monitoring and logging solutions
  • Strong background in writing unit and integration tests
  • Familiarity with AI/ML concepts and integration points


Responsibilities

  • Design and develop scalable backend services for our AI platform
  • Architect and implement complex systems with high reliability
  • Build and maintain APIs for internal and external consumption
  • Work closely with AI engineers to integrate ML functionality
  • Optimize application performance and resource utilization
  • Make architectural decisions that balance immediate needs with long-term scalability
  • Mentor junior engineers and promote best practices
  • Contribute to the evolution of our technical standards and processes
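Since the requirements above name FastAPI and REST API development, here is a minimal, hedged sketch of a FastAPI service exposing a couple of endpoints. The routes and payload shape are illustrative only, not this platform's actual API.

    """Minimal FastAPI service sketch (routes and payload shape are illustrative)."""
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI(title="example-backend")

    class EchoRequest(BaseModel):
        message: str

    @app.get("/health")
    def health() -> dict:
        return {"status": "ok"}

    @app.post("/echo")
    def echo(payload: EchoRequest) -> dict:
        # A real service would call into business logic or an ML integration here.
        return {"echo": payload.message}

    # Run locally with: uvicorn main:app --reload   (assumes this file is main.py)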
AI driven consulting firm

AI driven consulting firm

Agency job
via PLEXO HR Solutions by Upashna Kumari
Pune
1 - 3 yrs
₹3L - ₹5L / yr
Google Cloud Platform (GCP)
CI/CD
Kubernetes
Terraform
Linux/Unix

What You’ll Do:

We’re looking for a skilled DevOps Engineer to help us build and maintain reliable, secure, and scalable infrastructure. You will work closely with our development, product, and security teams to streamline deployments, improve performance, and ensure cloud infrastructure resilience.


Responsibilities:

● Deploy, manage, and monitor infrastructure on Google Cloud Platform (GCP)

● Build CI/CD pipelines using Jenkins and integrate them with Git workflows

● Design and manage Kubernetes clusters and helm-based deployments

● Manage infrastructure as code using Terraform

● Set up logging, monitoring, and alerting (Stackdriver, Prometheus, Grafana)

● Ensure security best practices across cloud resources, networks, and secrets

● Automate repetitive operations and improve system reliability

● Collaborate with developers to troubleshoot and resolve issues in staging/production environments


What We’re Looking For:

Required Skills:

● 1–3 years of hands-on experience in a DevOps or SRE role

● Strong knowledge of GCP services (IAM, GKE, Cloud Run, VPC, Cloud Build, etc.)

● Proficiency in Kubernetes (deployment, scaling, troubleshooting)

● Experience with Terraform for infrastructure provisioning

● CI/CD pipeline setup using Jenkins, GitHub Actions, or similar tools

● Understanding of DevSecOps principles and cloud security practices

● Good command over Linux, shell scripting, and basic networking concepts


Nice to have:

● Experience with Docker, Helm, ArgoCD

● Exposure to other cloud platforms (AWS, Azure)

● Familiarity with incident response and disaster recovery planning

● Knowledge of logging and monitoring tools like ELK, Prometheus, Grafana
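To illustrate the monitoring and alerting portion of the role (Prometheus is named above), here is a minimal Python sketch that queries Prometheus's HTTP API for recently restarted pods. The Prometheus address and query window are assumptions.

    """Minimal Prometheus query sketch for a pod-restart check (URL/window are assumptions)."""
    import requests

    PROMETHEUS_URL = "http://prometheus.monitoring.svc:9090"  # hypothetical in-cluster address
    # Pods that restarted in the last 15 minutes, per the standard kube-state-metrics metric.
    QUERY = "increase(kube_pod_container_status_restarts_total[15m]) > 0"

    def restarting_pods() -> list[dict]:
        resp = requests.get(
            f"{PROMETHEUS_URL}/api/v1/query", params={"query": QUERY}, timeout=10
        )
        resp.raise_for_status()
        return resp.json()["data"]["result"]

    if __name__ == "__main__":
        for series in restarting_pods():
            labels = series["metric"]
            restarts = float(series["value"][1])
            print(f"{labels.get('namespace')}/{labels.get('pod')}: {restarts:.0f} restarts")

A check like this would normally live behind an alerting rule in Prometheus/Grafana itself; the script form is only to show the API shape.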

Wissen Technology

at Wissen Technology

4 recruiters
Praffull Shinde
Posted by Praffull Shinde
Pune, Mumbai, Bengaluru (Bangalore)
4 - 8 yrs
₹14L - ₹26L / yr
Python
PySpark
Django
Flask
RESTful APIs
+3 more

Job title – Python Developer

Experience – 4 to 6 years

Location – Pune/Mumbai/Bengaluru

 

Please find the job description below:

Requirements:

  • Proven experience as a Python Developer
  • Strong knowledge of core Python and PySpark concepts
  • Experience with web frameworks such as Django or Flask
  • Good exposure to any cloud platform (GCP Preferred)
  • CI/CD exposure required
  • Solid understanding of RESTful APIs and how to build them
  • Experience working with databases like Oracle DB and MySQL
  • Ability to write efficient SQL queries and optimize database performance
  • Strong problem-solving skills and attention to detail
  • Strong SQL programming (stored procedures, functions)
  • Excellent communication and interpersonal skills

Roles and Responsibilities

  • Design, develop, and maintain data pipelines and ETL processes using PySpark.
  • Work closely with data scientists and analysts to provide them with clean, structured data.
  • Optimize data storage and retrieval for performance and scalability.
  • Collaborate with cross-functional teams to gather data requirements.
  • Ensure data quality and integrity through data validation and cleansing processes.
  • Monitor and troubleshoot data-related issues to ensure data pipeline reliability.
  • Stay up to date with industry best practices and emerging technologies in data engineering.
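As a brief illustration of the PySpark pipeline work described above, this is a hedged sketch of a tiny ETL job; the input/output paths and column names are placeholders.

    """Tiny PySpark ETL sketch (paths and column names are placeholders)."""
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders-etl").getOrCreate()

    # Extract: read raw CSV data (schema inferred here only for brevity).
    orders = spark.read.csv("data/raw/orders.csv", header=True, inferSchema=True)

    # Transform: basic cleansing/validation plus a simple daily aggregate.
    daily_totals = (
        orders
        .dropna(subset=["order_id", "amount"])
        .filter(F.col("amount") > 0)
        .groupBy("order_date")
        .agg(F.sum("amount").alias("total_amount"), F.count("*").alias("order_count"))
    )

    # Load: write partitioned Parquet for downstream analysts and data scientists.
    daily_totals.write.mode("overwrite").partitionBy("order_date").parquet("data/curated/daily_totals")

    spark.stop()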
Wissen Technology

at Wissen Technology

4 recruiters
Shikha Nagar
Posted by Shikha Nagar
Pune, Mumbai, Bengaluru (Bangalore)
8 - 10 yrs
Best in industry
Terraform
Google Cloud Platform (GCP)
Kubernetes
DevOps
SQL Azure

We are looking for an experienced GCP Cloud Engineer to design, implement, and manage cloud-based solutions on Google Cloud Platform (GCP). The ideal candidate should have expertise in GKE (Google Kubernetes Engine), Cloud Run, Cloud Load Balancer, Cloud Functions, Azure DevOps, and Terraform, with a strong focus on automation, security, and scalability.

You will work closely with development, operations, and security teams to ensure robust cloud infrastructure and CI/CD pipelines while optimizing performance and cost.


Key Responsibilities:

1. Cloud Infrastructure Design & Management

· Architect, deploy, and maintain GCP cloud resources via Terraform or other automation.

· Implement Google Cloud Storage, Cloud SQL, and Filestore for data storage and processing needs.

· Manage and configure Cloud Load Balancers (HTTP(S), TCP/UDP, and SSL Proxy) for high availability and scalability.

· Optimize resource allocation, monitoring, and cost efficiency across GCP environments.

2. Kubernetes & Container Orchestration

· Deploy, manage, and optimize workloads on Google Kubernetes Engine (GKE).

· Work with Helm charts, Istio, and service meshes for microservices deployments.

· Automate scaling, rolling updates, and zero-downtime deployments.


3. Serverless & Compute Services

· Deploy and manage applications on Cloud Run and Cloud Functions for scalable, serverless workloads.

· Optimize containerized applications running on Cloud Run for cost efficiency and performance.


4. CI/CD & DevOps Automation

· Design, implement, and manage CI/CD pipelines using Azure DevOps.

· Automate infrastructure deployment using Terraform, Bash, and PowerShell scripting

· Integrate security and compliance checks into the DevOps workflow (DevSecOps).


Required Skills & Qualifications:

✔ Experience: 8+ years in Cloud Engineering, with a focus on GCP.

✔ Cloud Expertise: Strong knowledge of GCP services (GKE, Compute Engine, IAM, VPC, Cloud Storage, Cloud SQL, Cloud Functions).

✔ Kubernetes & Containers: Experience with GKE, Docker, GKE Networking, Helm.

✔ DevOps Tools: Hands-on experience with Azure DevOps for CI/CD pipeline automation.

✔ Infrastructure-as-Code (IaC): Expertise in Terraform for provisioning cloud resources.

✔ Scripting & Automation: Proficiency in Python, Bash, or PowerShell for automation.

✔ Security & Compliance: Knowledge of cloud security principles, IAM, and compliance standards.


Talent Pro
Mayank choudhary
Posted by Mayank choudhary
Pune, Bengaluru (Bangalore)
1 - 5 yrs
₹10L - ₹25L / yr
Google Cloud Platform (GCP)
Strong Site Reliability Engineer (SRE - CloudOps) Profile Mandatory (Experience...
  • Strong Site Reliability Engineer (SRE - CloudOps) Profile
  • Mandatory (Experience 1) - Must have a minimum of 1 year of experience in SRE (CloudOps)
  • Mandatory (Core Skill 1) - Must have experience with Google Cloud platforms (GCP)
  • Mandatory (Core Skill 2) - Experience with monitoring, APM, and alerting tools like Prometheus, Grafana, ELK, Newrelic, Pingdom, or Pagerduty
  • Mandatory (Core Skill 3) - Hands-on experience with Kubernetes for orchestration and container management.
  • Mandatory (Company) - B2C Product Companies.


Read more
NeoGenCode Technologies Pvt Ltd
Akshay Patil
Posted by Akshay Patil
Bengaluru (Bangalore), Pune, Hyderabad, Chennai, Kolkata
8 - 15 yrs
₹25L - ₹45L / yr
Java
Spring Boot
Microservices
Leadership
Team leadership
+11 more

Job Title : Lead Java Developer (Backend)

Experience Required : 8 to 15 Years

Open Positions : 5

Location : Any major metro city (Bengaluru, Pune, Chennai, Kolkata, Hyderabad)

Work Mode : Open to Remote / Hybrid / Onsite

Notice Period : Immediate Joiner/30 Days or Less


About the Role :

  • We are looking for experienced Lead Java Developers who bring not only strong backend development skills but also a product-oriented mindset and leadership capability.
  • This is an opportunity to be part of high-impact digital transformation initiatives that go beyond writing code—you’ll help shape future-ready platforms and drive meaningful change.
  • This role is embedded within a forward-thinking digital engineering team that thrives on co-innovation, lean delivery, and end-to-end ownership of platforms and products.


Key Responsibilities :

  • Design, develop, and implement scalable backend systems using Java and Spring Boot.
  • Collaborate with product managers, designers, and engineers to build intuitive and reliable digital products.
  • Advocate and implement engineering best practices : SOLID principles, OOP, clean code, CI/CD, TDD/BDD.
  • Lead Agile-based development cycles with a focus on speed, quality, and customer outcomes.
  • Guide and mentor team members, fostering technical excellence and ownership.
  • Utilize cloud platforms and DevOps tools to ensure performance and reliability of applications.

What We’re Looking For :

  • Proven experience in Java backend development (Spring Boot, Microservices).
  • 8+ Years of hands-on engineering experience with at least 2+ years in a Lead role.
  • Familiarity with cloud platforms such as AWS, Azure, or GCP.
  • Good understanding of containerization and orchestration tools like Docker and Kubernetes.
  • Exposure to DevOps and Infrastructure as Code practices.
  • Strong problem-solving skills and the ability to design solutions from first principles.
  • Prior experience in product-based or startup environments is a big plus.

Ideal Candidate Profile :

  • A tech enthusiast with a passion for clean code and scalable architecture.
  • Someone who thrives in collaborative, transparent, and feedback-driven environments.
  • A leader who takes ownership beyond individual deliverables to drive overall team and project success.

Interview Process

  1. Initial Technical Screening (via platform partner)
  2. Technical Interview with Engineering Team
  3. Client-facing Final Round

Additional Info :

  • Targeting profiles from product/startup backgrounds.
  • Strong preference for candidates with under 1 month of notice period.
  • Interviews will be fast-tracked for qualified profiles.
Read more
Xebia IT Architects

at Xebia IT Architects

2 recruiters
Vijay S
Posted by Vijay S
Bengaluru (Bangalore), Gurugram, Pune, Hyderabad, Chennai, Bhopal, Jaipur
10 - 15 yrs
₹30L - ₹40L / yr
Spark
Google Cloud Platform (GCP)
Python
Apache Airflow
PySpark
+1 more

We are looking for a Senior Data Engineer with strong expertise in GCP, Databricks, and Airflow to design and implement a GCP Cloud Native Data Processing Framework. The ideal candidate will work on building scalable data pipelines and help migrate existing workloads to a modern framework.


  • Shift: 2 PM – 11 PM
  • Work Mode: Hybrid (3 days a week) across Xebia locations
  • Notice Period: Immediate joiners or those with a notice period of up to 30 days


Key Responsibilities:

  • Design and implement a GCP Native Data Processing Framework leveraging Spark and GCP Cloud Services.
  • Develop and maintain data pipelines using Databricks and Airflow for transforming Raw → Silver → Gold data layers (see the DAG sketch after this list).
  • Ensure data integrity, consistency, and availability across all systems.
  • Collaborate with data engineers, analysts, and stakeholders to optimize performance.
  • Document standards and best practices for data engineering workflows.
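
A minimal sketch of the kind of DAG implied above, assuming Airflow 2.x with PythonOperator tasks; the task bodies and dataset names are placeholders, and in practice these steps might submit Databricks or Spark jobs instead:

```python
# Minimal Airflow 2.x DAG sketch for a Raw -> Silver -> Gold flow.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def raw_to_silver():
    print("cleanse and conform raw data into the silver layer")  # placeholder logic


def silver_to_gold():
    print("aggregate silver data into gold, analytics-ready tables")  # placeholder logic


with DAG(
    dag_id="raw_silver_gold_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    to_silver = PythonOperator(task_id="raw_to_silver", python_callable=raw_to_silver)
    to_gold = PythonOperator(task_id="silver_to_gold", python_callable=silver_to_gold)

    to_silver >> to_gold   # gold depends on silver
```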

Required Experience:


  • 7-8 years of experience in data engineering, architecture, and pipeline development.
  • Strong knowledge of GCP, Databricks, PySpark, and BigQuery.
  • Experience with Orchestration tools like Airflow, Dagster, or GCP equivalents.
  • Understanding of Data Lake table formats (Delta, Iceberg, etc.).
  • Proficiency in Python for scripting and automation.
  • Strong problem-solving skills and collaborative mindset.


⚠️ Please apply only if you have not applied recently or are not currently in the interview process for any open roles at Xebia.


Looking forward to your response!


Best regards,

Vijay S

Assistant Manager - TAG

https://www.linkedin.com/in/vijay-selvarajan/

Read more
Gruve
Reshika Mendiratta
Posted by Reshika Mendiratta
Bengaluru (Bangalore), Pune
8yrs+
Up to ₹50L / yr (varies)
DevOps
CI/CD
Git
Kubernetes
Ansible
+7 more

About the Company:

Gruve is an innovative software services startup dedicated to empowering enterprise customers in managing their data life cycle. We specialize in cyber security, customer experience, infrastructure, and advanced technologies such as machine learning and artificial intelligence. Our mission is to assist our customers in their business strategies, utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.

 

Why Gruve:

At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.

Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.

 

Position summary:

We are seeking a Staff Engineer – DevOps with 8-12 years of experience in designing, implementing, and optimizing CI/CD pipelines, cloud infrastructure, and automation frameworks. The ideal candidate will have expertise in Kubernetes, Terraform, CI/CD, Security, Observability, and Cloud Platforms (AWS, Azure, GCP). You will play a key role in scaling and securing our infrastructure, improving developer productivity, and ensuring high availability and performance. 



Key Roles & Responsibilities:

  • Design, implement, and maintain CI/CD pipelines using tools like Jenkins, GitLab CI/CD, ArgoCD, and Tekton.
  • Deploy and manage Kubernetes clusters (EKS, AKS, GKE) and containerized workloads.
  • Automate infrastructure provisioning using Terraform, Ansible, Pulumi, or CloudFormation.
  • Implement observability and monitoring solutions using Prometheus, Grafana, ELK, OpenTelemetry, or Datadog.
  • Ensure security best practices in DevOps, including IAM, secrets management, container security, and vulnerability scanning.
  • Optimize cloud infrastructure (AWS, Azure, GCP) for performance, cost efficiency, and scalability.
  • Develop and manage GitOps workflows and infrastructure-as-code (IaC) automation.
  • Implement zero-downtime deployment strategies, including blue-green deployments, canary releases, and feature flags (see the sketch after this list).
  • Work closely with development teams to optimize build pipelines, reduce deployment time, and improve system reliability. 
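
A hedged sketch of one way to drive a rolling, zero-downtime image update on a Kubernetes Deployment with the official Python client; the namespace, deployment, container, and image names are placeholders:

```python
# Minimal sketch: patch the pod template image to trigger a RollingUpdate.
from kubernetes import client, config


def rolling_update(namespace: str, deployment: str, container: str, image: str) -> None:
    config.load_kube_config()          # assumes kubeconfig access to the target cluster
    apps = client.AppsV1Api()
    patch = {
        "spec": {
            "template": {
                "spec": {"containers": [{"name": container, "image": image}]}
            }
        }
    }
    # Patching the pod template triggers the Deployment's RollingUpdate strategy,
    # replacing pods gradually so the service stays available.
    apps.patch_namespaced_deployment(name=deployment, namespace=namespace, body=patch)


if __name__ == "__main__":
    rolling_update("prod", "web-frontend", "web", "registry.example.com/web:1.4.2")
```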


Basic Qualifications:

  • A bachelor’s or master’s degree in computer science, electronics engineering or a related field
  • 8-12 years of experience in DevOps, Site Reliability Engineering (SRE), or Infrastructure Automation.
  • Strong expertise in CI/CD pipelines, version control (Git), and release automation.
  •  Hands-on experience with Kubernetes (EKS, AKS, GKE) and container orchestration.
  • Proficiency in Terraform and Ansible for infrastructure automation.
  • Experience with AWS, Azure, or GCP services (EC2, S3, IAM, VPC, Lambda, API Gateway, etc.).
  • Expertise in monitoring/logging tools such as Prometheus, Grafana, ELK, OpenTelemetry, or Datadog.
  • Strong scripting and automation skills in Python, Bash, or Go.


Preferred Qualifications 

  • Experience in FinOps (Cloud Cost Optimization) and Kubernetes cluster scaling.
  • Exposure to serverless architectures and event-driven workflows.
  • Contributions to open-source DevOps projects. 
Read more
Xebia IT Architects

at Xebia IT Architects

2 recruiters
Vijay S
Posted by Vijay S
Bengaluru (Bangalore), Pune, Hyderabad, Chennai, Jaipur, Bhopal, Gurugram
5 - 11 yrs
₹30L - ₹40L / yr
Scala
Microservices
CI/CD
DevOps
Amazon Web Services (AWS)
+2 more

Dear,


We are excited to inform you about an exclusive opportunity at Xebia for a Senior Backend Engineer role.


📌 Job Details:

  • Role: Senior Backend Engineer
  •  Shift: 1 PM – 10 PM
  • Work Mode: Hybrid (3 days a week) across Xebia locations
  • Notice Period: Immediate joiners or up to 30 days


🔹 Job Responsibilities:


✅ Design and develop scalable, reliable, and maintainable backend solutions

✅ Work on event-driven microservices architecture

✅ Implement REST APIs and optimize backend performance

✅ Collaborate with cross-functional teams to drive innovation

✅ Mentor junior and mid-level engineers


🔹 Required Skills:


✔ Backend Development: Scala (preferred), Java, Kotlin

✔ Cloud: AWS or GCP

✔ Databases: MySQL, NoSQL (Cassandra)

✔ DevOps & CI/CD: Jenkins, Terraform, Infrastructure as Code

✔ Messaging & Caching: Kafka, RabbitMQ, Elasticsearch

✔ Agile Methodologies: Scrum, Kanban


⚠ Please apply only if you have not applied recently or are not currently in the interview process for any open roles at Xebia.


Looking forward to your response! Also, feel free to refer anyone in your network who might be a good fit.


Best regards,

Vijay S

Assistant Manager - TAG

https://www.linkedin.com/in/vijay-selvarajan/

Read more
DeepIntent

at DeepIntent

2 candid answers
17 recruiters
Shruti Wankhade
Posted by Shruti Wankhade
Pune
4 - 8 yrs
Best in industry
SQL
Java
Spring Boot
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
+1 more

What You’ll Do:


* Establish formal data practice for the organisation.

* Build & operate scalable and robust data architectures.

* Create pipelines for the self-service introduction and usage of new data.

* Implement DataOps practices

* Design, Develop, and operate Data Pipelines which support Data scientists and machine learning Engineers.

* Build simple, highly reliable Data storage, ingestion, and transformation solutions which are easy to deploy and manage.

* Collaborate with various business stakeholders, software engineers, machine learning engineers, and analysts.

 

Who You Are:


* Experience in designing, developing and operating configurable Data pipelines serving high volume and velocity data.

* Experience working with public clouds like GCP/AWS.

* Good understanding of software engineering, DataOps, data architecture, Agile and DevOps methodologies.

* Experience building Data architectures that optimize performance and cost, whether the components are prepackaged or homegrown.

* Proficient with SQL, Java, Spring Boot, Python or another JVM-based language, and Bash.

* Experience with Apache open-source projects such as Spark, Druid, Beam, Airflow, etc., and big data databases like BigQuery, ClickHouse, etc.

* Good communication skills with the ability to collaborate with both technical and non-technical people.

* Ability to Think Big, take bets and innovate, Dive Deep, Bias for Action, Hire and Develop the Best, Learn and be Curious

Read more
Appz global Tech Pvt Ltd
Bengaluru (Bangalore), Pune
6 - 9 yrs
₹18L - ₹25L / yr
JPA
Google Cloud Platform (GCP)

Urgent Hiring: Senior Java Developers | Bangalore (Hybrid) 🚀


We are looking for experienced Java professionals to join our team! If you have the right skills and are ready to make an impact, this is your opportunity!


📌 Role: Senior Java Developer

📌 Experience: 6 to 9 Years

📌 Education: BE/BTech/MCA (Full-time)

📌 Location: Bangalore (Hybrid)

📌 Notice Period: Immediate Joiners Only


✅ Mandatory Skills:


🔹 Strong Core Java

🔹 Spring Boot (data flow basics)

🔹 JPA

🔹 Google Cloud Platform (GCP)

🔹 Spring Framework

🔹 Docker, Kubernetes (Good to have)

Read more
TOP MNC


Agency job
via TCDC by Sheik Noor
Bengaluru (Bangalore), Mangalore, Chennai, Coimbatore, Pune, Mumbai, Kolkata
6 - 10 yrs
₹10L - ₹21L / yr
Java
Google Cloud Platform (GCP)

Java Developer with GCP

Skills: Java and Spring Boot, GCP, Cloud Storage, BigQuery, RESTful APIs

EXP : SA(6-10 Years)

Loc : Bangalore, Mangalore, Chennai, Coimbatore, Pune, Mumbai, Kolkata

Np : Immediate to 60 Days.


Kindly share your updated resume via WA - 91five000260seven

Read more
Xebia IT Architects

at Xebia IT Architects

2 recruiters
Vijay S
Posted by Vijay S
Bengaluru (Bangalore), Pune, Hyderabad, Chennai, Gurugram, Bhopal, Jaipur
5 - 15 yrs
₹20L - ₹35L / yr
Spark
ETL
Data Transformation Tool (DBT)
Python
Apache Airflow
+2 more

We are seeking a highly skilled and experienced Offshore Data Engineer. The role involves designing, implementing, and testing data pipelines and products.


Qualifications & Experience:


Bachelor's or master's degree in Computer Science, Information Systems, or a related field.


5+ years of experience in data engineering, with expertise in data architecture and pipeline development.


☁️ Proven experience with GCP, BigQuery, Databricks, Airflow, Spark, DBT, and GCP services.


Hands-on experience with ETL processes, SQL, PostgreSQL, MySQL, MongoDB, Cassandra.


Strong proficiency in Python and data modelling.


Experience in testing and validation of data pipelines.
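
For illustration, a minimal pytest-style validation of a data transform, assuming pandas; the transform and its expected schema are invented placeholders rather than any client's actual pipeline:

```python
# Minimal sketch: unit-testing a small data transform as part of pipeline validation.
import pandas as pd


def transform_orders(raw: pd.DataFrame) -> pd.DataFrame:
    """Example transform: drop rows without an order id and add a total column."""
    cleaned = raw.dropna(subset=["order_id"]).copy()
    cleaned["total"] = cleaned["quantity"] * cleaned["unit_price"]
    return cleaned


def test_transform_orders_schema_and_rules():
    raw = pd.DataFrame(
        {
            "order_id": ["A1", None, "A3"],
            "quantity": [2, 1, 5],
            "unit_price": [10.0, 3.5, 2.0],
        }
    )
    result = transform_orders(raw)

    assert {"order_id", "quantity", "unit_price", "total"} <= set(result.columns)
    assert result["order_id"].notna().all()                                   # no null keys survive
    assert (result["total"] == result["quantity"] * result["unit_price"]).all()
```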


Preferred: Experience with eCommerce systems, data visualization tools (Tableau, Looker), and cloud certifications.


If you meet the above criteria and are interested, please share your updated CV along with the following details:


Total Experience:


Current CTC:


Expected CTC:


Current Location:


Preferred Location:


Notice Period / Last Working Day (if serving notice):


⚠️ Kindly share your details only if you have not applied recently or are not currently in the interview process for any open roles at Xebia.


Looking forward to your response!

Read more
Wissen Technology

at Wissen Technology

4 recruiters
Tony Tom
Posted by Tony Tom
Pune
5 - 10 yrs
Best in industry
MariaDB
Kubernetes
MySQL
TIDB
Amazon Web Services (AWS)
+2 more
  • TIDB (Good to have)
  • Kubernetes( Must to have)
  • MySQL(Must to have)
  • Maria DB(Must to have)
  • Looking candidate who has more exposure into Reliability over maintenance
Read more
Flytbase

at Flytbase

3 recruiters
Shilpa Kumari
Posted by Shilpa Kumari
Pune
0 - 4 yrs
₹6L - ₹20L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+1 more

Position: SDE-1 DevSecOps

Location: Pune, India

Experience Required: 0+ Years


We are looking for a DevSecOps engineer to contribute to product development, mentor team members, and devise creative solutions for customer needs. We value effective communication in person, in documentation, and in code. Ideal candidates thrive in small, collaborative teams, love making an impact, and take pride in their work with a product-focused, self-driven approach. If you're passionate about integrating security and deployment seamlessly into the development process, we want you on our team.


About FlytBase


FlytBase is a global leader in enterprise drone software automation. The FlytBase platform enables drone-in-a-box deployments across the globe and has the largest network of partners in 50+ countries.


The team comprises young engineers and designers from top-tier universities such as IIT-B, IIT-KGP, University of Maryland, Georgia Tech, COEP, SRM, KIIT and with deep expertise in drone technology, computer science, electronics, aerospace, and robotics.

 

The company is headquartered in Silicon Valley, California, USA, and has R&D offices in Pune, India. Widely recognized as a pioneer in the commercial drone ecosystem, FlytBase continues to win awards globally - FlytBase was the Global Grand Champion at the ‘NTT Data Open Innovation Contest’ held in Tokyo, Japan, and was the recipient of the ‘TiE50 Award’ at TiE Silicon Valley.


Role and Responsibilities:


  • Participate in the creation and maintenance of CI/CD solutions and pipelines.
  • Leverage Linux and shell scripting for automating security and system updates, and design secure architectures using AWS services (VPC, EC2, S3, IAM, EKS/Kubernetes) to enhance application deployment and management.
  • Build and maintain secure Docker containers, manage orchestration using Kubernetes, and automate configuration management with tools like Ansible and Chef, ensuring compliance with security standards.
  • Implement and manage infrastructure using Terraform, aligning with security and compliance requirements, and set up Dynatrace for advanced monitoring, alerting, and visualization of security metrics. Develop Terraform scripts to automate and optimize infrastructure provisioning and management tasks.
  • Utilize Git for secure source code management and integrate continuous security practices into CI/CD pipelines, applying vulnerability scanning and automated security testing tools.
  • Contribute to security assessments, including vulnerability and penetration testing, against frameworks such as NIST, CIS AWS Benchmarks, and NIS2.
  • Implement and oversee compliance processes for SOC II, ISO27001, and GDPR.
  • Stay updated on cybersecurity trends and best practices, including knowledge of SAST and DAST tools and the OWASP Top 10.
  • Automate routine tasks and create tools to improve team efficiency and system robustness (see the sketch after this list).
  • Contribute to disaster recovery plans and ensure robust backup systems are in place.
  • Develop and enforce security policies and respond effectively to security incidents.
  • Manage incident response protocols, including on-call rotations and strategic planning.
  • Conduct post-incident reviews to prevent recurrence and refine the system reliability framework.
  • Implementing Service Level Indicators (SLIs) and maintaining Service Level Objectives (SLOs) and Service Level Agreements (SLAs) to ensure high standards of service delivery and reliability.
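
A small, hedged example of the kind of automated security check described above, assuming boto3 with suitable IAM permissions; the bucket names are placeholders:

```python
# Minimal sketch: verify that S3 buckets block public access and use default encryption.
import boto3
from botocore.exceptions import ClientError


def bucket_is_locked_down(bucket: str) -> bool:
    """Return True if the bucket blocks public access and has default encryption enabled."""
    s3 = boto3.client("s3")
    try:
        pab = s3.get_public_access_block(Bucket=bucket)["PublicAccessBlockConfiguration"]
        s3.get_bucket_encryption(Bucket=bucket)   # raises ClientError if no default encryption
    except ClientError:
        return False
    return all(pab.values())                      # every block-public flag must be on


if __name__ == "__main__":
    for name in ["example-app-logs", "example-user-uploads"]:   # placeholder buckets
        print(name, "OK" if bucket_is_locked_down(name) else "NEEDS ATTENTION")
```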


Best suited for candidates who: (Skills/Experience)


  • Up to 4 years of experience in a related field, with a strong emphasis on learning and execution.
  • Background in IT or computer science.
  • Familiarity with CI/CD tools, cloud platforms (AWS, Azure, or GCP), and programming languages like Python, JavaScript, or Ruby.
  • Solid understanding of network layers and TCP/IP protocols.
  • In-depth understanding of operating systems, networking, and cloud services.
  • Strong problem-solving skills with a 'hacker' mindset.
  • Knowledge of security principles, threat modeling, risk assessment, and vulnerability management is a plus. 
  • Relevant certifications (e.g., CISSP, GWAPT, OSCP) are a plus.


Compensation: 


This role comes with an annual CTC that is market competitive and depends on the quality of your work experience, degree of professionalism, culture fit, and alignment with FlytBase’s long-term business strategy.


Perks:


  • Fast-paced Startup culture
  • Hacker mode environment
  • Enthusiastic and approachable team
  • Professional autonomy
  • Company-wide sense of purpose
  • Flexible work hours
  • Informal dress code


Read more
Avegen Health
Pune
3 - 5 yrs
₹9L - ₹15L / yr
React Native
React.js
JavaScript
TypeScript
RESTful APIs
+3 more

Avegen is a digital healthcare company empowering individuals to take control of their health and supporting healthcare professionals in delivering life-changing care. Avegen’s core product, HealthMachine®, is a cloud-hosted, next-generation digital healthcare engine for pioneers in digital healthcare, including healthcare providers and pharmaceutical companies, to deploy high-quality robust digital care solutions efficiently and effectively. We are ISO27001, ISO13485, and Cyber Essentials certified; and compliant with the NHS Data Protection Toolkit and GDPR.


Job Summary:


Senior Software Engineer will be responsible for developing, designing, and maintaining the core framework of mobile applications for our platform. This includes tasks such as creating and implementing new features, troubleshooting and debugging any issues, optimizing the performance of the app, collaborating with cross-functional teams, and staying current with the latest advancements in React Native and mobile app development. We are looking for exceptional candidates who have an in-depth understanding of React, JavaScript, and TypeScript, can create pixel-perfect UI, and are obsessed with creating the best experiences for end users.


Your responsibilities include:


  1. Architect and build performant mobile applications on both iOS and Android platforms using React Native.
  2. Work with managers to provide technical consultation and assist in defining the scope and sizing of work.
  3. Maintain compliance with standards such as ISO 27001, ISO 13485, and Cyber Essentials that Avegen adheres to.
  4. Lead configuration of our platform HealthMachine™ in line with functional specifications and development of platform modules with a focus on quality and performance.
  5. Write well-documented, clean Javascript/TypeScript code to build reusable components in the platform.
  6. Maintain code, write automated tests, and assist DevOps in CI/CD to ensure the product is of the highest quality.
  7. Lead by example in best practices for software design and quality. You will stay current with tools and technologies to seek out the best needed for the job.
  8. Train team members on software design principles and emerging technologies by taking regular engineering workshops.


Requirements:


  1. Hands-on experience working in a product company developing consumer-facing mobile apps that are deployed and currently in use in production. He/she must have at least 3 mobile apps live in the Apple App Store/Google Play Store.
  2. Proven ability to mentor junior engineers to realize a delivery goal.
  3. Solid attention to detail, problem-solving, and analytical skills & excellent troubleshooting skills.
  4. In-depth understanding of React and its ecosystem with the latest features.
  5. Experience in writing modular, reusable custom JavaScript/TypeScript modules that scale well for high-volume applications.
  6. Strong familiarity with native development tools such as Xcode and Android Studio.
  7. A positive, “can do” attitude who isn’t afraid to lead the complex React Native implementations.
  8. Experience in building mobile apps with intensive server communication (REST APIs, GraphQL, WebSockets, etc.).
  9. Self-starter, able to work in a fast-paced, deadline-driven environment with multiple priorities.
  10. Excellent command of version control systems like Git.
  11. Working in Agile/SCRUM methodology, understanding of the application life cycle, and experience working on project management tools like Atlassian JIRA.
  12. Good command of the Unix operating system and understanding of cloud computing platforms like AWS, GCP, Azure, etc.
  13. Hands-on experience in database technologies including RDBMS and NoSQL and a firm grasp of data models and ER diagrams.
  14. Open source contributions and experience developing your own React Native wrappers for native functionality is a plus.



Qualification:

BE/BTech/MS in Information Technology, Computer Science, or a related discipline.

Read more
TVARIT GmbH

at TVARIT GmbH

2 candid answers
Shivani Kawade
Posted by Shivani Kawade
Remote, Pune
2 - 6 yrs
₹8L - ₹25L / yr
SQL Azure
Databricks
Python
SQL
ETL
+9 more

TVARIT GmbH develops and delivers artificial intelligence (AI) solutions for the manufacturing, automotive, and process industries. With its software products, TVARIT enables its customers to make intelligent and well-founded decisions, e.g., in predictive maintenance, OEE improvement, and predictive quality. We have renowned reference customers, competent technology, a strong research team from renowned universities, and a renowned AI award (e.g., EU Horizon 2020), which makes TVARIT one of the most innovative AI companies in Germany and Europe.


We are looking for a self-motivated person with a positive "can-do" attitude and excellent oral and written communication skills in English.


We are seeking a skilled and motivated Senior Data Engineer from the manufacturing industry with over four years of experience to join our team. The Senior Data Engineer will oversee the department’s data infrastructure, including developing a data model, integrating large amounts of data from different systems, building and enhancing a data lakehouse and the subsequent analytics environment, and writing scripts to facilitate data analysis. The ideal candidate will have a strong foundation in ETL pipelines and Python, with additional experience in Azure and Terraform being a plus. This role requires a proactive individual who can contribute to our data infrastructure and support our analytics and data science initiatives.


Skills Required:


  • Experience in the manufacturing industry (metal industry is a plus)
  • 4+ years of experience as a Data Engineer
  • Experience in data cleaning & structuring and data manipulation
  • Architect and optimize complex data pipelines, leading the design and implementation of scalable data infrastructure, and ensuring data quality and reliability at scale
  • ETL Pipelines: Proven experience in designing, building, and maintaining ETL pipelines.
  • Python: Strong proficiency in Python programming for data manipulation, transformation, and automation (see the sketch after this list).
  • Experience in SQL and data structures
  • Knowledge of big data technologies such as Spark, Flink, Hadoop, other Apache projects, and NoSQL databases.
  • Knowledge of cloud technologies (at least one) such as AWS, Azure, and Google Cloud Platform.
  • Proficient in data management and data governance
  • Strong analytical experience & skills that can extract actionable insights from raw data to help improve the business.
  • Strong analytical and problem-solving skills.
  • Excellent communication and teamwork abilities.
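
A hedged sketch of data cleaning and structuring with pandas; the sensor-data layout and column names are invented for illustration, not taken from any customer system:

```python
# Minimal sketch: clean raw sensor readings and resample them for downstream analytics/ML.
import pandas as pd


def clean_sensor_readings(path: str) -> pd.DataFrame:
    df = pd.read_csv(path, parse_dates=["timestamp"])

    # Basic structuring: drop duplicates, enforce numeric types, handle missing values.
    df = df.drop_duplicates(subset=["machine_id", "timestamp"])
    df["temperature_c"] = pd.to_numeric(df["temperature_c"], errors="coerce")
    df = df.dropna(subset=["temperature_c"])

    # Resample to 1-minute means per machine to feed downstream models and dashboards.
    return (
        df.set_index("timestamp")
        .groupby("machine_id")["temperature_c"]
        .resample("1min")
        .mean()
        .reset_index()
    )


if __name__ == "__main__":
    print(clean_sensor_readings("sensor_readings.csv").head())   # placeholder file name
```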


Nice To Have:

  • Azure: Experience with Azure data services (e.g., Azure Data Factory, Azure Databricks, Azure SQL Database).
  • Terraform: Knowledge of Terraform for infrastructure as code (IaC) to manage cloud.
  • Bachelor’s degree in computer science, Information Technology, Engineering, or a related field from top-tier Indian Institutes of Information Technology (IIITs).

Benefits and Perks:

  • A culture that fosters innovation, creativity, continuous learning, and resilience
  • Progressive leave policy promoting work-life balance
  • Mentorship opportunities with highly qualified internal resources and industry-driven programs
  • Multicultural peer groups and supportive workplace policies
  • Annual workcation program allowing you to work from various scenic locations
  • Experience the unique environment of a dynamic start-up


Why should you join TVARIT ?


Working at TVARIT, a deep-tech German IT startup, offers a unique blend of innovation, collaboration, and growth opportunities. We seek individuals eager to adapt and thrive in a rapidly evolving environment.


If this opportunity excites you and aligns with your career aspirations, we encourage you to apply today!

Read more
Wissen Technology

at Wissen Technology

4 recruiters
Vijayalakshmi Selvaraj
Posted by Vijayalakshmi Selvaraj
Bengaluru (Bangalore), Mumbai, Pune
7 - 20 yrs
Best in industry
.NET
ASP.NET
C#
Google Cloud Platform (GCP)
Migration

Job Title: .NET Developer with Cloud Migration Experience

Job Description:

We are seeking a skilled .NET Developer with experience in C#, MVC, and ASP.NET to join our team. The ideal candidate will also have hands-on experience with cloud migration projects, particularly in migrating on-premise applications to cloud platforms such as Azure or AWS.

Responsibilities:

  • Develop, test, and maintain .NET applications using C#, MVC, and ASP.NET
  • Collaborate with cross-functional teams to define, design, and ship new features
  • Participate in code reviews and ensure coding best practices are followed
  • Work closely with the infrastructure team to migrate on-premise applications to the cloud
  • Troubleshoot and debug issues that arise during migration and post-migration phases
  • Stay updated with the latest trends and technologies in .NET development and cloud computing

Requirements:

  • Bachelor's degree in Computer Science or related field
  • X+ years of experience in .NET development using C#, MVC, and ASP.NET
  • Hands-on experience with cloud migration projects, preferably with Azure or AWS
  • Strong understanding of cloud computing concepts and principles
  • Experience with database technologies such as SQL Server
  • Excellent problem-solving and communication skills

Preferred Qualifications:

  • Microsoft Azure or AWS certification
  • Experience with other cloud platforms such as Google Cloud Platform (GCP)
  • Familiarity with DevOps practices and tools


Read more
Apptware solutions LLP Pune
Pune
6 - 10 yrs
₹9L - ₹15L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
+5 more

Company - Apptware Solutions

Location Baner Pune

Team Size - 130+


Job Description -

Cloud Engineer with 8+ years of experience


Roles and Responsibilities


● Have 8+ years of strong experience in deployment, management and maintenance of large systems on-premise or cloud

● Experience maintaining and deploying highly-available, fault-tolerant systems at scale

● A drive towards automating repetitive tasks (e.g. scripting via Bash, Python, Ruby, etc)

● Practical experience with Docker containerization and clustering (Kubernetes/ECS)

● Expertise with AWS (e.g. IAM, EC2, VPC, ELB, ALB, Autoscaling, Lambda, VPN)

● Version control system experience (e.g. Git)

● Experience implementing CI/CD (e.g. Jenkins, TravisCI, CodePipeline)

● Operational experience (e.g., HA/backups) with NoSQL databases (e.g., MongoDB, Redis) and SQL databases (e.g., MySQL)

● Experience with configuration management tools (e.g. Ansible, Chef) ● Experience with infrastructure-as-code (e.g. Terraform, Cloudformation)

● Bachelor's or master’s degree in CS, or equivalent practical experience

● Effective communication skills

● Hands-on experience with cloud providers like MS Azure and GCP

● A sense of ownership and ability to operate independently

● Experience with Jira and one or more Agile SDLC methodologies

● Nice to Have:

○ Sensu and Graphite

○ Ruby or Java

○ Python or Groovy

○ Java Performance Analysis


Role: Cloud Engineer

Industry Type: IT-Software, Software Services

Functional Area: IT Software - Application Programming, Maintenance Employment Type: Full Time, Permanent

Role Category: Programming & Design

Read more
Publicis Sapient

at Publicis Sapient

10 recruiters
Mohit Singh
Posted by Mohit Singh
Bengaluru (Bangalore), Pune, Hyderabad, Gurugram, Noida
5 - 11 yrs
₹20L - ₹36L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+7 more

Publicis Sapient Overview:

As a Senior Associate in Data Engineering at Publicis Sapient, you will translate client requirements into technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and you will independently drive design discussions to ensure the necessary health of the overall solution.

.

Job Summary:

As Senior Associate L2 in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and you will independently drive design discussions to ensure the necessary health of the overall solution.

The role requires a hands-on technologist who has strong programming background like Java / Scala / Python, should have experience in Data Ingestion, Integration and data Wrangling, Computation, Analytics pipelines and exposure to Hadoop ecosystem components. You are also required to have hands-on knowledge on at least one of AWS, GCP, Azure cloud platforms.


Role & Responsibilities:

Your role is focused on Design, Development and delivery of solutions involving:

• Data Integration, Processing & Governance

• Data Storage and Computation Frameworks, Performance Optimizations

• Analytics & Visualizations

• Infrastructure & Cloud Computing

• Data Management Platforms

• Implement scalable architectural models for data processing and storage

• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time mode (see the sketch after this list)

• Build functionality for data analytics, search and aggregation
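
For illustration, a hedged PySpark sketch covering one batch source and one real-time Kafka source, assuming a Spark runtime with the Kafka connector available; the paths, topic, and broker names are placeholders:

```python
# Minimal sketch: batch ingestion from a data lake path plus streaming ingestion from Kafka.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ingestion-example").getOrCreate()

# Batch ingestion from a heterogeneous source (here: JSON files on a lake path).
batch_df = spark.read.json("s3a://example-raw-zone/orders/")
batch_df.write.mode("append").parquet("s3a://example-curated-zone/orders/")

# Real-time ingestion from Kafka using Structured Streaming.
stream_df = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "orders")
    .load()
)

query = (
    stream_df.selectExpr("CAST(value AS STRING) AS payload")
    .writeStream.format("parquet")
    .option("path", "s3a://example-curated-zone/orders_stream/")
    .option("checkpointLocation", "s3a://example-checkpoints/orders_stream/")
    .start()
)
query.awaitTermination()
```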

Experience Guidelines:

Mandatory Experience and Competencies:

# Competency

1.Overall 5+ years of IT experience with 3+ years in Data related technologies

2.Minimum 2.5 years of experience in Big Data technologies and working exposure in at least one cloud platform on related data services (AWS / Azure / GCP)

3.Hands-on experience with the Hadoop stack – HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow and other components required in building end-to-end data pipelines.

4.Strong experience in at least one of the programming languages Java, Scala, Python. Java preferable

5.Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQLDW, GCP BigQuery etc.

6.Well-versed and working knowledge with data platform related services on at least 1 cloud platform, IAM and data security


Preferred Experience and Knowledge (Good to Have):

# Competency

1.Good knowledge of traditional ETL tools (Informatica, Talend, etc) and database technologies (Oracle, MySQL, SQL Server, Postgres) with hands on experience

2.Knowledge on data governance processes (security, lineage, catalog) and tools like Collibra, Alation etc

3.Knowledge on distributed messaging frameworks like ActiveMQ / RabbitMQ / Solace, search & indexing and microservices architectures

4.Performance tuning and optimization of data pipelines

5.CI/CD – Infra provisioning on cloud, auto build & deployment pipelines, code quality

6.Cloud data specialty and other related Big data technology certifications


Personal Attributes:

• Strong written and verbal communication skills

• Articulation skills

• Good team player

• Self-starter who requires minimal oversight

• Ability to prioritize and manage multiple tasks

• Process orientation and the ability to define and set up processes


Read more
Publicis Sapient

at Publicis Sapient

10 recruiters
Mohit Singh
Posted by Mohit Singh
Bengaluru (Bangalore), Gurugram, Pune, Hyderabad, Noida
4 - 10 yrs
Best in industry
PySpark
Data engineering
Big Data
Hadoop
Spark
+6 more

Publicis Sapient Overview:

As a Senior Associate in Data Engineering at Publicis Sapient, you will translate client requirements into technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and you will independently drive design discussions to ensure the necessary health of the overall solution.

.

Job Summary:

As Senior Associate L1 in Data Engineering, you will do technical design and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and you will independently drive design discussions to ensure the necessary health of the overall solution.

The role requires a hands-on technologist who has strong programming background like Java / Scala / Python, should have experience in Data Ingestion, Integration and data Wrangling, Computation, Analytics pipelines and exposure to Hadoop ecosystem components. Having hands-on knowledge on at least one of AWS, GCP, Azure cloud platforms will be preferable.


Role & Responsibilities:

Job Title: Senior Associate L1 – Data Engineering

Your role is focused on Design, Development and delivery of solutions involving:

• Data Ingestion, Integration and Transformation

• Data Storage and Computation Frameworks, Performance Optimizations

• Analytics & Visualizations

• Infrastructure & Cloud Computing

• Data Management Platforms

• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time

• Build functionality for data analytics, search and aggregation


Experience Guidelines:

Mandatory Experience and Competencies:

# Competency

1.Overall 3.5+ years of IT experience with 1.5+ years in Data related technologies

2.Minimum 1.5 years of experience in Big Data technologies

3.Hands-on experience with the Hadoop stack – HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow and other components required in building end-to-end data pipelines. Working knowledge of real-time data pipelines is an added advantage.

4.Strong experience in at least one of the programming languages Java, Scala, Python. Java preferable

5.Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQLDW, GCP BigQuery etc.


Preferred Experience and Knowledge (Good to Have):

# Competency

1.Good knowledge of traditional ETL tools (Informatica, Talend, etc) and database technologies (Oracle, MySQL, SQL Server, Postgres) with hands on experience

2.Knowledge on data governance processes (security, lineage, catalog) and tools like Collibra, Alation etc

3.Knowledge on distributed messaging frameworks like ActiveMQ / RabbitMQ / Solace, search & indexing and microservices architectures

4.Performance tuning and optimization of data pipelines

5.CI/CD – Infra provisioning on cloud, auto build & deployment pipelines, code quality

6.Working knowledge with data platform related services on at least 1 cloud platform, IAM and data security

7.Cloud data specialty and other related Big data technology certifications


Job Title: Senior Associate L1 – Data Engineering

Personal Attributes:

• Strong written and verbal communication skills

• Articulation skills

• Good team player

• Self-starter who requires minimal oversight

• Ability to prioritize and manage multiple tasks

• Process orientation and the ability to define and set up processes

Read more
Arahas Technologies
Nidhi Shivane
Posted by Nidhi Shivane
Pune
3 - 8 yrs
₹10L - ₹20L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+3 more


Role Description

This is a full-time hybrid role. As a GCP Data Engineer, you will be responsible for managing large sets of structured and unstructured data and developing processes to convert data into insights, information, and knowledge.

Skill Name: GCP Data Engineer

Experience: 7-10 years

Notice Period: 0-15 days

Location :-Pune

If you have a passion for data engineering and possess the following, we would love to hear from you:


🔹 7 to 10 years of experience working on Software Development Life Cycle (SDLC)

🔹 At least 4+ years of experience on the Google Cloud Platform, with a focus on BigQuery

🔹 Proficiency in Java and Python, along with experience in Google Cloud SDK & API scripting (see the sketch after this list)

🔹 Experience in the Finance/Revenue domain would be considered an added advantage

🔹 Familiarity with GCP Migration activities and the DBT Tool would also be beneficial
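
For illustration, a minimal sketch of querying BigQuery with the official google-cloud-bigquery client; the project, dataset, table, and column names are placeholders:

```python
# Minimal sketch: run a BigQuery query and print the results.
from google.cloud import bigquery


def daily_revenue(project: str) -> None:
    client = bigquery.Client(project=project)   # uses application default credentials
    query = """
        SELECT DATE(order_ts) AS order_date, SUM(amount) AS revenue
        FROM `example_dataset.orders`
        GROUP BY order_date
        ORDER BY order_date DESC
        LIMIT 7
    """
    for row in client.query(query).result():    # blocks until the query job completes
        print(row.order_date, row.revenue)


if __name__ == "__main__":
    daily_revenue("example-gcp-project")        # placeholder project id
```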


You will play a crucial role in developing and maintaining our data infrastructure on the Google Cloud platform.

Your expertise in SDLC, BigQuery, Java, Python, and Google Cloud SDK & API scripting will be instrumental in ensuring the smooth operation of our data systems.


Join our dynamic team and contribute to our mission of harnessing the power of data to make informed business decisions.

Read more
Ignite Solutions

at Ignite Solutions

6 recruiters
Meghana Dhamale
Posted by Meghana Dhamale
Remote, Pune
5 - 7 yrs
₹15L - ₹20L / yr
Python
LinkedIn
Django
Flask
Amazon Web Services (AWS)
+2 more

We are looking for a hands-on technical expert who has worked with multiple technology stacks and has experience architecting and building scalable cloud solutions with web and mobile frontends. 

 What will you work on?

  •  Interface with clients
  • Recommend tech stacks
  • Define end-to-end logical and cloud-native architectures
  •  Define APIs
  • Integrate with 3rd party systems
  • Create architectural solution prototypes
  • Hands-on coding, team lead, code reviews, and problem-solving

What Makes You A Great Fit?

  • 5+ years of software experience 
  • Experience with architecture of technology systems having hands-on expertise in backend, and web or mobile frontend
  • Solid expertise and hands-on experience in Python with Flask or Django
  • Expertise on one or more cloud platforms (AWS, Azure, Google App Engine)
  • Expertise with SQL and NoSQL databases (MySQL, Mongo, ElasticSearch, Redis)
  • Knowledge of DevOps practices
  • Chatbot, Machine Learning, Data Science/Big Data experience will be a plus
  • Excellent communication skills, verbal and written

The job is for a full-time position at our Pune (Viman Nagar) office (https://goo.gl/maps/o67FWr1aedo).

(Note: We are working remotely at the moment. However, once the COVID situation improves, the candidate will be expected to work from our office.)

Read more
codersbrain

at codersbrain

1 recruiter
Tanuj Uppal
Posted by Tanuj Uppal
Hyderabad, Pune, Noida, Bengaluru (Bangalore), Chennai
4 - 10 yrs
Best in industry
Go Programming (Golang)
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
Windows Azure

Golang Developer

Location: Chennai/ Hyderabad/Pune/Noida/Bangalore

Experience: 4+ years

Notice Period: Immediate/ 15 days

Job Description:

  • Must have at least 3 years of experience working with Golang.
  • Strong Cloud experience is required for day-to-day work.
  • Experience with the Go programming language is necessary.
  • Good communication skills are a plus.
  • Skills: AWS, GCP, Azure, Golang
Read more
Tredence
Rohit S
Posted by Rohit S
Chennai, Pune, Bengaluru (Bangalore), Gurugram
11 - 16 yrs
₹20L - ₹32L / yr
Data Warehouse (DWH)
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
Data engineering
Data migration
+1 more
• Engages with Leadership of Tredence’s clients to identify critical business problems, define the need for data engineering solutions and build strategy and roadmap
• S/he possesses a wide exposure to complete lifecycle of data starting from creation to consumption
• S/he has in the past built repeatable tools / data-models to solve specific business problems
• S/he should have hands-on experience of having worked on projects (either as a consultant or within a company) that needed them to:
o Provide consultation to senior client personnel
o Implement and enhance data warehouses or data lakes
o Work with business teams, or be part of the team, that implemented process re-engineering driven by data analytics/insights
• Should have deep appreciation of how data can be used in decision-making
• Should have perspective on newer ways of solving business problems. E.g. external data, innovative techniques, newer technology
• S/he must have a solution-creation mindset.
Ability to design and enhance scalable data platforms to address the business need
• Working experience on data engineering tool for one or more cloud platforms -Snowflake, AWS/Azure/GCP
• Engage with technology teams from Tredence and Clients to create last mile connectivity of the solutions
o Should have experience of working with technology teams
• Demonstrated ability in thought leadership – Articles/White Papers/Interviews
Mandatory Skills: Program Management, Data Warehouse, Data Lake, Analytics, Cloud Platform
Read more
one of the world's leading multinational investment bank


Agency job
via HiyaMee by Lithin Raj
Pune
9 - 13 yrs
₹10L - ₹15L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+1 more
  • Hands-on knowledge on various CI-CD tools (Jenkins/TeamCity, Artifactory, UCD, Bitbucket/Github, SonarQube) including setting up of build-deployment automated pipelines.
  • Very good knowledge in scripting tools and languages such as Shell, Perl or Python , YAML/Groovy, build tools such as Maven/Gradle.
  • Hands-on knowledge in containerization and orchestration tools such as Docker, OpenShift and Kubernetes.
  • Good knowledge in configuration management tools such as Ansible, Puppet/Chef and have worked on setting up of monitoring tools (Splunk/Geneos/New Relic/Elk).
  • Expertise in job schedulers/workload automation tools such as Control-M or AutoSys is good to have.
  • Hands-on knowledge on Cloud technology (preferably GCP) including various computing services and infrastructure setup using Terraform.
  • Should have basic understanding on networking, certificate management, Identity and Access Management and Information security/encryption concepts.
  • Should support day-to-day tasks related to platform and environment upkeep, such as upgrades, patching, migration, and system/interface integration.
  • Should have experience working in an Agile-based SDLC delivery model, multi-tasking and supporting multiple systems/apps.
  • Big-data and Hadoop ecosystem knowledge is good to have but not mandatory.
  • Should have worked on standard release, change and incident management tools such as ServiceNow/Remedy or similar
Read more
MNC


Agency job
via Bohiyaanam Talent Solutions by Harsha Manglani
Pune
6 - 9 yrs
₹1L - ₹25L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Google Cloud Platform (GCP)

We are hiring a DevOps Engineer for a reputed MNC.

 

Job Description:

Total exp- 6+Years

Must have:

Minimum 3-4 years hands-on experience in Kubernetes and Docker

Proficiency in AWS Cloud

Good to have Kubernetes admin certification

 

Job Responsibilities:

Responsible for managing Kubernetes cluster

Deploying infrastructure for the project

Build CICD pipeline

 

Looking for Immediate Joiners only

Location: Pune

Salary: As per market standards

Mode: Work from office



 
Read more
xpressbees
Alfiya Khan
Posted by Alfiya Khan
Pune, Bengaluru (Bangalore)
6 - 8 yrs
₹15L - ₹25L / yr
Big Data
Data Warehouse (DWH)
Data modeling
Apache Spark
Data integration
+10 more
Company Profile
XpressBees – a logistics company started in 2015 – is amongst the fastest growing
companies of its sector. While we started off rather humbly in the space of
ecommerce B2C logistics, the last 5 years have seen us steadily progress towards
expanding our presence. Our vision to evolve into a strong full-service logistics
organization reflects itself in our new lines of business like 3PL, B2B Xpress and cross
border operations. Our strong domain expertise and constant focus on meaningful
innovation have helped us rapidly evolve as the most trusted logistics partner of
India. We have progressively carved our way towards best-in-class technology
platforms, an extensive network reach, and a seamless last mile management
system. While on this aggressive growth path, we seek to become the one-stop-shop
for end-to-end logistics solutions. Our big focus areas for the very near future
include strengthening our presence as service providers of choice and leveraging the
power of technology to improve efficiencies for our clients.

Job Profile
As a Lead Data Engineer in the Data Platform Team at XpressBees, you will build the data platform
and infrastructure to support high quality and agile decision-making in our supply chain and logistics
workflows.
You will define the way we collect and operationalize data (structured / unstructured), and
build production pipelines for our machine learning models, and (RT, NRT, Batch) reporting &
dashboarding requirements. As a Senior Data Engineer in the XB Data Platform Team, you will use
your experience with modern cloud and data frameworks to build products (with storage and serving
systems)
that drive optimisation and resilience in the supply chain via data visibility, intelligent decision making,
insights, anomaly detection and prediction.

What You Will Do
• Design and develop data platform and data pipelines for reporting, dashboarding and
machine learning models. These pipelines would productionize machine learning models
and integrate with agent review tools.
• Meet the data completeness, correction and freshness requirements.
• Evaluate and identify the data store and data streaming technology choices.
• Lead the design of the logical model and implement the physical model to support
business needs. Come up with logical and physical database design across platforms (MPP,
MR, Hive/PIG) which are optimal physical designs for different use cases (structured/semi
structured). Envision & implement the optimal data modelling, physical design,
performance optimization technique/approach required for the problem.
• Support your colleagues by reviewing code and designs.
• Diagnose and solve issues in our existing data pipelines and envision and build their
successors.

Qualifications & Experience relevant for the role

• A bachelor's degree in Computer Science or related field with 6 to 9 years of technology
experience.
• Knowledge of Relational and NoSQL data stores, stream processing and micro-batching to
make technology & design choices.
• Strong experience in System Integration, Application Development, ETL, Data-Platform
projects. Talented across technologies used in the enterprise space.
• Software development experience using:
• Expertise in relational and dimensional modelling
• Exposure across all the SDLC process
• Experience in cloud architecture (AWS)
• Proven track record in keeping existing technical skills and developing new ones, so that
you can make strong contributions to deep architecture discussions around systems and
applications in the cloud ( AWS).

• Characteristics of a forward thinker and self-starter that flourishes with new challenges
and adapts quickly to learning new knowledge
• Ability to work with a cross functional teams of consulting professionals across multiple
projects.
• Knack for helping an organization to understand application architectures and integration
approaches, to architect advanced cloud-based solutions, and to help launch the build-out
of those systems
• Passion for educating, training, designing, and building end-to-end systems.
Read more
marsdevs.com
Vishvajit Pathak
Posted by Vishvajit Pathak
Remote, Pune
2 - 5 yrs
₹4L - ₹15L / yr
Django
Flask
FastAPI
Python
Docker
+3 more

We are having an immediate requirement for a Python web developer.

 

You have:

  • At least 2 years of experience developing web applications with Django/Flask/FastAPI (see the sketch after this list)
  • Familiarity with Linux
  • Experience in both SQL and NoSQL databases.
  • Uses Docker and CI/CD
  • Writes tests
  • Experienced in application deployments and scaling them on AWS or GCP
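
For illustration, a minimal FastAPI sketch of the kind of web application described above; the resource model and in-memory store are placeholders, and the service would be run with Uvicorn (e.g. `uvicorn main:app --reload`, assuming the file is saved as main.py):

```python
# Minimal sketch: a tiny FastAPI service with one POST and one GET endpoint.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()
_items: dict[int, str] = {}          # stand-in for a real SQL/NoSQL store


class Item(BaseModel):
    id: int
    name: str


@app.post("/items")
def create_item(item: Item) -> Item:
    _items[item.id] = item.name
    return item


@app.get("/items/{item_id}")
def read_item(item_id: int) -> Item:
    if item_id not in _items:
        raise HTTPException(status_code=404, detail="Item not found")
    return Item(id=item_id, name=_items[item_id])
```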

 

You are:

  • Eager to work independently without being watched
  • Easy going.
  • Able to handle clients on your own

 

Location: Remote (in India)

 

Read more
Agency job
via Anetcorp Ind Pvt Ltd by Jyoti Yadav
Remote, Pune
6 - 12 yrs
₹10L - ₹25L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+3 more
  • Essential Skills:
    • Docker
    • Jenkins
    • Python dependency management using conda and pip
  • Base Linux System Commands, Scripting
  • Docker Container Build & Testing
    • Common knowledge of minimizing container size and layers
    • Inspecting containers for un-used / underutilized systems
    • Multiple Linux OS support for virtual system
  • Has experience as a user of jupyter / jupyter lab to test and fix usability issues in workbenches
  • Templating out various configurations for different use cases (we use Python Jinja2 but are open to other languages / libraries); see the sketch after this list
  • Jenkins Pipeline
  • GitHub API understanding to trigger builds, tags, and releases
  • Artifactory experience
  • Nice to have: Kubernetes, ArgoCD, other deployment automation tool sets (DevOps)
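
A hedged sketch of configuration templating with Jinja2; the template fields and presets are placeholders rather than the team's actual configs:

```python
# Minimal sketch: render a small config template for different use cases with Jinja2.
from jinja2 import Template

TEMPLATE = Template(
    "image: {{ image }}:{{ tag }}\n"
    "resources:\n"
    "  cpu: {{ cpu }}\n"
    "  memory: {{ memory }}\n"
)


def render_config(use_case: str) -> str:
    presets = {
        "small": {"cpu": "500m", "memory": "1Gi"},
        "gpu": {"cpu": "4", "memory": "16Gi"},
    }
    return TEMPLATE.render(
        image="registry.example.com/workbench",   # placeholder image
        tag="latest",
        **presets[use_case],
    )


if __name__ == "__main__":
    print(render_config("small"))
```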
Read more
DREAMS Pvt Ltd
Siddhant Malani
Posted by Siddhant Malani
Pune, Bengaluru (Bangalore)
0 - 1 yrs
₹8000 - ₹12000 / mo
Flutter
DART
User Interface (UI) Design
User Experience (UX) Design
RESTful APIs
+3 more

Role Description for the 3 month internship:-

• Create multi-platform apps for iOS & Android using Google's new Flutter development framework
• Strong OO design and programming skills in DART and SDK Framework for building Android as well as iOS Apps.
• Good expertise in Auto Layout and adding constraints programmatically
• Must have experience of Memory management, caching mechanisms., Threading and Performance tuning.
• Familiarity with RESTful APIs to connect Android & iOS applications to back-end services
• Experience with third-party libraries and APIs
• Collaborate with the team of product managers, developers, to define, design, & deploy new features & functionality
• Build software that ensures the best possible usability, performance, quality, & responsiveness of features
• Work in a team following agile development practices (Scrum)
• Proficient understanding of code versioning tools such as Git, Mercurial, or SVN, and Project Management tool (JIRA)
• Utilize your knowledge of the general mobile landscape, architectures, trends, & emerging technologies
• Get Solid understanding of full mobile development life cycle and make use of the same
• Help Develop and Deploy Good Quality UI
• Solid understanding of the full mobile development life cycle.
• Good written, verbal, organizational and interpersonal skills
• Unit-test code for robustness, including edge cases, usability, and general reliability.
• Excellent debugging and optimization skills
• Strong design, development and debugging skills.

Read more
Senwell Solutions

at Senwell Solutions

1 recruiter
Trupti Gholap
Posted by Trupti Gholap
Pune
1 - 3 yrs
₹2L - ₹7L / yr
Angular (2+)
AngularJS (1.x)
Amazon Web Services (AWS)
Windows Azure
Google Cloud Platform (GCP)
+3 more

We are looking to hire an experienced Sr. Angular Developer to join our dynamic team. As a lead developer, you will be responsible for creating a top-level coding base using Angular best practices. To ensure success as an Angular developer, you should have extensive knowledge of theoretical software engineering, be proficient in TypeScript, JavaScript, HTML, and CSS, and have excellent project management skills. Ultimately, a top-class Angular developer can design and build a streamlined application to company specifications that perfectly meets the needs of the user.

 

Requirements:

 

  1. Bachelor’s degree in computer science, computer engineering, or similar
  2. Previous work Experience 2+ years as an Angular developer.
  3. Proficient in CSS, HTML, and writing cross-browser compatible code
  4. Experience using JavaScript & TypeScript building tools like Gulp or Grunt.
  5. Knowledge of JavaScript MV-VM/MVC frameworks including AngularJS / React.
  6. Excellent project management skills.

 

Responsibilities:

 

  1. Designing and developing user interfaces using Angular best practices.
  2. Adapting interface for modern internet applications using the latest front-end technologies.
  3. Writing TypeScript, JavaScript, CSS, and HTML.
  4. Developing product analysis tasks.
  5. Making complex technical and design decisions for Angular.JS projects.
  6. Developing application codes in Angular, Node.js, and Rest Web Services.
  7. Conducting performance tests.
  8. Consulting with the design team.
  9. Ensuring high performance of applications and providing support.

 

Read more
This company provides on-demand cloud computing platforms.

Agency job
via New Era India by Niharica Singh
Remote, Pune, Mumbai, Bengaluru (Bangalore), Gurugram, Hyderabad
15 - 25 yrs
₹35L - ₹55L / yr
skill iconAmazon Web Services (AWS)
Google Cloud Platform (GCP)
Windows Azure
Architecture
skill iconPython
+5 more
  • 15+ years of hands-on technical application architecture experience and application build/modernization experience
  • 15+ years of experience as a technical specialist in Customer-facing roles.
  • Ability to travel to client locations as needed (25-50%)
  • Extensive experience architecting, designing and programming applications in an AWS Cloud environment
  • Experience with designing and building applications using AWS services such as EC2, AWS Elastic Beanstalk, and AWS OpsWorks (a short illustrative sketch follows this list)
  • Experience architecting highly available systems that utilize load balancing, horizontal scalability and high availability
  • Hands-on programming skills in any of the following: Python, Java, Node.js, Ruby, .NET or Scala
  • Agile software development expert
  • Experience with continuous integration tools (e.g. Jenkins)
  • Hands-on familiarity with CloudFormation
  • Experience with configuration management platforms (e.g. Chef, Puppet, Salt, or Ansible)
  • Strong scripting skills (e.g. PowerShell, Python, Bash, Ruby, Perl, etc.)
  • Strong practical application development experience on Linux and Windows-based systems
  • Extracurricular software development passion (e.g. an active open-source contributor)
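
As a small, hedged illustration of the AWS service experience listed above, the sketch below uses the AWS SDK for JavaScript v3 to list EC2 instance IDs. The region default and the assumption that credentials come from the default provider chain are illustrative choices, not requirements from this listing.

```typescript
import { EC2Client, DescribeInstancesCommand } from "@aws-sdk/client-ec2";

// Sketch only: list EC2 instance IDs in one region.
// Assumes credentials are available via the default provider chain
// (environment variables, shared config file, or an instance profile).
async function listInstanceIds(region = "us-east-1"): Promise<string[]> {
  const client = new EC2Client({ region });
  const result = await client.send(new DescribeInstancesCommand({}));

  const ids: string[] = [];
  for (const reservation of result.Reservations ?? []) {
    for (const instance of reservation.Instances ?? []) {
      if (instance.InstanceId) {
        ids.push(instance.InstanceId);
      }
    }
  }
  return ids;
}

// Example usage:
// listInstanceIds().then((ids) => console.log(ids));
```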
InFoCusp

at InFoCusp

3 recruiters
Apurva Gayawal
Posted by Apurva Gayawal
Pune, Ahmedabad
3 - 7 yrs
₹7L - ₹27L / yr
skill iconJavascript
Cloud Computing
skill iconReact.js
skill iconPython
skill iconAmazon Web Services (AWS)
+4 more
InFoCusp is a company working in the broad field of Computer Science, Software Engineering, and Artificial Intelligence (AI). It is headquartered in Ahmedabad, India, with a branch office in Pune.

We have worked on, and continue to work on, software engineering projects that cover full-fledged products: UI/UX design, responsive and fast front-ends, platform-specific applications (Android, iOS, web, and desktop), very large-scale infrastructure, and cutting-edge machine learning and deep learning (AI in general). These projects and products have wide-ranging applications in the finance, healthcare, e-commerce, legal, HR/recruiting, pharmaceutical, leisure sports, and computer gaming domains. All of this builds on core concepts of computer science such as distributed systems, operating systems, computer networks, process parallelism, cloud computing, embedded systems, and the Internet of Things.

PRIMARY RESPONSIBILITIES:
● Own the design, development, evaluation and deployment of highly-scalable software products involving front-end and back-end development.
● Maintain quality, responsiveness and stability of the system.
● Design and develop memory-efficient, compute-optimized solutions for the software.
● Design and administer automated testing tools and continuous integration tools.
● Produce comprehensive and usable software documentation.
● Evaluate and make decisions on the use of new tools and technologies.
● Mentor other development engineers.

KNOWLEDGE AND SKILL REQUIREMENTS:
● Mastery of one or more back-end programming languages (Python, Java, Scala, C++ etc.)
● Proficiency in front-end programming paradigms and libraries (for example: HTML, CSS and advanced JavaScript libraries and frameworks such as Angular, Knockout, React).
● Knowledge of automated and continuous integration testing tools (Jenkins, TeamCity, CircleCI etc.) (a minimal unit-test sketch follows this list).
● Proven experience of platform-level development for large-scale systems.
● Deep understanding of various database systems (MySQL, Mongo, Cassandra).
● Ability to plan and design software system architecture.
● Development experience for mobile, browsers and desktop systems is desired.
● Knowledge and experience of using distributed systems (Hadoop, Spark) and cloud environments (Amazon EC2, Google Compute Engine, Microsoft Azure).
● Experience working in agile development. Knowledge and prior experience of tools like Jira is desired.
● Experience with version control systems (Git, Subversion or Mercurial).
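
As a hedged illustration of the automated-testing requirement above, here is a minimal unit-test sketch using Node's built-in test runner (Node 18+ assumed; run via tsx or after compiling the TypeScript). The slugify helper being tested is purely hypothetical.

```typescript
import { test } from "node:test";
import assert from "node:assert/strict";

// Illustrative helper under test: turns a title into a URL-friendly slug.
// This function is an assumption for the example, not part of the listing.
function slugify(title: string): string {
  return title
    .trim()
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-+|-+$/g, "");
}

// Unit tests covering the normal case and an edge case.
test("slugify collapses spaces and punctuation", () => {
  assert.equal(slugify("Hello, Cloud World!"), "hello-cloud-world");
});

test("slugify handles a whitespace-only string", () => {
  assert.equal(slugify("   "), "");
});
```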
Publicis Sapient

at Publicis Sapient

10 recruiters
Pooja Singh
Posted by Pooja Singh
Bengaluru (Bangalore), Mumbai, Gurugram, Noida, Hyderabad, Pune
4 - 19 yrs
₹1L - ₹15L / yr
skill iconJava
J2EE
skill iconSpring Boot
Hibernate (Java)
Microservices
+7 more
  • Experience building large-scale, high-volume services and distributed apps, and taking them through production and post-production life cycles
  • Experience in programming languages: Java 8, JavaScript
  • Experience in Microservice Development or Architecture
  • Experience with web application frameworks: Spring, Spring Boot, or Micronaut
  • Designing: High Level/Low-Level Design
  • Development Experience: Agile/Scrum, TDD (Test-Driven Development) or BDD (Behaviour-Driven Development), plus unit testing
  • Infrastructure Experience: DevOps, CI/CD pipelines, Docker/Kubernetes/Jenkins, and cloud platforms such as AWS, Azure, GCP, etc.
  • Experience with one or more databases: RDBMS or NoSQL
  • Experience with one or more messaging platforms: JMS/RabbitMQ/Kafka/Tibco/Camel (a brief consumer sketch follows this list)
  • Security (Authentication, scalability, performance monitoring)
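
As a hedged sketch of the messaging-platform experience above, the example below consumes Kafka messages using the kafkajs client in TypeScript. The broker address, topic, and group ID are illustrative assumptions, and kafkajs is used here only because this page keeps a single example language; the listing's primary stack is Java.

```typescript
import { Kafka } from "kafkajs";

// Sketch only: consume messages from a hypothetical "orders" topic and log each payload.
// Broker address, client ID, topic, and group ID are illustrative assumptions.
async function runOrderConsumer(): Promise<void> {
  const kafka = new Kafka({ clientId: "orders-service", brokers: ["localhost:9092"] });
  const consumer = kafka.consumer({ groupId: "orders-group" });

  await consumer.connect();
  await consumer.subscribe({ topic: "orders", fromBeginning: false });

  await consumer.run({
    // Called once per message delivered to this consumer group member.
    eachMessage: async ({ topic, partition, message }) => {
      console.log(`${topic}[${partition}] ${message.value?.toString()}`);
    },
  });
}

// Example usage:
// runOrderConsumer().catch((err) => { console.error(err); process.exit(1); });
```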