50+ Windows Azure Jobs in Pune | Windows Azure Job openings in Pune


Job Title: Full Stack Developer
Job Description:
We are looking for a skilled Full Stack Developer with hands-on experience in building scalable web applications using .NET Core and ReactJS. The ideal candidate will have a strong understanding of backend development, cloud services, and modern frontend technologies.
Key Skills:
- .NET Core, C#
- SQL Server
- React JS
- Azure (Functions, Services)
- Entity Framework
- Microservices Architecture
Responsibilities:
- Design, develop, and maintain full-stack applications
- Build scalable microservices using .NET Core
- Implement and consume Azure Functions and Services
- Develop efficient database queries with SQL Server
- Integrate front-end components using ReactJS
- Collaborate with cross-functional teams to deliver high-quality solutions


About Us
Seeking a talented .NET Developer to join our team and work with one of our key clients on the development of a cloud-based SaaS product.
What You’ll Do
- Collaborate closely with the client’s Product Team to brainstorm ideas, suggest product flows, and influence technical direction
- Develop and maintain robust, scalable, and maintainable code following SOLID principles, TDD/BDD, and clean architecture standards
- Work with Azure, C#/.NET, and React to build full-stack features and cloud-based services
- Design and implement microservices using CQRS, DDD, and other modern architectural patterns
- Manage and interact with SQL Server and other relational/non-relational databases
- Conduct code reviews and evaluate pull requests from team members
- Mentor junior developers and contribute to a strong engineering culture
- Take part in sprint reviews and agile ceremonies
- Analyze legacy systems for refactoring, modernization, and platform evolution
What We’re Looking For
- Strong hands-on experience with .NET (C#) and Azure Cloud Services
- Working knowledge of React for frontend development
- Deep understanding of Domain-Driven Design (DDD), Onion Architecture, CQRS, and Microservices Architecture
- Expertise in SOLID, OOP, Clean Code, KISS, and DRY principles
- Familiarity with both relational (SQL Server) and non-relational databases
- Experience with TDD/BDD testing approaches and scalable, testable code design
- Strong communication and collaboration skills
- Passion for mentoring and uplifting fellow engineers
Nice to Have
- Experience with event-driven architecture
- Exposure to containerization (Docker, Kubernetes)
- Familiarity with DevOps pipelines and CI/CD on Azure

Role description:
You will build curated, enterprise-grade solutions for deploying GenAI applications at production scale for clients. The role requires solid, hands-on development and engineering skills across the GenAI application lifecycle: data ingestion, selecting the right-fit LLMs, simple and advanced RAG, guardrails, prompt engineering for optimisation, traceability, security, LLM evaluation, observability, and deployment at scale on cloud or on-premise. As this space evolves very rapidly, candidates must also demonstrate knowledge of agentic AI frameworks. Candidates with a strong ML background plus engineering skills are highly preferred for this LLMOps role.
Required skills:
- 4-8 years of experience in working on ML projects that includes business requirement gathering, model development, training, deployment at scale and monitoring model performance for production use cases
- Strong knowledge of Python, NLP, Data Engineering, LangChain, Langtrace, Langfuse, RAGAS, AgentOps (optional)
- Should have worked with both proprietary and open-source large language models
- Experience with LLM fine-tuning and creating distilled models from hosted LLMs
- Building data pipelines for model training
- Experience with model performance tuning, RAG, guardrails, prompt engineering, evaluation, and observability
- Experience in deploying GenAI applications on cloud and on-premise at scale for production
- Experience in creating CI/CD pipelines
- Working knowledge of Kubernetes
- Experience with at least one cloud (AWS / GCP / Azure) to deploy AI services
- Experience in creating workable prototypes using agentic AI frameworks like CrewAI, TaskWeaver, AutoGen
- Experience in lightweight UI development using Streamlit or Chainlit (optional)
- Desired: experience with open-source tools for ML development, deployment, observability, and integration
- Background in DevOps and MLOps will be a plus
- Experience working with collaborative code-versioning tools like GitHub/GitLab
- Team player with good communication and presentation skills
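Several of the skills above (simple and advanced RAG, prompt engineering, evaluation) revolve around one core loop: retrieve relevant context, then ground the LLM prompt in it. A minimal, illustrative sketch of that retrieval step in pure Python — the bag-of-words "embedder" here is only a stand-in for a real embedding model, and all names are hypothetical:

```python
from collections import Counter
import math

def embed(text):
    """Toy bag-of-words 'embedding'; a real system calls an embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    """Rank documents by similarity to the query — the 'R' in RAG."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query, docs):
    """Ground the generation step in retrieved context."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

In production the documents would live in a vector database and `embed` would call a model, but the rank-then-ground shape stays the same.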
About the Company:
Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies, utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.
Why Gruve:
At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.
Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.
Position summary:
We are seeking a Staff Engineer – DevOps with 8-12 years of experience in designing, implementing, and optimizing CI/CD pipelines, cloud infrastructure, and automation frameworks. The ideal candidate will have expertise in Kubernetes, Terraform, CI/CD, Security, Observability, and Cloud Platforms (AWS, Azure, GCP). You will play a key role in scaling and securing our infrastructure, improving developer productivity, and ensuring high availability and performance.
Key Roles & Responsibilities:
- Design, implement, and maintain CI/CD pipelines using tools like Jenkins, GitLab CI/CD, ArgoCD, and Tekton.
- Deploy and manage Kubernetes clusters (EKS, AKS, GKE) and containerized workloads.
- Automate infrastructure provisioning using Terraform, Ansible, Pulumi, or CloudFormation.
- Implement observability and monitoring solutions using Prometheus, Grafana, ELK, OpenTelemetry, or Datadog.
- Ensure security best practices in DevOps, including IAM, secrets management, container security, and vulnerability scanning.
- Optimize cloud infrastructure (AWS, Azure, GCP) for performance, cost efficiency, and scalability.
- Develop and manage GitOps workflows and infrastructure-as-code (IaC) automation.
- Implement zero-downtime deployment strategies, including blue-green deployments, canary releases, and feature flags.
- Work closely with development teams to optimize build pipelines, reduce deployment time, and improve system reliability.
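One common way the canary-release and feature-flag bullet above is implemented is deterministic hash-based bucketing, sketched below; the flag name and percentages are illustrative, not any specific product's API:

```python
import hashlib

def in_canary(user_id: str, percent: int, flag: str = "new-release") -> bool:
    """Deterministically place a user in a bucket 0-99 from a stable hash.
    The same user always lands in the same bucket, so raising `percent`
    from 1 -> 10 -> 50 -> 100 only ever *adds* users to the canary group
    and never flips existing canary users back to the old version."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < percent
```

Serving the new deployment when `in_canary` is true (and the stable one otherwise) gives a gradual rollout; setting `percent` to 0 is an instant rollback with no redeploy.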
Basic Qualifications:
- A bachelor’s or master’s degree in computer science, electronics engineering or a related field
- 8-12 years of experience in DevOps, Site Reliability Engineering (SRE), or Infrastructure Automation.
- Strong expertise in CI/CD pipelines, version control (Git), and release automation.
- Hands-on experience with Kubernetes (EKS, AKS, GKE) and container orchestration.
- Proficiency in Terraform and Ansible for infrastructure automation.
- Experience with AWS, Azure, or GCP services (EC2, S3, IAM, VPC, Lambda, API Gateway, etc.).
- Expertise in monitoring/logging tools such as Prometheus, Grafana, ELK, OpenTelemetry, or Datadog.
- Strong scripting and automation skills in Python, Bash, or Go.
Preferred Qualifications
- Experience in FinOps (Cloud Cost Optimization) and Kubernetes cluster scaling.
- Exposure to serverless architectures and event-driven workflows.
- Contributions to open-source DevOps projects.
Position summary:
We are seeking an experienced and highly skilled Technical Lead with a strong background in Java, SaaS architectures, firewalls and cybersecurity products, including SIEM and SOAR platforms. The ideal candidate will lead technical initiatives, design and implement scalable systems, and drive best practices across the engineering team. This role requires deep technical expertise, leadership abilities, and a passion for building secure and high-performing security solutions.
Key Roles & Responsibilities:
- Lead the design and development of scalable and secure software solutions using Java.
- Architect and build SaaS-based cybersecurity applications, ensuring high availability, performance, and reliability.
- Provide technical leadership, mentoring, and guidance to the development team.
- Ensure best practices in secure coding, threat modeling, and compliance with industry standards.
- Collaborate with cross-functional teams including Product Management, Security, and DevOps to deliver high-quality security solutions.
- Design and implement security analytics, automation workflows and ITSM integrations.
- Drive continuous improvements in engineering processes, tools, and technologies.
- Troubleshoot complex technical issues and lead incident response for critical production systems.
Basic Qualifications:
- A bachelor’s or master’s degree in computer science, electronics engineering or a related field
- 8-10 years of software development experience, with expertise in Java.
- Strong background in building SaaS applications with cloud-native architectures (AWS, GCP, or Azure).
- In-depth understanding of microservices architecture, APIs, and distributed systems.
- Experience with containerization and orchestration tools like Docker and Kubernetes.
- Knowledge of DevSecOps principles, CI/CD pipelines, and infrastructure as code (Terraform, Ansible, etc.).
- Strong problem-solving skills and ability to work in an agile, fast-paced environment.
- Excellent communication and leadership skills, with a track record of mentoring engineers.
Preferred Qualifications:
- Experience with cybersecurity solutions, including SIEM (e.g., Splunk, ELK, IBM QRadar) and SOAR (e.g., Palo Alto XSOAR, Swimlane).
- Knowledge of zero-trust security models and secure API development.
- Hands-on experience with machine learning or AI-driven security analytics.
We are seeking a skilled Cloud Data Engineer with experience in cloud data platforms like AWS or Azure — and especially Snowflake and dbt — to join our dynamic team. As a consultant, you will be responsible for developing new data platforms and creating data processes. You will collaborate with cross-functional teams to design, develop, and deploy high-quality data solutions.
Responsibilities:
Customer consulting: You develop data-driven products in the Snowflake Cloud and connect data & analytics with specialist departments. You develop ELT processes using dbt (data build tool)
Specifying requirements: You develop concrete requirements for future-proof cloud data architectures.
Develop data pipelines: You design scalable and powerful data management processes.
Analyze data: You derive sound findings from data sets and present them in an understandable way.
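Since dbt models are essentially named SELECT statements materialized as views or tables, the ELT idea above can be sketched with Python's built-in sqlite3; the table and column names below are invented for illustration only:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# "EL": raw source data is loaded first, untransformed.
con.execute("CREATE TABLE raw_orders (id INTEGER, amount_cents INTEGER, status TEXT)")
con.executemany("INSERT INTO raw_orders VALUES (?, ?, ?)",
                [(1, 1250, "paid"), (2, 800, "refunded"), (3, 4300, "paid")])

# "T": a dbt staging model is just a SELECT that renames, re-types,
# and filters the raw source; here materialized as a view.
con.execute("""
    CREATE VIEW stg_orders AS
    SELECT id AS order_id,
           amount_cents / 100.0 AS amount,
           status
    FROM raw_orders
    WHERE status = 'paid'
""")

rows = con.execute("SELECT order_id, amount FROM stg_orders ORDER BY order_id").fetchall()
```

In Snowflake + dbt the same SELECT would live in a model file, with dbt handling materialization, dependencies, and tests.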
Requirements:
Requirements management and project experience: You successfully implement cloud-based data & analytics projects.
Data architectures: You are proficient in DWH/data lake concepts and modeling with Data Vault 2.0.
Cloud expertise: You have extensive knowledge of Snowflake, dbt and other cloud technologies (e.g. MS Azure, AWS, GCP).
SQL know-how: You have a sound and solid knowledge of SQL.
Data management: You are familiar with topics such as master data management and data quality.
Bachelor's degree in computer science, or a related field.
Strong communication and collaboration abilities to work effectively in a team environment.
Skills & Requirements
Cloud Data Engineering, AWS, Azure, Snowflake, dbt, ELT processes, Data-driven consulting, Cloud data architectures, Scalable data management, Data analysis, Requirements management, Data warehousing, Data lake, Data Vault 2.0, SQL, Master data management, Data quality, GCP, Strong communication, Collaboration.
Job Title: Senior Automation Engineer (API & Cloud Testing)
Job Type: Full-Time
Job Location: Bangalore, Pune
Work Mode: Hybrid
Experience: 8+ years (Minimum 5 years in Automation)
Notice Period: 0-30 days
About the Role:
We are looking for an experienced Senior Automation Engineer to join our team. The ideal candidate should have extensive expertise in API testing, Node.js, Cypress, Postman/Newman, and cloud-based platforms (AWS/Azure/GCP). The role involves automating workflows in ArrowSphere, optimizing test automation pipelines, and ensuring software quality in an Agile environment. The selected candidate will work closely with teams in France, requiring strong communication skills.
Key Responsibilities:
Automate ArrowSphere Workflows: Develop and implement automation strategies for ArrowSphere Public API workflows to enhance efficiency.
Support QA Team: Guide and assist QA engineers in improving automation strategies.
Optimize Test Automation Pipeline: Design and maintain a high-performance test automation framework.
Minimize Test Flakiness: Identify root causes of flaky tests and implement solutions to improve software reliability.
Ensure Software Quality: Actively contribute to maintaining the software’s high standards and cloud service innovation.
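On the "minimize test flakiness" point above: a common triage aid is a retry wrapper like the sketch below (shown in Python for brevity, though the stack described here is Node.js/Cypress). Retries only mask flakiness for reporting; the root cause still needs fixing:

```python
import time

def retry(fn, attempts=3, delay=0.0, exceptions=(AssertionError,)):
    """Re-run a flaky check up to `attempts` times before failing for real.
    Useful for quantifying flakiness (how many retries a test needs) while
    the underlying race or timing issue is being diagnosed."""
    last = None
    for _ in range(attempts):
        try:
            return fn()
        except exceptions as exc:
            last = exc
            time.sleep(delay)
    raise last
```

Cypress and Jest offer built-in retry settings that serve the same purpose; logging which tests needed retries is what points you at the flaky ones.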
Mandatory Skills:
API Testing: Strong knowledge of API testing methodologies.
Node.js: Experience in automation with Cypress, Postman, and Newman.
Cloud Platforms: Working knowledge of AWS, Azure, or GCP (certification is a plus).
Agile Methodologies: Hands-on experience working in an Agile environment.
Technical Communication: Ability to interact with international teams effectively.
Technical Skills:
Cypress: Expertise in front-end automation with Cypress, ensuring scalable and reliable test scripts.
Postman & Newman: Experience in API testing and test automation integration within CI/CD pipelines.
Jenkins: Ability to set up and maintain CI/CD pipelines for automation.
Programming: Proficiency in Node.js (PHP knowledge is a plus).
AWS Architecture: Understanding of AWS services for development and testing.
Git Version Control: Experience with Git workflows (branching, merging, pull requests).
Scripting & Automation: Knowledge of Bash/Python for scripting and automating tasks.
Problem-Solving: Strong debugging skills across front-end, back-end, and database.
Preferred Qualifications:
Cloud Certification (AWS, Azure, or GCP) is an added advantage.
Experience working with international teams, particularly in Europe.
Responsibilities
- Design and implement advanced solutions utilizing Large Language Models (LLMs).
- Demonstrate self-driven initiative by taking ownership and creating end-to-end solutions.
- Conduct research and stay informed about the latest developments in generative AI and LLMs.
- Develop and maintain code libraries, tools, and frameworks to support generative AI development.
- Participate in code reviews and contribute to maintaining high code quality standards.
- Engage in the entire software development lifecycle, from design and testing to deployment and maintenance.
- Collaborate closely with cross-functional teams to align messaging, contribute to roadmaps, and integrate software into different repositories for core system compatibility.
- Possess strong analytical and problem-solving skills.
- Demonstrate excellent communication skills and the ability to work effectively in a team environment.
Primary Skills
- Generative AI: Proficiency with SaaS LLMs, including LangChain, LlamaIndex, vector databases, and prompt engineering (CoT, ToT, ReAct, agents). Experience with Azure OpenAI, Google Vertex AI, and AWS Bedrock for text/audio/image/video modalities.
- Familiarity with open-source LLMs, including tools like TensorFlow/PyTorch and Hugging Face. Techniques such as quantization, LLM fine-tuning using PEFT, RLHF, data annotation workflows, and GPU utilization.
- Cloud: Hands-on experience with cloud platforms such as Azure, AWS, and GCP. Cloud certification is preferred.
- Application Development: Proficiency in Python, Docker, FastAPI/Django/Flask, and Git.
- Natural Language Processing (NLP): Hands-on experience in use case classification, topic modeling, Q&A and chatbots, search, Document AI, summarization, and content generation.
- Computer Vision and Audio: Hands-on experience in image classification, object detection, segmentation, image generation, audio, and video analysis.
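Of the techniques listed above, quantization is the easiest to see in miniature: symmetric int8 quantization maps a float tensor onto [-127, 127] with a single scale factor. A toy pure-Python sketch (real frameworks do this per-channel, with calibration data):

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: one scale per tensor, values in [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # avoid zero scale
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate floats; the error is the quantization noise."""
    return [v * scale for v in q]
```

The storage win is 4x versus float32; the accuracy cost is the rounding error visible in the round trip below.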

Position: .NET C# Developer
Job Category: Embedded HW_SW
Job Type: Full Time
Job Location: Pune
Experience: 5-7 years
Notice period: 0-30 days
Shift timing: General Shift
Work Mode: 5 Days work from EIC office
Education Required: Bachelor’s / Master’s / PhD: BE/B.Tech
Must have skills: .NET Core, C#, microservices
Good to have skills: RDBMS, cloud platforms like AWS and Azure
Mandatory Skills
5-7 years of experience in software development using C# and .NET Core
Hands-on experience in building Microservices with a focus on scalability and reliability.
Expertise in Docker for containerization and Kubernetes for orchestration and management of containerized applications.
Strong working knowledge of Cosmos DB (or similar NoSQL databases) and experience in designing distributed databases.
Familiarity with CI/CD pipelines, version control systems (like Git), and Agile development methodologies.
Proficiency in RESTful API design and development.
Experience with cloud platforms like Azure, AWS, or Google Cloud is a plus.
Excellent problem-solving skills and the ability to work independently and in a collaborative environment.
Strong communication skills, both verbal and written.
Key Responsibilities
Design, develop, and maintain applications using C#, .NET Core, and Microservices architecture.
Build, deploy, and manage containerized applications using Docker and Kubernetes.
Work with Cosmos DB for efficient database design, management, and querying in a cloud-native environment.
Collaborate with cross-functional teams to define application requirements and ensure timely delivery of features.
Write clean, scalable, and efficient code following best practices and coding standards.
Implement and integrate APIs and services with microservice architectures.
Troubleshoot, debug, and optimize applications for performance and scalability.
Participate in code reviews and contribute to improving coding standards and practices.
Stay up-to-date with the latest industry trends, technologies, and development practices.
Optional Skills
Experience with Azure Kubernetes Service (AKS) or Amazon Elastic Kubernetes Service (EKS)
Familiarity with Event-driven architecture, RabbitMQ, Kafka, or similar messaging systems.
Knowledge of DevOps practices and tools for continuous integration and deployment.
Experience with front-end technologies like Angular or React is a plus.

Position: Data Scientist
Job Category: Embedded HW_SW
Job Type: Full Time
Job Location: Pune
Experience: 3 - 5 years
Notice period: 0-30 days
Must have skills: Python, Linux-Ubuntu based OS, cloud-based platforms
Education Required: Bachelor’s / Master’s / PhD in Computer Science, Statistics, Mathematics, Data Science, or Engineering
Bachelor’s with 5 years of experience, or Master’s with 3 years
Mandatory Skills
- Bachelor’s or master’s in computer science, Statistics, Mathematics, Data Science, Engineering, or related field
- 3-5 years of experience as a data scientist, with a strong foundation in machine learning fundamentals (e.g., supervised and unsupervised learning, neural networks)
- Experience with Python programming language (including libraries such as NumPy, pandas, scikit-learn) is essential
- Deep hands-on experience building computer vision and anomaly detection systems, including algorithm development in fields such as image segmentation
- Some experience with open-source OCR models
- Proficiency in working with large datasets and experience with feature engineering techniques is a plus
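For the anomaly-detection requirement above, the usual baseline before any learned model is a z-score threshold; a minimal sketch using only the standard library:

```python
import statistics

def zscore_anomalies(values, threshold=3.0):
    """Flag indices more than `threshold` standard deviations from the mean.
    A simple baseline to beat before reaching for autoencoders or
    isolation forests; assumes roughly unimodal data."""
    mu = statistics.mean(values)
    sd = statistics.pstdev(values)
    if sd == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mu) / sd > threshold]
```

The same idea extends to image data by scoring per-pixel or per-patch reconstruction errors instead of raw values.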
Key Responsibilities
- Work closely with the AI team to help build complex algorithms that provide unique insights into our data using images.
- Use agile software development processes to make iterative improvements to our back-end systems.
- Stay up to date with the latest developments in machine learning and data science, exploring new techniques and tools to apply within Customer’s business context.
Optional Skills
- Experience working with cloud-based platforms (e.g., Azure, AWS, GCP)
- Knowledge of computer vision techniques and experience with libraries like OpenCV
- Excellent Communication skills, especially for explaining technical concepts to nontechnical business leaders.
- Ability to work on a dynamic, research-oriented team that has concurrent projects.
- Working knowledge of Git/version control.
- Expertise in PyTorch, TensorFlow, Keras.
- Excellent coding skills, especially in Python.
- Experience with Linux-Ubuntu based OS
Position: SDE-1 DevSecOps
Location: Pune, India
Experience Required: 0+ Years
We are looking for a DevSecOps engineer to contribute to product development, mentor team members, and devise creative solutions for customer needs. We value effective communication in person, in documentation, and in code. Ideal candidates thrive in small, collaborative teams, love making an impact, and take pride in their work with a product-focused, self-driven approach. If you're passionate about integrating security and deployment seamlessly into the development process, we want you on our team.
About FlytBase
FlytBase is a global leader in enterprise drone software automation. The FlytBase platform enables drone-in-a-box deployments across the globe and has the largest network of partners in 50+ countries.
The team comprises young engineers and designers from top-tier universities such as IIT-B, IIT-KGP, University of Maryland, Georgia Tech, COEP, SRM, KIIT and with deep expertise in drone technology, computer science, electronics, aerospace, and robotics.
The company is headquartered in Silicon Valley, California, USA, and has R&D offices in Pune, India. Widely recognized as a pioneer in the commercial drone ecosystem, FlytBase continues to win awards globally: FlytBase was the Global Grand Champion at the ‘NTT Data Open Innovation Contest’ held in Tokyo, Japan, and was the recipient of the ‘TiE50 Award’ at TiE Silicon Valley.
Role and Responsibilities:
- Participate in the creation and maintenance of CI/CD solutions and pipelines.
- Leverage Linux and shell scripting for automating security and system updates, and design secure architectures using AWS services (VPC, EC2, S3, IAM, EKS/Kubernetes) to enhance application deployment and management.
- Build and maintain secure Docker containers, manage orchestration using Kubernetes, and automate configuration management with tools like Ansible and Chef, ensuring compliance with security standards.
- Implement and manage infrastructure using Terraform, aligning with security and compliance requirements, and set up Dynatrace for advanced monitoring, alerting, and visualization of security metrics. Develop Terraform scripts to automate and optimize infrastructure provisioning and management tasks.
- Utilize Git for secure source code management and integrate continuous security practices into CI/CD pipelines, applying vulnerability scanning and automated security testing tools.
- Contribute to security assessments, including vulnerability and penetration testing, against frameworks such as NIST, CIS AWS Benchmarks, and NIS2.
- Implement and oversee compliance processes for SOC II, ISO27001, and GDPR.
- Stay updated on cybersecurity trends and best practices, including knowledge of SAST and DAST tools and the OWASP Top 10.
- Automate routine tasks and create tools to improve team efficiency and system robustness.
- Contribute to disaster recovery plans and ensure robust backup systems are in place.
- Develop and enforce security policies and respond effectively to security incidents.
- Manage incident response protocols, including on-call rotations and strategic planning.
- Conduct post-incident reviews to prevent recurrence and refine the system reliability framework.
- Implement Service Level Indicators (SLIs) and maintain Service Level Objectives (SLOs) and Service Level Agreements (SLAs) to ensure high standards of service delivery and reliability.
Best suited for candidates who: (Skills/Experience)
- Up to 4 years of experience in a related field, with a strong emphasis on learning and execution.
- Background in IT or computer science.
- Familiarity with CI/CD tools, cloud platforms (AWS, Azure, or GCP), and programming languages like Python, JavaScript, or Ruby.
- Solid understanding of network layers and TCP/IP protocols.
- In-depth understanding of operating systems, networking, and cloud services.
- Strong problem-solving skills with a 'hacker' mindset.
- Knowledge of security principles, threat modeling, risk assessment, and vulnerability management is a plus.
- Relevant certifications (e.g., CISSP, GWAPT, OSCP) are a plus.
Compensation:
This role comes with an annual CTC that is market competitive and depends on the quality of your work experience, degree of professionalism, culture fit, and alignment with FlytBase’s long-term business strategy.
Perks:
- Fast-paced Startup culture
- Hacker mode environment
- Enthusiastic and approachable team
- Professional autonomy
- Company-wide sense of purpose
- Flexible work hours
- Informal dress code
Job description
We are seeking a highly skilled and experienced IT Department Head with strong communication skills, a technical background, and leadership capabilities to manage our IT team. The ideal candidate will be responsible for overseeing the organization's IT infrastructure, ensuring the security and efficiency of our systems, and maintaining compliance with relevant industry standards. The role requires an in-depth understanding of cloud technologies, server management, network security, and managed IT services, along with strong problem-solving capabilities.
Key Responsibilities:
- Serve as a proactive, hands-on IT Manager overseeing and evolving our technology infrastructure
- Manage all aspects of our IT operations, from maintaining our current tech stack to strategizing and implementing future developments
- Ensure that our technology systems are modern, secure, and efficient, aligning IT initiatives with our business goals
- IT Strategy & Leadership: Develop and execute an IT strategy that supports the company's objectives, ensuring scalability and security
- Infrastructure Management: Oversee the maintenance and optimization of our Azure Cloud infrastructure, AWS Cloud, and Cisco Meraki networking systems
- Software & Systems Administration: Manage Microsoft 365 administration
- Cybersecurity: Enhance our cybersecurity posture using tools like SentinelOne, Sophos Firewall, and others
- Project Management: Lead IT projects, including system upgrades and optimizations, ensuring timely delivery and adherence to budgets
- Team Leadership: Mentor and guide a small IT team, fostering a culture of continuous improvement and professional development
- Vendor Management: Collaborate with external vendors and service providers to ensure optimal performance and cost-effectiveness
- Technical Support: Provide high-level technical support and troubleshooting for IT-related issues across the organization and for clients in the USA; other duties as needed
- IT Audit & Compliance: Conduct regular audits to ensure IT processes comply with security regulations and best practices (GDPR, SOC 2, ISO 27001), ensuring readiness for internal and external audits
- Documentation: Maintain thorough and accurate documentation for all systems, processes, and procedures to ensure clarity and consistency in IT operations
Preferred Skills:
- Experience with SOC 2, ISO 27001, or similar security frameworks.
- Experience with advanced firewall configurations and network architecture.
Job Type: Full-time
Benefits:
- Paid sick time
Shift:
- Day shift
Work Days:
- Monday to Friday
Experience:
- IT management: 2 years (Required)
Work Location: In person
Requirements:
• Bachelor’s degree in computer science, Engineering, or a related field.
• Strong understanding of distributed data processing platforms like Databricks and BigQuery.
• Proficiency in Python, PySpark, and SQL programming languages.
• Experience with performance optimization for large datasets.
• Strong debugging and problem-solving skills.
• Fundamental knowledge of cloud services, preferably Azure or GCP.
• Excellent communication and teamwork skills.
Nice to Have:
• Experience in data migration projects.
• Understanding of technologies like Delta Lake/warehouse.
Role & Responsibilities
- The DevOps Engineer will work on the implementation and management of DevOps tools and technologies.
- Create and support advanced pipelines using Gitlab.
- Create and support advanced container and serverless environments.
- Deploy cloud infrastructure using Terraform and CloudFormation templates.
- Implement deployments to OpenShift Container Platform, Amazon ECS and EKS
- Troubleshoot containerized builds and deployments
- Implement processes and automations for migrating between OpenShift, AKS and EKS
- Implement CI/CD automations.
Required Skillsets
- 3-5 years of cloud-based architecture software engineering experience.
- Deep understanding of Kubernetes and its architecture.
- Mastery of cloud security engineering tools, techniques, and procedures.
- Experience with AWS services such as Amazon S3, EKS, ECS, DynamoDB, AWS Lambda, API Gateway, etc.
- Experience with designing and supporting infrastructure via Infrastructure-as-Code in AWS, via CDK, CloudFormation Templates, Terraform or other toolset.
- Experienced with tools like Jenkins, GitHub, Puppet, or similar.
- Experienced with monitoring tools like CloudWatch, New Relic, Grafana, Splunk, etc.
- Excellence in verbal and written communication, and in working collaboratively with a variety of colleagues and clients in a remote development environment.
- Proven track record in cloud computing systems and enterprise architecture and security

TVARIT GmbH develops and delivers artificial intelligence (AI) solutions for the manufacturing, automotive, and process industries. With its software products, TVARIT enables its customers to make intelligent, well-founded decisions, e.g., in predictive maintenance, OEE improvement, and predictive quality. We have renowned reference customers, competent technology, a strong research team from renowned universities, and a prestigious AI award (e.g., EU Horizon 2020), which makes TVARIT one of the most innovative AI companies in Germany and Europe.
We are looking for a self-motivated person with a positive "can-do" attitude and excellent oral and written communication skills in English.
We are seeking a skilled and motivated Data Engineer from the manufacturing industry with over two years of experience to join our team. As a Data Engineer, you will be responsible for designing, building, and maintaining the infrastructure required for the collection, storage, processing, and analysis of large and complex data sets. The ideal candidate will have a strong foundation in ETL pipelines and Python, with additional experience in Azure and Terraform being a plus. This role requires a proactive individual who can contribute to our data infrastructure and support our analytics and data science initiatives.
Skills Required
- Experience in the manufacturing industry (metal industry is a plus)
- 2+ years of experience as a Data Engineer
- Experience in data cleaning & structuring and data manipulation
- ETL Pipelines: Proven experience in designing, building, and maintaining ETL pipelines.
- Python: Strong proficiency in Python programming for data manipulation, transformation, and automation.
- Experience in SQL and data structures
- Knowledge of big data technologies such as Apache Spark, Flink, and Hadoop, and of NoSQL databases.
- Knowledge of cloud technologies (at least one) such as AWS, Azure, and Google Cloud Platform.
- Proficient in data management and data governance
- Strong analytical and problem-solving skills.
- Excellent communication and teamwork abilities.
Nice To Have
- Azure: Experience with Azure data services (e.g., Azure Data Factory, Azure Databricks, Azure SQL Database).
- Terraform: Knowledge of Terraform for infrastructure as code (IaC) to manage cloud resources.
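The ETL pipeline work this role centers on can be sketched end to end with nothing but the standard library. The sensor CSV schema below is invented for illustration and stands in for a real manufacturing data source:

```python
import csv
import io
import sqlite3

# Hypothetical raw sensor export (in practice: a file, database, or API).
RAW = """machine_id,temp_c,ts
M1,71.5,2024-01-01T00:00
M1,,2024-01-01T01:00
M2,68.0,2024-01-01T00:00
"""

def extract(text):
    """Extract: parse the raw CSV into row dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: drop rows with missing readings and cast types."""
    return [(r["machine_id"], float(r["temp_c"]), r["ts"])
            for r in rows if r["temp_c"]]

def load(rows, conn):
    """Load: write cleaned rows to the warehouse table, return row count."""
    conn.execute("CREATE TABLE IF NOT EXISTS readings"
                 " (machine_id TEXT, temp_c REAL, ts TEXT)")
    conn.executemany("INSERT INTO readings VALUES (?, ?, ?)", rows)
    return conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0]

conn = sqlite3.connect(":memory:")
print(load(transform(extract(RAW)), conn))  # 2 (one row dropped in cleaning)
```

Production pipelines would swap sqlite3 for a lakehouse target and add scheduling and monitoring, but the extract/transform/load separation is the same.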

TVARIT GmbH develops and delivers artificial intelligence (AI) solutions for the manufacturing, automotive, and process industries. With its software products, TVARIT enables its customers to make intelligent, well-founded decisions, e.g., in predictive maintenance, OEE improvement, and predictive quality. We have renowned reference customers, competent technology, a strong research team from renowned universities, and a prestigious AI award (e.g., EU Horizon 2020), which makes TVARIT one of the most innovative AI companies in Germany and Europe.
We are looking for a self-motivated person with a positive "can-do" attitude and excellent oral and written communication skills in English.
We are seeking a skilled and motivated Senior Data Engineer from the manufacturing industry with over four years of experience to join our team. The Senior Data Engineer will oversee the department’s data infrastructure, including developing a data model, integrating large amounts of data from different systems, building and enhancing a data lakehouse and the subsequent analytics environment, and writing scripts to facilitate data analysis. The ideal candidate will have a strong foundation in ETL pipelines and Python, with additional experience in Azure and Terraform being a plus. This role requires a proactive individual who can contribute to our data infrastructure and support our analytics and data science initiatives.
Skills Required:
- Experience in the manufacturing industry (metal industry is a plus)
- 4+ years of experience as a Data Engineer
- Experience in data cleaning & structuring and data manipulation
- Architect and optimize complex data pipelines, leading the design and implementation of scalable data infrastructure, and ensuring data quality and reliability at scale
- ETL Pipelines: Proven experience in designing, building, and maintaining ETL pipelines.
- Python: Strong proficiency in Python programming for data manipulation, transformation, and automation.
- Experience in SQL and data structures
- Knowledge of big data technologies such as Apache Spark, Flink, and Hadoop, and of NoSQL databases.
- Knowledge of cloud technologies (at least one) such as AWS, Azure, and Google Cloud Platform.
- Proficient in data management and data governance
- Strong analytical skills, with a proven ability to extract actionable insights from raw data to help improve the business.
- Strong analytical and problem-solving skills.
- Excellent communication and teamwork abilities.
Nice To Have:
- Azure: Experience with Azure data services (e.g., Azure Data Factory, Azure Databricks, Azure SQL Database).
- Terraform: Knowledge of Terraform for infrastructure as code (IaC) to manage cloud resources.
- Bachelor’s degree in computer science, Information Technology, Engineering, or a related field from top-tier Indian Institutes of Information Technology (IIITs).
Benefits and Perks:
- A culture that fosters innovation, creativity, continuous learning, and resilience
- Progressive leave policy promoting work-life balance
- Mentorship opportunities with highly qualified internal resources and industry-driven programs
- Multicultural peer groups and supportive workplace policies
- Annual workcation program allowing you to work from various scenic locations
- Experience the unique environment of a dynamic start-up
Why should you join TVARIT?
Working at TVARIT, a deep-tech German IT startup, offers a unique blend of innovation, collaboration, and growth opportunities. We seek individuals eager to adapt and thrive in a rapidly evolving environment.
If this opportunity excites you and aligns with your career aspirations, we encourage you to apply today!
· IMMEDIATE JOINER
· 5+ years of professional experience in Confluent Kafka administration
· Demonstrated experience in design and development.
· Must have proven knowledge and practical application of Confluent Kafka (Producers / Consumers / Kafka Connectors / Kafka Streams / ksqlDB / Schema Registry)
· Experience in performance optimization of producers and consumers.
· Good experience debugging issues related to offsets, consumer lag, and partitions.
· Experience with administrative tasks on Confluent Kafka.
· Kafka admin experience including, but not limited to: setting up new Kafka clusters, creating topics, granting permissions, resetting offsets, purging data, setting up connectors and replicator tasks, troubleshooting issues, monitoring Kafka cluster health and performance, and backup and recovery.
· Experience implementing security measures for Kafka clusters, including access controls and encryption, to protect sensitive data.
· Experience with Kafka cluster install/upgrade techniques.
· Good experience writing unit tests using JUnit and Mockito
· Experience working on client-facing projects.
· Exposure to any cloud environment such as Azure is an added advantage.
· Experience in developing or working on REST microservices
· Experience in Java and Spring Boot is a plus
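The consumer lag mentioned in the requirements is simply the gap between a partition's log-end offset and the consumer group's committed offset. A stdlib-only sketch of that arithmetic (the topic name and offset values are made up; a real admin would pull them from Confluent tooling or the Admin API):

```python
# Hypothetical per-(topic, partition) offsets as an admin tool might report them.
end_offsets = {("orders", 0): 1500, ("orders", 1): 900, ("orders", 2): 400}
committed   = {("orders", 0): 1480, ("orders", 1): 900, ("orders", 2): 120}

def consumer_lag(end_offsets, committed):
    """Lag per partition: log-end offset minus committed offset."""
    return {tp: end_offsets[tp] - committed.get(tp, 0) for tp in end_offsets}

lags = consumer_lag(end_offsets, committed)
print(lags)                # per-partition lag
print(sum(lags.values()))  # total group lag: 300
```

A growing total lag signals that consumers are falling behind producers, which is what the debugging and monitoring bullets above are about.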
About Kiru:
Kiru is a forward-thinking payments startup on a mission to revolutionise the digital payments landscape in Africa and beyond. Our innovative solutions will reshape how people transact, making payments safer, faster, and more accessible. Join us on our journey to redefine the future of payments.
Position Overview:
We are searching for a highly skilled and motivated DevOps Engineer to join our dynamic team in Pune, India. As a DevOps Engineer at Kiru, you will play a critical role in ensuring our payment infrastructure's reliability, scalability, and security.
Key Responsibilities:
- Utilize your expertise in technology infrastructure configuration to manage and automate infrastructure effectively.
- Collaborate with cross-functional teams, including Software Developers and technology management, to design and implement robust and efficient DevOps solutions.
- Configure and maintain a secure backend environment focusing on network isolation and VPN access.
- Implement and manage monitoring solutions like ZipKin, Jaeger, New Relic, or DataDog and visualisation and alerting solutions like Prometheus and Grafana.
- Work closely with developers to instrument code for visualisation and alerts, ensuring system performance and stability.
- Contribute to the continuous improvement of development and deployment pipelines.
- Collaborate on the selection and implementation of appropriate DevOps tools and technologies.
- Troubleshoot and resolve infrastructure and deployment issues promptly to minimize downtime.
- Stay up-to-date with emerging DevOps trends and best practices.
- Create and maintain comprehensive documentation related to DevOps processes and configurations.
Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience).
- Proven experience as a DevOps Engineer or in a similar role.
- Experience configuring infrastructure on Microsoft Azure
- Experience with Kubernetes as a container orchestration technology
- Experience with Terraform and Azure ARM or Bicep templates for infrastructure provisioning and management.
- Experience configuring and maintaining secure backend environments, including network isolation and VPN access.
- Proficiency in setting up and managing monitoring and visualization tools such as ZipKin, Jaeger, New Relic, DataDog, Prometheus, and Grafana.
- Ability to collaborate effectively with developers to instrument code for visualization and alerts.
- Strong problem-solving and troubleshooting skills.
- Excellent communication and teamwork skills.
- A proactive and self-motivated approach to work.
Desired Skills:
- Experience with Azure Kubernetes Services and managing identities across Azure services.
- Previous experience in a financial or payment systems environment.
About Kiru:
At Kiru, we believe that success is achieved through collaboration. We recognise that every team member has a vital role to play, and it's the partnerships we build within our organisation that drive our customers' success and our growth as a business.
We are more than just a team; we are a close-knit partnership. By bringing together diverse talents and fostering powerful collaborations, we innovate, share knowledge, and continually learn from one another. We take pride in our daily achievements but never stop challenging ourselves and supporting each other. Together, we reach new heights and envision a brighter future.
Regardless of your career journey, we provide the guidance and resources you need to thrive. You will have everything required to excel through training programs, mentorship, and ongoing support. At Kiru, your success is our success, and that success matters because we are the essential partners for the world's most critical businesses. These companies manufacture, transport, and supply the world's essential goods.
Equal Opportunities and Accommodations Statement:
Kiru is committed to fostering a workplace and global community where inclusion is celebrated and where you can bring your authentic self, because that's who we're interested in. If you are interested in this role but don't meet every qualification in the job description, don't hesitate to apply. We are an equal opportunity employer.


Title/Role: Python Django Consultant
Experience: 8+ Years
Work Location: Indore / Pune / Chennai / Vadodara
Notice period: Immediate to 15 Days Max
Key Skills: Python, Django, Crispy Forms, Authentication, Bootstrap, jQuery, Server Side Rendered, SQL, Azure, React, Django DevOps
Job Description:
- Should have experience building forms in Django; knowledge of Crispy Forms is a plus.
- Must have leadership experience
- Should have a good understanding of function-based and class-based views.
- Should have a good understanding of authentication (JWT and token authentication)
- Django – at least one senior with deep Django experience; the other 1 or 2 can be mid-to-senior Python or Django
- Frontend – Must have React/Angular and CSS experience
- Database – Ideally SQL, but the most senior member should have solid DB experience
- Cloud – Azure preferred but agnostic
- Consulting / client project background ideal.
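The JWT authentication requirement above concerns tokens whose payload is just base64url-encoded JSON. A stdlib sketch of inspecting (not verifying!) a token's claims — the token and claims here are fabricated for illustration:

```python
import base64
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWT segments are encoded."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Fabricated, unsigned demo token. Real tokens carry an HMAC/RSA signature,
# which libraries such as Django REST framework's JWT packages verify
# before trusting any claim.
header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
payload = b64url(json.dumps({"sub": "user42", "exp": 1999999999}).encode())
token = f"{header}.{payload}.signature-goes-here"

def peek_claims(token: str) -> dict:
    """Decode the payload segment of a JWT WITHOUT verifying the signature."""
    seg = token.split(".")[1]
    seg += "=" * (-len(seg) % 4)  # restore the stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(seg))

print(peek_claims(token)["sub"])  # user42
```

Decoding like this is useful for debugging; authorization decisions must always go through signature verification first.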
Django Stack:
- Django
- Server Side Rendered HTML
- Bootstrap
- jQuery
- Azure SQL
- Azure Active Directory
- Server-side rendered HTML/jQuery is older tech, but it is what we are OK with for internal tools. This is a good combination of a late-adopter agile stack integrated within an enterprise. Potentially we can push them to React for some discrete projects or pages that need more dynamism.
Django Devops:
- Should have expertise with deploying and managing Django in Azure.
- Django deployment to Azure via Docker.
- Django connection to Azure SQL.
- Django auth integration with Active Directory.
- Terraform scripts to make this setup seamless.
- Easy, proven deployment/setup to AWS and GCP.
- Load balancing, more advanced services, task queues, etc.

Golang Developer
Location: Chennai/ Hyderabad/Pune/Noida/Bangalore
Experience: 4+ years
Notice Period: Immediate/ 15 days
Job Description:
- Must have at least 3 years of experience working with Golang.
- Strong Cloud experience is required for day-to-day work.
- Experience with the Go programming language is necessary.
- Good communication skills are a plus.
- Skills: AWS, GCP, Azure, Golang


Our client is a rapid growth stage Edtech start-up focused on solving the teacher shortage crisis in the US education system using technology by bringing teachers to the classrooms and giving them access to the right tools to teach students in a highly engaging manner.
They have a development center in Mumbai and they are looking to scale up the teams in Pune and Bangalore as well.
What does the role entail:
● Design, develop, and maintain software applications using .NET Core, C#
● Write clean, maintainable, and efficient code
● Collaborate with cross-functional teams to analyze requirements, design solutions, and implement new features
● Develop and implement unit tests and integration tests to ensure software quality
● Troubleshoot and debug applications
● Participate in code reviews and provide constructive feedback to peers
Required Skills:
● B.E/B.Tech in computer science
● 1 - 3 years of experience
● Hands-on experience following skills
MS Technologies: .NET Core 7+, C#
Backend: MSSQL / MySQL / Postgres
● Preferred understanding of Cloud: Azure/AWS.
● Strong understanding of object-oriented programming, Design principles, data structures, and algorithms
● Worked in an Agile software development environment


Our client is a rapid growth stage Edtech start-up focused on solving the teacher shortage crisis in the US education system using technology by bringing teachers to the classrooms and giving them access to the right tools to teach students in a highly engaging manner.
They have a development center in Mumbai and they are looking to scale up the teams in Pune and Bangalore as well.
What does the role entail:
● Lead User Stories and guide senior/software engineers in the development process
● Understand architecture principles, design patterns and implement them from architectural artifacts.
● Provide effort estimation of assigned work and be able to complete as per the estimations and timelines
● Write clean, maintainable, and efficient code including unit tests and integration tests to ensure software quality
● Collaborate with cross-functional teams to analyze requirements, design solutions, and implement new features
● Participate in design and code reviews
● Ability to resolve performance issues
● Mentor junior software engineers and help them grow their technical skills
Must Have Skills:
● B.E/B.Tech in computer science
● 8-11 years of experience
● Hands-on experience following skills
MS Technologies: .NET Core 5+, C#
Backend: MSSQL / MySQL / Postgres
● Good understanding of Cloud technologies like Azure/AWS.
● Good understanding of Design Principles, Design patterns and Microservices Architecture.
● Excellent problem-solving, critical thinking, and communication skills
● Worked in an Agile software development environment
● Must understand full stack development frameworks, including knowledge of building scalable APIs, interfaces, software components, schema design, availability, and latency, preferably in a cloud environment.
● Worked in a fast-paced environment preferably in a tech driven startup
● Understanding of different types of architectures.


Our client is a rapid growth stage Edtech start-up focused on solving the teacher shortage crisis in the US education system using technology by bringing teachers to the classrooms and giving them access to the right tools to teach students in a highly engaging manner.
They have a development center in Mumbai and they are looking to scale up the teams in Pune and Bangalore as well.
What does the role entail:
● Design, develop, and maintain software applications using .NET Core, C#
● Write clean, maintainable, and efficient code
● Collaborate with cross-functional teams to analyze requirements, design solutions, and implement new features
● Develop and implement unit tests and integration tests to ensure software quality
● Participate in code reviews and provide constructive feedback to peers
● Lead requirements/tasks and guide/mentor junior software engineers in the development process and help them grow their technical skills
Required Skills:
● B.E/B.Tech in computer science
● 4 - 7 years of experience
● Hands-on experience following skills
MS Technologies: .NET Core 5+, C#
Backend: MSSQL / MySQL / Postgres
● Good understanding of Cloud technologies like Azure/AWS.
● Strong understanding of Design Principles, Design patterns and Microservices Architecture.
● Excellent problem-solving, critical thinking, and communication skills
● Must understand full stack development frameworks including knowledge of building scalable APIs, interfaces, software components, schema design, availability, and latency preferably in a cloud environment.
● Worked in a fast-paced environment, preferably a product startup or startup-like culture, in an Agile software development environment
Position : Senior Java Backend Developer
Job Location: Navi Mumbai / Bangalore / Hyderabad / Pune
Job Description :
- At least 5 years of professional experience in developing backend applications using Java
- Proficiency in using Spring Boot, Hibernate, RESTful APIs, microservices and other modern web technologies
- Experience in working with relational and non-relational databases such as MySQL, MongoDB, Redis etc.
- Experience on Azure Cloud.
- Experience in using DevOps tools such as Docker, Jenkins etc.
- Knowledge of GraphQL and how to use it with Java
- Knowledge of best practices and principles of software engineering such as SOLID, design patterns, code quality, testing etc.
- Familiarity with agile methodologies such as Scrum or Kanban
- Ability to work independently and as part of a team
- Excellent communication and problem-solving skills
About Us - Celebal Technologies is a premier software services company in the fields of Data Science, Big Data, and Enterprise Cloud. Celebal Technologies helps you discover your competitive advantage by employing intelligent data solutions built on cutting-edge technology that can bring massive value to your organization. The core offerings are around "Data to Intelligence", wherein we leverage data to extract intelligence and patterns, thereby facilitating smarter and quicker decision-making for clients. Celebal Technologies understands the core value of modern analytics for the enterprise and helps businesses improve their business intelligence and become more data-driven in architecting solutions.
Key Responsibilities
• As a part of the DevOps team, you will be responsible for configuration, optimization, documentation, and support of the CI/CD components.
• Creating and managing build and release pipelines with Azure DevOps and Jenkins.
• Assist in planning and reviewing application architecture and design to promote an efficient deployment process.
• Troubleshoot server performance issues & handle the continuous integration system.
• Automate infrastructure provisioning using ARM Templates and Terraform.
• Monitor and Support deployment, Cloud-based and On-premises Infrastructure.
• Diagnose and develop root cause solutions for failures and performance issues in the production environment.
• Deploy and manage Infrastructure for production applications
• Configure security best practices for application and infrastructure
Essential Requirements
• Good hands-on experience with cloud platforms like Azure, AWS & GCP. (Preferably Azure)
• Strong knowledge of CI/CD principles.
• Strong work experience with CI/CD implementation tools like Azure DevOps, TeamCity, Octopus Deploy, AWS CodeDeploy, and Jenkins.
• Experience writing automation scripts with PowerShell, Bash, Python, etc.
• GitHub, JIRA, Confluence, and Continuous Integration (CI) system.
• Understanding of secure DevOps practices
Good to Have -
• Knowledge of scripting languages such as PowerShell, Bash
• Experience with project management and workflow tools such as Agile, Jira, Scrum/Kanban, etc.
• Experience with Build technologies and cloud services. (Jenkins, TeamCity, Azure DevOps, Bamboo, AWS Code Deploy)
• Strong communication skills and ability to explain protocol and processes with team and management.
• Must be able to handle multiple tasks and adapt to a constantly changing environment.
• Must have a good understanding of SDLC.
• Knowledge of Linux, Windows server, Monitoring tools, and Shell scripting.
• Self-motivated; demonstrating the ability to achieve in technologies with minimal supervision.
• Organized, flexible, and analytical ability to solve problems creatively.
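The automation-scripting requirement above (PowerShell, Bash, Python) often comes down to small resilient helpers. A hedged Python sketch of a retry-with-backoff wrapper such a deployment script might use — `flaky_deploy_step` is a stand-in, not a real Azure or AWS call:

```python
import time

def retry(fn, attempts=4, base_delay=0.01):
    """Call fn, retrying on exception with exponential backoff."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise  # out of attempts: surface the failure
            time.sleep(base_delay * 2 ** i)  # 0.01s, 0.02s, 0.04s, ...

# Stand-in for a deployment step that fails twice before succeeding,
# e.g. a health check racing a slow container start.
calls = {"n": 0}
def flaky_deploy_step():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "deployed"

print(retry(flaky_deploy_step))  # deployed (after 2 retries)
```

The same pattern translates directly to PowerShell or Bash loops; backoff plus a bounded attempt count keeps pipelines from hammering a struggling service.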
👋🏼We're Nagarro.
We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale across all devices and digital mediums, and our people exist everywhere in the world (19000+ experts across 33 countries, to be exact). Our work culture is dynamic and non-hierarchical. We're looking for great new colleagues. That's where you come in.
REQUIREMENTS:
- 16+ years of experience in designing and developing technology solutions using a variety of platforms, languages, and tools, with at least 5+ years of software architecture experience
- Strong background in DevOps, Azure, Java, .NET, and all mobile, web, and backend solutions
- Ability to conceive breakthroughs by leveraging new-age technologies and a deep understanding of the client’s business and the industry they operate in
- Ability to lead by example by picking up coding of complex functionalities when required
- Strong understanding of technology and the ability to deep dive into a technology problem
- Ability to influence key client stakeholders on their technology and operations strategy
- Vast experience in owning delivery of complex technology solutions for global clients
- Ability to multitask and own multiple technology tracks simultaneously in a globally distributed delivery setup
- Experience in creating cutting-edge technology solutions by collaborating with other world-class technologists
- Balanced approach that aligns technology-based solutions with customer needs
- Visible thought leadership through technology blogs, whitepapers, presentations etc.
- Fluent verbal and written language skills and ability to convey a message in a simple and structured manner, customized to the audience and to the mode of communication
RESPONSIBILITIES:
- Owning the technology health of the project / account on all key metrics
- Ensuring projects / accounts meet technical standards of governance, technology standards and best practices
- Owning the long term as well as the short-term technology strategy of your project / account
- Identifying opportunities in the current engagement to cross-sell or up-sell Nagarro’s offerings
- Conceptualizing and owning the technical architecture and design of the projects you are influencing
- Harnessing your consulting skills in a culture that promotes opportunities to provide thought leadership and breakthrough solutions for our clients.
- Running workshops internally and with customers on technology and business topics to create new solution areas and use cases
- Communicating and driving adoption of organizational technology initiatives in your account
- Mentoring and managing team members, by giving constant on-the-job feedback, and by providing guidance
- If you are aligned with a Center of Excellence (CoE) or practice –
- Defining the vision for the practice/CoE and creating the plan/budget for the practice

About Apexon:
Apexon is a digital-first technology services firm specializing in accelerating business transformation and delivering human-centric digital experiences. For over 17 years, Apexon has been meeting customers wherever they are in the digital lifecycle and helping them outperform their competition through speed and innovation. Our reputation is built on a comprehensive suite of engineering services, a dedication to solving our clients’ toughest technology problems, and a commitment to continuous improvement. We focus on three broad areas of digital services: User Experience (UI/UX, Commerce); Engineering (QE/Automation, Cloud, Product/Platform); and Data (Foundation, Analytics, and AI/ML), and have deep expertise in BFSI, healthcare, and life sciences.
Apexon is backed by Goldman Sachs Asset Management and Everstone Capital.
To know more about us, please visit: https://www.apexon.com/
Responsibilities:
- We are looking for a C# Automation Engineer with 4-6 years of experience to join our engineering team and help us develop and maintain various software/utility products.
- Good object-oriented programming concepts and practical knowledge.
- Strong programming skills in C# are required.
- Good knowledge of C# Automation is preferred.
- Good to have experience with the Robot framework.
- Must have knowledge of API (REST APIs), and database (SQL) with the ability to write efficient queries.
- Good to have knowledge of Azure cloud.
- Take end-to-end ownership of test automation development, execution and delivery.
Good to have:
- Experience in tools like SharePoint, Azure DevOps.
Other skills:
- Strong analytical & logical thinking skills. Ability to think and act rationally when faced with challenges.
- Should be well versed with Data Structures & algorithms
- Understanding of software development lifecycle
- Excellent analytical and problem-solving skills.
- Ability to work independently as a self-starter, and within a team environment.
- Good Communication skills- Written and Verbal
- Recommend a migration and consolidation strategy for DevOps tools
- Design and implement an Agile work management approach
- Make a quality strategy
- Design a secure development process
- Create a tool integration strategy
• S/he possesses wide exposure to the complete lifecycle of data, from creation to consumption
• S/he has in the past built repeatable tools/data models to solve specific business problems
• S/he should have hands-on experience of having worked on projects (either as a consultant or within a company) that needed them to
o Provide consultation to senior client personnel
o Implement and enhance data warehouses or data lakes.
o Work with business teams, or be part of the team that implemented process re-engineering driven by data analytics/insights
• Should have deep appreciation of how data can be used in decision-making
• Should have a perspective on newer ways of solving business problems, e.g., external data, innovative techniques, newer technologies
• S/he must have a solution-creation mindset.
Ability to design and enhance scalable data platforms to address the business need
• Working experience with data engineering tools on one or more cloud platforms – Snowflake, AWS/Azure/GCP
• Engage with technology teams from Tredence and Clients to create last mile connectivity of the solutions
o Should have experience of working with technology teams
• Demonstrated ability in thought leadership – Articles/White Papers/Interviews
Mandatory Skills: Program Management, Data Warehouse, Data Lake, Analytics, Cloud Platform
- Hands-on knowledge on various CI-CD tools (Jenkins/TeamCity, Artifactory, UCD, Bitbucket/Github, SonarQube) including setting up of build-deployment automated pipelines.
- Very good knowledge of scripting tools and languages such as Shell, Perl, or Python and YAML/Groovy, and of build tools such as Maven/Gradle.
- Hands-on knowledge in containerization and orchestration tools such as Docker, OpenShift and Kubernetes.
- Good knowledge of configuration management tools such as Ansible and Puppet/Chef, and experience setting up monitoring tools (Splunk/Geneos/New Relic/ELK).
- Expertise in job schedulers/workload automation tools such as Control-M or AutoSys is good to have.
- Hands-on knowledge on Cloud technology (preferably GCP) including various computing services and infrastructure setup using Terraform.
- Should have basic understanding on networking, certificate management, Identity and Access Management and Information security/encryption concepts.
- Should support day-to-day tasks related to platform and environment upkeep, such as upgrades, patching, migration, and system/interface integration.
- Should have experience working in an Agile-based SDLC delivery model and be able to multi-task and support multiple systems/apps.
- Big data and Hadoop ecosystem knowledge is good to have but not mandatory.
- Should have worked on standard release, change and incident management tools such as ServiceNow/Remedy or similar
4 – 6 years of application development experience with design, development, implementation, and support, including the following:
o C#
o JavaScript
o HTML
o SQL
o Messaging/RabbitMQ
o Asynchronous communication patterns
Experience with Visual Studio and Git
A working understanding of build and release automation, preferably with Azure DevOps
Excellent understanding of object-oriented concepts and .Net framework
Experience in creating reusable libraries in C#
Ability to troubleshoot and isolate/solve complex bugs, connectivity issues, or OS-related issues
Ability to write complex SQL queries and stored procedures in Oracle and/or MS SQL
Proven ability to use design patterns to accomplish scalable architecture
Understanding of event-driven architecture
Experience with message brokers such as RabbitMQ
Experience in the development of REST APIs
Understanding of basic steps of an Agile SDLC
Excellent communication (both written and verbal) and interpersonal skills
Demonstrated accountability and ownership of assigned tasks
Demonstrated leadership and ability to work as a leader on large and complex tasks
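The asynchronous-communication and message-broker items above follow a producer/consumer pattern. A minimal stdlib sketch of that decoupling (Python here for brevity, though the role itself is C#/RabbitMQ; the queue contents are invented):

```python
import queue
import threading

# In-memory stand-in for a broker queue such as a RabbitMQ queue.
orders = queue.Queue()
processed = []

def consumer():
    while True:
        msg = orders.get()   # blocks until a message arrives
        if msg is None:      # sentinel: shut down cleanly
            break
        processed.append(msg.upper())  # "handle" the message
        orders.task_done()

t = threading.Thread(target=consumer)
t.start()
for msg in ["order-1", "order-2", "order-3"]:
    orders.put(msg)          # producer publishes without waiting for handling
orders.put(None)
t.join()
print(processed)  # ['ORDER-1', 'ORDER-2', 'ORDER-3']
```

With a real broker, the queue lives outside the process, so producers and consumers can scale and fail independently; the blocking-get/acknowledge shape stays the same.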

- 4+ years of experience with strong fundamentals in Windchill Customization & Configuration, Reporting Framework Customization, Workflow Customization, and customer handling
- Strong customization background around Form Processors, Validators, Data Utilities, Form Controllers, etc.
- Strong programming skills in Java/J2EE technologies – JavaScript, GWT, jQuery, XML, JSPs, SQL, etc.
- Deep Knowledge in Windchill architecture
- Experience in at least one full-lifecycle PLM implementation with Windchill.
- Should have strong coding skills in Windchill development and customization, ThingWorx Navigate development (mandatory), ThingWorx architecture configuration, Mashup creation, and ThingWorx and Windchill upgrades
- Should have build and configuration management experience (mandatory) – HPQC / JIRA / Azure / SVN / GitHub / Ant
- Knowledge & Experience in Build and Release process
- Having worked on custom upgrade will be a plus.
- Understanding of the application development environment, database, data management, and infrastructure capabilities and constraints. Understanding of database administration, database design, and performance tuning.
- Follow Quality processes for tasks with appropriate reviews. Participate in sharing knowledge within the team.
Skill: Spark and Scala, along with Azure
Location: Pan India
Looking for someone with Big Data experience along with Azure

Consulting & implementation services in the areas of the Oil & Gas, Mining, and Manufacturing industries
Job Responsibilities:
- Technically sound in .NET technology, with good working knowledge of and experience in Web API and SQL Server
- Should be able to carry out requirement analysis, design, coding, and unit testing, and support fixing defects reported during QA, UAT, and go-live phases
- Able to work alone or as part of a team with minimal or no supervision from delivery leads
- Good experience required in the Azure integration stack: Logic Apps, Azure Functions, APIM, and Application Insights
Must-have skills
- Strong Web API development using ASP.NET Core, Logic Apps, Azure Functions, APIM
- Azure Functions
- Azure Logic Apps
- Azure APIM
- Azure Service Bus
Desirable Skills
- Azure Event Grid/Hub
- Azure Key Vault
- Azure SQL – knowledge of SQL queries
- Configure, optimize, document, and support the infrastructure components of software products (which are hosted in colocated facilities and cloud services such as AWS)
- Design and build tools and frameworks that support deployment and management of platforms
- Design, build, and deliver cloud computing solutions, hosted services, and underlying software infrastructures
- Build core functionality of our cloud-based platform product, deliver secure, reliable services and construct third party integrations
- Assist in coaching application developers on proper DevOps techniques for building scalable applications in the microservices paradigm
- Foster collaboration with software product development and architecture teams to ensure releases are delivered with repeatable and auditable processes
- Support and troubleshoot scalability, high availability, performance, monitoring, backup, and restores of different environments
- Work independently across multiple platforms and applications to understand dependencies
- Evaluate new tools, technologies, and processes to improve speed, efficiency, and scalability of continuous integration environments
- Design and architect solutions for existing client-facing applications as they are moved into cloud environments such as AWS
- Competencies
- Full understanding of scripting and automated process management in languages such as Shell, Ruby, and/or Python
- Working knowledge of SCM tools such as Git, GitHub, Bitbucket, etc.
- Working knowledge of Amazon Web Services and related APIs
- Ability to deliver and manage web or cloud-based services
- General familiarity with monitoring tools
- General familiarity with configuration/provisioning tools such as Terraform
- Experience
- Experience working within an Agile-type environment
- 4+ years of experience with cloud-based provisioning (Azure, AWS, Google), monitoring, troubleshooting, and related DevOps technologies
- 4+ years of experience with containerization/orchestration technologies like Rancher, Docker and Kubernetes
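The scripting and automated-process-management competency above can be sketched with a small Python helper that runs a shell command with retries and captures its output; the command, retry count, and delay here are arbitrary illustrations, not anything specified in the listing.

```python
import subprocess
import time

def run_with_retries(cmd, attempts=3, delay=0.1):
    """Run a shell command, retrying on non-zero exit; return stripped stdout."""
    for attempt in range(1, attempts + 1):
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        if result.returncode == 0:
            return result.stdout.strip()
        time.sleep(delay)  # back off briefly before the next attempt
    raise RuntimeError(f"{cmd!r} failed after {attempts} attempts: {result.stderr}")
```

For example, `run_with_retries("echo hello")` returns `"hello"`; in a real provisioning script the same pattern wraps flaky network or package-manager calls.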


Company: MNC
Location: Pune (Currently WFH)
Experience: 4-6 Years
Shift: 11:30AM - 8:30PM
Skills: .Net Core, Microservices, Web API / Rest API, C#, Azure Functions, CosmosDB
Job Details:
- Develop client projects using ASP.NET Core 3.1 and above, MySQL Database, Azure Functions, CosmosDB, and C#.
- Communicate with external clients on a regular basis regarding progress, challenges, timelines and results of client projects
- Gather technical requirements as needed
- Create and update design and functional documents
- Identify and troubleshoot issues as needed
- Perform new development as required
- Implement project applications according to specifications
- Research technical issues and provide recommendations to enhance client websites
- Work both independently and as part of a team to create reliable and high-performing Web Applications
- Unit test code to ensure quality
- Essential Skills:
- Docker
- Jenkins
- Python dependency management using conda and pip
- Base Linux System Commands, Scripting
- Docker Container Build & Testing
- Common knowledge of minimizing container size and layers
- Inspecting containers for unused / underutilized systems
- Multiple Linux OS support for virtual system
- Has experience as a user of Jupyter / JupyterLab to test and fix usability issues in workbenches
- Templating out various configurations for different use cases (we use Python Jinja2 but are open to other languages / libraries)
- Jenkins Pipeline
- Understanding of the GitHub API to trigger builds, tags, and releases
- Artifactory Experience
- Nice to have: Kubernetes, ArgoCD, other deployment automation tool sets (DevOps)
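The templating requirement above names Python Jinja2 ("but open to other languages / libraries"); as a stand-in illustration, here is a minimal sketch using the stdlib's `string.Template` to render per-use-case workbench Dockerfiles. The template contents, image names, and `render_workbench` helper are all hypothetical.

```python
from string import Template

# Hypothetical shared Dockerfile template for Jupyter workbench images.
DOCKERFILE_TMPL = Template(
    "FROM ${base_image}\n"
    "RUN pip install ${packages}\n"
    "CMD [\"jupyter\", \"lab\", \"--port=${port}\"]\n"
)

def render_workbench(base_image, packages, port=8888):
    """Render a Dockerfile for one use case from the shared template."""
    return DOCKERFILE_TMPL.substitute(
        base_image=base_image,
        packages=" ".join(packages),
        port=port,
    )
```

With Jinja2 the idea is the same, but templates gain loops and conditionals, which is why it is the tool named in the listing.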

We are looking to hire an experienced Sr. Angular Developer to join our dynamic team. As a lead developer, you will be responsible for creating a top-quality code base using Angular best practices. To ensure success as an Angular developer, you should have extensive knowledge of theoretical software engineering, be proficient in TypeScript, JavaScript, HTML, and CSS, and have excellent project management skills. Ultimately, a top-class Angular Developer can design and build a streamlined application to company specifications that perfectly meets the needs of the user.
Requirements:
- Bachelor’s degree in computer science, computer engineering, or similar
- 2+ years of previous work experience as an Angular developer.
- Proficient in CSS, HTML, and writing cross-browser compatible code
- Experience using JavaScript & TypeScript build tools like Gulp or Grunt.
- Knowledge of JavaScript MVVM/MVC frameworks including AngularJS / React.
- Excellent project management skills.
Responsibilities:
- Designing and developing user interfaces using Angular best practices.
- Adapting interface for modern internet applications using the latest front-end technologies.
- Writing TypeScript, JavaScript, CSS, and HTML.
- Developing product analysis tasks.
- Making complex technical and design decisions for AngularJS projects.
- Developing application codes in Angular, Node.js, and Rest Web Services.
- Conducting performance tests.
- Consulting with the design team.
- Ensuring high performance of applications and providing support.

This company provides on-demand cloud computing platforms.

- 15+ years of Hands-on technical application architecture experience and Application build/ modernization experience
- 15+ years of experience as a technical specialist in Customer-facing roles.
- Ability to travel to client locations as needed (25-50%)
- Extensive experience architecting, designing and programming applications in an AWS Cloud environment
- Experience with designing and building applications using AWS services such as EC2, AWS Elastic Beanstalk, AWS OpsWorks
- Experience architecting highly available systems that utilize load balancing, horizontal scalability and high availability
- Hands-on programming skills in any of the following: Python, Java, Node.js, Ruby, .NET or Scala
- Agile software development expert
- Experience with continuous integration tools (e.g. Jenkins)
- Hands-on familiarity with CloudFormation
- Experience with configuration management platforms (e.g. Chef, Puppet, Salt, or Ansible)
- Strong scripting skills (e.g. Powershell, Python, Bash, Ruby, Perl, etc.)
- Strong practical application development experience on Linux and Windows-based systems
- Extracurricular software development passion (e.g. active open-source contributor)
Role and responsibilities
- Expertise in AWS (most typical services), Docker & Kubernetes.
- Strong scripting knowledge, strong DevOps automation, good at Linux
- Hands-on with CI/CD (CircleCI preferred, but any CI/CD tool will do). Strong understanding of GitHub
- Strong understanding of AWS networking. Strong with security & certificates.
Nice-to-have skills
- Involved in Product Engineering
About Company:
The company is a global leader in secure payments and trusted transactions. They are at the forefront of the digital revolution that is shaping new ways of paying, living, doing business and building relationships that pass on trust along the entire payments value chain, enabling sustainable economic growth. Their innovative solutions, rooted in a rock-solid technological base, are environmentally friendly, widely accessible and support social transformation.
- Senior Engineer with a strong background and experience in cloud related technologies and architectures.
- Can design target cloud architectures to transform existing architectures together with the in-house team.
- Can actively configure and build cloud architectures hands-on and guide others.
Key Knowledge
- 3-5+ years of experience in AWS/GCP or Azure technologies
- Is likely certified on one or more of the major cloud platforms
- Strong experience from hands-on work with technologies such as Terraform, K8S, Docker and orchestration of containers.
- Ability to guide and lead internal agile teams on cloud technology
- Background from the financial services industry or similar critical operational experience
Below is the job description for the position of DevOps Azure Engineer at Xceedance.
Qualifications: BE/B.Tech/MCA in Computer Science
Key Requirements for the Position:
• Develop Azure application design and connectivity patterns, Azure networking topologies, and Azure storage facilities.
• Run code conformance tools as part of releases.
• Design Azure app service web app by using Azure CLI, PowerShell, and other tools.
• Implement containerized solution using Docker and Azure Kubernetes Service
• Automating the build and deployment process through Azure DevOps approach and tools from development to production
• Design and implement CI/CD pipelines
• Script and update builds and deployments.
• Coordinate environment usage and alignment.
• Develop, maintain, and optimize automated deployments code for development, test, staging and production environments.
• Configure the application and container platform with proactive monitoring tools and trigger alerts through communication channels
• Develop infrastructure and platform code
• Effectively contribute to building the overall knowledge and expertise of the technical team
• Provide Level 2/3 technical support
Location: Noida or Gurgaon
Experience: 3+ years of experience in Cloud Architecture
Intuitive is the fastest-growing top-tier Cloud Solutions and Services company, supporting Global Enterprise Customers across the Americas, Europe, and the Middle East.
Intuitive is looking for highly talented hands-on Cloud Infrastructure Architects to help accelerate our growing Professional Services consulting Cloud & DevOps practice. This is an excellent opportunity to join Intuitive’s global world class technology teams, working with some of the best and brightest engineers while also developing your skills and furthering your career working with some of the largest customers.
Job Description :
- Extensive experience with K8s (EKS/GKE) and K8s ecosystem tooling, e.g., Prometheus, ArgoCD, Grafana, Istio, etc.
- Extensive AWS/GCP Core Infrastructure skills
- Infrastructure/ IAC Automation, Integration - Terraform
- Kubernetes resources engineering and management
- Experience with DevOps tools, CICD pipelines and release management
- Good at creating documentation (runbooks, design documents, implementation plans)
Linux Experience :
- Namespace
- Virtualization
- Containers
Networking Experience
- Virtual networking
- Overlay networks
- VXLANs, GRE
Kubernetes Experience :
Should have experience in bringing up a Kubernetes cluster manually, without using the kubeadm tool.
Observability
Experience in observability is a plus
Cloud automation :
Familiarity with cloud platforms (specifically AWS) and DevOps tools like Jenkins, Terraform, etc.
Purpose of Job

Job Role

Who you are
You will be responsible for:
- Building complex, enterprise-transforming applications on diverse platforms and technologies
- Writing and implementing efficient code
- Working closely with other developers, QA teams, UX designers, Business Analysts, and Product Owners
- Working in different domains and client environments
- Documenting standard operating procedures
You will get to:
- Build custom software using cutting-edge technologies and tools
- Work with amazing and talented people to make innovative solutions a reality
- Work in a dynamic environment where your talent is valued over your job title or years of experience
- Travel overseas and work with diverse teams
- Build your own career path

Skills/Experience/Qualifications
Essential:
- Excellent software programming skills with a strong focus on code quality
- Excellent problem-solving skills
- Excellent creativity skills to help invent new ways of approaching problems and suggest innovative solutions
- Strong knowledge of JavaScript, TypeScript, and React
- Good UI design using HTML and a modern CSS framework such as Bootstrap, Foundation, or Semantic UI
- Hands-on experience in analysis, design, coding, and implementation of complex, custom-built applications
- Experience working with AWS, especially Lambda, API Gateway, and CloudFormation
Desirable:
- Knowledge of Amazon Connect
- Knowledge of Python, Node.js, Spring Boot
- Knowledge of relational databases such as MySQL, PostgreSQL, SQL Server, or Oracle
- Knowledge of standard software methodologies, like Test Driven Development (TDD) and Continuous Integration (CI)
- Experience working with, or an interest in, Agile methodologies such as Scrum
• Bachelor’s or master’s degree in Computer Engineering, Computer Science, Computer Applications, Mathematics, Statistics, or a related technical field, or equivalent practical experience. Relevant experience of at least 3 years in lieu of the above if from a different stream of education.
• Well-versed in DevOps principles & practices, with hands-on DevOps tool-chain integration experience: Release Orchestration & Automation, Source Code & Build Management, Code Quality & Security Management, Behavior Driven Development, Test Driven Development, Continuous Integration, Continuous Delivery, Continuous Deployment, and Operational Monitoring & Management; extra points if you can demonstrate your knowledge with working examples.
• Demonstrable hands-on working experience with DevOps tools and platforms, viz. Slack, Jira, Git, Jenkins, Code Quality & Security Plugins, Maven, Artifactory, Terraform, Ansible/Chef/Puppet, Spinnaker, Tekton, StackStorm, Prometheus, Grafana, ELK, PagerDuty, VictorOps, etc.
• Well-versed in Virtualization & Containerization; must demonstrate experience in technologies such as Kubernetes, Istio, Docker, OpenShift, Anthos, Oracle VirtualBox, Vagrant, etc.
• Well-versed in AWS and/or Azure and/or Google Cloud; must demonstrate experience in at least FIVE (5) services offered under AWS and/or Azure and/or Google Cloud in any of these categories: Compute, Storage, Database, Networking & Content Delivery, Management & Governance, Analytics, Security, Identity & Compliance; or equivalent demonstrable Cloud Platform experience.
• Well-versed, with demonstrable working experience, in API Management, API Gateway, Service Mesh, Identity & Access Management, and Data Protection & Encryption tools & platforms.
• Hands-on programming experience in core Java and/or Python and/or JavaScript and/or Scala; freshers passing out of college or lateral movers into IT must be able to code in the languages they have studied.
• Well-versed in Storage, Networks, and Storage Networking basics, which will enable you to work in a Cloud environment.
• Well-versed in Network, Data, and Application Security basics, which will enable you to work in a Cloud as well as a Business Applications / API services environment.
• Extra points if you are certified in AWS and/or Azure and/or Google Cloud.
Key Skills Required:
· You will be part of the DevOps engineering team, configuring project environments and troubleshooting integration issues in different systems; you will also be involved in building new features for the next generation of cloud recovery services and managed services.
· You will directly guide the technical strategy for our clients and build out a new DevOps capability within the company to improve our business relevance for customers.
· You will coordinate with the Cloud and Data teams on their requirements, verify the configurations required for each production server, and come up with scalable solutions.
· You will be responsible for reviewing the infrastructure and configuration of microservices, and the packaging and deployment of applications.
To be the right fit, you'll need:
· Expert in Cloud Services like AWS.
· Experience in Terraform Scripting.
· Experience in container technology like Docker and orchestration like Kubernetes.
· Good knowledge of tools such as Jenkins and Bamboo, and of CI/CD pipelines.
· Experience with various version control systems like Git, build tools (Maven, Ant, Gradle), and cloud automation tools (Chef, Puppet, Ansible).


Required Experience: 5+ Years
Job Location: Remote/Pune
Job Description:
Mandatory Skills:
Should have strong working experience with Cloud technologies like AWS and Azure.
Should have strong working experience with CI/CD tools like Jenkins and Rundeck.
Must have experience with configuration management tools like Ansible.
Must have working knowledge on tools like Terraform.
Must be good at Scripting Languages like shell scripting and python.
Should have expertise in DevOps practices and have demonstrated the ability to apply that knowledge across diverse projects and teams.
Preferable skills:
Experience with tools like Docker, Kubernetes, Puppet, JIRA, GitLab, and JFrog.
Experience in scripting languages like Groovy.
Experience with GCP.
Summary & Responsibilities:
Write build pipelines and IaC (ARM templates, Terraform, or CloudFormation).
Develop Ansible playbooks to install and configure various products.
Implement Jenkins and Rundeck jobs (and pipelines).
Must be a self-starter and able to work well in a fast-paced, dynamic environment.
Work independently and resolve issues with minimal supervision.
Strong desire to learn new technologies and techniques.
Strong communication (written/verbal) skills.
Qualification:
Bachelor's degree in Computer Science or equivalent.
4+ years of experience in DevOps and AWS.
2+ years of experience in Python, Shell scripting and Azure.