• Drive the architectural design, solution planning and feasibility study on Cloud Computing Infrastructure.
• Deliver new IT services and exploit current infrastructure technologies.
• Drive infrastructure roadmaps and planning for adopting cloud infrastructure in the long run.
• Conduct research and make recommendations on suitable cloud platforms & services.
• Advise on and implement cloud best practices.
Job Requirements:
Desired understanding of the following: VPC, EC2, S3, IAM, Route 53, Lambda, Billing, MySQL on AWS, Kinesis, API
Gateway, CloudWatch, EBS, AMI, RDS, DynamoDB, ELB, Lightsail, Kubernetes, Docker, NAT Gateway
Education & Experience:
• 3 to 5 years related work experience
• Bachelor’s degree in Computer Science, Information Technology or related field
• Solid experience in infrastructure architecture solutions design
• Solid knowledge in AWS/Google Cloud
• Experience in managing implementations on public clouds (AWS/Google Cloud)
• Excellent analytical and problem-solving skills
• Good command of written and spoken English.
• Certification for AWS/Google Cloud Architect – Associate level

About Intellve Solutions Ltd
Job Description:
Infilect is a GenAI company pioneering the use of Image Recognition in Consumer Packaged Goods retail.
We are looking for a Senior DevOps Engineer to be responsible and accountable for the smooth running of our cloud, AI workflows, and AI-based computer systems. The candidate will also supervise the implementation and maintenance of the company’s computing needs, including in-house GPU and AI servers and the AI workloads that run on them.
Responsibilities
- Understanding and automating AI-based deployments and AI-based workflows
- Implementing various development, testing, automation tools, and IT infrastructure
- Manage Cloud, computer systems and other IT assets.
- Strive for continuous improvement and build continuous integration, continuous delivery, and continuous deployment pipelines (CI/CD pipelines)
- Design, develop, implement, and coordinate systems, policies, and procedures for Cloud and on-premise systems
- Ensure the security of data, network access, and backup systems
- Act in alignment with user needs and system functionality to contribute to organizational policy
- Identify problematic areas, perform RCA, and implement strategic solutions in a timely manner
- Preserve assets, information security, and control structures
- Handle monthly/annual cloud budget and ensure cost effectiveness
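Budget oversight like the responsibility above often starts life as a small script. Purely as an illustration (the spend figures and budget value below are made up), a minimal sketch that flags months whose recorded spend exceeds a monthly cloud budget:

```python
# Minimal, hypothetical cloud-budget check: flag months over budget.
# The cost figures and budget value are placeholders, not real data.

MONTHLY_BUDGET_USD = 10_000

def over_budget_months(spend_by_month, budget=MONTHLY_BUDGET_USD):
    """Return the months whose recorded spend exceeds the budget."""
    return [month for month, spend in spend_by_month.items() if spend > budget]

# Example usage with illustrative figures.
spend = {"2024-01": 9_200, "2024-02": 11_500, "2024-03": 10_001}
flagged = over_budget_months(spend)  # months needing a cost review
```

In practice the spend data would come from a billing export or cost API rather than a hard-coded dict.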
Requirements and skills
- Well versed in automation tools such as Docker, Kubernetes, Puppet, Ansible etc.
- Working knowledge of Python, an SQL database stack, or any full stack with relevant tools.
- Understanding agile development, CI/CD, sprints, code reviews, Git and GitHub/Bitbucket workflows
- Well versed with ELK stack or any other logging, monitoring and analysis tools
- Proven working experience of 2+ years as a DevOps/Tech Lead/IT Manager or in relevant positions
- Excellent knowledge of technical management, information analysis, and of computer hardware/software systems
- Hands-on experience with computer networks, network administration, and network installation
- Knowledge of ISO/SOC Type II implementation will be a plus
- BE/B.Tech/ME/M.Tech in Computer Science, IT, Electronics or a similar field
About the Role:
Join Nitor Infotech as a DevOps Architect, where you will drive CI/CD pipeline and infrastructure automation initiatives. Collaborate with development teams to ensure seamless application deployment and maintenance.
Responsibilities
- CI/CD Pipeline Development: Design and maintain CI/CD pipelines using Jenkins, GitLab CI/CD, or GitHub Actions.
- Infrastructure as Code (IaC): Automate infrastructure provisioning with Ansible, Terraform, or AWS CloudFormation.
- Cloud Platform Management: Optimize cloud infrastructure on AWS, Azure, or GCP.
- Monitoring and Alerting: Implement monitoring solutions with Prometheus and Grafana for proactive issue identification.
- DevOps Culture Promotion: Foster collaboration between development and operations teams.
- Team Leadership: Mentor junior DevOps engineers and support their career development.
- Problem Solving: Troubleshoot complex technical issues related to infrastructure and deployments.
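The CI/CD pipelines described above share one control flow regardless of tool: stages run in order and the pipeline stops at the first failure. A toy Python sketch of that flow (stage names are illustrative, not tied to any real Jenkins or GitHub Actions job):

```python
# Toy model of a CI/CD pipeline: stages run in order and the
# pipeline halts at the first failing stage.

def run_pipeline(stages):
    """stages: list of (name, callable returning bool).
    Returns (succeeded, names of stages that actually ran)."""
    ran = []
    for name, step in stages:
        ran.append(name)
        if not step():
            return False, ran          # stop on first failure
    return True, ran

# Example: a failing test stage prevents deploy from running.
ok, ran = run_pipeline([
    ("build", lambda: True),
    ("test", lambda: False),           # simulated failure
    ("deploy", lambda: True),
])
```

Real pipeline definitions declare the same ordering and fail-fast behavior in YAML (Jenkinsfile stages, GitHub Actions jobs) rather than code.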
Must-Have Skills and Qualifications
- 8+ years in DevOps or related fields.
- 3-5 years experience as a DevOps Architect or Solution Architect
- Proficient in CI/CD tools (Docker, Jenkins, GitHub Actions).
- Expertise in infrastructure automation (Ansible, Terraform).
- In-depth knowledge of cloud platforms (AWS, Azure, GCP).
- Experience with monitoring tools (Prometheus, Grafana).
- Strong scripting skills (Bash, Python).
- Excellent problem-solving and communication skills.
- Familiarity with Agile development methodologies.
Good-to-Have Skills and Qualifications
- Experience with configuration management tools (Ansible, Puppet).
- Knowledge of security best practices in DevOps.
- Familiarity with container orchestration (Kubernetes).
What We Offer
- Competitive salary and performance bonuses.
- Comprehensive health and wellness benefits.
- Opportunities for professional growth.
- Dynamic and inclusive work culture.
- Flexible work arrangements.
Key Required Skills: DevOps Architect, Terraform, Kubernetes, CI/CD Pipeline, Azure DevOps, GitHub Actions.
Overview
adesso India specialises in optimization of core business processes for organizations. Our focus is on providing state-of-the-art solutions that streamline operations and elevate productivity to new heights.
Comprised of a team of industry experts and experienced technology professionals, we ensure that our software development and implementations are reliable, robust, and seamlessly integrated with the latest technologies. By leveraging our extensive knowledge and skills, we empower businesses to achieve their objectives efficiently and effectively.
Job Description
We are looking for an experienced Technical Team Lead to guide a local IT Services Management Team while also acting as a software developer. In this role, you will be responsible for the application management of a B2C application to meet the agreed Service Level Agreements (SLAs) and fulfil customer expectations.
Your team will act as an on-call duty team between 6 pm and 8 am, 365 days a year. You will work together with the responsible Senior Project Manager in Germany.
We are seeking a hands-on leader who thrives in both team management and operational development. Whether you have experience in DevOps and Backend or Frontend, your expertise in both leadership and technical skills will be key to success in this position.
Responsibilities:
Problem Management & Incident Management activities: Identifying and resolving technical issues and errors that arise during application usage.
Release and Update Coordination: Planning and executing software updates, new versions, or system upgrades to keep applications up to date.
Change Management: Responsible for implementing and coordinating changes to the application, considering the impact on ongoing operations.
Requirements:
Education and Experience: A Bachelor’s or Master’s degree in a relevant field, with a minimum of 5 years of professional experience or equivalent work experience.
Skills & Expertise:
Proficient in ITIL service management frameworks.
Strong analytical and problem-solving abilities.
Experienced in project management methodologies (Agile, Kanban).
Leadership: Very good leadership skills with a customer-oriented, proactive, and results-driven approach.
Communication: Excellent communication, presentation, and interpersonal skills, with the ability to engage and collaborate with stakeholders.
Language: English on a C2 Level.
Skills & Requirements
kubeAPI (high), Kustomize (high), Docker/containers (high); debug tools: OpenSSL (high), curl (high); Azure DevOps (Pipelines, Repositories, Deployments); ArgoCD; certificates: certificate management / SSL, Let's Encrypt; Linux shell; Keycloak.
We are looking for a DevOps Lead to join our team.
Responsibilities
• A technology professional who understands software development and can solve IT operational and deployment challenges using software engineering tools and processes. This position requires an understanding of both software development (Dev) and deployment operations (Ops)
• Identify manual processes and automate them using various DevOps automation tools
• Maintain the organization’s growing cloud infrastructure
• Monitor and maintain DevOps environment stability
• Collaborate with distributed Agile teams to define technical requirements and resolve technical design issues
• Orchestrating builds and test setups using Docker and Kubernetes.
• Participate in designing and building Kubernetes, Cloud, and on-prem environments for maximum performance, reliability and scalability
• Share business and technical learnings with the broader engineering and product organization, while adapting approaches for different audiences
Requirements
• Candidates working for this position should possess at least 5 years of work experience as a DevOps Engineer.
• Candidate should have experience in ELK stack, Kubernetes, and Docker.
• Solid experience in the AWS environment.
• Should have experience in monitoring tools like Datadog or New Relic.
• Minimum of 5 years experience with code repository management, code merge and quality checks, continuous integration, and automated deployment & management using tools like Jenkins, SVN, Git, Sonar, and Selenium.
• Candidates must possess ample knowledge and experience in system automation, deployment, and implementation.
• Candidates must possess experience in using Linux, Jenkins, and ample experience in configuring and automating the monitoring tools.
• The candidates should also possess experience in the software development process and in tools and languages like SaaS, Python, Java, MongoDB, shell scripting, PostgreSQL, and Git.
• Candidates should demonstrate knowledge in handling distributed data systems.
Examples: Elasticsearch, Cassandra, Hadoop, and others.
• Should have experience in GitLab CI.
Roles and Responsibilities
We are looking for candidates who have development experience and have delivered CI/CD-based projects. Should have good hands-on experience with the Jenkins master-agent architecture and have used AWS-native services like CodeCommit, CodeBuild, CodeDeploy, and CodePipeline. Should have experience setting up cross-platform CI/CD pipelines that span different cloud platforms, or a mix of on-premise and cloud platforms.
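A cross-platform pipeline like the one described typically hides the deployment target behind one common interface. Purely as an illustration, a Python sketch of that dispatch pattern, with stub handlers standing in for real AWS or on-premise tooling:

```python
# Illustrative cloud-agnostic deploy dispatch: each platform registers
# a deploy function behind one common entry point. The handlers are
# stubs, not real SDK calls.

DEPLOYERS = {}

def register(provider):
    """Decorator that registers a deploy handler for a platform."""
    def wrap(fn):
        DEPLOYERS[provider] = fn
        return fn
    return wrap

@register("aws")
def deploy_aws(artifact):
    return f"deployed {artifact} via CodeDeploy (stub)"

@register("onprem")
def deploy_onprem(artifact):
    return f"deployed {artifact} to on-prem cluster (stub)"

def deploy(provider, artifact):
    """Single entry point: the pipeline never hard-codes a platform."""
    if provider not in DEPLOYERS:
        raise ValueError(f"no deployer for {provider!r}")
    return DEPLOYERS[provider](artifact)
```

The same idea appears in real pipelines as per-platform stages or jobs selected by a parameter, keeping the rest of the pipeline identical across targets.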
Job Description:
- Hands on with AWS (Amazon Web Services) Cloud with DevOps services and CloudFormation.
- Experience interacting with customers.
- Excellent communication skills.
- Hands-on experience creating and managing Jenkins jobs, and Groovy scripting.
- Experience in setting up Cloud Agnostic and Cloud Native CI/CD Pipelines.
- Experience in Maven.
- Experience in scripting languages like Bash, Powershell, Python.
- Experience in automation tools like Terraform, Ansible, Chef, Puppet.
- Excellent troubleshooting skills.
- Experience in Docker and Kubernetes, including creating Dockerfiles.
- Hands on with version control systems like GitHub, Gitlab, TFS, BitBucket, etc.
- Develop and Maintain IAC using Terraform and Ansible
- Draft design documents that translate requirements into code.
- Deal with challenges associated with scale.
- Assume responsibilities from technical design through technical client support.
- Manage expectations with internal stakeholders and context-switch in a fast paced environment.
- Thrive in an environment that uses Elasticsearch extensively.
- Keep abreast of technology and contribute to the engineering strategy.
- Champion best development practices and provide mentorship
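The Terraform/Ansible IaC work described above rests on one core idea: diff the desired state against the actual state and derive the actions to reconcile them. A simplified Python sketch of that plan step (the resource dicts are stand-ins for real provider state, not Terraform's actual data model):

```python
# Sketch of the idea behind "terraform plan": diff desired state
# against actual state and report create/update/delete actions.
# Resource dicts are simplified stand-ins for real provider state.

def plan(desired, actual):
    """Both args map resource name -> attribute dict.
    Returns a map of resource name -> planned action."""
    actions = {}
    for name, attrs in desired.items():
        if name not in actual:
            actions[name] = "create"       # declared but missing
        elif actual[name] != attrs:
            actions[name] = "update"       # present but drifted
    for name in actual:
        if name not in desired:
            actions[name] = "delete"       # present but undeclared
    return actions
```

Running the plan repeatedly against an already-reconciled state yields no actions, which is the idempotency property IaC tooling depends on.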
An AWS Certified Engineer with strong skills in
- Terraform
- Ansible
- *nix and shell scripting
- Elasticsearch
- Circle CI
- CloudFormation
- Python
- Packer
- Docker
- Prometheus and Grafana
- Challenges of scale
- Production support
- Sharp analytical and problem-solving skills.
- Strong sense of ownership.
- Demonstrable desire to learn and grow.
- Excellent written and oral communication skills.
- Mature collaboration and mentoring abilities.
As an Infrastructure Engineer at Navi, you will be building a resilient infrastructure platform using modern infrastructure engineering practices.
You will be responsible for the availability, scaling, security, performance, and monitoring of the Navi cloud platform. You’ll be joining a team that follows best practices in infrastructure as code.
Your Key Responsibilities
- Build out infrastructure components like API Gateway, Service Mesh, Service Discovery, and container orchestration platforms like Kubernetes.
- Developing reusable Infrastructure code and testing frameworks
- Build meaningful abstractions to hide the complexities of provisioning modern infrastructure components
- Design a scalable Centralized Logging and Metrics platform
- Drive solutions to reduce Mean Time To Recovery (MTTR) and enable high availability.
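MTTR, mentioned in the responsibilities above, is simply the mean of recovery durations across incidents. A minimal Python sketch with illustrative timestamps:

```python
# Minimal MTTR calculation: mean of (recovered - detected) across
# incidents, expressed in minutes. Timestamps are illustrative.
from datetime import datetime

def mttr_minutes(incidents):
    """incidents: list of (detected, recovered) datetime pairs."""
    if not incidents:
        return 0.0
    total = sum((end - start).total_seconds() for start, end in incidents)
    return total / len(incidents) / 60

# Example: a 30-minute and a 10-minute incident average to 20 minutes.
incidents = [
    (datetime(2024, 1, 1, 10, 0), datetime(2024, 1, 1, 10, 30)),
    (datetime(2024, 1, 1, 12, 0), datetime(2024, 1, 1, 12, 10)),
]
```

In production, the detected/recovered timestamps would come from the incident management or monitoring system rather than hand-entered values.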
What to Bring
- Good to have: experience in managing large-scale cloud infrastructure, preferably AWS and Kubernetes
- Experience in developing applications using programming languages like Java, Python and Go
- Experience in handling logs and metrics at a high scale.
- Systematic problem-solving approach, coupled with strong communication skills and a sense of ownership and drive.
- Job Title:- Backend/DevOps Engineer
- Job Location:- Opp. Sola over bridge, Ahmedabad
- Education:- B.E./ B. Tech./ M.E./ M. Tech/ MCA
- Number of Vacancy:- 03
- 5 Days working
- Notice Period:- Can join in less than a month
- Job Timing:- 10am to 7:30pm.
About the Role
Are you a server-side developer with a keen interest in reliable solutions?
Is Python your language?
Do you want a challenging role that goes beyond backend development and includes infrastructure and operations problems?
If you answered yes to all of the above, you should join our fast growing team!
We are looking for 3 experienced Backend/DevOps Engineers who will focus on backend development in Python and will be working on reliability, efficiency and scalability of our systems. As a member of our small team you will have a lot of independence and responsibilities.
As Backend/DevOps Engineer you will...:-
- Design and maintain systems that are robust, flexible, and performant
- Be responsible for building complex, high-scale systems
- Prototype new gameplay ideas and concepts
- Develop server tools for game features and live operations
- Be one of three backend engineers on our small and fast moving team
- Work alongside our C++, Android, and iOS developers
- Contribute to ideas and design for new features
To be successful in this role, we'd expect you to…:-
- Have 3+ years of experience in Python development
- Be familiar with common database access patterns
- Have experience with designing systems and monitoring metrics, looking at graphs.
- Have knowledge of AWS, Kubernetes and Docker.
- Be able to work well in a remote development environment.
- Be able to communicate in English at a near-native level, both spoken and written.
- Be responsible to your fellow remote team members.
- Be highly communicative and go out of your way to contribute to the team and help others
Job Location: Jaipur
Experience Required: Minimum 3 years
About the role:
As a DevOps Engineer for Punchh, you will be working with our developers, SRE, and DevOps teams implementing our next-generation infrastructure. We are looking for a self-motivated, responsible team player who loves designing systems that scale. Punchh provides a rich engineering environment where you can be creative, learn new technologies, and solve engineering problems, all while delivering business objectives. The DevOps culture here is one of immense trust and responsibility. You will be given the opportunity to make an impact, as there are no silos here.
Responsibilities:
- Deliver SLA and business objectives through whole lifecycle design of services through inception to implementation.
- Ensuring availability, performance, security, and scalability of AWS production systems
- Scale our systems and services through continuous integration, infrastructure as code, and gradual refactoring in an agile environment.
- Maintain services once a project is live by monitoring and measuring availability, latency, and overall system and application health.
- Write and maintain software that runs the infrastructure that powers the Loyalty and Data platform for some of the world’s largest brands.
- Participate in a 24x7 on-call rotation in shifts for Level 2 and higher escalations
- Respond to incidents and write blameless RCAs/postmortems
- Implement and practice proper security controls and processes
- Providing recommendations for architecture and process improvements.
- Define and deploy systems for metrics, logging, and monitoring on the platform.
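The metrics work above is normally done with a Prometheus client library; purely to illustrate the underlying idea, here is a tiny in-process labeled counter (this is a sketch, not the real client API):

```python
# Tiny in-process counter in the spirit of a Prometheus client.
# Real deployments would use the official client library instead;
# this only illustrates the labeled-counter concept.

class Counter:
    def __init__(self, name):
        self.name = name
        self._values = {}                   # label tuple -> count

    def inc(self, amount=1, **labels):
        key = tuple(sorted(labels.items()))
        self._values[key] = self._values.get(key, 0) + amount

    def value(self, **labels):
        return self._values.get(tuple(sorted(labels.items())), 0)

# Example usage: count HTTP responses by status label.
requests_total = Counter("http_requests_total")
requests_total.inc(status="200")
requests_total.inc(status="200")
requests_total.inc(status="500")
```

A real exporter would additionally serve these values over HTTP in the Prometheus text exposition format for scraping.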
Must have:
- Minimum 3 Years of Experience in DevOps.
- BS degree in Computer Science, Mathematics, Engineering, or equivalent practical experience.
- Strong interpersonal skills.
- Must have experience in CI/CD tooling such as Jenkins, CircleCI, TravisCI
- Must have experience in Docker, Kubernetes, Amazon ECS or Mesos
- Experience in code development in at least one high-level programming language from this list: Python, Ruby, Golang, Groovy
- Proficient in shell scripting, and most importantly, know when to stop scripting and start developing.
- Experience in the creation of highly automated infrastructures with configuration management tools like Terraform, CloudFormation, or Ansible.
- In-depth knowledge of the Linux operating system and administration.
- Production experience with a major cloud provider such as AWS.
- Knowledge of web server technologies such as Nginx or Apache.
- Knowledge of Redis, Memcache, or one of the many in-memory data stores.
- Experience with various load balancing technologies such as Amazon ALB/ELB, HA Proxy, F5.
- Comfortable with large-scale, highly-available distributed systems.
Good to have:
- Understanding of Web Standards (REST, SOAP APIs, OWASP, HTTP, TLS)
- Production experience with Hashicorp products such as Vault or Consul
- Expertise in designing, analyzing, and troubleshooting large-scale distributed systems.
- Experience in a PCI environment
- Experience with Big Data distributions from Cloudera, MapR, or Hortonworks
- Experience maintaining and scaling database applications
- Knowledge of fundamental systems engineering principles such as CAP Theorem, Concurrency Control, etc.
- Understanding of network fundamentals: OSI, TCP/IP, topologies, etc.
- Understanding of infrastructure auditing and helping the organization control infrastructure costs.
- Experience in Kafka, RabbitMQ or any messaging bus.

If you are looking for a good opportunity in Cloud Development/DevOps, here is the right opportunity.
Experience: 4-10 years
Location: Pune
Job Type: Permanent
Minimum qualifications:
- Education: Bachelor's or Master's degree
- Proficient in English.
Relevant experience:
- Should have been working for at least four years as a DevOps/Cloud Engineer
- Should have worked on AWS Cloud Environment in depth
- Should have been working in an Infrastructure as code environment or understands it very clearly.
- Has done infrastructure coding using CloudFormation/Terraform, configuration management using Chef/Ansible, and an enterprise bus (RabbitMQ/Kafka)
- Deep understanding of microservice design, and aware of centralized caching (Redis) and centralized configuration (Consul/ZooKeeper)
