
Job Description:
We are looking to recruit engineers with a zeal to learn cloud solutions using Amazon Web Services (AWS). We'll prefer an engineer who is passionate about AWS Cloud technology, about helping customers succeed, and about quality, and who truly enjoys what they do. The qualified candidate for the AWS Cloud Engineer position is someone who has a can-do attitude and is an innovative thinker.
- Be hands-on, with responsibility for the installation, configuration, and ongoing management of Linux-based solutions on AWS for our clients.
- Responsible for creating and managing Auto Scaling EC2 instances using VPCs, Elastic Load Balancers, and other services across multiple Availability Zones to build resilient, scalable, and fail-safe cloud solutions (a minimal sketch follows this list).
- Familiarity with other AWS services such as CloudFront, ALB, EC2, RDS, Route 53, etc. is desirable.
- Working knowledge of RDS, DynamoDB, GuardDuty, WAF, and multi-tier architecture.
- Proficient in working with Git, CI/CD pipelines, AWS DevOps tooling, Bitbucket, and Ansible.
- Proficient in working with Docker Engine, containers, and Kubernetes.
- Expertise in migrating workloads to AWS from other cloud providers.
- Should be a versatile problem solver, able to resolve complex issues ranging from OS and application faults to creatively improving solution design.
- Should be ready to work in rotation on a 24x7 schedule and be available on call at other times, given the critical nature of the role.
- Fault finding, analysis, and logging of information for reporting performance exceptions.
- Deployment, automation, management, and maintenance of AWS cloud-based production systems.
- Ensuring availability, performance, security, and scalability of AWS production systems.
- Management of creation, release, and configuration of production systems.
- Evaluation of new technology alternatives and vendor products.
- System troubleshooting and problem resolution across various application domains and platforms.
- Pre-production acceptance testing for quality assurance.
- Provision of critical system security by leveraging best practices and proven cloud security solutions.
- Providing recommendations for architecture and process improvements.
- Definition and deployment of systems for metrics, logging, and monitoring on AWS platform.
- Design, maintenance, and management of tools for automating different operational processes.
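As an illustration of the Auto Scaling and multi-AZ responsibilities above, here is a minimal Python sketch (not part of the role description) that reports how an Auto Scaling group's in-service instances are spread across Availability Zones. It assumes boto3 is installed and AWS credentials and a default region are configured; the group name "web-asg" is a hypothetical placeholder.

```python
# Minimal sketch: count in-service instances per Availability Zone for one
# Auto Scaling group. Assumes boto3 + configured AWS credentials/region;
# the group name "web-asg" is a hypothetical placeholder.
from collections import Counter

import boto3


def instances_per_az(group_name: str) -> Counter:
    autoscaling = boto3.client("autoscaling")
    response = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=[group_name]
    )
    groups = response["AutoScalingGroups"]
    if not groups:
        raise ValueError(f"No Auto Scaling group named {group_name!r}")
    # Only count instances that are currently in service.
    return Counter(
        instance["AvailabilityZone"]
        for instance in groups[0]["Instances"]
        if instance["LifecycleState"] == "InService"
    )


if __name__ == "__main__":
    for az, count in sorted(instances_per_az("web-asg").items()):
        print(f"{az}: {count} in-service instance(s)")
```

A lopsided count (for example, all instances in one zone) is an early signal that the group is not as fail-safe as intended.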
Desired Candidate Profile
o Customer-oriented personality with good communication skills; able to articulate and communicate effectively, both verbally and in writing.
o Be a team player who collaborates and shares experience and expertise with the rest of the team.
o Understands database systems such as MSSQL, MongoDB, MySQL, MariaDB, DynamoDB, and RDS.
o Understands web servers such as Apache and Nginx.
o Must be RHEL certified.
o In-depth knowledge of Linux commands and services.
o Efficient enough to manage all internet applications, including FTP, SFTP, Nginx, Apache, MySQL, and PHP.
o Good communication skills.
o At least 3-7 years of experience in AWS and DevOps.
Company Profile:
i2k2 Networks is a trusted name in the IT cloud hosting services industry. We help enterprises with cloud migration, cost optimization, support, and fully managed services, which help them move faster and scale with lower IT costs. i2k2 Networks offers a complete range of cutting-edge solutions that drive Internet-powered business models. We excel in:
- Managed IT Services
- Dedicated Web Servers Hosting
- Cloud Solutions
- Email Solutions
- Enterprise Services
- Round the clock Technical Support
https://www.i2k2.com/
Regards
Nidhi Kohli
i2k2 Networks Pvt Ltd.
AM - Talent Acquisition

CoinCROWD is a cutting-edge platform in the digital finance space, focused on delivering innovative solutions that empower individuals and businesses in the cryptocurrency ecosystem. We are passionate about creating seamless, secure, and scalable solutions to simplify the way people interact with digital currencies. As we continue to grow, we're looking for skilled backend developers to join our dynamic engineering team.
Position overview:
We're seeking a detail-oriented and proactive DevOps Engineer who has a strong background in Google Cloud Platform (GCP) environments. The ideal candidate will be comfortable operating in a fast-paced, dynamic startup environment, where they will have the opportunity to make substantial contributions.
Key Responsibilities :
- Develop, test, and maintain infrastructure on GCP.
- Automate infrastructure, application deployment, scaling, and management using Kubernetes and similar tools (a minimal sketch follows this list).
- Collaborate with our software development team to ensure seamless deployment of software updates and enhancements.
- Monitor system performance and troubleshoot issues.
- Ensure high levels of performance, availability, sustainability, and security.
- Implement DevOps best practices, such as IaC (Infrastructure as Code).
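As a small illustration of the Kubernetes automation item above, here is a minimal sketch assuming the official Kubernetes Python client and an existing kubeconfig (for example, for a GKE cluster); it flags Deployments whose ready replica count lags the desired count. It is an illustrative monitoring snippet, not a prescribed implementation.

```python
# Minimal sketch: list Deployments that are not fully scaled.
# Assumes the "kubernetes" Python client and a local kubeconfig.
from kubernetes import client, config


def underscaled_deployments() -> list:
    config.load_kube_config()  # use config.load_incluster_config() inside a pod
    apps = client.AppsV1Api()
    problems = []
    for dep in apps.list_deployment_for_all_namespaces().items:
        desired = dep.spec.replicas or 0
        ready = dep.status.ready_replicas or 0
        if ready < desired:
            problems.append(
                f"{dep.metadata.namespace}/{dep.metadata.name}: {ready}/{desired} ready"
            )
    return problems


if __name__ == "__main__":
    for line in underscaled_deployments() or ["all deployments fully ready"]:
        print(line)
```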
Qualifications :
- Proven experience as a DevOps Engineer or similar role in software development and system administration.
- Strong experience with GCP (Google Cloud Platform), including Compute Engine, Cloud Functions, Cloud Storage, and other relevant GCP services.
- Knowledge of Kubernetes, Docker, Jenkins, or similar technologies.
- Familiarity with network protocols, firewalls, and VPN.
- Experience with scripting languages such as Python, Bash, etc.
- Understanding of Infrastructure as Code (IaC) tools, like Terraform or CloudFormation.
- Excellent problem-solving skills, attention to detail, and ability to work in a team.
What We Offer :
In recognition of your valuable contributions, you will receive an equity-based compensation package. Join our dynamic and innovative team in the rapidly evolving fintech industry and play a key role in shaping the future of CoinCROWD's success.
If you're ready to be at the forefront of the Payment Technology revolution and have the vision and experience to drive sales growth in the crypto space, please join us in our mission to redefine fintech at CoinCROWD.
Job Description
Position - SRE developer / DevOps Engineer
Location - Mumbai
Experience - 3-10 years
About HaystackAnalytics:
HaystackAnalytics is a company working in the deep technology of genomics, computing, and data science to create a first-of-its-kind clinical reporting engine in healthcare. We are a new but well-funded company with a tremendous amount of pedigree in the team (IIT founders, IIT & IIM core team). Some of the technologies we have created are global firsts in infectious disease and chronic diagnostics. As a product company creating a huge amount of IP, our Technology and R&D teams are our crown jewels. With the early success of our products in India, we are now expanding to take our products to international shores.
Inviting Passionate Engineers to join a new age enterprise:
At HaystackAnalytics, we rely on our dynamic team of engineers to solve the many challenges and puzzles that come with our rapidly evolving stack that deals with Healthcare and Genomics.
We’re looking for full stack engineers who are passionate problem solvers, ready to work with new technologies and architectures in a forward-thinking organization that’s always pushing boundaries. Here, you will take complete, end-to-end ownership of projects across the entire stack.
Our ideal candidate has experience building enterprise products and has an understanding of, and experience working with, new-age front-end technologies, web frameworks, APIs, databases, distributed computing, back-end languages, caching, security, message-based architectures, et al.
You’ll be joining a small team working at the forefront of new technology, solving the challenges that impact both the front end and back end architecture, and ultimately, delivering amazing global user experiences.
Objectives of this Role:
- Work across the full stack, building highly scalable distributed solutions that enable positive user experiences and measurable business growth
- Ideate and develop new product features in collaboration with domain experts in healthcare and genomics
- Develop state-of-the-art, enterprise-standard front-end and back-end services
- Develop cloud platform services based on a container orchestration platform
- Continuously embrace automation for repetitive tasks
- Ensure application performance, uptime, and scale, maintaining high standards of code quality by using clean coding principles and solid design patterns
- Build robust tech modules that are unit-testable, automating recurring tasks and processes
- Engage effectively with team members and collaborate to upskill and unblock each other
Frontend Skills
- HTML5
- CSS frameworks (LESS / SASS)
- ES6 / TypeScript
- Electron app / Tauri
- Component libraries (Web Components / Radix / Material)
- CSS (Tailwind)
- State management: Redux / Zustand / Recoil
- Build tools: Webpack / Vite / Parcel / Turborepo
- Frameworks: Next.js
- Design patterns
- Test automation frameworks (Cypress, Playwright, etc.)
- Functional programming concepts
- Scripting (Bash, Python)
Backend Skills
- Node / Deno / Bun with Express / NestJS
- Languages: TypeScript / Python / Rust
- REST / GraphQL
- SOLID design principles
- Storage (MongoDB / object storage / PostgreSQL)
- Caching (Redis / in-memory data grid)
- Pub/sub (Kafka / SQS / SNS / EventBridge / RabbitMQ)
- Container technology (Docker / Kubernetes)
- GitOps
- Automation (Terraform, Serverless)
Other Skills
- Innovation and thought leadership
- UI/UX design skills
- Interest in learning new tools, languages, workflows, and philosophies to grow
- Communication
Job title - DevOps Engineer
Experience - 4+ years
Location - Pune (Onsite)
Primary Skills - Kubernetes, AWS
Roles and Responsibilities:
Cloud Infrastructure Management: Design, deploy, and maintain AWS-based cloud infrastructure using best practices for scalability, security, and cost optimization.
Kubernetes & Containerization: Manage and orchestrate containerized applications using Kubernetes and Docker, ensuring efficient deployments and scaling.
CI/CD Pipelines: Develop, implement, and maintain continuous integration and continuous deployment (CI/CD) pipelines to enable fast, reliable software releases.
Automation: Automate infrastructure provisioning, configuration, and management using tools like Terraform, CloudFormation, or Ansible.
Monitoring & Logging: Set up and manage monitoring, logging, and alerting solutions (e.g., Prometheus, Grafana, CloudWatch, ELK Stack) to ensure the health and performance of applications and infrastructure (a short sketch follows this list).
Collaboration: Work closely with software engineers, system administrators, and other teams to identify bottlenecks, improve processes, and resolve issues.
Security: Implement and monitor security measures within the cloud infrastructure, ensuring compliance with industry standards and best practices (e.g., IAM, VPC, SSL/TLS).
Backup and Disaster Recovery: Set up and maintain disaster recovery plans, backup strategies, and ensure business continuity.
Performance Tuning: Continuously analyze and optimize application performance, ensuring fast and efficient operations.
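As an illustration of the Monitoring & Logging responsibility above, here is a minimal sketch assuming the prometheus_client package: it exposes a custom gauge that Prometheus can scrape and Grafana can chart. The metric name, port, and health-check URL are hypothetical placeholders, not part of this job description.

```python
# Minimal sketch: export a simple availability gauge for Prometheus to scrape.
# Assumes the prometheus_client package; URL and port are hypothetical placeholders.
import time
import urllib.request

from prometheus_client import Gauge, start_http_server

APP_UP = Gauge("app_up", "1 if the application health endpoint responds, else 0")


def probe(url: str) -> None:
    try:
        with urllib.request.urlopen(url, timeout=2) as resp:
            APP_UP.set(1 if resp.status == 200 else 0)
    except OSError:
        APP_UP.set(0)


if __name__ == "__main__":
    start_http_server(9100)  # metrics exposed at http://localhost:9100/metrics
    while True:
        probe("http://localhost:8080/healthz")  # hypothetical health endpoint
        time.sleep(15)
```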

- Experience with Infrastructure-as-Code (IaC) tools like Terraform and CloudFormation.
- Proficiency in cloud-native technologies and architectures (Docker / Kubernetes) and CI/CD pipelines.
- Good experience in JavaScript.
- Expertise in Linux / Windows environments.
- Good experience in scripting languages like PowerShell / Bash / Python.
- Proficiency in revision control (Git) and DevOps best practices.
Experience: 8 to 10 years; Notice period: 0 to 20 days
Job Description :
- Provision GCP resources based on the architecture design and features aligned with business objectives
- Monitor resource availability and usage metrics, and provide guidelines for cost and performance optimization
- Assist IT/business users in resolving GCP service-related issues
- Provide guidelines for cluster automation and migration approaches and techniques, including ingesting, storing, processing, analysing, and exploring/visualising data
- Provision GCP resources for data engineering and data science projects
- Assist with automated data ingestion, data migration, and transformation (good to have)
- Assist with deployment and troubleshooting of applications in Kubernetes
- Establish connections and credibility in addressing business needs via designing and operating cloud-based data solutions
Key Responsibilities / Tasks :
- Building complex CI/CD pipelines for cloud-native PaaS services such as databases, messaging, storage, and compute in Google Cloud Platform
- Building deployment pipelines with GitHub CI (Actions)
- Writing Terraform code to deploy infrastructure as code
- Working with deployment and troubleshooting of Docker, GKE, OpenShift, and Cloud Run
- Working with Cloud Build, Cloud Composer, and Dataflow
- Configuring software to be monitored by AppDynamics
- Configuring Stackdriver (Cloud Logging) logging and monitoring in GCP (see the sketch after this list)
- Working with Splunk, Kibana, Prometheus, and Grafana to set up dashboards
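As a small illustration of the Stackdriver/Cloud Logging item above, here is a minimal sketch assuming the google-cloud-logging client library and application-default credentials; it writes a structured log entry that can then be filtered, dashboarded, or alerted on. The project ID, log name, and field names are hypothetical placeholders.

```python
# Minimal sketch: write a structured entry to Google Cloud Logging
# (formerly Stackdriver). Assumes google-cloud-logging and ADC credentials;
# the project ID and log name are hypothetical placeholders.
from google.cloud import logging as gcp_logging


def report_deploy(project_id: str, service: str, version: str) -> None:
    client = gcp_logging.Client(project=project_id)
    logger = client.logger("deployments")
    logger.log_struct(
        {"event": "deploy", "service": service, "version": version},
        severity="INFO",
    )


if __name__ == "__main__":
    report_deploy("my-gcp-project", "payments-api", "v1.4.2")
```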
Your skills, experience, and qualifications :
- Total experience of 5+ years in DevOps, with at least 4 years of experience in Google Cloud and GitHub CI.
- Should have strong experience in microservices/APIs.
- Should have strong experience with DevOps tools like GitHub CI, TeamCity, Jenkins, and Helm.
- Should know application deployment and testing strategies in Google Cloud Platform.
- Defining and setting development, test, release, update, and support processes for DevOps operation
- Strive for continuous improvement and build continuous integration, continuous delivery, and continuous deployment pipelines (CI/CD pipelines)
- Excellent understanding of Java
- Knowledge of Kafka, ZooKeeper, Hazelcast, and Pub/Sub is nice to have.
- Understanding of cloud networking and security, such as software-defined networking/firewalls, virtual networks, and load balancers.
- Understanding of cloud identity and access
- Understanding of the compute runtime and the differences between native compute, virtual machines, and containers
- Configuring and managing databases such as Oracle, Cloud SQL, and Cloud Spanner.
- Excellent troubleshooting skills
- Working knowledge of various tools, open-source technologies
- Awareness of critical concepts of Agile principles
- Certification as a Google Professional Cloud DevOps Engineer is desirable.
- Experience with Agile/SCRUM environment.
- Familiar with Agile Team management tools (JIRA, Confluence)
- Understand and promote Agile values: FROCC (Focus, Respect, Openness, Commitment, Courage)
- Good communication skills
- Pro-active team player
- Comfortable working in multi-disciplinary, self-organized teams
- Professional knowledge of English
- Differentiators : knowledge/experience about
- Job Title:- Backend/DevOps Engineer
- Job Location:- Opp. Sola over bridge, Ahmedabad
- Education:- B.E./ B. Tech./ M.E./ M. Tech/ MCA
- Number of Vacancy:- 03
- 5 Days working
- Notice Period:- Can join less than a month
- Job Timing:- 10am to 7:30pm.
About the Role
Are you a server-side developer with a keen interest in reliable solutions?
Is Python your language?
Do you want a challenging role that goes beyond backend development and includes infrastructure and operations problems?
If you answered yes to all of the above, you should join our fast growing team!
We are looking for 3 experienced Backend/DevOps Engineers who will focus on backend development in Python and will be working on reliability, efficiency and scalability of our systems. As a member of our small team you will have a lot of independence and responsibilities.
As Backend/DevOps Engineer you will...:-
- Design and maintain systems that are robust, flexible, and performant
- Be responsible for building complex, high-scale systems
- Prototype new gameplay ideas and concepts
- Develop server tools for game features and live operations
- Be one of three backend engineers on our small and fast moving team
- Work alongside our C++, Android, and iOS developers
- Contribute to ideas and design for new features
To be successful in this role, we'd expect you to…:-
- Have 3+ years of experience in Python development
- Be familiar with common database access patterns (a small sketch follows this list).
- Have experience designing systems, monitoring metrics, and interpreting graphs.
- Have knowledge of AWS, Kubernetes, and Docker.
- Be able to work well in a remote development environment.
- Be able to communicate in English at a native speaking and writing level.
- Be responsible to your fellow remote team members.
- Be highly communicative and go out of your way to contribute to the team and help others
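As a small illustration of the database access patterns mentioned above, here is a minimal sketch in Python (the role's primary language) assuming psycopg2 and a reachable PostgreSQL instance; it wraps a connection pool so callers borrow and return connections cleanly. The DSN and the table name are hypothetical placeholders.

```python
# Minimal sketch: a pooled-connection helper for PostgreSQL.
# Assumes psycopg2; the DSN and the "scores" table are hypothetical placeholders.
from contextlib import contextmanager

from psycopg2.pool import SimpleConnectionPool

POOL = SimpleConnectionPool(minconn=1, maxconn=10, dsn="dbname=game user=game")


@contextmanager
def get_cursor():
    """Borrow a pooled connection, commit on success, roll back on error."""
    conn = POOL.getconn()
    try:
        with conn.cursor() as cur:
            yield cur
        conn.commit()
    except Exception:
        conn.rollback()
        raise
    finally:
        POOL.putconn(conn)


if __name__ == "__main__":
    with get_cursor() as cur:
        cur.execute("SELECT player_id, score FROM scores ORDER BY score DESC LIMIT 10")
        for player_id, score in cur.fetchall():
            print(player_id, score)
```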
What is the role?
As a DevOps Engineer, you are responsible for setting up and maintaining Git repositories and DevOps tools such as Jenkins, UCD, Docker, Kubernetes, JFrog Artifactory, cloud monitoring tools, and cloud security.
Key Responsibilities
- Set up, configure, and maintain Git repos, Jenkins, UCD, etc. for multi-cloud hosting environments.
- Architect and maintain the server infrastructure in AWS. Build highly resilient infrastructure following industry best practices.
- Working on Docker images and maintaining Kubernetes clusters (a short sketch follows this list).
- Develop and maintain the automation scripts using Ansible or other available tools.
- Maintain and monitor cloud Kubernetes Clusters and patching when necessary.
- Working on Cloud security tools to keep applications secured.
- Participate in software development lifecycle, specifically infra design, execution, and debugging required to achieve successful implementation of integrated solutions within the portfolio.
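As a small illustration of the Docker image work listed above, here is a minimal sketch assuming the Docker SDK for Python (the docker package) and a running Docker daemon; it builds and tags an image from a local Dockerfile. The build context path and image tag are hypothetical placeholders.

```python
# Minimal sketch: build and tag a Docker image from a local Dockerfile.
# Assumes the "docker" Python SDK and a running daemon; path/tag are placeholders.
import docker


def build_and_tag(context_path: str, tag: str) -> str:
    client = docker.from_env()
    image, build_logs = client.images.build(path=context_path, tag=tag, rm=True)
    for chunk in build_logs:  # stream the build output as it arrives
        if "stream" in chunk:
            print(chunk["stream"], end="")
    return image.id


if __name__ == "__main__":
    image_id = build_and_tag(".", "registry.example.com/myapp:1.0.0")
    print(f"built image {image_id}")
```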
Required Technical and Professional Expertise:
What are we looking for?
- Minimum 4-6 years of experience in the IT industry.
- Expertise in implementing and managing DevOps CI/CD pipelines.
- Experience with DevOps automation tools; very well versed in DevOps frameworks and Agile.
- Working knowledge of scripting using shell or Python, and of Terraform, Ansible, Puppet, or Chef.
- Experience with and a good understanding of at least one cloud platform, such as AWS, Azure, or Google Cloud.
- Knowledge of Docker and Kubernetes is required.
- Proficient in troubleshooting, with proven abilities in resolving complex technical issues.
- Experience working with ticketing tools.
- Middleware technologies knowledge or database knowledge is desirable.
- Experience with and a good grasp of the Jira tool is a plus.
What can you look for?
A wholesome opportunity in a fast-paced environment will enable you to juggle between concepts yet maintain the quality of content, interact, share your ideas, and have loads of learning while at work. Work with a team of highly talented young professionals and enjoy the benefits of being at Xoxoday.
We are
A fast-growing SaaS commerce company based in Bangalore with offices in Delhi, Mumbai, SF, Dubai, Singapore, and Dublin. We have three products in our portfolio: Plum, Empuls, and Compass. Xoxoday works with over 1000 global clients. We help our clients engage and motivate their employees, sales teams, channel partners, or consumers for better business results.
Way forward
We look forward to connecting with you. As you may take time to review this opportunity, we will wait for a reasonable time of around 3-5 days before we screen the collected applications and start lining up job discussions with the hiring manager. However, we assure you that we will attempt to maintain a reasonable time window for successfully closing this requirement. The candidates will be kept informed and updated on the feedback and application status.
- Experience working on Linux based infrastructure
- Strong hands-on knowledge of setting up production, staging, and dev environments on AWS/GCP/Azure
- Strong hands-on knowledge of technologies like Terraform, Docker, Kubernetes
- Strong understanding of continuous testing environments such as Travis-CI, CircleCI, Jenkins, etc.
- Configuring and managing databases such as MySQL and MongoDB
- Excellent troubleshooting skills
- Working knowledge of various tools, open-source technologies, and cloud services
- Awareness of critical concepts in DevOps and Agile principles



