Position Overview: We are seeking a talented and experienced Cloud Engineer specializing in AWS cloud services to join our dynamic team. The ideal candidate will have a strong background in AWS infrastructure and services, including EC2, Elastic Load Balancing (ELB), Auto Scaling, S3, VPC, RDS, CloudFormation, CloudFront, Route 53, AWS Certificate Manager (ACM), and Terraform for Infrastructure as Code (IaC). Experience with other AWS services is a plus.
Responsibilities:
• Design, deploy, and maintain AWS infrastructure solutions, ensuring scalability, reliability, and security.
• Configure and manage EC2 instances to meet application requirements.
• Implement and manage Elastic Load Balancers (ELB) to distribute incoming traffic across multiple instances.
• Set up and manage AWS Auto Scaling to dynamically adjust resources based on demand.
• Configure and maintain VPCs, including subnets, route tables, and security groups, to control network traffic.
• Deploy and manage AWS CloudFormation and Terraform templates to automate infrastructure provisioning using Infrastructure as Code (IaC) principles.
• Implement and monitor S3 storage solutions for secure and scalable data storage.
• Set up and manage CloudFront distributions for content delivery with low latency and high transfer speeds.
• Configure Route 53 for domain management, DNS routing, and failover configurations.
• Manage AWS Certificate Manager (ACM) for provisioning, managing, and deploying SSL/TLS certificates.
• Collaborate with cross-functional teams to understand business requirements and provide effective cloud solutions.
• Stay updated with the latest AWS technologies and best practices to drive continuous improvement.
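The Auto Scaling responsibility above follows the target-tracking idea: adjust capacity in proportion to how far a metric sits from its target. A minimal sketch of that rule of thumb in plain Python (not the AWS API; the function name and bounds are illustrative assumptions):

```python
import math

def desired_capacity(current_capacity, metric_value, target_value,
                     min_size=1, max_size=10):
    """Scale capacity in proportion to how far the metric is from its target."""
    desired = math.ceil(current_capacity * metric_value / target_value)
    # clamp to the group's configured bounds
    return max(min_size, min(max_size, desired))

# 4 instances at 80% CPU against a 50% target -> scale out to 7
print(desired_capacity(4, 80.0, 50.0))
```

In a real Auto Scaling group the service applies this logic with cooldowns and warm-up periods; the sketch only shows the proportional core.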
Qualifications:
• Bachelor's degree in Computer Science, Information Technology, or a related field.
• Minimum of 2 years of relevant experience in designing, deploying, and managing AWS cloud solutions.
• Strong proficiency in AWS services such as EC2, ELB, Auto Scaling, VPC, S3, RDS, and CloudFormation.
• Experience with other AWS services such as Lambda, ECS, EKS, and DynamoDB is a plus.
• Solid understanding of cloud computing principles, including IaaS, PaaS, and SaaS.
• Excellent problem-solving skills and the ability to troubleshoot complex issues in a cloud environment.
• Strong communication skills with the ability to collaborate effectively with cross-functional teams.
• Relevant AWS certifications (e.g., AWS Certified Solutions Architect, AWS Certified DevOps Engineer, etc.) are highly desirable.
Additional Information:
• We value creativity, innovation, and a proactive approach to problem-solving.
• We offer a collaborative and supportive work environment where your ideas and contributions are valued.
• Opportunities for professional growth and development.
Someshwara Software Pvt Ltd is an equal opportunity employer.
We celebrate diversity and are dedicated to creating an inclusive environment for all employees.
Role: Principal DevOps Engineer
About the Client
The client is a product-based company building a platform using AI and ML technology for transportation and logistics. They also have a presence in the global market.
Responsibilities and Requirements
• Experience in designing and maintaining high volume and scalable micro-services architecture on cloud infrastructure
• Knowledge in Linux/Unix Administration and Python/Shell Scripting
• Experience working with cloud platforms like AWS (EC2, ELB, S3, Auto-scaling, VPC, Lambda), GCP, Azure
• Knowledge of deployment automation, Continuous Integration and Continuous Deployment (Jenkins, Maven, Puppet, Chef, GitLab) and monitoring tools like Zabbix, CloudWatch Monitoring, Nagios
• Knowledge of Java Virtual Machines, Apache Tomcat, Nginx, Apache Kafka, Microservices architecture, Caching mechanisms
• Experience in enterprise application development, maintenance and operations
• Knowledge of best practices and IT operations in an always-up, always-available service
• Excellent written and oral communication skills, judgment and decision-making skills
CashFlo, true to its name, is on a mission to unlock $100+ billion of trapped working capital in the economy by creating India's largest marketplace for invoice discounting to solve the day-to-day problems faced by businesses. Founded by ex-BCG and ISB / IIM alumni, and backed by SAIF Partners, CashFlo helps democratize access to credit in a fair and transparent manner. Awarded Supply Chain Finance solution of the year in 2019, CashFlo creates a win-win ecosystem for buyers, suppliers and financiers through its unique platform model. CashFlo shares its parentage with HCS Ltd., a 25-year-old, highly reputed financial services company that has raised over Rs. 15,000 Crores in the market till date for over 200 corporate clients.
Our leadership team consists of ex-BCG, ISB / IIM alumni with a team of industry veterans from financial services serving as the advisory board. We bring to the table deep insights in the SME lending
space, based on 100+ years of combined experience in Financial Services. We are a team of passionate problem solvers, and are looking for like-minded people to join our team.
The challenge
Solve a complex $300+ billion problem at the cutting edge of Fintech innovation, and make a tangible difference to the small business landscape in India. Find innovative solutions for problems in a yet-to-be-discovered market.
Key Responsibilities
As an early team member, you will get a chance to set the foundations of our engineering culture. You will help articulate our engineering principles and help set the long-term roadmap, making decisions on the evolution of CashFlo's technical architecture and building new features end to end, from talking to customers to writing code.
Our Ideal Candidate Will Have
3+ years of full-time DevOps engineering experience
Hands-on experience working with AWS services
Deep understanding of virtualization and orchestration tools like Docker, ECS
Experience in writing Infrastructure as Code using tools like CDK, CloudFormation, Terragrunt or Terraform
Experience using centralized logging & monitoring tools such as ELK, CloudWatch, DataDog
Built monitoring dashboards using Prometheus, Grafana
Built and maintained code pipelines and CI/CD
Thorough knowledge of SDLC
Been part of teams that have maintained large deployments
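The monitoring dashboards mentioned above (Prometheus, Grafana, CloudWatch, DataDog) typically report percentile latencies such as p95. A small self-contained sketch of the nearest-rank percentile such panels often compute (an illustrative helper, not tied to any of the listed tools):

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile, as dashboards often report for p95/p99 latency."""
    if not samples:
        raise ValueError("no samples")
    ordered = sorted(samples)
    # nearest-rank: smallest value with at least p% of samples at or below it
    k = max(0, math.ceil(p / 100 * len(ordered)) - 1)
    return ordered[k]
```

For example, `percentile(latencies_ms, 95)` over a rolling window of request latencies gives the p95 value a Grafana panel would chart.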
About You
Product-minded. You have a sense for great user experience and feel for when something is off. You love understanding customer pain points
and solving for them.
Get a lot done. You enjoy all aspects of building a product and are comfortable moving across the stack when necessary. You problem solve
independently and enjoy figuring stuff out.
High conviction. When you commit to something, you're in all the way. You're opinionated, but you know when to disagree and commit.
Mediocrity is the worst of all possible outcomes.
What's in it for me
Gain exposure to the Fintech space - one of the largest and fastest growing markets in India and globally
Shape India’s B2B Payments landscape through cutting edge technology innovation
Be directly responsible for driving the company's success
Join a high performance, dynamic and collaborative work environment that throws new challenges on a daily basis
Fast track your career with trainings, mentoring, growth opportunities on both IC and management track
Work-life balance and fun team events
LogiNext is looking for a technically savvy and passionate Principal DevOps Engineer or Senior Database Administrator to support the development and operations efforts for its product. You will choose and deploy tools and technologies to build and support a robust infrastructure.
You have hands-on experience in building secure, high-performing and scalable infrastructure. You have experience automating and streamlining development operations and processes. You are a master at troubleshooting and resolving issues in dev, staging and production environments.
Responsibilities:
- Design and implement scalable infrastructure for delivering and running web, mobile and big data applications on cloud
- Scale and optimise a variety of SQL and NoSQL databases (especially MongoDB), web servers, application frameworks, caches, and distributed messaging systems
- Automate the deployment and configuration of the virtualized infrastructure and the entire software stack
- Plan, implement and maintain robust backup and restoration policies ensuring low RTO and RPO
- Support several Linux servers running our SaaS platform stack on AWS, Azure, IBM Cloud, Ali Cloud
- Define and build processes to identify performance bottlenecks and scaling pitfalls
- Manage robust monitoring and alerting infrastructure
- Explore new tools to improve development operations and automate daily tasks
- Ensure High Availability and Auto-failover with minimum or no manual intervention
Requirements:
- Bachelor's degree in Computer Science, Information Technology or a related field
- 8 to 10 years of experience in designing and maintaining high-volume and scalable micro-services architecture on cloud infrastructure
- Strong background in Linux/Unix Administration and Python/Shell Scripting
- Extensive experience working with cloud platforms like AWS (EC2, ELB, S3, Auto-scaling, VPC, Lambda), GCP, Azure
- Experience in deployment automation, Continuous Integration and Continuous Deployment (Jenkins, Maven, Puppet, Chef, GitLab) and monitoring tools like Zabbix, CloudWatch Monitoring, Nagios
- Knowledge of Java Virtual Machines, Apache Tomcat, Nginx, Apache Kafka, Microservices architecture, Caching mechanisms
- Experience in query analysis, performance tuning and database redesigning
- Experience in enterprise application development, maintenance and operations
- Knowledge of best practices and IT operations in an always-up, always-available service
- Excellent written and oral communication skills, judgment and decision-making skills
As a SaaS DevOps Engineer, you will be responsible for providing automated tooling and process enhancements for SaaS deployment, application and infrastructure upgrades and production monitoring.
- Development of automation scripts and pipelines for deployment and monitoring of new production environments.
- Development of automation scripts for upgrades, hotfix deployments and maintenance.
- Work closely with Scrum teams and product groups to support the quality and growth of the SaaS services.
- Collaborate closely with the SaaS Operations team to handle day-to-day production activities, including alerts and incidents.
- Assist the SaaS Operations team with customer-focused projects: migrations, feature enablement.
- Write knowledge articles to document known issues and best practices.
- Conduct regression tests to validate solutions or workarounds.
- Work in a globally distributed team.
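Automation scripts for deployments and upgrades, like those described above, commonly wrap flaky steps in retry-with-exponential-backoff. A hedged sketch of that pattern (the `step` callable, retry count, and delays are illustrative assumptions, not a specific tool's API):

```python
import time

def run_with_backoff(step, retries=3, base_delay=1.0, sleep=time.sleep):
    """Retry a flaky step, waiting base_delay, 2x, 4x... between attempts."""
    for attempt in range(retries):
        try:
            return step()
        except Exception:
            if attempt == retries - 1:
                raise  # out of retries: surface the last failure
            sleep(base_delay * (2 ** attempt))
```

The injectable `sleep` parameter keeps the helper testable; production code would also restrict the caught exception types to known transient errors.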
What achievements should you have so far?
- Bachelor's or master's degree in Computer Science, Information Systems, or equivalent.
- Experience with containerization, deployment, and operations.
- Strong knowledge of CI/CD processes (Git, Jenkins, Pipelines).
- Good experience with Linux systems and Shell scripting.
- Basic cloud experience, preferably with MS Azure.
- Basic knowledge of containerized solutions (Helm, Kubernetes, Docker).
- Good networking skills and experience.
- Terraform or CloudFormation knowledge will be considered a plus.
- Ability to analyze a task from a system perspective.
- Excellent problem-solving and troubleshooting skills.
- Excellent written and verbal communication skills; mastery of English and the local language.
- Must be organized, thorough, autonomous, committed, flexible, customer-focused and productive.
Acceldata is creating the Data observability space. We make it possible for data-driven enterprises to effectively monitor, discover, and validate Data platforms at Petabyte scale. Our customers are Fortune 500 companies including Asia's largest telecom company, a unicorn fintech startup of India, and many more. We are lean, hungry, customer-obsessed, and growing fast. Our Solutions team values productivity, integrity, and pragmatism. We provide a flexible, remote-friendly work environment.
We are building software that can provide insights into companies' data operations and allows them to focus on delivering data reliably with speed and effectiveness. Join us in building an industry-leading data observability platform that focuses on ensuring data reliability from every spectrum (compute, data and pipeline) of a cloud or on-premise data platform.
Position Summary
This role will support the customer implementation of a data quality and reliability product. The candidate is expected to install the product in the client environment, manage proofs of concept with prospects, become a product expert, and troubleshoot post-installation production issues. The role involves significant interaction with the client data engineering team, so good communication skills are expected.
Required experience
- 6-7 years of experience providing engineering support to data domains/pipelines/data engineers.
- Experience in troubleshooting data issues, analyzing end-to-end data pipelines, and working with users to resolve issues.
- Experience setting up enterprise security solutions, including active directories, firewalls, SSL certificates, Kerberos KDC servers, etc.
- Basic understanding of SQL.
- Experience working with technologies like S3; Kubernetes experience preferred.
- Databricks/Hadoop/Kafka experience preferred but not required.
- Preferred experience in development associated with Kafka or big data technologies: understanding essential Kafka components like ZooKeeper and Brokers, and optimization of Kafka client applications (Producers & Consumers).
- Experience with automation of infrastructure, testing, DB deployment automation, and logging/monitoring/alerting.
- AWS services experience with CloudFormation, ECS, Elastic Container Registry, Pipelines, CloudWatch, Glue, and other related services.
- AWS Elastic Kubernetes Service (EKS): managing and auto-scaling Kubernetes and containers.
- Good knowledge and hands-on experience with various AWS services like EC2, RDS, EKS, S3, Lambda, API, CloudWatch, etc.
- Good and quick with log analysis to perform Root Cause Analysis (RCA) on production deployments and container errors in CloudWatch.
- Working on ways to automate and improve deployment and release processes.
- High understanding of the Serverless architecture concept.
- Good with deployment automation tools and investigating to resolve technical issues.
- Sound knowledge of APIs, databases, and container-based ETL jobs.
- Planning out projects and being involved in project management decisions.
Soft Skills
- Adaptability
- Collaboration with different teams
- Good communication skills
- Team player attitude
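The log-analysis/RCA requirement above usually starts with grouping error lines to surface the dominant failure. A toy sketch assuming a simple `[LEVEL] message` log format (the format is an assumption for illustration, not CloudWatch's actual schema):

```python
import re
from collections import Counter

# assumed log shape: "... [LEVEL] message"
LOG_LINE = re.compile(r'\[(?P<level>\w+)\]\s+(?P<message>.*)')

def error_summary(lines):
    """Group ERROR lines by message so the dominant failure stands out."""
    counts = Counter()
    for line in lines:
        m = LOG_LINE.search(line)
        if m and m.group('level') == 'ERROR':
            counts[m.group('message')] += 1
    return counts.most_common()
```

Feeding exported container logs through a helper like this quickly answers "which error dominated during the incident window?" before deeper root-cause work begins.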
Job description
The role requires you to design development pipelines from the ground up, create Dockerfiles, and design and operate highly available systems in AWS Cloud environments. It also involves configuration management, web services architectures, DevOps implementation, database management, backups, and monitoring.
Key responsibility area
- Ensure reliable operation of CI/CD pipelines
- Orchestrate the provisioning, load balancing, configuration, monitoring and billing of resources in the cloud environment in a highly automated manner
- Logging, metrics and alerting management.
- Creation of Bash/Python scripts for automation
- Performing root cause analysis for production errors.
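Reliable operation of CI/CD pipelines, the first responsibility above, largely means running ordered stages and failing fast. A toy pipeline runner sketching that behaviour (stage names and steps are hypothetical, not a Jenkins API):

```python
def run_pipeline(stages):
    """Run (name, step) pairs in order; stop at the first failing stage."""
    results = []
    for name, step in stages:
        ok = bool(step())
        results.append((name, ok))
        if not ok:
            break  # fail fast: later stages (e.g. deploy) never run
    return results
```

Real CI tools add logging, artifacts, and retries around the same skeleton; the sketch only shows the fail-fast ordering.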
Requirements
- 2 years of experience as a Team Lead.
- Good command of Kubernetes.
- Proficient in the Linux command line and troubleshooting.
- Proficient in AWS services: deployment, monitoring and troubleshooting of applications in AWS.
- Hands-on experience with CI tooling, preferably Jenkins.
- Proficient in deployment using Ansible.
- Knowledge of infrastructure management tools (Infrastructure as Code) such as Terraform, AWS CloudFormation, etc.
- Proficient in deployment of applications behind load balancers and proxy servers such as Nginx, Apache.
- Scripting languages: Bash, Python, Groovy.
- Experience with logging, monitoring, and alerting tools like ELK (Elasticsearch, Logstash, Kibana) and Nagios; Graylog, Splunk, Prometheus, and Grafana are a plus.
Must Have:
Linux, CI/CD (Jenkins), AWS, Scripting (Bash, Shell, Python, Go), Nginx, Docker.
Good to have
Configuration Management (Ansible or similar tool), Logging tools (ELK or similar), Monitoring tools (Nagios or similar), IaC (Terraform, CloudFormation).
- Job Title:- Backend/DevOps Engineer
- Job Location:- Opp. Sola over bridge, Ahmedabad
- Education:- B.E./ B. Tech./ M.E./ M. Tech/ MCA
- Number of Vacancy:- 03
- 5 Days working
- Notice Period:- Can join less than a month
- Job Timing:- 10am to 7:30pm.
About the Role
Are you a server-side developer with a keen interest in reliable solutions?
Is Python your language?
Do you want a challenging role that goes beyond backend development and includes infrastructure and operations problems?
If you answered yes to all of the above, you should join our fast growing team!
We are looking for 3 experienced Backend/DevOps Engineers who will focus on backend development in Python and will be working on reliability, efficiency and scalability of our systems. As a member of our small team you will have a lot of independence and responsibilities.
As Backend/DevOps Engineer you will...:-
- Design and maintain systems that are robust, flexible and performant
- Be responsible for building complex, high-scale systems
- Prototype new gameplay ideas and concepts
- Develop server tools for game features and live operations
- Be one of three backend engineers on our small and fast moving team
- Work alongside our C++, Android, and iOS developers
- Contribute to ideas and design for new features
To be successful in this role, we'd expect you to…:-
- Have 3+ years of experience in Python development
- Be familiar with common database access patterns
- Have experience designing systems, monitoring metrics, and reading graphs.
- Have knowledge of AWS, Kubernetes and Docker.
- Be able to work well in a remote development environment.
- Be able to communicate in English at a native speaking and writing level.
- Be responsible to your fellow remote team members.
- Be highly communicative and go out of your way to contribute to the team and help others
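One common database access pattern the list above alludes to is insert-or-update (upsert). A self-contained sketch using stdlib sqlite3 as a stand-in for a production database (the table and column names are illustrative):

```python
import sqlite3

def upsert_score(conn, player, score):
    """Insert a row, or update it in place if the key already exists."""
    conn.execute(
        "INSERT INTO scores(player, score) VALUES (?, ?) "
        "ON CONFLICT(player) DO UPDATE SET score = excluded.score",
        (player, score),
    )

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE scores (player TEXT PRIMARY KEY, score INTEGER)")
upsert_score(conn, "alice", 10)
upsert_score(conn, "alice", 15)  # second call updates, does not duplicate
```

Doing the insert-or-update in a single statement avoids the read-then-write race that a naive SELECT-then-INSERT approach has under concurrency.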
Job role
Anaxee is India's REACH Engine! To provide access across India, we need to build highly scalable technology which needs scalable Cloud infrastructure. We’re seeking an experienced cloud engineer with expertise in AWS (Amazon Web Services), GCP (Google Cloud Platform), Networking, Security, and Database Management; who will be Managing, Maintaining, Monitoring, Handling Cloud Platforms, and ensuring the security of the same.
You will be surrounded by people who are smart and passionate about the work they are doing.
Every day will bring new and exciting challenges to the job.
Job Location: Indore | Full Time | Experience: 1 year and Above | Salary ∝ Expertise | Rs. 1.8 LPA to Rs. 2.64 LPA
About the company:
Anaxee Digital Runners is building India's largest last-mile Outreach & data collection network of Digital Runners (shared feet-on-street, tech-enabled) to help Businesses & Consumers reach the remotest parts of India, on-demand.
We want to make REACH across India (remotest places), as easy as ordering pizza, on-demand. Already serving 11000 pin codes (57% of India) | Anaxee is one of the very few venture-funded startups in Central India | Website: www.anaxee.com
Important: Check out our company pitch (6 min video) to understand this goal - https://www.youtube.com/watch?v=7QnyJsKedz8
Responsibilities (You will enjoy the process):
#Triage and troubleshoot issues on AWS and GCP, and participate in a rotating on-call schedule to address urgent issues quickly
#Develop and leverage expert-level knowledge of supported applications and platforms in support of project teams (architecture guidance, implementation support) or business units (analysis).
#Monitoring the process on production runs, communicating the information to the advisory team, and raising production support issues to the project team.
#Identifying and deploying cybersecurity measures by continuously performing vulnerability assessment and risk management
#Developing and implementing technical efforts to design, build, and deploy AWS and GCP applications at the direction of lead architects, including large-scale data processing and advanced analytics
#Participate in all aspects of the SDLC for AWS and GCP solutions, including planning, requirements, development, testing, and quality assurance
#Troubleshoot incidents, identify root cause, fix, and document problems, and implement preventive measures
#Educate teams on the implementation of new cloud-based initiatives, providing associated training as required
#Build and maintain operational tools for deployment, monitoring, and analysis of AWS and GCP infrastructure and systems; Design, deploy, maintain, automate & troubleshoot virtual servers and storage systems, firewalls, and Load Balancers in our hybrid cloud environment (AWS and GCP)
What makes a great DevOps Engineer (Cloud) for Anaxee:
#Candidate must have sound knowledge, and hands-on experience, in GCP (Google Cloud Platform) and AWS (Amazon Web Services)
#Good hands-on experience with the Linux operating system and similar distributions, viz. Ubuntu, CentOS, RHEL/RedHat, etc.
#1+ years of experience in the industry
#Bachelor's degree preferred with Science/Maths background (B.Sc/BCA/B.E./B.Tech)
#Enthusiasm to learn new software, take ownership, and a latent desire and curiosity in related domains like Cloud, Hosting, Programming, Software development, and security.
#Demonstrable skills in troubleshooting a wide range of technical problems at the application and system level, and strong organizational skills with an eye for detail.
#Prior knowledge of risk-chain is an added advantage
#AWS/GCP certifications are a plus
#Previous startup experience would be a huge plus.
The ideal candidate must be experienced in cloud-based tech, with a firm grasp of emerging technologies, platforms, and applications, and the ability to customize them to help our business become more secure and efficient. From day one, you'll have an immediate impact on the day-to-day efficiency of our IT operations, and an ongoing impact on our overall growth.
What we offer
#Startup Flexibility
#Exciting challenges to learn, grow and implement new ideas
#ESOPs (Employee Stock Ownership Plans)
#Great working atmosphere in a comfortable office,
#And an opportunity to get associated with a fast-growing VC-funded startup.
What happens after you apply?
You will receive an acknowledgment email with company details.
If shortlisted, our HR Team will get in touch with you (call, email, WhatsApp) within a couple of days.
All remaining information will then be communicated to you via our AMS.
Our expectations before/after you click “Apply Now”
Read about Anaxee: http://www.anaxee.com/
Watch this six mins pitch to get a better understanding of what we are into https://www.youtube.com/watch?v=7QnyJsKedz8
Let's dive into detail (Company Presentation): https://bit.ly/anaxee-deck-brands