Primary Skills:
Linux (Ubuntu) administration, Git, Gerrit, Jenkins administration, cloud services (AWS preferred), Apache, Ansible, Python, PostgreSQL, RabbitMQ, AWS CloudWatch, AWS CloudFormation (CFT)
Additional Skills Required:
- Experience working with Jenkins, Git, and Gerrit
- Good understanding of AWS security and its implementation
- Good Python skills
- Experience working with Git, Gerrit, Jira, and Confluence
- Exposure to messaging systems such as RabbitMQ
- Exposure to HTML, Groovy, JavaScript, and shell scripting
- Exposure to Kibana; provisioning, capacity planning, and performance analysis at various levels
- Exposure to Android development
- Experience working with cloud-native architecture
- Experience with Logstash and Elasticsearch
- Expertise in full-stack design techniques, as well as experience working across large environments with multiple operating systems/infrastructure for large-scale programs
- May be recognized as a leader in Agile and in cultivating teams working in Agile frameworks
- Strong understanding of techniques such as continuous integration, continuous delivery, test-driven development, cloud development, resiliency, and security
- Stays abreast of cutting-edge technologies/trends and uses experience to influence the application of those technologies/trends to support the business
- Experience modelling and provisioning cloud infrastructure using AWS CloudFormation
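As a rough illustration of the last point, a CloudFormation template can be modelled as plain data and serialized before being handed to the AWS APIs. This is only a minimal sketch; the resource name, parameter, and AMI id below are made up for the example.

```python
import json

# Minimal CloudFormation template modelling a single EC2 instance.
# "AppServer" and the AMI id are placeholders, not from any real stack.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Sketch: one EC2 instance, type chosen via a parameter",
    "Parameters": {
        "InstanceType": {
            "Type": "String",
            "Default": "t3.micro",
            "AllowedValues": ["t3.micro", "t3.small"],
        }
    },
    "Resources": {
        "AppServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "InstanceType": {"Ref": "InstanceType"},
                "ImageId": "ami-12345678",  # placeholder AMI id
            },
        }
    },
}

# Serialize; in practice this body would go to create_stack/deploy tooling.
body = json.dumps(template, indent=2)
print(len(body) > 0)
```

In a real setup the serialized body would be deployed via the CLI (`aws cloudformation deploy`) or an SDK rather than printed.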
Key Responsibilities:
- Perform a technical lead role for DevOps development and support teams.
- Communicate and coordinate with both offshore and onsite teams.
- Translate business requirements into project plans and workable items/activities.
- Have a thorough understanding of the software development lifecycle and the ability to implement software following a structured approach.
- Perform in-depth technical reviews of project deliverables and ensure they are defect-free (minimizing post-release defects).
- Understand the current applications and technical architecture, and improve them as needed.
- Stay abreast of new technologies and methods to optimize the development process, as well as the latest SDKs, testing tools, etc.
About Lean Technologies
Lean is on a mission to revolutionize the fintech industry by providing developers with a universal API to access their customers' financial accounts across the Middle East. We’re breaking down infrastructure barriers and empowering the growth of the fintech industry. With Sequoia leading our $33 million Series A round, Lean is poised to expand its coverage across the region while continuing to deliver unparalleled value to developers and stakeholders.
Join us and be part of a journey to enable the next generation of financial innovation. We offer competitive salaries, private healthcare, flexible office hours, and meaningful equity stakes to ensure long-term alignment. At Lean, you'll work on solving complex problems, build a lasting legacy, and be part of a diverse, inclusive, and equal opportunity workplace.
About the role:
Are you a highly motivated and experienced software engineer looking to take your career to the next level? Our team at Lean is seeking a talented engineer to help us build the distributed systems that allow our engineering teams to deploy our platform in multiple geographies across various deployment solutions. You will work closely with functional heads across software, QA, and product teams to deliver scalable and customizable release pipelines.
Responsibilities
- Distributed systems architecture – understand and manage the most complex systems
- Continual reliability and performance optimization – enhancing observability stack to improve proactive detection and resolution of issues
- Employing cutting-edge methods and technologies, continually refining existing tools to enhance performance and drive advancements
- Problem-solving capabilities – troubleshooting complex issues and proactively reducing toil through automation
- Experience in technical leadership and setting technical direction for engineering projects
- Collaboration skills – working across teams to drive change and provide guidance
- Technical expertise – deep skills and the ability to act as a subject matter expert in one or more of: IaC, observability, coding, reliability, debugging, system design
- Capacity planning – effectively forecasting demand and reacting to changes
- Analyze and improve efficiency, scalability, and stability of various system resources
- Incident response – rapidly detecting and resolving critical incidents. Minimizing customer impact through effective collaboration, escalation (including periodic on-call shifts) and postmortems
Requirements
- 10+ years of experience in Systems Engineering, DevOps, or SRE roles running large-scale infrastructure, cloud, or web services
- Strong background in Linux/Unix Administration and networking concepts
- We work on OCI but would accept candidates with solid GCP/AWS or other cloud providers’ knowledge and experience
- 3+ years of experience with managing Kubernetes clusters, Helm, Docker
- Experience in operating CI/CD pipelines that build and deliver services on the cloud and on-premise
- Work with CI/CD tools/services such as Jenkins, GitHub Actions, ArgoCD, etc.
- Experience with configuration management tools either Ansible, Chef, Puppet, or equivalent
- Infrastructure as Code - Terraform
- Experience in production environments with both relational and NoSQL databases
- Coding with one or more of the following: Java, Python, and/or Go
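The Kubernetes and coding requirements above often meet in practice as small tooling that generates deployment manifests inside a release pipeline. The sketch below assumes nothing about the actual stack; the app name, image, and port are hypothetical.

```python
# Sketch: programmatically build a Kubernetes Deployment manifest, as a
# release pipeline might before applying it. All names are illustrative.
def deployment_manifest(name: str, image: str, replicas: int = 2) -> dict:
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            # The selector must match the pod template labels.
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [
                        {
                            "name": name,
                            "image": image,
                            "ports": [{"containerPort": 8080}],
                        }
                    ]
                },
            },
        },
    }

manifest = deployment_manifest("web", "registry.example.com/web:1.0.0", replicas=3)
print(manifest["spec"]["replicas"])  # prints 3
```

A pipeline would typically serialize this to YAML and hand it to `kubectl apply` or the Kubernetes API.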
Bonus
- MultiCloud or Hybrid Cloud experience
- OCI and GCP
Why Join Us?
At Lean, we value talent, drive, and entrepreneurial spirit. We are constantly on the lookout for individuals who identify with our mission and values, even if they don’t meet every requirement. If you're passionate about solving hard problems and building a legacy, Lean is the right place for you. We are committed to equal employment opportunities regardless of race, color, ancestry, religion, gender, sexual orientation, or disability.
LOCATION: Remote (India)
EDUCATION AND EXPERIENCE:
- Degree in computer science, software engineering, or a related field
- At least 3-5 years of professional work experience as a DevOps / test automation / deployment engineer
- Experience in agile software development methodologies
JOB RESPONSIBILITIES:
- Design and develop a scalable software test framework to automate test procedures
- Perform periodic health checks of existing sites (manually, and by developing an automation pipeline)
- Manage per-site software releases
- Generate release notes, user guides, and technical documentation
- Perform upgrades/downgrades (patch management)
- Generate and manage configurations
- Suggest process improvements by interfacing with the product owner
- File and track process/installation/deployment-related issues
- Verify customer-reported issues and translate them into technical tasks for development
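A scalable test framework of the kind described above is usually built on a standard runner such as `unittest` or `pytest`. Here is a minimal `unittest`-based sketch of a site health check; the `check_site` helper and its thresholds are invented for the example (a real check would issue an HTTP request).

```python
import unittest

# Hypothetical health-check rule: a site is healthy if it returns HTTP 200
# in under 500 ms. A stubbed response stands in for a real HTTP call.
def check_site(response_code: int, latency_ms: float) -> dict:
    return {
        "healthy": response_code == 200 and latency_ms < 500,
        "code": response_code,
        "latency_ms": latency_ms,
    }

class SiteHealthTests(unittest.TestCase):
    def test_healthy_site(self):
        self.assertTrue(check_site(200, 120.0)["healthy"])

    def test_slow_site_is_unhealthy(self):
        self.assertFalse(check_site(200, 900.0)["healthy"])

    def test_error_response_is_unhealthy(self):
        self.assertFalse(check_site(503, 50.0)["healthy"])

if __name__ == "__main__":
    unittest.main(exit=False, verbosity=0)
```

Hooking a suite like this into a CI job gives the periodic, automated health-check pipeline the posting asks for.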
REQUIREMENTS:
The following are must-have requirements:
- Automation frameworks such as Selenium
- Scripting languages such as Perl and Python
- Linux systems
- Version control systems such as Git
- Issue tracking systems such as Jira
- Networking protocols
- Cloud infrastructure
- Ability to work individually and as part of a team with a sense of urgency
- Excellent written and verbal communication skills in English
- Great attention to detail
PREFERRED:
The following are good-to-have requirements:
- Knowledge of programming languages such as C and C++
- Experience managing cloud-based (e.g., AWS, Google Cloud) and in-house server infrastructure
- Familiarity with machine learning / artificial intelligence infrastructure
- Experience in data visualization and statistics
- Basic knowledge of hardware infrastructure, including routers, switches, and other devices
- Familiarity with web & data security
Job Description
BUDGET: 20 LPA (MAX)
What you will do - Key Responsibilities
- The DevOps architect will be responsible for testing, QC, and debugging support across the various server-side and Java software/servers for products developed or procured by the company; will debug problems with the integration of all software and with on-field deployments; and will suggest improvements, workarounds ("hacks"), and structured solutions/approaches.
- Responsible for scaling the architecture toward 10M+ users.
- Will work closely with other team members, including web developers, software developers, application engineers, and product managers, to test and deploy existing products for the various specialists and personnel using the software.
- Will act in the capacity of team lead as necessary, to coordinate and organize individual effort toward the successful completion/demo of an application.
- Will be solely responsible for application approval before demos to clients, sponsors, and investors.
Essential Requirements
- Should understand the ins and outs of Docker and Kubernetes
- Can architect complex cloud-based solutions using multiple products on either AWS or GCP
- Should have a solid understanding of cryptography and secure communication
- Know your way around Unix systems and can write complex shell scripts comfortably
- Should have a solid understanding of Processes and Thread Scheduling at the OS level
- Skilled with Ruby, Python or similar scripting languages
- Experienced with installing and managing multiple GPUs spread across multiple machines
- Should have at least 5 years of experience managing large server deployments
Category
DevOps Engineer (IT & Networking)
Expertise
- DevOps - 3 years - Intermediate
- Python - 2 years
- AWS - 3 years - Intermediate
- Docker - 3 years - Intermediate
- Kubernetes - 3 years - Intermediate
Please find the JD below:
- The candidate should have good platform experience on Azure with Terraform.
- The DevOps engineer needs to help developers create the pipelines and K8s deployment manifests.
- Good to have: experience migrating data from AWS to Azure.
- Manage/automate infrastructure using Terraform. Jenkins is the key CI/CD tool we use, and it will be used to run these Terraform configurations.
- Provision and manage VMs on Azure Cloud.
- Good hands-on experience with networking in the cloud is required.
- Ability to set up databases on VMs as well as managed DBs, and to properly configure cloud-hosted microservices to communicate with the DB services.
- Kubernetes, storage, Key Vault, networking (load balancing and routing), and VMs are the key areas of infrastructure expertise that are essential.
- The requirement is to administer Kubernetes clusters end to end (application deployment, managing namespaces, load balancing, policy setup, using blue-green/canary deployment models, etc.).
- Experience in AWS is desirable.
- Python experience is optional; however, PowerShell is mandatory.
- Know-how in the use of GitHub
- Administration of Azure Kubernetes Service
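One common way to realize the blue-green model mentioned above is to run two Deployments side by side and flip the Service selector to the new colour. The sketch below only shows the manifest mechanics; the service name, port, and label keys are hypothetical.

```python
# Sketch of a blue/green cutover at the Service level. The "colour" label
# selects which Deployment receives traffic; all names are illustrative.
def service_manifest(active_colour: str) -> dict:
    return {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": "web"},
        "spec": {
            "selector": {"app": "web", "colour": active_colour},
            "ports": [{"port": 80, "targetPort": 8080}],
        },
    }

def cutover(current: dict, new_colour: str) -> dict:
    # In a real rollout you would apply the updated Service with kubectl or
    # the API, and keep `current` around for an instant rollback.
    updated = service_manifest(new_colour)
    return updated

svc = cutover(service_manifest("blue"), "green")
print(svc["spec"]["selector"]["colour"])  # prints green
```

Because only the selector changes, rollback is a single re-apply of the previous Service manifest, which is what makes blue-green attractive for the end-to-end cluster administration described here.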
Do Your Thng
DYT - Do Your Thing, is an app, where all social media users can share brands they love with their followers and earn money while doing so! We believe everyone is an influencer. Our aim is to democratise social media and allow people to be rewarded for the content they post. How does DYT help you? It accelerates your career through collaboration opportunities with top brands and gives you access to a community full of experts in the influencer space.
Role: DevOps
Job Description:
We are looking for experienced DevOps engineers to join our Engineering team. The candidate will work with our engineers and interact with the tech team to deliver high-quality web applications for a product.
Required Experience
- DevOps engineer with 2+ years of experience in development and production operations, supporting Linux- and Windows-based applications and cloud deployments (AWS/GCP stack)
- Experience working with continuous integration and continuous deployment pipelines
- Exposure to managing LAMP-stack-based applications
- Experience with resource-provisioning automation using tools such as CloudFormation, Terraform, and ARM templates
- Experience working closely with clients, understanding their requirements, and designing and implementing quality solutions to meet their needs
- Ability to take ownership of the work carried out
- Experience coordinating with the rest of the team to deliver well-architected, high-quality solutions
- Experience deploying Docker based applications
- Experience with AWS services.
- Excellent verbal and written communication skills
Desired Experience
- Exposure to AWS, Google Cloud, and Azure
- Experience with Jenkins, Ansible, and Terraform
- Experience building monitoring tools and responding to alarms triggered in the production environment
- Willingness to quickly become a member of the team and to do what it takes to get the job done
- Ability to work well in a fast-paced environment, and to listen to and learn from stakeholders
- A strong work ethic, incorporating company values into your everyday work
- Deploy updates and fixes
- Provide Level 2 technical support
- Build tools to reduce occurrences of errors and improve client experience
- Develop software to integrate with internal back-end systems
- Perform root cause analysis for production errors
- Investigate and resolve technical issues
- Develop scripts to automate visualization
- Design procedures for system troubleshooting and maintenance
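Root cause analysis for production errors, listed above, often starts with grouping error log lines by message so the dominant failure surfaces first. A toy pass over application logs might look like this; the log format and messages are invented for the example.

```python
from collections import Counter

# Toy RCA pass: count ERROR lines by message so the most frequent failure
# surfaces first. The timestamped log format here is hypothetical.
logs = [
    "2024-05-01 INFO request served",
    "2024-05-01 ERROR db connection refused",
    "2024-05-01 ERROR db connection refused",
    "2024-05-01 ERROR timeout calling payments",
]

errors = Counter(
    line.split("ERROR ", 1)[1] for line in logs if " ERROR " in line
)
top_error, count = errors.most_common(1)[0]
print(top_error, count)  # prints: db connection refused 2
```

A production version would stream from the log aggregator (e.g., Elasticsearch/Kibana, which several of these postings mention) rather than a list, but the grouping step is the same.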
Skills and Qualifications
- BSc in Computer Science, Engineering or relevant field
- Experience as a DevOps Engineer or similar software engineering role
- Proficient with git and git workflows
- Good knowledge of Node or Python
- Working knowledge of databases and SQL
- Problem-solving attitude
- Collaborative team spirit
We have an excellent job opportunity for the position of AWS Infra Architect with a reputed multinational company in Hyderabad.
Mandatory skills: please find the expectations below
- At least 3+ years of experience as an architect in AWS
Primary skills:
- Designing, planning, and implementing solutions, and providing architectural designs
- Automation using Terraform / PowerShell / Python
- Good experience with CloudFormation templates
- Experience with CloudWatch
- Security in AWS
- Strong Linux administration skills
Searce is a niche cloud consulting business with a futuristic tech DNA. We do new-age tech to realise the "Next" in the "Now" for our clients. We specialise in cloud data engineering, AI/machine learning, and advanced cloud infrastructure tech such as Anthos and Kubernetes. We are one of the top and fastest-growing partners for Google Cloud and AWS globally, with over 2,500 clients successfully moved to the cloud.
What do we believe?
1. Best practices are overrated
○ Implementing best practices can only make one 'average'.
2. Honesty and Transparency
○ We believe in the naked truth. We do what we say and say what we do.
3. Client Partnership
○ A client-vendor relationship? No. We partner with clients instead.
○ And our sales team comprises 100% of our clients.
How do we work?
It's all about being Happier first, and the rest follows. Searce work culture is defined by HAPPIER.
1. Humble: Happy people don’t carry ego around. We listen to understand; not to
respond.
2. Adaptable: We are comfortable with uncertainty. And we accept changes well. As
that’s what life's about.
3. Positive: We are super positive about work & life in general. We love to forget and
forgive. We don’t hold grudges. We don’t have time or adequate space for it.
4. Passionate: We are as passionate about the great street-food vendor across the
street as about Tesla’s new model and so on. Passion is what drives us to work and
makes us deliver the quality we deliver.
5. Innovative: Innovate or Die. We love to challenge the status quo.
6. Experimental: We encourage curiosity & making mistakes.
7. Responsible: Driven. Self motivated. Self governing teams. We own it.
Are you the one? A quick self-discovery test:
1. Love for cloud: When was the last time your dinner entailed an act of "How would 'Jerry Seinfeld' pitch a cloud platform and products to this prospect?", while your friend did the 'Sheldon' version of the same thing?
2. Passion for sales: When was the last time you stopped at a remote gas station while on vacation and ended up helping the gas station owner SaaSify his 7 gas stations across other geographies?
3. Compassion for customers: You listen more than you speak. When you do speak, people feel the need to listen.
4. Humor for life: When was the last time you told a concerned CEO, "If Elon Musk can attempt to take humanity to Mars, why can't we take your business to run on the cloud?"
Your bucket of undertakings:
This position will be responsible for consulting with clients and proposing architectural solutions to help move and improve infrastructure from on-premise to the cloud, or to help optimize cloud spend when moving from one public cloud to another.
1. Be the first to experiment with new-age cloud offerings; help define best practices as a thought leader for cloud, automation & DevOps; and be a solution visionary and technology expert across multiple channels.
2. Continually augment skills and learn new tech as the technology and client needs evolve.
3. Use your experience in Google Cloud Platform, AWS, or Microsoft Azure to build hybrid-cloud solutions for customers.
4. Provide leadership to project teams, and facilitate the definition of project deliverables around core cloud-based technology and methods.
5. Define tracking mechanisms and ensure IT standards and methodology are met; deliver quality results.
6. Participate in technical reviews of requirements, designs, code, and other artifacts.
7. Identify and keep abreast of new technical concepts in Google Cloud Platform.
8. Security, risk, and compliance: advise customers on best practices around access management, network setup, regulatory compliance, and related areas.
Accomplishment Set
● Passionate, persuasive, articulate Cloud professional capable of quickly establishing
interest and credibility
● Good business judgment, a comfortable, open communication style, and a
willingness and ability to work with customers and teams.
● Strong service attitude and a commitment to quality.
● Highly organised and efficient.
● Confident working with others to inspire a high-quality standard.
Education, Experience, etc.
1. Is education overrated? Yes, we believe so. However, there is no way to locate you otherwise. So, unfortunately, we might have to look for a Bachelor's or Master's degree in engineering from a reputed institute, or you should have been programming since the age of 12. The latter is better, and we will find you faster if you mention it in some manner. It's not just degrees: we are not too thrilled by tech certifications either... :)
2. To reiterate: a passion for tech awesomeness, an insatiable desire to learn the latest new-age cloud tech, a highly analytical aptitude, and a strong 'desire to deliver' outlive those fancy degrees!
3. 1-5 years of experience, with at least 2-3 years of hands-on experience in cloud computing (AWS/GCP/Azure) and IT operational experience in a global enterprise environment.
4. Good analytical, communication, problem-solving, and learning skills.
5. Knowledge of programming against cloud platforms such as Google Cloud Platform.
Job Location: Jaipur
Experience Required: Minimum 3 years
About the role:
As a DevOps Engineer for Punchh, you will work with our developers, SRE, and DevOps teams implementing our next-generation infrastructure. We are looking for a self-motivated, responsible team player who loves designing systems that scale. Punchh provides a rich engineering environment where you can be creative, learn new technologies, and solve engineering problems, all while delivering business objectives. The DevOps culture here is one of immense trust and responsibility. You will be given the opportunity to make an impact, as there are no silos here.
Responsibilities:
- Deliver SLAs and business objectives through whole-lifecycle design of services, from inception to implementation.
- Ensure availability, performance, security, and scalability of AWS production systems.
- Scale our systems and services through continuous integration, infrastructure as code, and gradual refactoring in an agile environment.
- Maintain services once a project is live by monitoring and measuring availability, latency, and overall system and application health.
- Write and maintain software that runs the infrastructure powering the Loyalty and Data platform for some of the world's largest brands.
- Participate in a 24x7 on-call rotation in shifts for Level 2 and higher escalations.
- Respond to incidents and write blameless RCAs/postmortems.
- Implement and practice proper security controls and processes.
- Provide recommendations for architecture and process improvements.
- Define and deploy systems for metrics, logging, and monitoring on the platform.
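Measuring availability and latency against an SLA, as the responsibilities above require, reduces to a small amount of arithmetic once samples are collected. This sketch uses made-up sample values and a hypothetical 1000 ms success threshold; a real system would pull the samples from its metrics store.

```python
import statistics

# Sketch: evaluate availability and latency against an SLA target.
# Sample latencies (ms) and the thresholds are invented for the example.
samples_ms = [110, 95, 130, 2400, 105, 88]
ok = [s for s in samples_ms if s < 1000]  # requests under the 1s budget

availability = len(ok) / len(samples_ms)   # fraction of successful requests
p50 = statistics.median(samples_ms)        # median latency

sla_met = availability >= 0.99             # hypothetical 99% target
print(round(availability, 3), p50, sla_met)
```

With one slow outlier out of six samples, availability lands at roughly 0.833, so the 99% target is missed: exactly the kind of signal the monitoring systems described above should surface for on-call response.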
Must have:
- Minimum 3 years of experience in DevOps.
- BS degree in Computer Science, Mathematics, Engineering, or equivalent practical experience.
- Strong interpersonal skills.
- Must have experience with CI/CD tooling such as Jenkins, CircleCI, or TravisCI.
- Must have experience with Docker and Kubernetes, Amazon ECS, or Mesos.
- Experience in code development in at least one high-level programming language from this list: Python, Ruby, Golang, Groovy.
- Proficient in shell scripting and, most importantly, knowing when to stop scripting and start developing.
- Experience in the creation of highly automated infrastructures with configuration management tools such as Terraform, CloudFormation, or Ansible.
- In-depth knowledge of the Linux operating system and its administration.
- Production experience with a major cloud provider such as Amazon AWS.
- Knowledge of web server technologies such as Nginx or Apache.
- Knowledge of Redis, Memcache, or one of the many other in-memory data stores.
- Experience with load balancing technologies such as Amazon ALB/ELB, HAProxy, or F5.
- Comfortable with large-scale, highly available distributed systems.
Good to have:
- Understanding of web standards (REST, SOAP APIs, OWASP, HTTP, TLS)
- Production experience with HashiCorp products such as Vault or Consul
- Expertise in designing, analyzing, and troubleshooting large-scale distributed systems
- Experience in a PCI environment
- Experience with Big Data distributions from Cloudera, MapR, or Hortonworks
- Experience maintaining and scaling database applications
- Knowledge of fundamental systems engineering principles such as the CAP theorem, concurrency control, etc.
- Understanding of network fundamentals: OSI, TCP/IP, topologies, etc.
- Understanding of infrastructure auditing and helping the organization control infrastructure costs
- Experience with Kafka, RabbitMQ, or any messaging bus