GCP Cloud experience mandatory
CI/CD - Azure DevOps
IaC tools - Terraform
Experience with IAM / Access Management within cloud
Networking / Firewalls
Kubernetes / Helm / Istio
Responsible for management of Information Security, including policies and procedures, training, compliance, and tracking. Designed, implemented, and operated controls based on applicable Trust Services Criteria for SOC 2 and HIPAA compliance. Performed risk assessments following NIST Special Publication guidelines.
* Responsible for DevOps and SRE activities. Automated AWS infrastructure using CloudFormation, Terraform, and Chef following Infrastructure-as-Code principles. Containerized and deployed the application on AWS Fargate using Terraform. Ensured the application infrastructure is scalable, secure, and available by monitoring and auditing the application. Served as the face of the company during counter-party application architecture, compliance, and security reviews.
* Implemented TensorFlow-based machine learning models for predicting the probability of a reduction in patient opioid usage after an SCS procedure.
* Product manager, solution architect, and lead developer for Luciditydirect.com, a Ruby on Rails SaaS application
Job Description: Must have DevOps, Jenkins, Terraform, shell scripts; Azure cloud computing; knowledge of code versioning tools such as Git, Bitbucket, etc. Good verbal and non-verbal communication; should be a team player.
Good to have:
§ Proficiency in Azure cloud platforms
§ Knowledge of build and deployment tools such as Jenkins, Ansible, etc.
Should have 4 to 6 years of development experience against the job description below. Band: B2
1) Must have (top 3 skills): DevOps, Jenkins, and Terraform
2) Good to have: Azure cloud computing, Git, and shell scripts
Must-haves: Hands-on DevOps (Git, Ansible, Terraform, Jenkins, Python/Ruby)
Job Description:
Knowledge of what a DevOps CI/CD pipeline is
Understanding of version control systems like Git, including branching and merging strategies
Knowledge of continuous delivery and integration tools like Jenkins and GitHub
Knowledge of developing code using Ruby or Python, and Java or PHP
Knowledge of writing Unix shell (bash, ksh) scripts
Knowledge of automation/configuration management using Ansible, Terraform, Chef or Puppet
Experience and willingness to keep learning in a Linux environment
Ability to provide after-hours support as needed for emergency or urgent situations
Nice-to-haves:
Proficient with container-based products like Docker and Kubernetes
Excellent communication skills (verbal and written)
Able to work in a team and be a team player
Knowledge of PHP, MySQL, Apache and other open-source software
BA/BS in computer science or similar
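The posting above asks for a conceptual understanding of a DevOps CI/CD pipeline. As a rough illustration only (the stage names and toy actions below are hypothetical stand-ins for real build/test/deploy commands), a pipeline is an ordered list of stages with fail-fast semantics: a failed stage stops everything after it.

```python
# Minimal sketch of CI/CD pipeline semantics: ordered stages, fail-fast.
# Stage names and the lambda "actions" are hypothetical stand-ins for
# real commands (git clone, compiler, test runner, deploy script).

def run_pipeline(stages):
    """Run stages in order; stop at the first failure."""
    results = {}
    for name, action in stages:
        ok = action()
        results[name] = "passed" if ok else "failed"
        if not ok:
            break  # fail fast: later stages never run
    return results

stages = [
    ("checkout", lambda: True),
    ("build",    lambda: True),
    ("test",     lambda: False),  # simulate a failing test run
    ("deploy",   lambda: True),   # never reached because "test" failed
]

print(run_pipeline(stages))
```

Real CI servers (Jenkins, GitHub Actions) implement the same idea with richer features: parallel stages, retries, and per-branch triggers tied to the Git branching strategy the posting mentions.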
DevOps Engineer
Who are we? Searce is a niche cloud consulting business with a futuristic tech DNA. We do new-age tech to realise the “Next” in the “Now” for our clients. We specialise in Cloud Data Engineering, AI/Machine Learning and advanced cloud infra tech such as Anthos and Kubernetes. We are one of the top and fastest-growing partners for Google Cloud and AWS globally, with over 2,500 clients successfully accelerated on cloud.
We are looking for a DevOps Engineer with experience in developing, automating and debugging CI/CD pipelines with a variety of automation tools and technologies on public cloud.
Responsibilities
This position will be responsible for consulting with clients and proposing architectural solutions to help move and improve infra from on-premise to cloud, or to help optimize cloud spend when moving from one public cloud to another.
● Be the first one to experiment with new-age cloud offerings; help define best practices as a thought leader for cloud, automation & DevOps; be a solution visionary and technology expert across multiple channels.
● Continually augment skills and learn new tech as the technology and client needs evolve.
● Use your experience in Google Cloud Platform, AWS or Microsoft Azure to build hybrid-cloud solutions for customers.
● Provide leadership to project teams, and facilitate the definition of project deliverables around core cloud-based technology and methods.
● Define tracking mechanisms and ensure IT standards and methodology are met; deliver quality results.
● Participate in technical reviews of requirements, designs, code and other artifacts.
● Identify and keep abreast of new technical concepts in Google Cloud Platform.
● Security, Risk and Compliance: advise customers on best practices around access management, network setup, regulatory compliance and related areas.
Preferred Qualifications
Is education overrated? Yes. We believe so.
However, there is no way to locate you otherwise. So unfortunately we might have to look for a Bachelor's or Master's degree in engineering from a reputed institute, or you should have been programming since the age of 12. And the latter is better; we will find you faster if you mention it in some manner. Not just degrees: we are not too thrilled by tech certifications either ... :) To reiterate: passion for tech-awesome, an insatiable desire to learn the latest new-age cloud tech, a highly analytical aptitude and a strong ‘desire to deliver’ outlive those fancy degrees!
5-10 years of experience, with at least 2-3 years of hands-on experience in cloud computing (AWS/GCP/Azure) and IT operational experience in a global enterprise environment
Good analytical, communication, problem-solving and learning skills
Knowledge of programming against cloud platforms such as Google Cloud Platform, and of lean development methodologies
How do we work? It’s all about being Happier first. And the rest follows. Searce work culture is defined by HAPPIER.
Humble: Happy people don’t carry ego around. We listen to understand, not to respond.
Adaptable: We are comfortable with uncertainty. And we accept changes well. As that’s what life is about.
Positive: We are super positive about work and life in general. We love to forget and forgive. We don’t hold grudges. We don’t have time or adequate space for it.
Passionate: We are as passionate about the great vada-pao vendor across the street as about Tesla’s new model, and so on. Passion is what drives us to work and makes us deliver the quality we deliver.
Innovative: Innovate or die. We love to challenge the status quo.
Experimental: We encourage curiosity and making mistakes.
Responsible: Driven. Self-motivated. Self-governing teams. We own it.
We welcome *really unconventional* creative thinkers who can work in an agile, flexible environment.
We are a flat organization with unlimited growth opportunities, and small team sizes – wherein flexibility is a must, mistakes are encouraged, creativity is rewarded, and excitement is required.
Hello Network - Hiring Alert! Searce Inc is looking for a Director - Cloud Engineering.
### You are a great fit if
- You have worked on environments of all shapes and sizes: on-prem, private, public cloud, hybrid; all-Windows, all-Linux, or a healthy mix. Thanks to this experience, you can connect the dots quickly and understand customer pain points.
- **You are curious**. You keep up with the breakneck pace of innovation of public cloud. You try and learn new things.
- **You are hands-on**. Not content with just making architecture diagrams, you take pleasure in bringing them to life.
### Skillset
- Min 3+ years of experience on GCP / AWS / Azure (overall 8+ years of experience)
- Python, Terraform / Pulumi, CF, Ansible
- Kubernetes, GKE, EKS
- Jenkins, Spinnaker, Prometheus, Grafana, etc.
- Strong cloud architecture experience
- Cloud security
- Relational & NoSQL databases
- Cloud economics: the ability to compare services across public cloud platforms and understand pros and cons
- Ability to deliver technology deep-dive sessions and workshops
### Nice to have
- Certifications - GCP and/or AWS, professional level
- CNCF
- Your contributions to the community - tech blog, Stack Overflow, etc.
If you are looking for a good opportunity in cloud development/DevOps, here is the right one.
Exp: 4-10 yrs
Location: Pune
Job Type: Permanent
Minimum qualifications:
Education: Bachelor's/Master's degree
Proficient in the English language
Relevant experience:
Should have been working for at least four years as a DevOps/Cloud Engineer
Should have worked on an AWS cloud environment in depth
Should have been working in an Infrastructure-as-Code environment, or understand it very clearly
Has done infrastructure coding using CloudFormation/Terraform, configuration management using Chef/Ansible, and an enterprise bus (RabbitMQ/Kafka)
Deep understanding of microservice design; aware of centralized caching (Redis) and centralized configuration (Consul/Zookeeper)
JD:
• 10+ years of overall industry experience
• 5+ years of cloud experience
• 2+ years of architect experience
• Varied background preferred between systems and development
o Experience working with applications, not pure infra experience
• Azure experience - strong background using Azure for application migrations
• Terraform experience - should mention automation technologies in job experience
• Hands-on experience delivering in the cloud
• Must have job experience designing solutions for customers
• IaaS cloud architect; workload migrations to AWS and/or Azure
• Security architecture considerations experience
• CI/CD experience
• Proven application-migration track record
We are a self-organized engineering team with a passion for programming and solving business problems for our customers. We are looking to expand our team capabilities on the DevOps front and are on the lookout for DevOps professionals with relevant hands-on technical experience of 3-7 years. We encourage our team to continuously learn new technologies and apply the learnings in day-to-day work, even if the new technologies are not adopted. We strive to continuously improve our DevOps practices and expertise to form a solid backbone for the product, customer-relationship and sales teams, which enables them to add new customers every week to our financing network.
As a DevOps Engineer, you:
- Will work collaboratively with the engineering and customer support teams to deploy and operate our systems.
- Build and maintain tools for deployment, monitoring and operations.
- Help automate and streamline our operations and processes.
- Troubleshoot and resolve issues in our test and production environments.
- Take control of various mandates and change management processes to ensure compliance for various certifications (PCI and ISO 27001 in particular)
- Monitor and optimize the usage of various cloud services.
- Set up and enforce CI/CD processes and practices
Skills required:
- Strong experience with AWS services (EC2, ECS, ELB, S3, SES, to name a few)
- Strong background in Linux/Unix administration and hardening
- Experience with automation using Ansible, Terraform or equivalent
- Experience with continuous integration and continuous deployment tools (Jenkins)
- Experience with container-related technologies (Docker, LXC, rkt, Docker Swarm, Kubernetes)
- Working understanding of code and scripting (Python, Perl, Ruby, Java)
- Working understanding of SQL and databases
- Working understanding of version control systems (Git is preferred)
- Managing IT operations, setting up best practices and tuning them from time to time.
- Ensuring that process overheads do not reduce the productivity and effectiveness of a small team.
- Willingness to explore and learn new technologies and continuously refactor the tools and processes.
Numerator is a data and technology company reinventing market research. Headquartered in Chicago, IL, Numerator has 1,600 employees worldwide. The company blends proprietary data with advanced technology to create unique insights for a market research industry that has been slow to change. The majority of Fortune 100 companies are Numerator clients.
Job Description
What We Do and How? We are a market research company, revolutionizing how it's done! We mix fast-paced development and unique approaches to bring best practices and strategy to our technology. Our tech stack is deep, leveraging several languages and frameworks including Python, C#, Java, Kotlin, React, Angular, and Django, among others. Our engineering hurdles sit at the intersection of technologies ranging from mobile, computer vision and crowdsourcing to machine learning and big data analytics.
Our Team
From San Francisco to Chicago to Ottawa, our R&D team is made up of talented individuals spanning a robust tech stack: product, data analytics, and engineers across Front End, Back End, DevOps, Business Intelligence, ETL, Data Science, Mobile Apps, and much more. Across these different groups we work towards one common goal: to build products into efficient and seamless user experiences that help our clients succeed.
Numerator is looking for an Infrastructure Engineer to join our growing team. This is a unique opportunity where you will get the chance to work with established and rapidly evolving platforms that handle millions of requests and massive amounts of data. In this position, you will be responsible for taking on new initiatives to automate, enhance, maintain, and scale services in a rapidly scaling SaaS environment. As a member of our team, you will make an immediate impact as you help build out and expand our technology platforms across several software products.
This is a fast-paced role with high growth, visibility and impact, where many of the decisions for new projects will be driven by you and your team from inception through production. Some of the technologies we frequently use include Terraform, Ansible, SumoLogic, Kubernetes, and many AWS-native services.
What you will get to do
• Develop and test the cloud infrastructure to scale a rapidly growing ecosystem.
• Monitor and improve DevOps tools and processes, automate mundane tasks, and improve system reliability.
• Provide deep expertise to help steer scalability and stability improvements early in the development life-cycle, while working with the rest of the team to automate existing processes that deploy, test, and run our production environments.
• Train teams to improve self-healing and self-service cloud-based ecosystems in an evolving AWS infrastructure.
• Build internal tools to demonstrate performance and operational efficiency.
• Develop comprehensive monitoring solutions to provide full visibility into the different platform components, using tools and services like Kubernetes, SumoLogic, Prometheus, and Grafana.
• Identify and troubleshoot availability and performance issues at multiple layers of deployment, from hardware, operating environment and network to application.
• Work cross-functionally with various teams to improve Numerator’s infrastructure through automation.
• Work with other teams to assist with issue resolution related to application configuration, deployment, or debugging.
• Lead by example and evangelize DevOps best practices within other engineering teams at Numerator.
Skills & Requirements
What you bring
• A minimum of 3 years of work experience in backend software, DevOps, or a related field.
• A passion for software engineering, automation and operations, and excitement about reliability, availability and performance.
• Availability to participate in after-hours on-call support with your fellow engineers.
• A strong analytical and problem-solving mindset combined with experience troubleshooting large-scale systems.
• Fundamental knowledge of networking, operating systems and package/build systems (IP subnets and routing, ACLs, core Ubuntu, pip and npm).
• Experience with automation technologies to build, deploy and integrate both infrastructure and applications (e.g., Terraform, Ansible).
• Experience using scripting languages like Python and *nix tools (Bash, sed/awk, Make).
• You enjoy developing and managing real-time distributed platforms and services that scale to billions of requests.
• The ability to manage multiple systems across stratified environments.
• A deep enthusiasm for the cloud and DevOps, and keenness to get other people involved.
• Experience with scaling and operationalizing distributed data stores, file systems and services.
• Experience running services in AWS or other cloud platforms; strong experience with Linux systems.
• Experience in modern software paradigms, including cloud applications and serverless architectures.
• You look ahead to identify opportunities and foster a culture of innovation.
• BS, MS or Ph.D. in Computer Science or a related field, or equivalent work experience.
Nice to haves
• Previous experience working with a geographically distributed software engineering team
• Experience working with Jenkins or CircleCI
• Experience with storage optimization and management
• Solid understanding of building scalable, highly performant systems and services
• Expertise with big data, analytics, machine learning, and personalization
• Start-up or CPG industry experience
If this sounds like something you would like to be part of, we’d love for you to apply! Don't worry if you think that you don't meet all the qualifications here. The tools, technology, and methodologies we use are constantly changing, and we value talent and interest over specific experience.
Disclaimer: We do not charge any fee for employment, and the same applies to the recruitment partners we work with. Numerator is an equal opportunity employer; employment decisions are based on merit. Additionally, we do not ask for any refundable security deposit to be paid into bank accounts for employment purposes. We request candidates to be cautious of misleading communications and not to pay any fee/deposit to individuals/agencies/employment portals on the pretext of attending the Numerator interview process or seeking employment with us. These would be fraudulent in nature. Anyone dealing with such individuals/agencies/employment portals will be doing so at his/her own risk, and Numerator will not be held responsible for any loss or damage suffered directly or indirectly. Such fake job offers and appointment letters shall not be treated as any kind of offer or representation by Numerator. Please note, Numerator does not send offer letters from Hotmail, Yahoo, Gmail, or any other public email accounts, or demand a fee in lieu of an employment offer/interview. If you receive such e-mails, please do not pay any fee or deposit. We request that you report such offers to the recruitment team. If you have already made a payment, please log a complaint with the local police for necessary legal action to be taken. We are an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, or any other characteristic protected by law.
We are a self-organized engineering team with a passion for programming and solving business problems for our customers. We are looking to expand our team capabilities on the DevOps front and are on the lookout for 4 DevOps professionals with relevant hands-on technical experience of 4-8 years. We encourage our team to continuously learn new technologies and apply the learnings in day-to-day work, even if the new technologies are not adopted. We strive to continuously improve our DevOps practices and expertise to form a solid backbone for the product, customer-relationship and sales teams, which enables them to add new customers every week to our financing network.
As a DevOps Engineer, you:
- Will work collaboratively with the engineering and customer support teams to deploy and operate our systems.
- Build and maintain tools for deployment, monitoring and operations.
- Help automate and streamline our operations and processes.
- Troubleshoot and resolve issues in our test and production environments.
- Take control of various mandates and change management processes to ensure compliance for various certifications (PCI and ISO 27001 in particular)
- Monitor and optimize the usage of various cloud services.
- Set up and enforce CI/CD processes and practices
Skills required:
- Strong experience with AWS services (EC2, ECS, ELB, S3, SES, to name a few)
- Strong background in Linux/Unix administration and hardening
- Experience with automation using Ansible, Terraform or equivalent
- Experience with continuous integration and continuous deployment tools (Jenkins)
- Experience with container-related technologies (Docker, LXC, rkt, Docker Swarm, Kubernetes)
- Working understanding of code and scripting (Python, Perl, Ruby, Java)
- Working understanding of SQL and databases
- Working understanding of version control systems (Git is preferred)
- Managing IT operations, setting up best practices and tuning them from time to time.
- Ensuring that process overheads do not reduce the productivity and effectiveness of a small team.
- Willingness to explore and learn new technologies and continuously refactor the tools and processes.
Requirements
- At least 3 years of experience, with relevant experience in managing development operations
- Hands-on experience with AWS
- Thorough knowledge of setting up release pipelines and managing multiple environments like Beta, Staging, UAT, and Production
- Thorough knowledge of cloud best practices and architecture
- Hands-on with benchmarking and performance monitoring
- Identifying various bottlenecks and taking pre-emptive measures to avoid downtime
- Hands-on knowledge of at least one of the toolsets Chef/Puppet/Ansible
- Hands-on experience with CloudFormation/Terraform or other Infrastructure as Code is a plus
- Thorough experience with shell scripting, and should not shy away from learning new technologies or programming languages
- Experience with other cloud providers like Azure and GCP is a plus
- Should be open to R&D on creative ways to improve performance while keeping costs low
What do we want the person to do?
- Manage, monitor and provision infrastructure, mainly on AWS
- Be responsible for maintaining 100% uptime on production servers (site reliability)
- Set up a release pipeline for current releases; automate releases for Beta, Staging & Production
- Maintain near-production replica environments on Beta and Staging
- Automate releases and versioning of static assets (experience with Chef/Puppet/Ansible)
- Should have hands-on experience with build tools like Jenkins, GitHub Actions, AWS CodeBuild, etc.
- Identify performance gaps and ways to fix them
- Hold weekly meetings with the engineering team to discuss changes/upgrades, which can relate to code issues or architecture bottlenecks
- Find creative ways of reducing cloud computing costs
- Convert infrastructure deployment/provisioning to Infrastructure as Code for reusability and scaling
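One common way to automate the "versioning of static assets" that the posting above mentions is content hashing: each asset's file name embeds a digest of its bytes, so a changed asset gets a new cache-busting name while unchanged assets keep theirs. A minimal illustrative sketch (the file names and helper below are hypothetical, not any specific tool's API):

```python
import hashlib

def versioned_name(filename, content):
    """Return a cache-busting name like app.<digest>.js from the content bytes."""
    digest = hashlib.sha256(content).hexdigest()[:8]  # short content fingerprint
    stem, dot, ext = filename.rpartition(".")
    # Insert the digest before the extension; append it if there is no extension.
    return f"{stem}.{digest}.{ext}" if dot else f"{filename}.{digest}"

# Same bytes always map to the same name, so CDN caches stay valid;
# any edit to the file produces a new name that browsers must refetch.
print(versioned_name("app.js", b"console.log('v1')"))
```

Build tools apply the same idea at scale, rewriting references in HTML/CSS to point at the hashed names.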
You will work on: Your primary work involves developing and maintaining tools for build, release, deployment, monitoring and operations, on both cloud and on-premises infrastructure. You are required to work closely with developers and cloud architects, and to own infrastructure automation, CI/CD processes and support operations.
What you will do (Responsibilities):
Day-to-day operational support of CI/CD infrastructure relied upon by teams deploying software to the cloud or on-premise
Write code to automate deployment of various services to private or public cloud/on-premise environments
Participate in cloud projects to implement new technology solutions, and proofs of concept to improve cloud technology offerings
Work with developers to deploy to private or public cloud/on-premise services, and debug and resolve issues
On-call responsibilities to respond to emergency situations and scheduled maintenance
Contribute to and maintain documentation for systems, processes, procedures and infrastructure configuration
What you bring (Skills):
Strong Linux system skills
Scripting in bash and Python; basic file handling & networking
Comfortable in Git repositories, specifically on GitHub, GitLab, Bitbucket, Gerrit
Comfortable interfacing with SQL and NoSQL databases like MySQL, Postgres, MongoDB, Elasticsearch, Redis
Great if you know (Skills):
Understanding of various build and CI/CD systems - Maven, Gradle, Jenkins, GitLab CI, Spinnaker or cloud-based build systems
Exposure to deploying and automating on any public cloud - GCP, Azure or AWS
Private cloud experience - VMware or OpenStack
Big DataOps experience - managing infrastructure and processes for Apache Airflow, Beam, Hadoop clusters
Containerized applications - Docker-based image builds and maintenance
Kubernetes applications - deploy and develop operators, helm charts, manifests, among other artifacts
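The Kubernetes "manifests" in the skills list above are declarative objects (usually YAML or JSON) describing desired state; teams often generate them from code or templates rather than writing them by hand. A minimal sketch that builds a Deployment manifest as a plain dict (the app name and image are hypothetical examples):

```python
import json

def deployment_manifest(name, image, replicas=2):
    """Build a minimal Kubernetes Deployment object as a plain dict."""
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            # The selector must match the pod template's labels.
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

# kubectl accepts JSON as well as YAML, so this output could be piped
# to `kubectl apply -f -` (shown here only as printed JSON).
print(json.dumps(deployment_manifest("web", "example/web:1.0"), indent=2))
```

Helm charts take this one step further, templating the same structures with per-environment values.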
Advantage Cognologix:
Higher degree of autonomy, startup culture & small teams
Opportunities to become an expert in emerging technologies
Remote working options for the right maturity level
Competitive salary & family benefits
Performance-based career advancement
About Cognologix: Cognologix helps companies disrupt by reimagining their business models and innovating like a startup. We are at the forefront of digital disruption and take a business-first approach to help meet our clients’ strategic goals. We are a DevOps-focused organization, helping our clients focus on their core product activities by handling all aspects of their infrastructure, integration and delivery.
You will work on: You will be working on some of our clients’ massive-scale infrastructure and DevOps requirements - designing for microservices and large-scale data analytics. You will be working on enterprise-scale problems, but will be part of our agile team that delivers like a startup. You will have the opportunity to be part of a team that is building and managing a large private cloud.
What you will do (Responsibilities):
Work on cloud marketplace enablement for some of our clients’ products
Write Kubernetes operators to automate custom PaaS solutions
Participate in cloud projects to implement new technology solutions, and proofs of concept to improve cloud technology offerings
Work with developers to deploy to private or public cloud/on-premise services, and debug and resolve issues
On-call responsibilities to respond to emergency situations and scheduled maintenance
Contribute to and maintain documentation for systems, processes, procedures and infrastructure configuration
What you bring (Skills):
Experience administering and debugging Linux-based systems, with programming skills in scripting, Golang, Python, among others
Expertise in Git repositories, specifically on GitHub, GitLab, Bitbucket, Gerrit
Comfortable with DevOps for big data databases like Teradata, Netezza, Hadoop-based ecosystems, BigQuery, Redshift, among others
Comfortable interfacing with SQL and NoSQL databases like MySQL, Postgres, MongoDB, Elasticsearch, Redis
Great if you know (Skills):
Understanding of various build and CI/CD systems - Maven, Gradle, Jenkins, GitLab CI, Spinnaker or cloud-based build systems
Exposure to deploying and automating on any public cloud - GCP, Azure or AWS
Private cloud experience - VMware or OpenStack
Big DataOps experience - managing infrastructure and processes for Apache Airflow, Beam, Hadoop clusters
Containerized applications - Docker-based image builds and maintenance
Kubernetes applications - deploy and develop operators, helm charts, manifests, among other artifacts
Advantage Cognologix:
Higher degree of autonomy, startup culture & small teams
Opportunities to become an expert in emerging technologies
Remote working options for the right maturity level
Competitive salary & family benefits
Performance-based career advancement
About Cognologix: Cognologix helps companies disrupt by reimagining their business models and innovating like a startup. We are at the forefront of digital disruption and take a business-first approach to help meet our clients’ strategic goals. We are a DevOps-focused organization, helping our clients focus on their core product activities by handling all aspects of their infrastructure, integration and delivery.
What do we look for? You have worked with programmable infrastructure in some way - building a CI/CD pipeline, provisioning infrastructure with programs, or provisioning monitoring and logging infrastructure for large sets of machines. You love automating things, sometimes even what seems un-automatable - for instance, one of our engineers used Ansible to set up his Ubuntu machine and runs a playbook every time he has to install something :) You don’t throw around words such as “high availability” or “resilient systems” without understanding them at least at a basic level, because you know the words are easy to say but there is a fair amount of work to build such a system in practice. You love coaching people - about 12-factor apps, or the latest tool that reduced the time to do a task by X times, and so on. You know that DevOps is meant to enable developers to do things better and faster! You understand the areas you have worked on very well, but you are curious about the many systems you may not have worked on and want to fiddle with them. You know that understanding applications and the runtime technologies gives you a better perspective - you never looked at them as two different things.
What will you learn and do? You will work with customers trying to transform their applications and adopt cloud native technologies. The technologies used will be Kubernetes, Prometheus, service mesh, distributed tracing and public cloud technologies. The problems and solutions in this space are continuously evolving, but fundamentally you will solve problems with the simplest, most scalable automation. You will build open source tools for problems that you think are common across customers and the industry - no one ever benefited from re-inventing the wheel, did they? You will hack around open source projects, understand their capabilities and limitations, and apply the right tool for the right job.
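The Ansible anecdote above works because configuration-management tasks are idempotent: re-running the playbook changes nothing if the machine is already in the desired state. A minimal Python sketch of that idea, loosely modeled on an "ensure line is present in a config file" task (the function name and config lines are hypothetical, not Ansible's actual API):

```python
def ensure_line(lines, wanted):
    """Idempotently ensure `wanted` is present; report whether anything changed."""
    if wanted in lines:
        return lines, False          # already converged: running again is a no-op
    return lines + [wanted], True    # converge by appending the missing line

# First run converges the state; the second run reports "no change",
# which is exactly what lets you re-run a playbook safely.
config = ["PermitRootLogin no"]
config, changed1 = ensure_line(config, "PasswordAuthentication no")
config, changed2 = ensure_line(config, "PasswordAuthentication no")
print(changed1, changed2)
```

Real tools (Ansible, Chef, Puppet) wrap every task in this converge-and-report-change pattern, which is why their runs end with a "changed" count.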
You will educate customers - from their operations engineers to their developers - on scalable ways to build and operate applications on modern cloud native infrastructure.
We think InfraCloud is a rocketship you should join! InfraCloud has been working in cloud native technologies with early innovators since before Kubernetes was 1.0, when it seemed like Mesos would become the standard! Our focus and history in the area of programmable infrastructure, coupled with work for some innovative product companies, gives us solid engineering challenges to work on. From one of our hackathons was born the BotKube project (https://github.com/infracloudio/botkube), which has been developed by our engineers and the community over the last 1.5 years. When we started developing BotKube’s Microsoft Teams integration, another project was born - a Go SDK for Teams (https://github.com/infracloudio/msbotbuilder-go). We are also the second-largest corporate contributor to Fission - a serverless framework for Kubernetes (http://github.com/fission/fission). Another time, an engineer working with a telecom company added support for 128-bit tracing IDs in Jaeger. These are just some examples - there are many more - so do make a point to ask the engineers you talk to about the open source work we do. Our engineers are co-organizers of Kubernetes Pune, Docker Pune and Python Pune, and can frequently be found speaking at local meetups and conferences.
1. Should have been working for at least 3 years as a DevOps/Cloud Engineer in an AWS cloud environment.
2. Has done infrastructure coding using CloudFormation/Terraform and configuration management, and understands it very clearly.
3. Deep understanding of microservice design; aware of centralized caching (Redis) and centralized configuration (Consul/Zookeeper).
4. Hands-on experience working with containers and their orchestration using Kubernetes.
5. Hands-on experience with Linux and Windows operating systems.
6. Has worked on NoSQL databases like Cassandra, Aerospike, Mongo or Couchbase, and on central logging, monitoring and caching using stacks like ELK (Elastic) on the cloud, Prometheus, etc.
7. Has good knowledge of network security, security architecture and secure SDLC practices.
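Item 3 above mentions centralized caching with Redis. The core behavior such a cache provides is get/set with a time-to-live (TTL), so stale entries expire on their own. An in-process stand-in sketch of that behavior (not Redis itself, and not its client API; the key names are hypothetical):

```python
import time

class TTLCache:
    """Tiny in-process stand-in for a TTL cache such as Redis SETEX/GET."""
    def __init__(self):
        self._store = {}

    def set(self, key, value, ttl_seconds):
        # Record the value together with its absolute expiry time.
        self._store[key] = (value, time.monotonic() + ttl_seconds)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # lazy expiry on read, like Redis' passive expiration
            return None
        return value

cache = TTLCache()
cache.set("session:42", {"user": "alice"}, ttl_seconds=60)
print(cache.get("session:42"))
```

Making the cache a separate centralized service (as with Redis) is what lets many microservice instances share sessions and hot data instead of each keeping its own copy.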