
About Skarpsinne Infotech
- 3+ years of relevant experience
- 2+ years of experience with AWS (EC2, ECS, RDS, ElastiCache, etc.)
- Well versed in maintaining infrastructure as code (Terraform, CloudFormation, etc.)
- Experience setting up CI/CD pipelines from scratch
- Knowledge of setting up and securing networks (VPN, intranet, VPC, peering, etc.)
- Understanding of common security issues
- 2+ years work experience in a DevOps or similar role
- Knowledge of OO programming and concepts (Java, C++, C#, Python)
- A drive toward automating repetitive tasks (e.g., scripting via Bash, Python, etc.)
- Fluency in one or more scripting languages such as Python or Ruby
- Familiarity with Microservice-based architectures
- Practical experience with Docker containerization and clustering (Kubernetes/ECS)
- In-depth, hands-on experience with Linux, networking, server, and cloud architectures.
- Experience with CI/CD tools such as Azure DevOps, AWS CloudFormation, Lambda functions, Jenkins, and Ansible
- Experience with AWS, Azure, or another cloud PaaS provider.
- Solid understanding of configuration, deployment, management, and maintenance of large cloud-hosted systems; including auto-scaling, monitoring, performance tuning, troubleshooting, and disaster recovery
- Proficiency with source control, continuous integration, and testing pipelines
- Effective communication skills
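As a concrete illustration of the "automating repetitive tasks" point above, here is a minimal Python sketch; the directory layout and 30-day retention policy are hypothetical, not part of this role's actual stack:

```python
import os
import time

def prune_old_logs(log_dir, max_age_days=30):
    """Delete files in log_dir older than max_age_days; return the names removed."""
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for name in os.listdir(log_dir):
        path = os.path.join(log_dir, name)
        # only plain files whose modification time is past the cutoff
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)
            removed.append(name)
    return removed
```

In practice a script like this would run from cron or a systemd timer rather than by hand.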
Job Responsibilities:
- Deploy and maintain critical applications on cloud-native microservices architecture.
- Implement automation, effective monitoring, and infrastructure-as-code.
- Deploy and maintain CI/CD pipelines across multiple environments.
- Streamline the software development lifecycle by identifying pain points and productivity barriers and determining ways to resolve them.
- Analyze how customers are using the platform and help drive continuous improvement.
- Support and work alongside a cross-functional engineering team on the latest technologies.
- Iterate on best practices to increase the quality & velocity of deployments.
- Sustain and improve the process of knowledge sharing throughout the engineering team
- Identification and prioritization of technical debt that risks instability or creates wasteful operational toil.
- Own daily operational goals with the team.
- Hands-on knowledge of various CI/CD tools (Jenkins/TeamCity, Artifactory, UCD, Bitbucket/GitHub, SonarQube), including setting up automated build-deployment pipelines.
- Very good knowledge of scripting languages such as Shell, Perl, or Python, plus YAML/Groovy, and build tools such as Maven/Gradle.
- Hands-on knowledge in containerization and orchestration tools such as Docker, OpenShift and Kubernetes.
- Good knowledge of configuration management tools such as Ansible and Puppet/Chef, and experience setting up monitoring tools (Splunk/Geneos/New Relic/ELK).
- Expertise in job schedulers/workload automation tools such as Control-M or AutoSys is good to have.
- Hands-on knowledge of cloud technology (preferably GCP), including various computing services and infrastructure setup using Terraform.
- Should have a basic understanding of networking, certificate management, Identity and Access Management, and information security/encryption concepts.
- Should support day-to-day tasks related to platform and environment upkeep, such as upgrades, patching, migration, and system/interface integration.
- Should have experience working in an Agile-based SDLC delivery model and be able to multi-task and support multiple systems/apps.
- Big-data and Hadoop ecosystem knowledge is good to have but not mandatory.
- Should have worked on standard release, change, and incident management tools such as ServiceNow/Remedy or similar.
We (the Software Engineering team) are looking for a motivated, experienced person with a data-driven approach to join our Distribution Team in Bangalore to help design, execute, and improve our test sets and infrastructure for producing high-quality Hadoop software.
A Day in the life
You will be part of a team that makes sure our releases are predictable and deliver high value to the customer. This team is responsible for automating and maintaining our test harness, and making test results reliable and repeatable.
You will:
- work on making our distributed software stack more resilient to high-scale endurance runs and customer simulations
- provide valuable fixes to our product development teams for the issues you’ve found during exhaustive test runs
- work with product and field teams to make sure our customer simulations match expectations and can provide valuable feedback to our customers
- work with amazing people - We are a fun & smart team, including many of the top luminaries in Hadoop and related open source communities. We frequently interact with the research community, collaborate with engineers at other top companies & host cutting-edge researchers for tech talks.
- do innovative work - Cloudera pushes the frontier of big data & distributed computing, as our track record shows. We work on high-profile open source projects, interacting daily with engineers at other exciting companies, speaking at meet-ups, etc.
- be a part of a great culture - Transparent and open meritocracy. Everybody is always thinking of better ways to do things and coming up with ideas that make a difference. We build our culture to be the best workplace in our careers.
You have:
- strong knowledge of at least one of the following languages: Java / Python / Scala / C++ / C#
- hands-on experience with at least one of the following configuration management tools: Ansible, Chef, Puppet, Salt
- confidence with Linux environments
- the ability to identify critical weak spots in distributed software systems
- experience in developing automated test cases and test plans
- the ability to deal with distributed systems
- solid interpersonal skills conducive to a distributed environment
- the ability to work independently on multiple tasks
- self-driven & motivated, with a strong work ethic and a passion for problem solving
- a drive to innovate, automate, and break the code
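To give a flavor of the "automated test cases" requirement, here is a small Python `unittest` sketch; the `retry` helper and its parameters are illustrative only, not part of Cloudera's actual test harness:

```python
import unittest

def retry(fn, attempts=3):
    """Call fn until it returns or attempts are exhausted; re-raise the last error."""
    last_err = None
    for _ in range(attempts):
        try:
            return fn()
        except Exception as err:  # a real harness would catch narrower exception types
            last_err = err
    raise last_err

class RetryTest(unittest.TestCase):
    def test_succeeds_after_transient_failures(self):
        calls = {"n": 0}
        def flaky():
            # simulate a service that fails twice before recovering
            calls["n"] += 1
            if calls["n"] < 3:
                raise ConnectionError("transient")
            return "ok"
        self.assertEqual(retry(flaky, attempts=3), "ok")
        self.assertEqual(calls["n"], 3)
```

Run it with `python -m unittest`; endurance-run harnesses follow the same pattern at much larger scale.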
The right person in this role has an opportunity to make a huge impact at Cloudera and add value to our future decisions. If this position has piqued your interest and you have what we described - we invite you to apply! An adventure in data awaits.
Requirements
- Install, configure, performance-tune, and monitor web, app, and database servers.
- Install, set up, and manage Java, PHP, and Node.js stacks with software load balancers.
- Install, set up, and administer MySQL, MongoDB, Elasticsearch, and PostgreSQL databases.
- Install, set up, and maintain monitoring solutions such as Nagios and Zabbix.
- Design and implement DevOps processes for new projects following the department's objectives of automation.
- Collaborate on projects with development teams to provide recommendations, support and guidance.
- Work towards full automation, monitoring, virtualization and containerization.
- Create and maintain tools for deployment, monitoring and operations.
- Automate processes in a scalable, easy-to-understand way that can be detailed and understood through documentation.
- Develop and deploy software that will help drive improvements towards the availability, performance, efficiency, and security of services.
- Maintain 24/7 availability for responsible systems and be open to on-call rotation.
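Underneath tools like Nagios and Zabbix, a service check often reduces to a simple TCP probe. A minimal Python sketch (hosts and ports here are placeholders, not this employer's topology) might look like:

```python
import socket

def check_tcp(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refusals, timeouts, and DNS failures
        return False

def report(services):
    """Map each (host, port) pair to 'UP' or 'DOWN'."""
    return {f"{h}:{p}": ("UP" if check_tcp(h, p) else "DOWN")
            for h, p in services}
```

Real deployments would use the monitoring tool's own plugins and alerting, but the underlying probe is often this simple.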
Experience: 5+ years
Skills Required:
- Experience in Azure administration, configuration, and deployment of Windows/Linux VM/container-based infrastructure
- Scripting/programming in Python, JavaScript/TypeScript, C
- Scripting in PowerShell, Azure CLI, and shell scripts
- Identity, Access Management, and the RBAC model
- Virtual networking, storage, and compute resources
- Azure database technologies
- Monitoring and analytics tools in Azure
- Azure DevOps-based CI/CD build pipeline integrated with GitHub (Java and Node.js)
- Test automation and other CI/CD tools
- Azure infrastructure using ARM templates and Terraform
Job role
Anaxee is India's REACH Engine! To provide access across India, we need to build highly scalable technology which needs scalable Cloud infrastructure. We’re seeking an experienced cloud engineer with expertise in AWS (Amazon Web Services), GCP (Google Cloud Platform), Networking, Security, and Database Management; who will be Managing, Maintaining, Monitoring, Handling Cloud Platforms, and ensuring the security of the same.
You will be surrounded by people who are smart and passionate about the work they are doing.
Every day will bring new and exciting challenges to the job.
Job Location: Indore | Full Time | Experience: 1 year and Above | Salary ∝ Expertise | Rs. 1.8 LPA to Rs. 2.64 LPA
About the company:
Anaxee Digital Runners is building India's largest last-mile Outreach & data collection network of Digital Runners (shared feet-on-street, tech-enabled) to help Businesses & Consumers reach the remotest parts of India, on-demand.
We want to make REACH across India (remotest places), as easy as ordering pizza, on-demand. Already serving 11000 pin codes (57% of India) | Anaxee is one of the very few venture-funded startups in Central India | Website: www.anaxee.com
Important: Check out our company pitch (6 min video) to understand this goal - https://www.youtube.com/watch?v=7QnyJsKedz8
Responsibilities (You will enjoy the process):
#Triage and troubleshoot issues on the AWS and GCP and participate in a rotating on-call schedule and address urgent issues quickly
#Develop and leverage expert-level knowledge of supported applications and platforms in support of project teams (architecture guidance, implementation support) or business units (analysis).
#Monitor processes on production runs, communicate the information to the advisory team, and raise production support issues to the project team.
#Identify and deploy cybersecurity measures by continuously performing vulnerability assessment and risk management
#Develop and implement technical efforts to design, build, and deploy AWS and GCP applications at the direction of lead architects, including large-scale data processing and advanced analytics
#Participate in all aspects of the SDLC for AWS and GCP solutions, including planning, requirements, development, testing, and quality assurance
#Troubleshoot incidents, identify root cause, fix, and document problems, and implement preventive measures
#Educate teams on the implementation of new cloud-based initiatives, providing associated training as required
#Build and maintain operational tools for deployment, monitoring, and analysis of AWS and GCP infrastructure and systems; Design, deploy, maintain, automate & troubleshoot virtual servers and storage systems, firewalls, and Load Balancers in our hybrid cloud environment (AWS and GCP)
What makes a great DevOps Engineer (Cloud) for Anaxee:
#Candidate must have sound knowledge, and hands-on experience, in GCP (Google Cloud Platform) and AWS (Amazon Web Services)
#Good hands-on experience with the Linux operating system and similar distributions, viz. Ubuntu, CentOS, RHEL/Red Hat, etc.
#1+ years of experience in the industry
#Bachelor's degree preferred with Science/Maths background (B.Sc/BCA/B.E./B.Tech)
#Enthusiasm to learn new software, take ownership, and a genuine curiosity about related domains such as cloud, hosting, programming, software development, and security.
#Demonstrable skills in troubleshooting a wide range of technical problems at the application and system level, and strong organizational skills with an eye for detail.
#Prior knowledge of risk-chain is an added advantage
#AWS/GCP certifications are a plus
#Previous startup experience would be a huge plus.
The ideal candidate must be experienced in cloud-based tech, with a firm grasp on emerging technologies, platforms, and applications, and have the ability to customize them to help our business become more secure and efficient. From day one, you’ll have an immediate impact on the day-to-day efficiency of our IT operations, and an ongoing impact on our overall growth
What we offer
#Startup Flexibility
#Exciting challenges to learn, grow, and implement new ideas
#ESOPs (Employee Stock Ownership Plans)
#Great working atmosphere in a comfortable office
#And an opportunity to get associated with a fast-growing VC-funded startup.
What happens after you apply?
You will receive an acknowledgment email with company details.
If you get shortlisted, our HR team will get in touch with you (call, email, WhatsApp) in a couple of days.
All other information will be communicated to you then via our AMS.
Our expectations before/after you click “Apply Now”
Read about Anaxee: http://www.anaxee.com/
Watch this six-minute pitch to get a better understanding of what we are into: https://www.youtube.com/watch?v=7QnyJsKedz8
Let's dive into detail (Company Presentation): https://bit.ly/anaxee-deck-brands

PRAXINFO Hiring DevOps Engineer.
Position : DevOps Engineer
Job Location : C.G.Road, Ahmedabad
EXP : 1-3 Years
Salary : 40K - 50K
Required skills:
⦿ Good understanding of cloud infrastructure (AWS, GCP, etc.)
⦿ Hands-on with Docker, Kubernetes, or ECS
⦿ Ideally a strong Linux background (RHCSA, RHCE)
⦿ Good understanding of monitoring systems (Nagios, etc.) and logging solutions (Elasticsearch, etc.)
⦿ Microservice architectures
⦿ Experience with distributed systems and highly scalable systems
⦿ Demonstrated history of automating operations processes via services and tools (Puppet, Ansible, etc.)
⦿ Systematic problem-solving approach coupled with a strong sense of ownership and drive.
If anyone is interested, share your resume at hiring at praxinfo dot com!
#linux #devops #engineer #kubernetes #docker #containerization #python #shellscripting #git #jenkins #maven #ant #aws #RHCE #puppet #ansible

