Job Description: DevOps Engineer
About Hyno:
Hyno Technologies is a unique blend of top-notch designers and world-class developers for new-age product development. Within the last 2 years we have collaborated with 32 young startups from India, the US, and the EU to find the optimum solutions to their complex business problems. We have helped them address issues of scalability and optimisation through the use of technology, at minimal cost. To us, any new challenge is an opportunity.
As part of its expansion plans, Hyno, in partnership with Sparity, is seeking an experienced DevOps Engineer to join our dynamic team. As a DevOps Engineer, you will play a crucial role in enhancing our software development processes, optimising system infrastructure, and ensuring the seamless deployment of applications. If you are passionate about leveraging cutting-edge technologies to drive efficiency, reliability, and scalability in software development, this is the perfect opportunity for you.
Position: DevOps Engineer
Experience: 5-7 years
Responsibilities:
- Collaborate with cross-functional teams to design, develop, and implement CI/CD pipelines for automated application deployment, testing, and monitoring.
- Manage and maintain cloud infrastructure using tools like AWS, Azure, or GCP, ensuring scalability, security, and high availability.
- Develop and implement infrastructure as code (IaC) using tools like Terraform or CloudFormation to automate the provisioning and management of resources.
- Constantly evaluate continuous integration and continuous deployment solutions as the industry evolves, and develop standardised best practices.
- Work closely with development teams to provide support and guidance in building applications with a focus on scalability, reliability, and security.
- Perform regular security assessments and implement best practices for securing the entire development and deployment pipeline.
- Troubleshoot and resolve issues related to infrastructure, deployment, and application performance in a timely manner.
- Follow regulatory and ISO 13485 requirements.
- Stay updated with industry trends and emerging technologies in the DevOps and cloud space, and proactively suggest improvements to current processes.
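To make the CI/CD responsibilities above concrete, here is a purely illustrative sketch (not part of this posting) of the kind of automated deployment gate such pipelines often include: the function name, thresholds, and metrics are all hypothetical.

```python
# Illustrative sketch only: a minimal post-deployment rollback gate
# of the kind a CI/CD pipeline might run. Names and thresholds are
# hypothetical, not taken from any specific tool.

def should_rollback(error_rates, threshold=0.05, min_samples=3):
    """Decide whether to roll back a release based on the error
    rates observed in the first few post-deploy time windows."""
    if len(error_rates) < min_samples:
        return False  # not enough data yet to judge the release
    recent = error_rates[-min_samples:]
    avg = sum(recent) / len(recent)
    return avg > threshold

# Example: a spike in errors shortly after deploy triggers a rollback.
print(should_rollback([0.01, 0.02, 0.12, 0.15, 0.20]))  # True
```

In practice a pipeline stage would feed this kind of check from a monitoring system and fail the deployment job when it returns true.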
Requirements:
- Bachelor's degree in Computer Science, Engineering, or related field (or equivalent work experience).
- Minimum of 5 years of hands-on experience in DevOps, system administration, or related roles.
- Solid understanding of containerization technologies (Docker, Kubernetes) and orchestration tools.
- Strong experience with cloud platforms such as AWS, Azure, or GCP, including services like ECS, S3, RDS, and more.
- Proficiency in at least one programming/scripting language such as Python, Bash, or PowerShell.
- Demonstrated experience in building and maintaining CI/CD pipelines using tools like Jenkins, GitLab CI/CD, or CircleCI.
- Familiarity with configuration management tools like Ansible, Puppet, or Chef.
- Experience with container (Docker, ECS, EKS), serverless (Lambda), and Virtual Machine (VMware, KVM) architectures.
- Experience with infrastructure as code (IaC) tools like Terraform, CloudFormation, or Pulumi.
- Strong knowledge of monitoring and logging tools such as Prometheus, ELK stack, or Splunk.
- Excellent problem-solving skills and the ability to work effectively in a fast-paced, collaborative environment.
- Strong communication skills and the ability to work independently as well as in a team.
Nice to Have:
- Relevant certifications such as AWS Certified DevOps Engineer, Azure DevOps Engineer, Certified Kubernetes Administrator (CKA), etc.
- Experience with microservices architecture and serverless computing.
Soft Skills:
- Excellent written and verbal communication skills.
- Ability to manage conflict effectively.
- Ability to adapt and be productive in a dynamic environment.
- Strong communication and collaboration skills supporting multiple stakeholders and business operations.
- Self-starter, self-managed, and a team player.
Join us in shaping the future of DevOps at Hyno in collaboration with Sparity. If you are a highly motivated and skilled DevOps Engineer, eager to make an impact in a remote setting, we'd love to hear from you.
Role: Senior Engineer, Infrastructure
Key Responsibilities:
● Infrastructure Development and Management: Design, implement, and manage robust and scalable infrastructure solutions, ensuring optimal performance, security, and availability. Lead transition and migration projects, moving legacy systems to cloud-based solutions.
● Develop and maintain applications and services using Golang.
● Automation and Optimization: Implement automation tools and frameworks to optimize operational processes. Monitor system performance, optimizing and modifying systems as necessary.
● Security and Compliance: Ensure infrastructure security by implementing industry best practices and compliance requirements. Respond to and mitigate security incidents and vulnerabilities.
Qualifications:
● Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent practical experience).
● Good understanding of prominent backend languages like Golang, Python, Node.js, or others.
● In-depth knowledge of network architecture, system security, and infrastructure scalability.
● Proficiency with development tools, server management, and database systems.
● Strong experience with cloud services (AWS), including deployment, scaling, and management.
● Knowledge of Azure is a plus.
● Familiarity with containers and orchestration services, such as Docker, Kubernetes, etc.
● Strong problem-solving skills and analytical thinking.
● Excellent verbal and written communication skills.
● Ability to thrive in a collaborative team environment.
● Genuine passion for backend development and keen interest in scalable systems.
We are looking to fill the role of AWS DevOps Engineer. To join our growing team, please review the list of responsibilities and qualifications below.
Responsibilities:
- Engineer solutions using AWS services (CloudFormation, EC2, Lambda, Route 53, ECS, EFS)
- Balance hardware, network, and software layers to arrive at a scalable and maintainable solution that meets requirements for uptime, performance, and functionality
- Monitor server applications and use tools and log files to troubleshoot and resolve problems
- Maintain 99.99% availability of the web and integration services
- Anticipate, identify, mitigate, and resolve issues relating to client facing infrastructure
- Monitor, analyse, and predict trends for system performance, capacity, efficiency, and reliability and recommend enhancements in order to better meet client SLAs and standards
- Research and recommend innovative and automated approaches for system administration and DevOps tasks
- Deploy and decommission client environments for multi- and single-tenant hosted applications, following established processes and procedures and updating them as needed
- Follow and develop CPA change control processes for modifications to systems and associated components
- Practice configuration management, including maintenance of component inventory and related documentation per company policies and procedures
Qualifications:
- Git/GitHub version control tools
- Linux and/or Windows virtualisation (VMware, Xen, KVM, VirtualBox)
- Cloud computing (AWS, Google App Engine, Rackspace Cloud)
- Application Servers, servlet containers and web servers (WebSphere, Tomcat)
- Bachelor's / Master's degree; 2+ years of experience in software development
- Must have experience with AWS VPC networking and security
Ask any CIO about corporate data and they’ll happily share all the work they’ve done to make their databases secure and compliant. Ask them about other sensitive information, like contracts, financial documents, and source code, and you’ll probably get a much less confident response. Few organizations have any insight into business-critical information stored in unstructured data.
There was a time when that didn’t matter. Those days are gone. Data is now accessible, copious, and dispersed, and it includes an alarming amount of business-critical information. It’s a target for both cybercriminals and regulators but securing it is incredibly difficult. It’s the data challenge of our generation.
Existing approaches aren’t doing the job. Keyword searches produce a bewildering array of possibly relevant documents that may or may not be business critical. Asking users to categorize documents requires extensive training and constant vigilance to make sure users are doing their part. What’s needed is an autonomous solution that can find and assess risk so you can secure your unstructured data wherever it lives.
That’s our mission. Concentric’s semantic intelligence solution reveals the meaning in your structured and unstructured data so you can fight off data loss and meet compliance and privacy mandates.
Check out our core cultural values and behavioural tenets here: https://concentric.ai/the-concentric-tenets-daily-behavior-to-aspire-to/
Title: Cloud DevOps Engineer
Role: Individual Contributor (4-8 yrs)
Requirements:
- Energetic self-starter, a fast learner, with a desire to work in a startup environment
- Experience working with Public Clouds like AWS
- Operating and monitoring cloud infrastructure on AWS.
- Primary focus on building, implementing, and managing operational support.
- Design, develop, and troubleshoot automation scripts (configuration/infrastructure as code, or others) for managing infrastructure.
- Expert in at least one scripting language, such as Python, Shell, etc.
- Experience with Nginx/HAProxy, the ELK stack, Ansible, Terraform, the Prometheus-Grafana stack, etc.
- Handling load monitoring, capacity planning, and services monitoring.
- Proven experience with CI/CD pipelines and handling database upgrade-related issues.
- Good understanding of and experience working with containerized environments like Kubernetes, and datastores like Cassandra, Elasticsearch, MongoDB, etc.
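The load-monitoring and capacity-planning work mentioned above often reduces to watching a rolling window of a metric and alerting when its average crosses a limit. As a hedged illustration (class and threshold names are hypothetical, not from any tool this posting names):

```python
from collections import deque

class LoadMonitor:
    """Hypothetical sketch of rolling-window load monitoring:
    alert when the average of recent samples crosses a limit."""
    def __init__(self, window=5, limit=80.0):
        self.samples = deque(maxlen=window)  # keeps only the last N samples
        self.limit = limit

    def record(self, value):
        self.samples.append(value)

    def breached(self):
        if not self.samples:
            return False  # no data means no alert
        return sum(self.samples) / len(self.samples) > self.limit

# Example: sustained high CPU pushes the rolling average over the limit.
m = LoadMonitor(window=3, limit=80.0)
for cpu in (70, 85, 95):
    m.record(cpu)
print(m.breached())  # True: the average of the last 3 samples is ~83.3
```

Real setups delegate this to Prometheus alerting rules or similar, but the underlying windowed-threshold idea is the same.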
- Provides free and subscription-based website and email services hosted and operated at data centres in Mumbai and Hyderabad.
- Serve global audience and customers through sophisticated content delivery networks.
- Operate a service infrastructure using the latest technologies for web services and a very large storage infrastructure.
- Provides virtualized infrastructure, allows seamless migration and the addition of services for scalability.
- Pioneers and among the earliest adopters of public cloud and NoSQL big data stores, for more than a decade.
- Provide innovative internet services, working with multiple technologies like PHP, Java, Node.js, Python, and C++ to scale our services as needed.
- Has Internet infrastructure peering arrangements with all the major and minor ISPs and telecom service providers.
- Have mail traffic exchange agreements with major Internet services.
Job Details :
- This job position provides competitive professional opportunity both to experienced and aspiring engineers. The company's technology and operations groups are managed by senior professionals with deep subject matter expertise.
- The company believes in having an open work environment, offering mentoring and learning opportunities in an informal and flexible work culture that allows professionals to actively participate in and contribute to the success of our services and business.
- You will be part of a team that keeps the business running for cloud products and services that are used 24x7 by the company's consumers and enterprise customers around the world. You will be asked to help operate, maintain, and provide escalation support for the company's cloud infrastructure that powers all of its cloud offerings.
Job Role :
- As a senior engineer, your role grows as you gain experience in our operations. We facilitate a hands-on learning experience after an induction program, to get you into the role as quickly as possible.
- The systems engineer role also requires candidates to research and recommend innovative and automated approaches for system administration tasks.
- The work culture allows a seamless integration with different product engineering teams. The teams work together and share responsibility to triage in complex operational situations. The candidate is expected to stay updated on best practices and help evolve processes both for resilience of services and compliance.
- You will be required to provide support for both production and non-production environments to ensure system updates and expected service levels. You will specifically handle 24/7 L2 and L3 oversight for incident response, and must have an excellent understanding of the end-to-end support process, from the client through the different support escalation levels.
- The role also requires a discipline to create, update and maintain process documents, based on operation incidents, technologies and tools used in the processes to resolve issues.
QUALIFICATION AND EXPERIENCE :
- A graduate degree or senior diploma in engineering or technology with some or all of the following:
- Knowledge and work experience with KVM, AWS (Glacier, S3, EC2), RabbitMQ, Fluentd, Syslog, Nginx is preferred
- Installation and tuning of Web Servers, PHP, Java servlets, memory-based databases for scalability and performance
- Knowledge of email-related protocols such as SMTP, POP3, and IMAP, along with experience in the maintenance and administration of MTAs such as Postfix, qmail, etc., will be an added advantage
- Must have knowledge of monitoring tools, trend analysis, networking technologies, security tools, and troubleshooting.
- Knowledge of analyzing and mitigating security related issues and threats is certainly desirable.
- Knowledge of agile development/SDLC processes and hands-on participation in planning sprints and managing daily scrum is desirable.
- Preferably, programming experience in Shell, Python, Perl or C.
- 3-6 years of software development and operations experience deploying and maintaining multi-tiered infrastructure and applications at scale.
- Design cloud infrastructure that is secure, scalable, and highly available on AWS
- Experience managing any distributed NoSQL system (Kafka/Cassandra/etc.)
- Experience with Containers, Microservices, deployment and service orchestration using Kubernetes, EKS (preferred), AKS or GKE.
- Strong scripting language knowledge, such as Python, Shell
- Experience and a deep understanding of Kubernetes.
- Experience in Continuous Integration and Delivery.
- Work collaboratively with software engineers to define infrastructure and deployment requirements
- Provision, configure and maintain AWS cloud infrastructure
- Ensure configuration and compliance with configuration management tools
- Administer and troubleshoot Linux-based systems
- Troubleshoot problems across a wide array of services and functional areas
- Build and maintain operational tools for deployment, monitoring, and analysis of AWS infrastructure and systems
- Perform infrastructure cost analysis and optimization
- AWS
- Docker
- Kubernetes
- Envoy
- Istio
- Jenkins
- Cloud Security & SIEM stacks
- Terraform
About Us
At Digilytics™, we build and deliver easy-to-use AI products for the secured lending and consumer industry sectors. In an ever-crowded world of clever technology solutions looking for a problem to solve, our solutions start with a keen understanding of what creates and what destroys value in our clients’ business.
Founded by Arindom Basu (Founding member of Infosys Consulting), the leadership of Digilytics™ is deeply rooted in leveraging disruptive technology to drive profitable business growth. With over 50 years of combined experience in technology-enabled change, the Digilytics™ leadership is focused on building a values-first firm that will stand the test of time.
We are currently focused on developing a product, Revel FS, to revolutionise loan origination for mortgages and secured lending. We are also developing a second product, Revel CI, focused on improving trade (secondary) sales to consumer industry clients like auto and FMCG players.
The leadership strongly believes in the ethos of enabling intelligence across the organization. Digilytics AI is headquartered in London, with a presence across India.
Website: http://www.digilytics.ai
- Know about our product
- Digilytics RevEL: https://www.digilytics.ai/RevEL/Digilytics
- Digilytics RevUP: https://www.digilytics.ai/RevUP/
- What's it like working at Digilytics: https://www.digilytics.ai/about-us.html
- Digilytics featured in Forbes: https://bit.ly/3zDQc4z
Responsibilities
- Experience with Azure services (virtual machines, containers, databases, security/firewall, Function Apps, etc.)
- Hands-on experience with Kubernetes/Docker/Helm.
- Deployment of Java builds; administration/configuration of Nginx/reverse proxy, load balancers, MS SQL, GitHub, and disaster recovery.
- Linux: must have basic knowledge (user creation/deletion, ACLs, LVM, etc.).
- CI/CD - Azure DevOps or any other automation tool like Terraform, Jenkins etc.
- Experience with SharePoint and O365 administration
- Azure/Kubernetes certification will be preferred.
- Microsoft Partnership experience is good to have.
- Excellent understanding of required technologies
- Good interpersonal skills and the ability to communicate ideas clearly at all levels
- Ability to work in unfamiliar business areas and to use your skills to create solutions
- Ability to both work in and lead a team and to deliver and accept peer review
- Flexible approach to working environment and hours to meet the needs of the business and clients
Must Haves:
- Hands-on experience with Kubernetes/Docker/Helm.
- Experience with Azure/AWS or any other cloud provider.
- Linux & CI/CD tools knowledge.
Experience & Education:
- A start-up mindset, with proven experience working in both smaller and larger organizations and multicultural exposure
- 4-9 years of experience working closely with the relevant technologies, developing world-class software and solutions
- Domain and industry experience by serving customers in one or more of these industries - Financial Services, Professional Services, other Retail Consumer Services
- A bachelor's degree, or equivalent, in Software Engineering or Computer Science
- Strong experience with the Java programming language or DevOps on Google Cloud.
- Strong communication skills.
- Experience in Agile methodologies
- Certification as a Professional Google Cloud Data Engineer will be an added advantage.
- Experience on Google Cloud Platform.
- Experience on Java or DevOps
Required Key Skills :
- Excellent verbal and written communication and interpersonal skills.
- Ability to work independently and within a team environment.
- Interpersonal skills
- GCP, Cloud, Programming
- Agile
- Java programming language or DevOps experience.
CTC: 4L - 7L
- Perform architectural analysis and design enterprise-level systems.
- Design and simulate tools for the reliable delivery of systems.
- Design, develop, and maintain systems, processes, and procedures to deliver a high-quality service design.
- Work with other members of the team and other departments to establish healthy communication and information flow.
- Deliver a high-performing solution architecture that can support the development efforts of the business.
- Plan, design, and configure typical business solutions as needed.
- Prepare technical documents and presentations for multiple solution areas.
- Ensure that best practices for configuration management are carried out as needed.
- Analyze customer specifications and make the best product recommendations for the platform.
Requirements
- AWS Solutions Architect, 9-10 years of experience
- Responsible for managing applications on public cloud (AWS) infrastructure.
- Responsible for larger migrations of applications from VM to cloud/cloud-native.
- Responsible for setting up monitoring for cloud/cloud-native-based infrastructure and applications.
- MUST: AWS Solution Architect Professional certification.
- Solve complex Cloud Infrastructure problems.
- Drive DevOps culture in the organization by working with engineering and product teams.
- Be a trusted technical advisor to developers and help them architect scalable, robust, and highly-available systems.
- Frequently collaborate with developers to help them learn how to run and maintain systems in production.
- Drive a culture of CI/CD. Find bottlenecks in the software delivery pipeline. Fix bottlenecks with developers to help them deliver working software faster. Develop and maintain infrastructure solutions for automation, alerting, monitoring, and agility.
- Evaluate cutting-edge technologies and build PoCs, feasibility reports, and implementation strategies.
- Work with engineering teams to identify and remove infrastructure bottlenecks enabling them to move fast. (In simple words you'll be a bridge between tech, operations & product)
Skills required:
Must have:
- Deep understanding of open source DevOps tools.
- Scripting experience in one or more among Python, Shell, Go, etc.
- Strong experience with AWS (EC2, S3, VPC, Security, Lambda, CloudFormation, SQS, etc.)
- Knowledge of distributed system deployment.
- Deployed and orchestrated applications with Kubernetes.
- Implemented CI/CD for multiple applications.
- Set up monitoring and alert systems for services using the ELK stack or similar.
- Knowledge of Ansible, Jenkins, Nginx.
- Worked with queue-based systems.
- Implemented batch jobs and automated recurring tasks.
- Implemented caching infrastructure and policies.
- Implemented central logging.
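The caching-policy experience listed above usually means deciding how entries expire. As a purely illustrative sketch (the class name and TTL figures are hypothetical, not from this posting), a minimal time-to-live cache with lazy eviction looks like this:

```python
import time

class TTLCache:
    """Minimal time-to-live cache illustrating a common caching
    policy: entries expire after a fixed TTL and are evicted
    lazily on read. Hypothetical sketch, not production code."""
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry timestamp)

    def set(self, key, value):
        # monotonic() avoids wall-clock jumps affecting expiry
        self._store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # lazily evict the expired entry
            return default
        return value

cache = TTLCache(ttl_seconds=60)
cache.set("session:42", {"user": "alice"})
print(cache.get("session:42"))  # {'user': 'alice'} while still fresh
```

Production systems would typically reach for Redis or Memcached instead, but the expiry-and-evict policy they implement is the same shape.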
Good to have:
- Experience dealing with PII and information security.
- Experience conducting internal audits and assisting with external audits.
- Experience implementing solutions on-premise.
- Experience with blockchain.
- Experience with Private Cloud setup.
Required Experience:
- B.Tech. / B.E. degree in Computer Science or equivalent software engineering degree/experience.
- You need to have 2-4 years of DevOps & Automation experience.
- Need to have a deep understanding of AWS.
- Need to be an expert with Git or similar version control systems.
- Deep understanding of at least one open-source distributed system (Kafka, Redis, etc.)
- Ownership attitude is a must.
We offer a suite of memberships and subscriptions to spice up your lifestyle. We believe in an ultimate work-life balance and satisfaction. Working hard doesn't mean clocking in extra hours; it means having a zeal to contribute the best of your talents. Our people culture helps us inculcate measures and benefits which help you feel confident and happy each and every day, whether you'd like to skill up, go off the grid, attend your favourite events, or be an epitome of fitness. We have you covered.
- Health Memberships
- Sports Subscriptions
- Entertainment Subscriptions
- Key Conferences and Event Passes
- Learning Stipend
- Team Lunches and Parties
- Travel Reimbursements
- ESOPs
That's what we think will brighten up your personal life, as a gesture of thanks for sharing your talents with us.
Join us to be a part of our exciting journey to build one Digital Identity Platform!