Key Skills Required:
· You will be part of the DevOps engineering team, configuring project environments, troubleshooting integration issues across different systems, and building new features for the next generation of cloud recovery and managed services.
· You will directly guide the technical strategy for our clients and build out a new capability within the company for DevOps to improve our business relevance for customers.
· You will coordinate with the Cloud and Data teams on their requirements, verify the configuration required for each production server, and propose scalable solutions.
· You will be responsible for reviewing the infrastructure and configuration of microservices, as well as application packaging and deployment.
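Verifying per-server configuration, as described above, is often automated with a small drift check. A minimal illustrative sketch in Python (all keys and baseline values below are hypothetical examples, not a specific product's settings):

```python
# Minimal config-drift check: compare a server's actual settings
# against an expected baseline and report any mismatches.
# All keys and values here are hypothetical.

def find_drift(expected: dict, actual: dict) -> dict:
    """Return {key: (expected, actual)} for every mismatched or missing key."""
    drift = {}
    for key, want in expected.items():
        have = actual.get(key)
        if have != want:
            drift[key] = (want, have)
    return drift

baseline = {"max_open_files": 65536, "swap_enabled": False, "tls_min_version": "1.2"}
server = {"max_open_files": 1024, "swap_enabled": False}

# Reports the mismatched max_open_files and the missing tls_min_version.
print(find_drift(baseline, server))
```

In practice the "actual" side would be collected by a tool like Ansible or Chef; the comparison logic stays the same.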
To be the right fit, you'll need:
· Expertise in cloud services such as AWS.
· Experience in Terraform scripting.
· Experience with container technologies such as Docker and orchestration platforms such as Kubernetes.
· Good knowledge of CI/CD tools such as Jenkins and Bamboo.
· Experience with version control systems such as Git, build tools (Maven, Ant, Gradle), and configuration automation tools (Chef, Puppet, Ansible)
Experience: 6-10 years
Employment Type: Permanent
- Application modernization: containerizing an existing system running on virtual machines with Docker Enterprise Edition.
- Implement and manage the Continuous Integration and DevOps model in the development team.
- Develop, Support and Maintain the DevOps tools for deployment, monitoring and operations.
- Work with the development, QA, Release Engineering, and Infrastructure team to maintain a high-quality deployment platform.
- Explore new technologies which will help with improving the existing technology platform.
- Manage build/release of all Dev, Test, and Production environments
- Work closely with software developers to deploy and operate systems.
- Help automate and streamline operations and software delivery processes
- Maintain the configuration, discover conflicts, document the process and manage the schedule of releases for each environment
- Build and maintain tools for deployment, monitoring and operations, ensuring high availability of systems.
- Professional experience in hands-on solution architecture or system administration
- Solid development experience with Linux or other UNIX variants
- Strong experience in AWS Cloud
- Knowledge in Docker and container orchestration.
- Knowledge in a Configuration Management tool (Chef, Puppet, Salt, Ansible, etc.)
- Expertise in at least two programming languages
- Knowledge in RDBMS, Caching, NoSQL, Monitoring, Alerts
- Experience with continuous integration tools such as Jenkins, Git, Maven, and Sonar, and with continuous delivery automation.
- Good communication skills in English
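The build/release and monitoring responsibilities above often boil down to small automated gates. As an illustration only (the threshold and data structure are invented), a post-deploy health gate that decides whether to roll back might look like:

```python
# Toy post-deploy gate: roll back when the healthy fraction of
# instances drops below a threshold. The 80% threshold is illustrative.

def should_rollback(statuses: list, min_healthy: float = 0.8) -> bool:
    """statuses: health-check result per instance (True = healthy)."""
    if not statuses:
        return True  # no evidence of health: fail safe and roll back
    healthy = sum(statuses) / len(statuses)
    return healthy < min_healthy

print(should_rollback([True, True, True, False]))  # 75% healthy -> True
print(should_rollback([True] * 9 + [False]))       # 90% healthy -> False
```

A real pipeline would feed this from load-balancer or readiness-probe results and trigger the rollback step in the CI/CD tool.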
VISEO is a global technology company with HQ located in France. The Singapore office serves as VISEO Asia Pacific Japan (APJ) headquarters, with additional offices in China, Hong Kong, Philippines, Indonesia, and Australia, to address projects in other regions like Thailand, Malaysia, Korea, Japan, and India. VISEO uses technology as a powerful lever of transformation and innovation to help its clients take advantage of digital opportunities, address new uses and compete with new players who change the rules of the game. With more than 2,500 employees globally, our worldwide presence best meets clients' needs through supporting global roll-outs. If you are interested in joining us, VISEO ensures the development of your skills by having regular exchanges with peers, coaching by a technical mentor, and official certification with our partners, such as but not limited to: SAP, Salesforce.com and Commerce Cloud, Docker, Azure, AWS, Anaplan, Cegid etc. In addition, you will have fun while challenging yourself, participating in agile projects, external events and conferences, and internal technical communities, which will contribute to your career growth!
Are you passionate about system administration, scripting and process automation?
- Deploying, automating, maintaining, and managing AWS and GCP cloud-based production systems to ensure the availability, performance, scalability, and security of production systems.
- Build, release and configuration management of testing and production systems.
- System troubleshooting and problem solving across platform and application domains.
- Suggesting architecture improvements, recommending process improvements.
- Evaluate new technology options and products
- Ensuring critical system security through the use of best-in-class cloud security solutions.
- Keeping up to date with developer tools and DevOps practices: cloud computing, continuous integration, continuous deployment, blue-green deployment, continuous monitoring, infrastructure automation, continuous delivery, continuous build, and continuous testing.
- 1+ years of experience using a broad range of cloud technologies covering AWS EC2, RDS, ELB, EBS, S3, VPC, Glacier, IAM, CloudWatch, Docker, Lambda, etc. (or equivalents in GCP and/or Azure) to develop and maintain cloud solutions, with a focus on practicing cloud security.
- Solid experience as a DevOps Engineer in a 24x7 uptime cloud environment, including automation experience with configuration management tools.
- Scripting Skills: Strong scripting and automation skills.
- Operating Systems: Windows and Linux system administration.
- Monitoring Tools: Experience with system monitoring tools.
- Problem Solving: Ability to analyze and resolve complex infrastructure resource and application deployment issues.
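The monitoring-tools requirement usually implies alarm logic of the kind CloudWatch and similar systems use: fire only after the metric breaches a threshold for several consecutive evaluation periods. A toy sketch of that idea (all figures invented):

```python
# CloudWatch-style alarm logic sketch: fire when the metric breaches
# the threshold for N consecutive evaluation periods.

def alarm_state(datapoints, threshold, periods):
    """Return 'ALARM' if the last `periods` datapoints all exceed threshold."""
    if len(datapoints) < periods:
        return "INSUFFICIENT_DATA"
    recent = datapoints[-periods:]
    return "ALARM" if all(v > threshold for v in recent) else "OK"

cpu = [42.0, 55.1, 91.3, 94.8, 97.2]  # invented CPU% samples
print(alarm_state(cpu, threshold=90.0, periods=3))  # -> ALARM
print(alarm_state(cpu, threshold=90.0, periods=4))  # -> OK (55.1 <= 90)
```

Requiring several consecutive breaches is what keeps a single noisy datapoint from paging the on-call engineer.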
NEST® works with a range of international clientele in the development of bespoke software solutions. NEST® specializes in data and direct engagement structures, digital marketing, data with blockchain security and much more.
NEST® founders have had decades in the visual arts, architecture, sculpture as well as inking the human canvas. More mundanely, from fifteen countries our formal training is in law, philosophy, international logistics and technology. There are currently NEST® branch offices in England, Singapore, USA and India. What happened yesterday isn’t important. We believe what you privately achieve tomorrow is.
1) Docker and Kubernetes
2) Working knowledge of NodeJS applications, for the purpose of troubleshooting CI/CD processes
3) CI/CD tools (GitHub Actions)
4) Solid experience with cloud technologies and platforms such as AWS
5) Concepts of REST, SOAP, and APIs.
● Competitive Salary
● Flexible Hours
● Paid Vacations
● Fun & Creative Environment / Projects
● Quarterly Performance Reviews
● Generous Bonus Schemes
● Overseas Working Placements
● Dynamic International Clientele
● Multiple Learning Opportunities
● Control And Work On Full Product Life-Cycle
● Research and Implement Multiple New Technologies
- VMware Horizon / View, VMware vSphere, NSX
- Azure Cloud VDI/AVD – Azure Virtual Desktop
- VMware Virtualization
- Hyperconverged Infrastructure, vSAN, storage devices (SAN/NAS), etc.
Roles and Responsibilities
- Designing, implementing, testing and deployment of the virtual desktop infrastructure
- Facilitating the transition of the VDI solution to Operations, providing operational and end user support when required.
- Acting as the single point of contact for all technical engagements on the VDI infrastructure.
- Creation of documented standard processes and procedures for all aspects of VDI infrastructure, administration and management.
- Working with the vendor in assessing the VDI infrastructure architecture from a deployment, performance, security and compliance perspective.
- Knowledge around security best practices and understanding of vulnerability assessments.
- Providing mentoring and guidance to other VDI Administrators.
- Documenting designs, development plans, and operations procedures for the VDI solution.
Role: DevOps
Experience: 3-6 years
Roles & Responsibilities –
- 3-6 years of experience in deploying and managing highly scalable, fault-resilient systems
- Strong experience in container orchestration and server automation tools such as Kubernetes, Google Container Engine, Docker Swarm, Ansible, Terraform
- Strong experience with Linux-based infrastructures, Linux/Unix administration, AWS, Google Cloud, Azure
- Strong experience with databases such as MySQL, Hadoop, Elasticsearch, Redis, Cassandra, and MongoDB.
- Experience in configuring CI/CD pipelines using Jenkins, GitLab CI, Travis.
- Proficient in technologies such as Docker, Kafka, Raft and Vagrant
- Experience in implementing queueing services such as RabbitMQ, Beanstalkd, Amazon SQS and knowledge in ElasticStack is a plus.
- Automate deployments of infrastructure components and repetitive tasks.
- Drive changes strictly via the infrastructure-as-code methodology.
- Promote the use of source control for all changes including application and system-level changes.
- Design & Implement self-recovering systems after failure events.
- Participate in system sizing and capacity planning of various components.
- Create and maintain technical documents such as installation/upgrade MOPs.
- Coordinate & collaborate with internal teams to facilitate installation & upgrades of systems.
- Support 24x7 availability for corporate sites & tools.
- Participate in rotating on-call schedules.
- Actively involved in researching, evaluating & selecting new tools & technologies.
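System sizing and capacity planning, mentioned above, typically starts with back-of-envelope arithmetic: peak load divided by usable per-instance capacity, plus redundancy. A hedged sketch with invented figures:

```python
# Back-of-envelope capacity sizing: instances needed for a peak load,
# keeping per-node headroom and N+1 redundancy. All figures hypothetical.
import math

def instances_needed(peak_rps: int, rps_per_instance: int,
                     headroom: float = 0.3, redundancy: int = 1) -> int:
    usable = rps_per_instance * (1 - headroom)  # keep 30% headroom per node
    return math.ceil(peak_rps / usable) + redundancy

# e.g. 12,000 req/s peak, 800 req/s per instance -> 22 nodes + 1 spare
print(instances_needed(peak_rps=12000, rps_per_instance=800))  # -> 23
```

Real sizing would also account for failure domains (availability zones) and burst patterns, but the arithmetic above is where the conversation usually starts.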
- Cloud computing – AWS, OCI, OpenStack
- Automation/Configuration management tools such as Terraform & Chef
- Atlassian tools administration (JIRA, Confluence, Bamboo, Bitbucket)
- Scripting languages - Ruby, Python, Bash
- Systems administration experience – Linux (Red Hat), Mac, Windows
- SCM systems - Git
- Build tools - Maven, Gradle, Ant, Make
- Networking concepts - TCP/IP, Load balancing, Firewall
- High-Availability, Redundancy & Failover concepts
- SQL scripting & queries - DML, DDL, stored procedures
- Decisiveness and the ability to work under pressure
- Ability to prioritize workload and multi-task
- Excellent written and verbal communication skills
- Database systems – Postgres, Oracle, or other RDBMS
- Mac automation tools - JAMF or other
- Atlassian Datacenter products
- Project management skills
- 3+ years of hands-on experience in the field or related area
- Requires MS or BS in Computer Science or equivalent field
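The SQL scripting requirement above (DML, DDL) can be exercised end-to-end with Python's stdlib sqlite3 module. Note that SQLite has no stored procedures, and the table and rows below are invented for illustration:

```python
# Small DDL/DML exercise with the stdlib sqlite3 module.
# Table name, columns, and versions are invented examples.
import sqlite3

conn = sqlite3.connect(":memory:")
# DDL: define the schema
conn.execute("CREATE TABLE releases (id INTEGER PRIMARY KEY, env TEXT, version TEXT)")
# DML: insert some release records
conn.executemany(
    "INSERT INTO releases (env, version) VALUES (?, ?)",
    [("dev", "1.4.0"), ("prod", "1.3.1"), ("prod", "1.3.2")],
)
# Query: most recently recorded prod release
row = conn.execute(
    "SELECT version FROM releases WHERE env = ? ORDER BY id DESC LIMIT 1", ("prod",)
).fetchone()
print(row[0])  # -> 1.3.2
```

The same DDL/DML patterns carry over to Postgres or Oracle, where stored procedures become available as well.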
Position: DevOps Lead
● Research, evangelize and implement best practices and tools for GitOps, DevOps, continuous integration, build automation, deployment automation, configuration management, infrastructure as code.
● Develop software solutions to support DevOps tooling; including investigation of bug fixes, feature enhancements, and software/tools updates
● Participate in the full systems life cycle with solution design, development, implementation, and product support using Scrum and/or other Agile practices
● Evaluating, implementing, and streamlining DevOps practices.
● Design and drive the implementation of fully automated CI/CD pipelines.
● Designing and creating Cloud services and architecture for highly available and scalable environments.
● Leading the monitoring, debugging, and enhancement of pipelines for optimal operation and performance.
● Supervising, examining, and handling technical operations.
● 5 years of experience in managing application development, software delivery lifecycle, and/or infrastructure development and/or administration
● Experience with source code repository management tools, code merge and quality checks, continuous integration, and automated deployment & management using tools like Bitbucket, Git, Ansible, Terraform, Artifactory, ServiceNow, SonarQube, Selenium.
● Minimum of 4 years of experience with approaches and tooling for automated build, delivery, and release of the software
● Experience and/or knowledge of CI/CD tools: Jenkins, Bitbucket Pipelines, Gitlab CI, GoCD.
● Experience with Linux systems (CentOS, RHEL, Ubuntu, Secure Linux, etc.) and Linux administration.
● Minimum of 4 years experience with managing medium/large teams including progress monitoring and reporting
● Experience and/or knowledge of Docker, Cloud, and Orchestration: GCP, AWS, Kubernetes.
● Experience and/or knowledge of system monitoring, logging, high availability, redundancy, autoscaling, and failover.
● Experience automating manual and/or repetitive processes.
● Experience and/or knowledge with networking and load balancing: Nginx, Firewall, IP network
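One common pattern behind the automated, highly available deployments described above is blue-green cutover: traffic switches to the idle environment only after it passes health checks. An illustrative sketch (the state shape and names are assumptions, not a specific product's API):

```python
# Blue-green cutover sketch: switch live traffic to the idle color only
# when the candidate environment is healthy. Structure is illustrative.

def cutover(state: dict, candidate_healthy: bool) -> dict:
    """state = {'live': 'blue' | 'green'}; returns the new routing state."""
    idle = "green" if state["live"] == "blue" else "blue"
    if not candidate_healthy:
        return state  # keep serving from the current color
    return {"live": idle}

print(cutover({"live": "blue"}, candidate_healthy=True))   # {'live': 'green'}
print(cutover({"live": "green"}, candidate_healthy=False)) # {'live': 'green'}
```

In a real pipeline the "state" would live in a load balancer or DNS weighting, and the health check would hit the candidate's readiness endpoint before the switch.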
We have grown over 1400% in revenues in the last year.
Interface.ai provides an Intelligent Virtual Assistant (IVA) to financial institutions (FIs) to automate calls and customer inquiries across multiple channels and engage their customers with financial insights and upsell/cross-sell.
Our IVA is transforming financial institutions’ call centers from a cost to a revenue center.
Our core technology is built 100% in-house with several breakthroughs in Natural Language Understanding. Our parser is built based on zero-shot learning that helps us to launch industry-specific IVA that can achieve over 90% accuracy on Day-1.
We are 45 people strong, with employees spread across India and US locations. Many of them come from ML teams at Apple, Microsoft, and Salesforce in the US, along with enterprise architects with over 20 years of experience building large-scale systems. Our India team consists of people from ISB, IIMs, and many who have previously been part of early-stage startups.
We are a fully remote team.
Founders come from Banking and Enterprise Technology backgrounds with previous experience scaling companies from scratch to $50M+ in revenues.
As a Site Reliability Engineer you will be in charge of:
- Designing, analyzing and troubleshooting large-scale distributed systems
- Engaging in cross-functional team discussions on design, deployment, operation, and maintenance, in a fast-moving, collaborative set up
- Building automation scripts to validate the stability, scalability, and reliability of interface.ai’s products & services as well as enhance interface.ai’s employees’ productivity
- Debugging and optimizing code and automating routine tasks
- Troubleshooting and diagnosing issues (hardware or software), then proposing and implementing solutions so that they recur less frequently
- Performing periodic on-call duty to handle the security, availability, and reliability of interface.ai’s products
- Following and writing good code and solid engineering practices
You can be a great fit if you are:
- Extremely self-motivated
- A quick learner
- Growth Mindset (read this if you don't know what it means: https://www.amazon.com/Mindset-Psychology-Carol-S-Dweck/dp/0345472322)
- Emotional Maturity (read this if you don't know what it means: https://medium.com/@krisgage/15-signs-of-emotional-maturity-38b1a2ab9766)
- Passionate about the possibilities at the intersection of AI + Banking
- Someone who has worked in a startup of 5 to 30 employees
- A developer with a strong interest in systems design. You will be building, maintaining, and scaling our cloud infrastructure through software tooling and automation.
- 4-8 years of industry experience developing and troubleshooting large-scale infrastructure on the cloud
- Have a solid understanding of system availability, latency, and performance
- Strong programming skills in at least one major programming language and the ability to learn new languages as needed
- Strong System/network debugging skills
- Experience with management/automation tools such as Terraform/Puppet/Chef/SALT
- Experience with setting up production-level monitoring and telemetry
- Expertise in Container management & AWS
- Experience with kubernetes is a plus
- Experience building CI/CD pipelines
- Experience working with WebSockets, Redis, Postgres, Elasticsearch, Logstash
- Experience working in an agile team environment and proficient understanding of code versioning tools, such as Git.
- Ability to effectively articulate technical challenges and solutions.
- Proactive outlook for ways to make our systems more reliable
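Reliability tooling of the kind described above leans heavily on retries with capped exponential backoff and jitter. A small sketch (parameters are illustrative, and delays are computed rather than slept so the example runs instantly):

```python
# Retry with capped exponential backoff and "full jitter" - a staple of
# reliability tooling for calls to flaky downstream services.
import random

def backoff_delays(attempts, base=0.5, cap=30.0, rng=None):
    """Full-jitter schedule: delay_i = uniform(0, min(cap, base * 2**i))."""
    rng = rng or random.Random()
    return [rng.uniform(0, min(cap, base * 2 ** i)) for i in range(attempts)]

# Seeded for reproducibility; each delay is bounded by 0.5, 1, 2, 4, 8 s.
delays = backoff_delays(5, rng=random.Random(0))
print([round(d, 2) for d in delays])
```

The jitter spreads retries out so that many clients failing at once do not hammer the recovering service in lockstep.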
Reports to: Project Manager
Location: Remote | India
Employment Type: Full-time
Start Date: ASAP
Who We Are
Fabric is the new commerce infrastructure for the Internet. Our mission is to accelerate the GMV of the Internet by providing a platform and ecosystem to fundamentally change the way commerce happens in a multi-channel world.
We're building a future where Direct-to-Consumer Brands, Retailers, and B2B Businesses (wholesalers, manufacturers, and distributors) have the commerce capabilities that today are only afforded by Marketplace organizations with billions of dollars in R&D. We’re building a future where the customer experience of discovery, shopping, or replenishment is individualized, delightful, and seamless in all channels. We’re building a future where merchandising, marketing, and commerce operations teams have intelligent, powerful, and practical tools to best serve their customers and grow every channel of commerce. We’re building a future where developers have a platform that is highly secure, scalable, the most adaptable, and simplest to build upon.
We are a team of passionate people who love what we do. Join us to build the new commerce fabric for the internet.
The DevOps Engineer partners with Product, Engineering, and Design teams to deliver new features and enhancements for YDV’s new eCommerce platform. This position focuses on providing eCommerce and related technology expertise to design, develop, and support online, customer-facing eCommerce business solutions.
The successful candidate will be a strong, hands-on technologist who is comfortable handling multiple priorities in a fast-paced environment. You will work with other engineers, managers, Product Management, QA, and Operations teams to develop innovative solutions that meet market needs with respect to functionality, performance, reliability, realistic implementation schedules, and adherence to development goals and principles.
- Work on our client’s e-commerce solution. Our eCommerce solution runs on AWS with a serverless architecture and a ReactJS front-end. We use several kinds of in-memory and persistent data storage.
- Manage deployment of code components through a Continuous Integration and Continuous Deployment pipeline.
- Lead a small team dedicated to management and projects, supporting a high-performing, cloud infrastructure
- Work with a small but experienced tech team, providing an unparalleled continuous learning environment with growth potential
- Able to move the needle / contribute in a significant manner
- 4+ years of experience with CI/CD using Git, webhooks, Jenkins/CircleCI, or similar
- 1+ years of experience working with cloud-native infrastructure on major cloud providers like AWS, Heroku, MS Azure, and GCP, with hands-on experience managing services like:
- Function As A Service, like AWS Lambda
- API Gateways
- Cloud servers, like EC2
- Container orchestration using AWS ECS/Kubernetes or similar, serverless/clusterless container offerings like AWS Fargate on ECS
- Network and Security: VPC, Security Groups, Route53, CloudFront
- IaC: AWS CloudFormation or Terraform
- Infrastructure monitoring and log management with ELK, Datadog or similar
- Proficiency in at least one scripting language
- Have worked closely with the QA Automation engineering team on automated, segmented deployments.
- Know and understand how to configure high-availability, high-performing applications
- Server and service monitoring with Nagios or similar
- Knowledge of firewall, switch and network configuration & debugging (DHCP, DNS, IPv4, IP routing)
*Brownie points for:*
- Experience of owning CI/CD pipelines and creating branching strategies including release tagging and versioning.
- Experience deploying JS applications managed by npm/yarn/serverless
- Experience with multiple IaC tools like Terraform, Ansible, Chef, Puppet.
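Release tagging and versioning, mentioned in the brownie points above, often means parsing and bumping `vMAJOR.MINOR.PATCH` tags. A sketch under that assumed tag format (adapt to your own branching strategy):

```python
# Release-tag helper sketch: parse a `vMAJOR.MINOR.PATCH` tag and bump it.
# The tag format is an assumption, not a universal convention.
import re

TAG_RE = re.compile(r"^v(\d+)\.(\d+)\.(\d+)$")

def bump(tag: str, part: str = "patch") -> str:
    major, minor, patch = map(int, TAG_RE.match(tag).groups())
    if part == "major":
        major, minor, patch = major + 1, 0, 0
    elif part == "minor":
        minor, patch = minor + 1, 0
    else:
        patch += 1
    return f"v{major}.{minor}.{patch}"

print(bump("v1.4.2"))           # v1.4.3
print(bump("v1.4.2", "minor"))  # v1.5.0
print(bump("v1.4.2", "major"))  # v2.0.0
```

In a pipeline this would typically run after a merge to the release branch, with the new tag pushed back via `git tag`.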
DevOps Engineer responsibilities include deploying product updates, identifying production issues, and implementing integrations that meet customer needs. If you have a solid background working with cloud technologies, can set up efficient deployment processes, and are motivated to work with diverse and talented teams, we’d like to meet you.
Ultimately, you will execute and automate operational processes fast, accurately, and securely.
Skills and Experience
4+ years of experience building infrastructure with cloud providers (AWS, Azure, GCP)
Experience deploying containerized applications built on NodeJS/PHP/Python to Kubernetes clusters.
Experience in monitoring production workload with relevant metrics and dashboards.
Experience in writing automation scripts using Shell, Python, Terraform, etc.
Experience in following security practices while setting up the infrastructure.
Self-motivated, able, and willing to help where help is needed
Able to build relationships, be culturally sensitive, have goal alignment, have learning agility
Roles and Responsibilities
Manage various resources across different cloud providers (Azure, AWS, and GCP).
Monitor and optimize infrastructure cost.
Manage various Kubernetes clusters with appropriate monitoring and alerting setup.
Build CI/CD pipelines to orchestrate provisioning and deployment of various services into the Kubernetes infrastructure.
Work closely with the development team on upcoming features to determine the correct infrastructure and related tools.
Assist the support team with escalated customer issues.
Develop, improve, and thoroughly document operational practices and procedures.
Responsible for setting up good security practices across various clouds.
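Monitoring and optimizing infrastructure cost, one of the responsibilities above, can start as simply as comparing month-to-date spend against per-provider budgets. All figures below are invented for illustration:

```python
# Cost-monitoring sketch: flag any cloud provider whose month-to-date
# spend exceeds its budget. Spend and budget figures are invented.

def over_budget(spend: dict, budgets: dict) -> list:
    """Return providers whose month-to-date spend exceeds their budget."""
    return sorted(p for p, cost in spend.items()
                  if cost > budgets.get(p, float("inf")))

spend = {"aws": 8200.0, "azure": 1900.0, "gcp": 3100.0}
budgets = {"aws": 8000.0, "azure": 2500.0, "gcp": 3000.0}
print(over_budget(spend, budgets))  # -> ['aws', 'gcp']
```

A production version would pull the spend side from each provider's billing API and post the flagged providers to an alerting channel.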