MLOps Engineer
Required candidate profile:
- 3+ years' experience developing continuous integration and deployment (CI/CD) pipelines (e.g., Jenkins, GitHub Actions) and bringing ML models into CI/CD pipelines
- Candidate with strong Azure expertise
- Exposure to productionizing models
- Candidate should have complete knowledge of the Azure ecosystem, especially in the area of data engineering (DE)
- Candidate should have prior experience designing, building, testing, and maintaining machine learning infrastructure that empowers data scientists to rapidly iterate on model development
- Develop continuous integration and deployment (CI/CD) pipelines on top of Azure, spanning Azure ML, MLflow, and Azure DevOps (see the MLflow sketch after this list)
- Proficient knowledge of Git, Docker and containers, and Kubernetes
- Familiarity with Terraform
- End-to-end (E2E) production experience with Azure ML and Azure ML Pipelines
- Experience with the Azure ML extension for Azure DevOps
- Worked on model drift (concept drift, data drift), preferably on Azure ML
- Candidate will be part of a cross-functional team that builds and delivers production-ready data science projects. You will work with team members and stakeholders to creatively identify, design, and implement solutions that reduce operational burden, increase reliability and resiliency, ensure disaster recovery and business continuity, enable CI/CD, optimize ML and AI services, and maintain it all in an infrastructure-as-code, everything-in-version-control manner.
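Since the pipelines above tie Azure ML and Azure DevOps together through MLflow, here is a minimal, hedged sketch of what a single pipeline step logging a training run to MLflow could look like. The experiment name, run name, parameters, and metric values are illustrative assumptions, not details from the posting.

```python
# Minimal sketch: logging a training run to MLflow from a CI/CD pipeline step.
# Experiment name, run name, params, and metric values are hypothetical.
import mlflow

mlflow.set_experiment("churn-model-ci")  # hypothetical experiment name

with mlflow.start_run(run_name="ci-build-123"):   # hypothetical run name
    mlflow.log_param("max_depth", 6)              # example hyperparameters
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("val_auc", 0.91)            # example validation metric

    # In a real pipeline this would be the trained model/artifacts persisted
    # for later deployment stages (e.g., an Azure DevOps release pipeline).
    with open("model_card.txt", "w") as f:
        f.write("placeholder model card")
    mlflow.log_artifact("model_card.txt")
```

On Azure ML specifically, the tracking URI would typically point at the workspace so that these runs appear alongside other experiments; that configuration is omitted here.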
About Cyphertree Technologies Pvt. Ltd.
- Configure, optimize, document, and support the infrastructure components of software products (hosted in colocated facilities and cloud services such as AWS)
- Design and build tools and frameworks that support deployment and management of platforms
- Design, build, and deliver cloud computing solutions, hosted services, and underlying software infrastructures
- Build core functionality of our cloud-based platform product, deliver secure, reliable services, and construct third-party integrations
- Assist in coaching application developers on proper DevOps techniques for building scalable applications in the microservices paradigm
- Foster collaboration with software product development and architecture teams to ensure releases are delivered with repeatable and auditable processes
- Support and troubleshoot scalability, high availability, performance, monitoring, backup, and restores of different environments
- Work independently across multiple platforms and applications to understand dependencies
- Evaluate new tools, technologies, and processes to improve speed, efficiency, and scalability of continuous integration environments
- Design and architect solutions for existing client-facing applications as they are moved into cloud environments such as AWS
Competencies
- Full understanding of scripting and automated process management in languages such as Shell, Ruby, and/or Python
- Working knowledge of SCM tools such as Git, GitHub, Bitbucket, etc.
- Working knowledge of Amazon Web Services and related APIs (see the boto3 sketch after this list)
- Ability to deliver and manage web or cloud-based services
- General familiarity with monitoring tools
- General familiarity with configuration/provisioning tools such as Terraform
Experience
- Experience working within an Agile-type environment
- 4+ years of experience with cloud-based provisioning (Azure, AWS, Google), monitoring, troubleshooting, and related DevOps technologies
- 4+ years of experience with containerization/orchestration technologies like Rancher, Docker and Kubernetes
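As a concrete illustration of working with AWS and its APIs, below is a minimal, hedged sketch that inventories running EC2 instances with boto3. The region name and the assumption that credentials are already configured are part of the example, not requirements from the posting.

```python
# Minimal sketch: inventory running EC2 instances via the AWS API (boto3).
# Assumes AWS credentials are available (environment, profile, or instance role);
# the region name is an example.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            # Print a minimal inventory line per instance.
            print(instance["InstanceId"], instance["InstanceType"])
```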
We are an on-demand e-commerce technology and services company and a tech-enabled 3PL (third-party logistics provider). We unlock e-commerce for companies by managing the entire operations lifecycle: Sell, Fulfil & Reconcile.
Using us, companies can:
• Store their inventory in our fulfilment centers (FCs)
• Sell their products on multiple sales channels (online marketplaces like Amazon, Flipkart, and their own website)
• Get their orders processed within a defined SLA
• Reconcile payments against sales
The company combines infrastructure and dedicated experts to give brands accountability, peace of mind, and control over the e-commerce journey.
The company is working on a remarkable concept for running an e-commerce business, starting from establishing an online presence for many enterprises. It offers a combination of products and services to create a comprehensive platform and manage all aspects of running a brand online, including the development of an exclusive web store, handling logistics, integrating all marketplaces, and so on.
Who are we looking for?
We are looking for a skilled and passionate DevOps Engineer to join our Centre of Excellence to build and scale effective software solutions for our Ecommerce Domain.
Wondering what your Responsibilities would be?
• Building and setting up new development tools and infrastructure
• Provide full support to the software development teams to deploy, run and roll out new services and new capabilities in Cloud infrastructure
• Implement CI/CD and DevOps best practices for software application teams and assist in executing the integration and operation processes
• Build proactive monitoring and alerting infrastructure services to support operations and system health (see the health-check sketch after this list)
• Be hands-on in developing prototypes and conducting Proof of Concepts
• Work in an agile, collaborative environment, partnering with other engineers to bring new solutions to the table
• Join the DevOps Chapter where you’ll have the opportunity to investigate and share information about technologies within the DevOps Engineering Community
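To make the proactive monitoring responsibility above concrete, here is a hedged sketch of a simple HTTP health-check poller. The endpoint URLs, polling interval, and the print-based alert are placeholder assumptions; a real setup would feed a proper alerting channel (PagerDuty, Slack, email).

```python
# Minimal sketch: a proactive HTTP health-check poller that flags unhealthy services.
# Endpoint URLs and thresholds are placeholders for illustration only.
import time
import requests

ENDPOINTS = {
    "orders-api": "https://example.com/orders/health",    # hypothetical endpoints
    "catalog-api": "https://example.com/catalog/health",
}

def check_once(timeout_seconds: float = 5.0) -> None:
    for name, url in ENDPOINTS.items():
        try:
            response = requests.get(url, timeout=timeout_seconds)
            healthy = response.status_code == 200
        except requests.RequestException:
            healthy = False
        if not healthy:
            # Placeholder for a real alerting hook.
            print(f"ALERT: {name} failed health check at {url}")

if __name__ == "__main__":
    while True:
        check_once()
        time.sleep(60)  # poll every minute
```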
What Makes you Eligible?
• Bachelor’s Degree or higher in Computer Science or Software Engineering with appropriate experience
• Minimum of 1 year of proven experience as a DevOps Engineer
• Experience working in a DevOps culture, following Agile software development methodologies such as Scrum
• Proven experience in source code management tools like Bitbucket and Git
• Solid experience in CI/CD pipelines like Jenkins
• Demonstrated ability with configuration management tools (e.g., Terraform, Ansible, Docker, and Kubernetes) and repository tools like Artifactory
• Experience in Cloud architecture & provisioning
• Knowledge of programming with and querying NoSQL databases
• Teamwork skills with a problem-solving attitude
Who We Are
Grip is building a new category of investment options for the new age of Indian investors. Millennials don't communicate, shop, pay, entertain, or work like the previous generation, so why should they invest the same way?
Started in June 2020, Grip has seen 20% month-on-month growth to become one of India's fastest-growing destinations for alternative investments. Today, Grip offers multiple investment options providing 8-16% annual returns and 1-60 month tenures as the country's only multi-asset (lease financing, inventory financing, corporate bonds, start-up equity, commercial real estate) investment platform. With a minimum investment size of INR 20,000, Grip is democratizing investment options that have previously only been available to the largest funds and family offices.
Finance and technology (FinTech) is what we do, but people are at the core of our mission. From client-facing roles to technology, and everywhere in between, you'll work alongside a diverse team who loves to solve problems, think creatively, and fly the plane as we continue to build it.
What We Can Offer You
● Young and fast-growing company with a healthy work-life balance
● Great culture based on the following core values
○ Courage
○ Transparency
○ Ownership
○ Customer Obsession
○ Celebration
● Lean structure and no micromanaging. You get to own your work
● The company has turned two, so you get a seat on a rocket ship that's just taking off!
● High focus on Learning & Development and monetary support for relevant upskilling
● Competitive compensation along with equity ownership for wealth creation
What You’ll Do
● Design cloud infrastructure that is secure, scalable, and highly available on AWS
● Work collaboratively with software engineering to define infrastructure and deployment requirements
● Provision, configure and maintain AWS cloud infrastructure defined as code
● Ensure configuration and compliance with configuration management tools
● Administer and troubleshoot Linux based systems
● Troubleshoot problems across a wide array of services and functional areas
● Build and maintain operational tools for deployment, monitoring, and analysis of AWS infrastructure and systems
● Perform infrastructure cost analysis and optimization (see the Cost Explorer sketch after this list)
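For the cost analysis and optimization item above, a hedged sketch of pulling spend-by-service data from AWS Cost Explorer with boto3 follows. The time period and the assumption that the caller has ce:GetCostAndUsage permission are illustrative, not taken from the posting.

```python
# Minimal sketch: last month's AWS spend grouped by service, via Cost Explorer.
# Dates are example values; credentials and permissions are assumed to exist.
import boto3

ce = boto3.client("ce", region_name="us-east-1")  # Cost Explorer endpoint

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2023-01-01", "End": "2023-02-01"},  # example period
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for result in response["ResultsByTime"]:
    for group in result["Groups"]:
        service = group["Keys"][0]
        amount = group["Metrics"]["UnblendedCost"]["Amount"]
        print(f"{service}: ${float(amount):.2f}")
```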
Your Superpowers
● At least 3-5 years of experience building and maintaining AWS infrastructure (VPC, EC2, Security Groups, IAM, ECS, CodeDeploy, CloudFront, S3, RDS, Elastic Beanstalk)
● Strong understanding of how to secure AWS environments and meet compliance requirements
● Expertise using Chef/CloudFormation/Ansible for configuration management
● Hands-on experience deploying and managing infrastructure with Terraform
● Solid foundation of networking and Linux administration
● Experience with Docker, GitHub, Kubernetes, ELK, and deploying applications on AWS
● Ability to learn/use a wide variety of open source technologies and tools
● Strong bias for action and ownership
You need to drive automation for implementing scalable and robust applications. You will bring your dedication and passion to server-side optimization, ensuring low latency and high performance for the cloud deployed within the datacentre. You should have sound knowledge of OpenStack and Kubernetes.
YOUR ‘OKR’ SUMMARY
OKR stands for Objectives and Key Results.
As a DevOps Engineer, you will understand the overall movement of data across the entire platform, find bottlenecks, define solutions, develop key pieces, write APIs, and own their deployment. You will work with internal and external development teams to discover these opportunities and to solve hard problems. You will also guide engineers in solving complex problems, develop your acceptance tests for those, and review the work and the test results.
What you will do
- As a DevOps Engineer, you will be responsible for systems used by customers across the globe.
- Set goals for the overall system and break them down into goals for each sub-system.
- Guide/motivate/convince/mentor the architects on sub-systems and help them achieve improvements with agility and speed.
- Identify performance bottlenecks and come up with solutions to optimize the time and cost taken by the build/test system.
- Be a thought leader contributing to capacity planning for software/hardware, spanning internal and public cloud, and solving the trade-off between turnaround time and utilization.
- Bring in technologies enabling massively parallel systems to improve turnaround time by an order of magnitude.
What you will need
A strong sense of ownership, urgency, and drive. As an integral part of the development team, you will need the following skills to succeed.
- BS or BE/B.Tech or equivalent in EE/CS with 10+ years of experience.
- Strong background in architecting and shipping distributed, scalable software products, with a good understanding of systems programming.
- An excellent background in cloud technologies such as OpenStack, Docker, Kubernetes, Ansible, and Ceph is a must.
- Excellent understanding of hybrid, multi-cloud architecture and edge computing concepts.
- Ability to identify bottlenecks and come up with solutions to optimize them.
- Programming and software development skills in Python and shell scripting, along with a good understanding of distributed systems and REST APIs (see the REST API sketch after this list).
- Experience in working with SQL/NoSQL database systems such as MySQL, MongoDB or Elasticsearch.
- Excellent knowledge and working experience with Docker containers and Virtual Machines.
- Ability to effectively work across organizational boundaries to maximize alignment and productivity between teams.
- Ability and flexibility to work and communicate effectively in a multi-national, multi-time-zone corporation.
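Since the profile above combines Python skills with an understanding of REST APIs, here is a hedged sketch of a small Flask service exposing the health and version endpoints a DevOps engineer typically owns. Flask, the routes, and the payloads are illustrative assumptions, not tools named by this posting.

```python
# Minimal sketch: a tiny REST service with health/version endpoints for deployment checks.
# Route names and payload contents are hypothetical.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/health")
def health():
    # A real service would verify downstream dependencies (DB, queue, cache) here.
    return jsonify(status="ok"), 200

@app.route("/version")
def version():
    # Exposing the build/version string helps confirm which artifact is deployed.
    return jsonify(version="1.2.3"), 200  # placeholder version

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```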
Additional Advantage:
- Deep understanding of technology and passionate about what you do.
- Background in designing high-performance, scalable software systems with a strong focus on optimizing hardware cost.
- Solid collaborative and interpersonal skills, specifically a proven ability to effectively guide and influence within a dynamic environment.
- Strong commitment to get the most performance out of a system being worked on.
- Prior development of a large software project using service-oriented architecture operating with real time constraints.
What's In It for You?
- You will get a chance to work on cloud-native and hyper-scale products
- You will be working with industry leaders in cloud.
- You can expect a steep learning curve.
- You will get the experience of solving real-time problems and, in the process, become a better problem solver.
Benefits & Perks:
- Competitive Salary
- Health Insurance
- Open Learning - 100% Reimbursement for online technical courses.
- Fast Growth - opportunities to grow quickly and surely
- Creative Freedom + Flat hierarchy
- Sponsorship for employees who represent the company at events and meetups.
- Flexible working hours
- 5-day work week
- Hybrid Working model (Office and WFH)
Our Hiring Process:
Candidates for this position can expect the hiring process as follows (subject to successfully clearing each round):
- Initial Resume screening call with our Recruiting team
- Next, candidates will be invited to solve coding exercises.
- Next, candidates will be invited for a first technical interview.
- Next, candidates will be invited for a final technical interview.
- Finally, candidates will be invited for a Culture Plus interview with HR.
- Candidates may be asked to interview with the Leadership team
- Successful candidates will subsequently be made an offer via email
As always, the interviews and the screening call will be conducted via a mix of telephone and video calls.
So, if you are looking for an opportunity to really make a difference, make it with us…
Coredge.io provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by applicable central, state or local laws.
Striim (pronounced “stream” with two i’s for integration and intelligence) was founded in 2012 with a simple goal of helping companies make data useful the instant it’s born.
Striim's enterprise-grade streaming integration with intelligence platform makes it easy to build continuous, streaming data pipelines – including change data capture (CDC) – to power real-time cloud integration, log correlation, edge processing, and streaming analytics.
- 2-5 years of experience programming in any language (polyglot preferred) and in system operations
- Awareness of DevOps and Agile methodologies
- Proficient in leveraging CI and CD tools to automate testing and deployment
- Experience working in an agile and fast-paced environment
- Hands-on knowledge of at least one cloud platform (AWS / GCP / Azure)
- Cloud networking knowledge: should understand VPCs, NATs, and routers
- Contributions to open source are a plus
- Good written communication skills are a must; contributions to technical blogs / whitepapers will be an added advantage
- Preferred: experience in development associated with Kafka or big data technologies; understanding of essential Kafka components like ZooKeeper and brokers, and of optimizing Kafka client applications (producers and consumers); see the producer sketch after these lists
- Experience with automation of infrastructure, testing, DB deployment automation, and logging/monitoring/alerting
- AWS services experience: CloudFormation, ECS, Elastic Container Registry, Pipelines, CloudWatch, Glue, and other related services
- AWS Elastic Kubernetes Service (EKS): managing and auto-scaling Kubernetes and containers
- Good knowledge and hands-on experience with various AWS services like EC2, RDS, EKS, S3, Lambda, APIs, CloudWatch, etc.
- Good and quick with log analysis to perform root cause analysis (RCA) on production deployments and container errors in CloudWatch
- Working on ways to automate and improve deployment and release processes
- High understanding of serverless architecture concepts
- Good with deployment automation tools and with investigating and resolving technical issues
- Sound knowledge of APIs, databases, and container-based ETL jobs
- Planning out projects and being involved in project management decisions
Soft Skills
- Adaptability
- Collaboration with different teams
- Good communication skills
- Team player attitude
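To ground the Kafka client item above, here is a minimal, hedged producer sketch using the kafka-python client. The broker address, topic, payload, and tuning settings (acks, linger) are illustrative assumptions rather than details from the posting.

```python
# Minimal sketch of a Kafka producer using the kafka-python client.
# Broker address, topic, and event payload are placeholders.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",                       # placeholder broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    acks="all",        # wait for full in-sync-replica acknowledgement
    linger_ms=10,      # small batching window to improve throughput
)

producer.send("events", {"order_id": 123, "status": "created"})  # placeholder topic/event
producer.flush()
producer.close()
```

The acks and linger_ms settings are the kind of producer-side knobs the posting's "optimization of Kafka client applications" point refers to: trading a little latency for durability and throughput.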
Roles and Responsibilities
- Primary stakeholder collaborating with the Director of Engineering on software/infrastructure architecture, the monitoring/alerting framework, and all other architecture-level technical issues
- Design and manage implementation of Silvermine's high-performance, scalable, extensible, and resilient microservices application stack, based on the existing, partially migrated monolithic application and for new product development. Includes:
- Utilizing either ECS Fargate (no EC2 clusters) or EKS as the orchestration framework – to be tested up to a minimum of 100k concurrent users
- Exploring, designing, and implementing use of on-demand compute (Lambda) where appropriate (see the handler sketch after this list)
- Scalable and redundant data architecture supporting microservices design principles
- A scalable reverse proxy layer to isolate microservices from managing network connections
- Utilizing CDN capabilities to offload origin load via an intelligent caching strategy
- Leveraging best-in-breed AWS service offerings to enable the team to focus on the application stack instead of application scaffolding, while minimizing operational complexity and cost
- Monitoring and optimizing the stack for:
- Security and monitoring
- Leverage AWS and 3rd-party services to monitor the application stack and data; secure them from DDoS attacks and security breaches; and alert the team in the event of an incident
- Using APM and logging tools:
- Monitor application stack and infrastructure component performance
- Proactively detect, triage and mitigate stack performance issues
- Alert upon exception events
- Provide triaging tools for debugging and Root Cause Analysis.
- Enhance the CI/CD pipeline to support automated testing, a resilient deployment model (e.g., blue-green, canary) and 100% rollback support (including the data layer)
- Develop a comprehensive, supportable, repeatable IaC implementation using CloudFormation or Terraform
- Take a leadership role and exhibit expertise in the development of standards, architectural governance, design patterns, best practices and optimization of existing architecture.
- Partner with teams and leaders to provide strategic consultation for business process design/optimization, creating strategic technology road maps, performing rapid prototyping and implementing technical solutions to accelerate the fulfillment of the business strategic vision.
- Staying up to date on emerging technologies (AI, Automation, Cloud etc.) and trends with a clear focus on productivity, ease of use and fit-for-purpose, by researching, testing, and evaluating.
- Providing POCs and product implementation guidelines.
- Applying imagination and innovation by creating, inventing, and implementing new or better approaches, alternatives and breakthrough ideas that are valued by customers within the function.
- Assessing current state of solutions, defining future state needs, identifying gaps and recommending new technology solutions and strategic business execution improvements.
- Overseeing and facilitating the evaluation and selection of technology and product standards, and the design of standard configurations/implementation patterns.
- Partnering with other architects and solution owners to create standards and set strategies for the enterprise.
- Communicating directly with business colleagues on applying digital workplace technologies to solve identified business challenges.
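For the on-demand compute item above, below is a minimal, hedged sketch of a Python AWS Lambda handler. The event fields and the response shape are assumptions for illustration; in the stack described it would sit behind an API layer or event source chosen by the team.

```python
# Minimal sketch of an on-demand compute (AWS Lambda) handler in Python.
# The event's "name" field and the response body are hypothetical.
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def handler(event, context):
    # Log the incoming event for traceability / root-cause analysis.
    logger.info("received event: %s", json.dumps(event))

    name = event.get("name", "world")  # hypothetical input field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```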
Skills Required:
- Good mentorship skills to coach and guide the team on AWS DevOps.
- Jenkins, Python, pipeline as code, CloudFormation templates, and Terraform.
- Experience with Docker, containers, Lambda, and Fargate is a must
- Experience with CI/CD and Release management
- Strong proficiency in PowerShell scripting
- Demonstrable expertise in Java
- Familiarity with REST APIs
Qualifications:
- Minimum of 5 years of relevant experience in DevOps.
- Bachelor's or Master's in Computer Science or an equivalent degree.
- AWS certifications are an added advantage.
Job Description
Please connect on LinkedIn or share your resume with Shrashti Jain.
• 8+ years of overall experience, with at least 4+ years of relevant experience (the DevOps experience should make up the larger share of the overall experience)
• Experience with Kubernetes and other container management solutions
• Should have hands-on experience with, and a good understanding of, DevOps tools and automation frameworks
• Demonstrated hands-on experience with DevOps techniques building continuous integration solutions using Jenkins, Docker, Git, Maven
• Experience with n-tier web application development and experience in J2EE / .Net based frameworks
• Look for ways to improve: Security, Reliability, Diagnostics, and costs
• Knowledge of security, networking, DNS, firewalls, WAFs, etc.
• Familiarity with Helm and Terraform for provisioning GKE, and with Bash/shell scripting
• Must be proficient in one or more scripting languages: Unix Shell, Perl, Python
• Knowledge and experience with Linux OS
• Should have working experience with monitoring tools like Datadog, ELK, and/or Splunk, or any other monitoring tools/processes
• Experience working in Agile environments
• Ability to handle multiple competing priorities in a fast-paced environment
• Strong Automation and Problem-solving skills and ability
• Experience implementing and supporting AWS-based instances and services (e.g., EC2, S3, EBS, ELB, RDS, IAM, Route 53, CloudFront, ElastiCache).
• Very strong hands-on skills with automation tools such as Terraform
• Good experience with provisioning tools such as Ansible, Chef
• Experience with CI/CD tools such as Jenkins.
• Experience managing production environments.
• Good understanding of security in IT and the cloud
• Good knowledge of TCP/IP (see the port-check sketch after this list)
• Good Experience with Linux, networking and generic system operations tools
• Experience with Clojure and/or the JVM
• Understanding of security concepts
• Familiarity with blockchain technology, in particular Tendermint
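Tying together the TCP/IP and monitoring items above, here is a hedged sketch of a simple TCP reachability check using only the Python standard library. The host names and ports are placeholders invented for the example.

```python
# Minimal sketch: TCP-level reachability checks, a quick diagnostic that complements
# monitoring tools like Datadog/ELK/Splunk. Hosts and ports are placeholders.
import socket

TARGETS = [
    ("db.internal.example.com", 5432),    # hypothetical database host
    ("cache.internal.example.com", 6379), # hypothetical cache host
]

def port_open(host: str, port: int, timeout_seconds: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout_seconds):
            return True
    except OSError:
        return False

for host, port in TARGETS:
    state = "open" if port_open(host, port) else "unreachable"
    print(f"{host}:{port} is {state}")
```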
One of our US-based clients is looking for a DevOps professional who can handle technical work as well as trainings for them in the US.
If you are hired, you will be sent to the US to work from there. The training-to-technical-work ratio will be 70% to 30%.
The company will sponsor the US visa.
If you are an experienced DevOps professional who has also delivered professional trainings, feel free to connect with us for more details.
Implement integrations requested by customers
Deploy updates and fixes
Provide Level 2 technical support
Build tools to reduce occurrences of errors and improve customer experience
Develop software to integrate with internal back-end systems
Perform root cause analysis for production errors
Investigate and resolve technical issues
Develop scripts to automate visualization
Design procedures for system troubleshooting and maintenance
Hands-on experience with multiple clouds (AWS/Azure/GCP)
Good experience with Docker implementation at scale (see the Docker SDK sketch after this list)
Kubernetes implementation and orchestration
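As a concrete companion to the Docker item above, here is a minimal, hedged sketch using the Docker SDK for Python (docker-py) to build and run an image. The image tag, and the assumption that a local Dockerfile and a reachable Docker daemon exist, are illustrative, not requirements from the posting.

```python
# Minimal sketch: build and run a container with the Docker SDK for Python.
# Assumes a Dockerfile in the current directory and a reachable Docker daemon;
# the image tag is a placeholder.
import docker

client = docker.from_env()

# Build an image from the local Dockerfile (path and tag are hypothetical).
image, build_logs = client.images.build(path=".", tag="demo-app:latest")

# Run a throwaway container and capture its output.
output = client.containers.run("demo-app:latest", remove=True)
print(output.decode("utf-8", errors="replace"))
```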
Required Skills and Experience
- 4+ years of relevant experience with DevOps tools such as Jenkins, Ansible, Chef, etc.
- 4+ years of experience in continuous integration/deployment and in software tools development with Python, shell scripts, etc.
- Building and running Docker images and deployment on Amazon ECS
- Working with AWS services (EC2, S3, ELB, VPC, RDS, Cloudwatch, ECS, ECR, EKS)
- Knowledge and experience working with container technologies such as Docker and Amazon ECS, EKS, Kubernetes
- Experience with source code and configuration management tools such as Git, Bitbucket, and Maven
- Ability to work with and support Linux environments (Ubuntu, Amazon Linux, CentOS)
- Knowledge and experience in cloud orchestration tools such as AWS CloudFormation/Terraform, etc.
- Experience with implementing "infrastructure as code", "pipeline as code" and "security as code" to enable continuous integration and delivery
- Understanding of IAM, RBAC, NACLs, and KMS
- Good communication skills
Good to have:
- Strong understanding of security concepts and methodologies (e.g., SSH, public-key encryption, access credentials, certificates) and the ability to apply them.
- Knowledge of database administration, such as MongoDB (see the pymongo sketch after this list).
- Knowledge of maintaining and using tools such as Jira, Bitbucket, Confluence.
- Work with Leads and Architects on the design and implementation of technical infrastructure, platforms, and tools to support modern best practices and to improve the efficiency of our development teams through automation, CI/CD pipelines, and ease of access and performance.
- Establish and promote DevOps thinking, guidelines, best practices, and standards.
- Contribute to architectural discussions, Agile software development process improvement, and DevOps best practices.
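For the MongoDB administration item above, here is a minimal, hedged sketch using pymongo. The connection string, database, collection, and document fields are placeholders invented for illustration.

```python
# Minimal sketch of basic MongoDB usage/administration with pymongo.
# Connection string, database, collection, and fields are placeholders.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")  # placeholder connection string

db = client["ops_metrics"]          # hypothetical database
deployments = db["deployments"]     # hypothetical collection

# Record a deployment event and index the field used for lookups.
deployments.insert_one({"service": "orders-api", "version": "1.2.3", "status": "success"})
deployments.create_index("service")

for doc in deployments.find({"service": "orders-api"}).limit(5):
    print(doc["service"], doc["version"], doc["status"])
```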