
Key Responsibilities:-
• Collaborate with Data Scientists to test and scale new algorithms through pilots, and later industrialize the solutions across the Group's comprehensive fashion network
• Influence, build and maintain the large-scale data infrastructure required for AI projects, and integrate with external IT infrastructure/services to provide an e2e solution
• Leverage an understanding of software architecture and software design patterns to write scalable, maintainable, well-designed and future-proof code
• Design, develop and maintain the framework for the analytical pipeline
• Develop common components to address pain points in machine learning projects, like model lifecycle management, feature store and data quality evaluation
• Provide input and help implement framework and tools to improve data quality
• Work in cross-functional agile teams of highly skilled software/machine learning engineers, data scientists, designers, product managers and others to build the AI ecosystem within the Group
• Deliver on time, demonstrating a strong commitment to deliver on the team mission and agreed backlog

About PGP Glass Pvt Ltd
2. Kubernetes Engineer
DevOps Systems Engineer with experience in Docker Containers, Docker Swarm, Docker Compose, Ansible, Jenkins and other tools. In this role, he/she will work with container best practices in design, development and implementation.
At least 4 years of experience in DevOps, with knowledge of:
- Docker
- Docker Cloud / Containerization
- DevOps Best Practices
- Distributed Applications
- Deployment Architecture
- AWS experience, at a minimum
- Exposure to Kubernetes / Serverless Architecture
Skills:
- 3-7+ years of experience in DevOps Engineering
- Strong experience with Docker Containers, Implementing Docker Containers, Container Clustering
- Experience with Docker Swarm, Docker Compose, Docker Engine
- Experience provisioning and managing VMs (virtual machines)
- Experience / strong knowledge of network topologies and network research
- Jenkins, BitBucket, Jira
- Ansible or other Automation Configuration Management System tools
- Scripting & Programming using languages such as BASH, Perl, Python, AWK, SED, PHP, Shell
- Linux systems administration: Red Hat
Additional Preference:
Security, SSL configuration, Best Practices
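The scripting requirement above (Bash, Perl, Python, AWK, SED) usually means small one-off automation tasks. A minimal sketch of one such task in Python, computing an HTTP 5xx error rate from access-log lines; the log format here is hypothetical:

```python
# Illustrative DevOps scripting task: compute the fraction of requests that
# returned a 5xx status code, from simplified access-log lines.

def error_rate(log_lines):
    """Return the fraction of requests whose status code is 5xx."""
    total = errors = 0
    for line in log_lines:
        parts = line.split()
        if len(parts) < 2:
            continue  # skip malformed lines
        status = parts[-1]
        total += 1
        if status.startswith("5"):
            errors += 1
    return errors / total if total else 0.0

logs = [
    "GET /health 200",
    "GET /api/items 500",
    "POST /api/items 201",
    "GET /api/items 503",
]
print(error_rate(logs))  # 2 of 4 requests are 5xx -> 0.5
```

The same job is often done with an AWK one-liner; the point is comfort moving between shell tools and a scripting language.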
THE POSITION
Ashnik is looking for an experienced technology consultant to work in the DevOps team in a pre-sales function. The primary areas of focus will be microservices, CI/CD pipelines, Docker, Kubernetes, containerization and container security. A person in this role will be responsible for leading technical discussions with customers and partners and helping them arrive at the final solution.
QUALIFICATION AND EXPERIENCE
- Engineering or equivalent degree
- Must have at least 8 years of experience in the IT industry designing and delivering solutions
- Must have at least 3 years of hands-on experience with Linux and operating systems
- Must have at least 3 years of experience working in an environment with highly virtualized or cloud-based infrastructure
- Must have at least 2 years of hands-on experience with CI/CD pipelines, microservices, containerization and Kubernetes
- Though coding is not needed in this role, the person should have the ability to understand and debug code if required
- Should be able to explain complex solutions in simpler ways
- Should be ready to travel 20-40% in a month
- Should be able to engage with customers to understand the fundamental/driving requirement
DESIRED SKILLS
- Past experience of working with Docker and/or Kubernetes at Scale
- Past experience of working in a DevOps team
- Prior experience in Pre-sales role
RESPONSIBILITIES
- Own pre-sales or sales engineering responsibility to design, present and deliver the technical solution
- Be the point of contact for all technical queries for Sales team and partners
- Build full-fledged solution proposals with details of implementation and scope of work for customers
- Contribute to technical writing through blogs, whitepapers and solution demos
- Make presentations at industry events and participate in them
- Conduct customer workshops to educate them about features in Docker Enterprise Edition
- Coordinate technical escalations with principal vendor
- Get an understanding of various other components and considerations involved in the areas mentioned above
- Be able to articulate the value of products from the technology vendors Ashnik partners with, e.g. Docker, Sysdig, HashiCorp, Ansible, Jenkins etc.
- Work with partners and sales team for responding to RFPs and tenders
WHAT IS IN IT FOR YOU?
You would be adding a great experience of working with a leading open source solutions company in South East Asia region to your career. You would get to learn from the leaders and grow in the industry. This would be a great opportunity for you to grow in your career through continuous learning, adding depth and breadth of technologies. Since we work with leading open source technologies and engage with large enterprises, it creates enormous possibilities for career growth for our team. Not to mention that our people find the journey with Ashnik to be exciting and a fulfilling experience.
Experience: 8-10yrs
Notice Period: max 15 days
Must-haves*
1. Knowledge of Database/NoSQL DB hosting fundamentals (RDS multi-AZ, DynamoDB, MongoDB, and such)
2. Knowledge of the different storage platforms on AWS (EBS, EFS, FSx) - mounting persistent volumes with Docker containers
3. In-depth knowledge of security principles on AWS (WAF, DDoS, Security Groups, NACLs, IAM groups, and SSO)
4. Knowledge of CI/CD platforms is required (Jenkins, GitHub Actions, etc.) - migration of AWS CodePipeline pipelines to GitHub Actions
5. Knowledge of a vast variety of AWS services (SNS, SES, SQS, Athena, Kinesis, S3, ECS, EKS, etc.) is required
6. Knowledge of an Infrastructure as Code tool is required. We use CloudFormation (Terraform is a plus); ideally, we would like to migrate from CloudFormation to Terraform
7. Setting up CloudWatch alarms and SMS/email/Slack alerts
8. Some knowledge of configuring a monitoring tool such as Prometheus, Dynatrace, etc. (we currently use Datadog and CloudWatch)
9. Experience with any CDN provider's configurations (Cloudflare, Fastly, or CloudFront)
10. Experience with either Python or Go as a scripting language
11. Experience with a Git branching strategy
12. Container hosting knowledge on both Windows and Linux
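Item 7 refers to CloudWatch's "M out of N" alarm semantics: an alarm fires when at least M of the last N datapoints breach the threshold. A plain-Python sketch of just that evaluation logic, not the AWS API:

```python
# CloudWatch-style alarm evaluation: the alarm is in ALARM state when at
# least `datapoints_to_alarm` of the last `evaluation_periods` datapoints
# breach the threshold. Illustrative only; real alarms are configured via
# the CloudWatch service, not code like this.

def alarm_state(datapoints, threshold, evaluation_periods, datapoints_to_alarm):
    """Return 'ALARM' or 'OK' for the trailing evaluation window."""
    window = datapoints[-evaluation_periods:]
    breaching = sum(1 for d in window if d > threshold)
    return "ALARM" if breaching >= datapoints_to_alarm else "OK"

cpu = [40, 85, 90, 88, 30]         # last five 1-minute CPU averages (%)
print(alarm_state(cpu, 80, 5, 3))  # three of five exceed 80 -> ALARM
```

Requiring 3 of 5 breaching datapoints, rather than alarming on a single spike, is what keeps such alarms from paging on transient noise.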
The list below is *Nice to Have*:
1. Integration experience with Code Quality tools (SonarQube, NetSparker, etc) with CI/CD
2. Kubernetes
3. CDN's other than CloudFront (Cloudflare, Fastly, etc)
4. Collaboration with multiple teams
5. GitOps

Experience of Linux
Experience using Python or Shell scripting (for Automation)
Hands-on experience with Implementation of CI/CD Processes
Experience working with one cloud platform (AWS, Azure or Google)
Experience working with configuration management tools such as Ansible & Chef
Experience working with the containerization tool Docker
Experience working with the container orchestration tool Kubernetes
Experience in source control management, including SVN, Bitbucket and/or GitHub
Experience with setup & management of monitoring tools like Nagios, Sensu & Prometheus, or other popular tools
Hands-on experience in Linux, a scripting language & AWS is mandatory
Troubleshoot and triage development and production issues
Below is the job description for the position of DevOps Azure Engineer at Xceedance.
Qualifications: BE/B.Tech/MCA in Computer Science
Key Requirements for the Position:
• Develop Azure application design and connectivity patterns, Azure networking topologies, and Azure storage facilities.
• Run code conformance tools as part of releases.
• Design Azure app service web app by using Azure CLI, PowerShell, and other tools.
• Implement containerized solution using Docker and Azure Kubernetes Service
• Automate the build and deployment process from development to production using the Azure DevOps approach and tools
• Design and implement CI/CD pipelines
• Script and update build and deployments.
• Coordinate environment usage and alignment.
• Develop, maintain, and optimize automated deployments code for development, test, staging and production environments.
• Configure the application and container platform with proactive monitoring tools and trigger alerts through communication channels
• Develop infrastructure and platform code
• Effectively contribute to building the overall knowledge and expertise of the technical team
• Provide Level 2/3 technical support
Location: Noida or Gurgaon
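The "script and update build and deployments" duties above commonly involve wrapping flaky steps (image pushes, slot swaps) in retry-with-backoff logic. A generic sketch; `deploy_step` is a hypothetical callable standing in for an Azure CLI or PowerShell invocation:

```python
# Retry a deployment step with exponential backoff. Illustrative only:
# the step here simulates a transient failure rather than calling Azure.
import time

def run_with_retries(step, attempts=3, base_delay=1.0):
    """Run `step`, retrying on RuntimeError with exponential backoff."""
    for attempt in range(attempts):
        try:
            return step()
        except RuntimeError:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the failure
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...

calls = {"n": 0}
def deploy_step():
    """Hypothetical deployment step that fails twice, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "deployed"

print(run_with_retries(deploy_step, attempts=3, base_delay=0))  # deployed
```

Catching only a narrow exception type matters here: retrying on every error would mask genuine configuration bugs.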


Cloud native technologies - Kubernetes (EKS, GKE, AKS), AWS ECS, Helm, CircleCI, Harness, Serverless platforms (AWS Fargate etc.)
Infrastructure as Code tools - Terraform, CloudFormation, Ansible
Scripting - Python, Bash
Desired Skills & Experience:
Projects/internships with coding experience in any of JavaScript, Python, Golang, Java, etc.
Hands-on scripting and software development fluency in any programming language (Python, Go, Node, Ruby).
Basic understanding of Computer Science fundamentals - Networking, Web Architecture etc.
Infrastructure automation experience with knowledge of at least a few of these tools: Chef, Puppet, Ansible, CloudFormation, Terraform, Packer, Jenkins etc.
Bonus points if you have contributed to open source projects or participated in competitive coding platforms like HackerEarth, Codeforces, SPOJ etc.
You’re willing to learn various new technologies and concepts. The “cloud-native” field of software is evolving fast and you’ll need to quickly learn new technologies as required.
Communication: You like discussing a plan upfront, welcome collaboration, and are an excellent verbal and written communicator.
B.E/B.Tech/M.Tech or equivalent experience.

- Job Title:- Backend/DevOps Engineer
- Job Location:- Opp. Sola over bridge, Ahmedabad
- Education:- B.E./ B. Tech./ M.E./ M. Tech/ MCA
- Number of Vacancy:- 03
- 5 Days working
- Notice Period:- Can join less than a month
- Job Timing:- 10am to 7:30pm.
About the Role
Are you a server-side developer with a keen interest in reliable solutions?
Is Python your language?
Do you want a challenging role that goes beyond backend development and includes infrastructure and operations problems?
If you answered yes to all of the above, you should join our fast growing team!
We are looking for 3 experienced Backend/DevOps Engineers who will focus on backend development in Python and will be working on reliability, efficiency and scalability of our systems. As a member of our small team you will have a lot of independence and responsibilities.
As Backend/DevOps Engineer you will...:-
- Design and maintain systems that are robust, flexible and performant
- Be responsible for building complex, high-scale systems
- Prototype new gameplay ideas and concepts
- Develop server tools for game features and live operations
- Be one of three backend engineers on our small and fast moving team
- Work alongside our C++, Android, and iOS developers
- Contribute to ideas and design for new features
To be successful in this role, we'd expect you to…:-
- Have 3+ years of experience in Python development
- Be familiar with common database access patterns
- Have experience designing systems, monitoring metrics and reading graphs.
- Have knowledge of AWS, Kubernetes and Docker.
- Be able to work well in a remote development environment.
- Be able to communicate in English at a native speaking and writing level.
- Be responsible to your fellow remote team members.
- Be highly communicative and go out of your way to contribute to the team and help others
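"Common database access patterns", as listed above, usually starts with parameterized queries inside explicit transactions. A small sketch using the stdlib `sqlite3` module as a stand-in for whatever database the team actually runs:

```python
# Parameterized inserts inside a transaction, then a top-score query.
# sqlite3 is used here only because it needs no server; the pattern is
# the same for Postgres, MySQL, etc.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE players (id INTEGER PRIMARY KEY, score INTEGER)")

with conn:  # commits on success, rolls back on exception
    conn.executemany(
        "INSERT INTO players (score) VALUES (?)",  # placeholders, not f-strings
        [(120,), (90,), (300,)],
    )

top = conn.execute(
    "SELECT score FROM players ORDER BY score DESC LIMIT 1"
).fetchone()
print(top[0])  # 300
```

The `?` placeholders are the important part: they keep user input out of the SQL text, which is both the injection defense and what lets the driver cache the statement.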


You will be responsible for
1. Setting up and maintaining cloud (AWS/GCP/Azure) and Kubernetes clusters and automating their operation
2. All operational aspects of the Devtron platform, including maintenance, upgrades and automation
3. Providing Kubernetes expertise to facilitate smooth and fast customer onboarding onto the Devtron platform
Responsibilities:
1. Manage the Devtron platform on multiple Kubernetes clusters
2. Designing and embedding industry best practices for online services, including disaster recovery, business continuity, monitoring/alerting, and service health measurement
3. Providing operational support for day-to-day activities involving the deployment of services
4. Identify opportunities for improving the security, reliability, and scalability of the platform
5. Facilitate smooth and fast customer onboarding onto the Devtron platform
6. Drive customer engagement
Requirements:
● Bachelor's Degree in Computer Science or a related field.
● 2+ years working as a DevOps engineer
● Proficient in 1 or more programming languages (e.g. Python, Go, Ruby).
● Familiar with shell scripts, Linux commands, network fundamentals
● Understanding of large scale distributed systems
● Basic understanding of cloud computing (AWS/GCP/Azure)
Preferred Qualifications:
● Great analytical and interpersonal skills
● Passion for creating efficient, reliable, reusable programs/scripts.
● Excited about technology, with a strong interest in learning about and playing with the latest technologies and doing POCs.
● Strong customer focus, ownership, urgency and drive.
● Knowledge and experience with cloud native tools like Prometheus, Kubernetes, Docker, Grafana.

As DevOps Engineer, you'll be part of the team building the stage for our Software Engineers to work on, helping to enhance our product performance and reliability.
Responsibilities:
- Build & operate infrastructure to support the website, backend clusters and ML projects in the organization.
- Helping teams become more autonomous and allowing the Operation team to focus on improving the infrastructure and optimizing processes.
- Delivering system management tooling to the engineering teams.
- Working on your own applications which will be used internally.
- Contributing to open source projects that we are using (or that we may start).
- Be an advocate for engineering best practices in and out of the company.
- Organizing tech talks, participating in meetups and representing Box8 at industry events.
- Sharing pager duty for the rare instances of something serious happening.
- Collaborate with other developers to understand & setup tooling needed for Continuous Integration/Delivery/Deployment (CI/CD) practices.
Requirements:
- 1+ years of industry experience.
- Scale existing back-end systems to handle ever-increasing amounts of traffic and new product requirements.
- Ruby On Rails or Python and Bash/Shell skills.
- Experience managing complex systems at scale.
- Experience with Docker, rkt or similar container engine.
- Experience with Kubernetes or similar clustering solutions.
- Experience with tools such as Ansible or Chef.
- Understanding of the importance of smart metrics and alerting.
- Hands on experience with cloud infrastructure provisioning, deployment, monitoring (we are on AWS and use ECS, ELB, EC2, Elasticache, Elasticsearch, S3, CloudWatch).
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
- Knowledge of data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
- Experience in working on linux based servers.
- Managing large scale production grade infrastructure on AWS Cloud.
- Good knowledge of scripting languages like Ruby, Python or Bash.
- Experience in creating deployment pipelines from scratch.
- Expertise in any of the CI tools, preferably Jenkins.
- Good knowledge of docker containers and its usage.
- Using infra/app monitoring tools like CloudWatch/New Relic/Sensu.
Good to have:
- Knowledge of Ruby on Rails based applications and its deployment methodologies.
- Experience working on Container Orchestration tools like Kubernetes/ECS/Mesos.
- Extra points for experience with front-end development, New Relic, GCP, Kafka, Elasticsearch.


2. Has done infrastructure coding using CloudFormation/Terraform and configuration management, and understands it very clearly
3. Deep understanding of microservice design, and aware of centralized caching (Redis) and centralized configuration (Consul/Zookeeper)
4. Hands-on experience working with containers and their orchestration using Kubernetes
5. Hands-on experience with the Linux and Windows operating systems
6. Has worked on NoSQL databases like Cassandra, Aerospike, Mongo or Couchbase, and on central logging, monitoring and caching using stacks like ELK (Elastic) on the cloud, Prometheus, etc.
7. Has good knowledge of network security, security architecture and secured SDLC practices
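The "centralized caching (Redis)" in item 3 most often means the cache-aside pattern: check the cache first, fall back to the datastore on a miss, then populate the cache. A minimal sketch, with a plain dict standing in for Redis and a hypothetical key/record:

```python
# Cache-aside (lazy-loading) pattern. A dict plays the role of Redis and
# another dict plays the role of the backing datastore; both are
# illustrative stand-ins.
cache = {}
db = {"user:42": {"name": "Asha"}}

def get_user(key):
    """Return the record for `key`, reading through the cache."""
    if key in cache:          # cache hit: no datastore round trip
        return cache[key]
    value = db.get(key)       # cache miss: read from the datastore
    if value is not None:
        cache[key] = value    # populate the cache for subsequent reads
    return value

print(get_user("user:42"))  # first call: miss, loads from the datastore
print("user:42" in cache)   # True: second call would be a cache hit
```

With a real Redis deployment the cached value would also carry a TTL, so stale entries expire instead of living forever.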

