Founded in 1998, the Directi Group is a prestigious tech conglomerate with 19 years of history and 25 software products in the market. The group has built successful mass-market businesses such as Zeta, Media.net, Radix, Flock, Ringo, Skenzo, and CodeChef, without any external funding.
At Anarock Tech, we are building a modern technology platform with automated analytics and reporting tools. The platform offers timely solutions to our real estate clients while delivering financially favorable and efficient results.
If driving innovation, creating industry-first solutions, building new capabilities from the ground up, and working with multiple new technologies excites you, Anarock is the place for you.
Key Job Responsibilities:
- Deploy and maintain critical applications on cloud-native microservices architecture
- Implement automation, effective monitoring, and infrastructure-as-code
- Deploy and maintain CI/CD pipelines across multiple environments
- Support and work alongside a cross-functional engineering team on the latest technologies
- Iterate on best practices to increase the quality & velocity of deployments
- Sustain and improve the process of knowledge sharing throughout the engineering team
What you should have:
- Experience maintaining and deploying highly available, fault-tolerant systems at scale
- A drive towards automating repetitive tasks (e.g. scripting via Bash, Python, Ruby, etc)
- Expertise with AWS (e.g. IAM, EC2, VPC, ELB, ALB, Autoscaling, Lambda)
- Version control system experience (e.g. Git)
- Experience implementing CI/CD (e.g. Jenkins, TravisCI)
- Operational (e.g. HA/Backups) NoSQL experience (e.g. Cassandra, MongoDB, Redis)
- Bachelor’s or Master’s degree in CS, or equivalent practical experience
- Effective communication
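The automation bullet above ("a drive towards automating repetitive tasks") can be illustrated with a minimal, hypothetical sketch of the kind of housekeeping script such roles produce — the target path and age threshold are placeholders, not part of any real setup:

```python
import os
import time
from pathlib import Path

def find_stale_files(root: str, max_age_days: int = 30):
    """Return files under `root` not modified in the last `max_age_days` days."""
    cutoff = time.time() - max_age_days * 86400
    return [
        path
        for path in Path(root).rglob("*")
        if path.is_file() and path.stat().st_mtime < cutoff
    ]

if __name__ == "__main__":
    target = "/var/log/myapp"  # hypothetical path; adjust to your environment
    if os.path.isdir(target):
        # Dry run: print cleanup candidates instead of deleting them.
        for stale in find_stale_files(target, max_age_days=14):
            print(stale)
```

Keeping the dry run as the default is a deliberate choice: a deletion script that prints first is far cheaper to verify than one that removes files on its first run.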
Skills that will help you build a success story with us:
- Experience working in a startup environment with high levels of ownership
- Experience with search techniques and solid foundation in search engines (Solr, Elasticsearch or others)
- Experience building highly scalable business applications, including implementing large, complex business flows and dealing with huge amounts of data
Anarock Ethos - Values Over Value:
Our assurance of consistent ethical dealing with clients and partners reflects our motto - Values Over Value.
We value diversity within ANAROCK Group and are committed to offering equal opportunities in employment. We do not discriminate against any team member or applicant for employment based on nationality, race, color, religion, caste, gender identity / expression, sexual orientation, disability, social origin and status, indigenous status, political opinion, age, marital status or any other personal characteristics or status. ANAROCK Group values all talent and will do its utmost to hire, nurture and grow them.
Sizzle is an exciting new startup in the world of gaming. At Sizzle, we’re building AI to automatically create highlights of gaming streamers and esports tournaments.
You will be responsible for:
- Managing all DevOps and infrastructure for Sizzle (we have both cloud and on-premise servers)
- Work closely with all AI and backend engineers on processing requirements and managing both development and production requirements
- Optimize the pipeline to ensure ultra-fast processing
- Work closely with management team on infrastructure upgrades
You should have the following qualities:
- 3+ years of experience in DevOps, and CI/CD
- Strong background in Linux system administration
- Deep expertise with AI/ML pipeline processing, especially GPU processing. This doesn’t need to include model training, data gathering, etc.; we’re looking more for experience with model deployment and inference tasks at scale
- Deep expertise in Python including:
- Multiprocessing / multithreaded applications
- Performance profiling including memory, CPU, GPU profiling
- Comfortable with OOP and the nuances of Python classes and other OOP structures
- Error handling and building robust scripts that will be expected to run for weeks to months at a time
- Deploying to production servers and monitoring and maintaining the scripts
- DB integration including pymongo and sqlalchemy (we have MongoDB and PostgreSQL databases on our backend)
- Expertise in Docker-based virtualization including:
- Creating & maintaining custom Docker images
- Deployment of Docker images on cloud and on-premise services
- Monitoring of production Docker images with robust error handling
- Expertise in AWS infrastructure, networking, and availability
- Experience with running Nvidia GPU / CUDA-based tasks
Optional but beneficial to have:
- Experience with image processing in Python (e.g. OpenCV, Pillow, etc.)
- Experience with PostgreSQL and MongoDB (or general SQL familiarity)
- Excited about working in a fast-changing startup environment
- Willingness to learn rapidly on the job, try different things, and deliver results
- Bachelor’s or Master’s degree in Computer Science or a related field
- Ideally a gamer or someone interested in watching gaming content online
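The Python bullets above (multiprocessing, error handling for scripts that run for weeks, inference at scale) can be sketched together in one pattern — a pool of workers where a failure on one item is logged and skipped rather than crashing the run. Function names and the work unit are illustrative, not Sizzle's actual code:

```python
import logging
from multiprocessing import Pool

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

def process_clip(clip_id: int) -> dict:
    """Hypothetical per-item work unit; real code would run GPU inference here."""
    if clip_id < 0:
        raise ValueError(f"bad clip id: {clip_id}")
    return {"clip": clip_id, "status": "ok"}

def safe_process(clip_id: int) -> dict:
    """Wrap the work unit so one bad item cannot kill a weeks-long run."""
    try:
        return process_clip(clip_id)
    except Exception:
        log.exception("clip %s failed; skipping", clip_id)
        return {"clip": clip_id, "status": "error"}

def run_batch(clip_ids, workers: int = 4) -> list:
    """Fan work out across processes; result order matches the input order."""
    with Pool(processes=workers) as pool:
        return pool.map(safe_process, clip_ids)
```

Catching broad `Exception` inside the worker wrapper, and nowhere else, is the key design choice: the script keeps running, but every failure still leaves a full traceback in the logs for later triage.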
- Hands-on experience provisioning AWS services such as EC2, S3, EBS, AMI, VPC, ELB, RDS, Auto Scaling groups, and CloudFormation.
- Good experience with the build and release process, and extensively involved in CI/CD.
- Experienced with configuration management tools such as Ansible.
- Designing, implementing, and supporting fully automated Jenkins CI/CD pipelines.
- Worked extensively with Jenkins for continuous integration and end-to-end automation of all builds and deployments.
- Proficient with Docker-based container deployments, creating environments for dev teams and containerizing environment delivery for releases.
- Experience working with Docker Hub, creating Docker images, and handling multiple images, primarily for middleware installation and domain configuration.
- Good knowledge of version control with Git and GitHub.
- Good experience in build tools
- Implemented CI/CD pipelines using Jenkins, Ansible, Docker, Kubernetes, YAML, and manifest files
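The image-building bullets above might look, in practice, like a minimal Dockerfile; this is a hedged sketch — the base image, file names, and entrypoint are placeholders, not a prescribed stack:

```dockerfile
# Hypothetical service image; pin the base tag for reproducible builds.
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first so this layer caches across code-only changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# Run as a non-root user in production images.
RUN useradd --create-home appuser
USER appuser

CMD ["python", "app.py"]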
Do Your Thng
DYT (Do Your Thng) is an app where all social media users can share brands they love with their followers and earn money while doing so! We believe everyone is an influencer. Our aim is to democratise social media and allow people to be rewarded for the content they post. How does DYT help you? It accelerates your career through collaboration opportunities with top brands and gives you access to a community full of experts in the influencer space.
We are looking for experienced DevOps Engineers to join our Engineering team. The candidate will work with our engineers and interact with the tech team to deliver high-quality web applications for the product.
- DevOps Engineer with 2+ years of experience in development and production operations, supporting Linux- and Windows-based applications and cloud deployments (AWS/GCP stack)
- Experience working with Continuous Integration and Continuous Deployment Pipeline
- Exposure to managing LAMP stack-based applications
- Experience with resource provisioning automation using tools such as CloudFormation, Terraform, and ARM templates.
- Experience working closely with clients, understanding their requirements, and designing and implementing quality solutions to meet their needs.
- Ability to take ownership of the work carried out
- Experience coordinating with the rest of the team to deliver well-architected, high-quality solutions.
- Experience deploying Docker based applications
- Experience with AWS services.
- Excellent verbal and written communication skills
- Exposure to AWS, Google Cloud, and Azure
- Experience in Jenkins, Ansible, Terraform
- Build monitoring tools and respond to alarms triggered in the production environment
- Willingness to quickly become a member of the team and to do what it takes to get the job done
- Ability to work well in a fast-paced environment and listen and learn from stakeholders
- Demonstrate a strong work ethic and incorporate company values in your everyday work.
We are looking for a DevOps Engineer who can deliver high-value features in short periods of time through cross-team collaboration: someone capable of bringing a collaborative approach to software development, testing, and deployment, and of putting teams with varying objectives together to work toward more efficient, higher-quality code releases.
Job Location: Sultanpur, New Delhi, 110030.
- Set up and maintain DevOps tools, cloud monitoring tools, and cloud security.
- The DevOps engineer needs to be agile enough to wear a technical hat and manage operations simultaneously.
- Monitor and maintain highly available systems on Kubernetes (multiple production applications).
- Implement and manage CI/CD pipelines.
- Implement an auto-scaling system for our Kubernetes nodes.
- Monitor and maintain highly available databases (Redis, MongoDB, Postgres, and Cassandra).
- Monitor cost fluctuations and optimize.
- Provide support to developers by triaging bugs and alerting them to failures in time.
- Analyse architecture problems and caveats and provide precise solutions/tools for them.
- Incident response to system alerts during local day hours.
- System security and admin credential administration.
- Deploy and manage cloud services (EC2, S3, VPC, Route53, autoscaling, etc.), plus configuration management of the entire services stack.
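On the auto-scaling responsibility above: node-level scaling on Kubernetes is usually delegated to the cluster autoscaler, while pod-level scaling is declared with a HorizontalPodAutoscaler. A minimal sketch follows, with an assumed Deployment name `web-api` and illustrative replica bounds:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-api          # hypothetical deployment name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-api
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```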
The DevOps Engineer must have the following skills:
- Core skill set: DevSecOps, cloud-native deployments, deployments using Docker / Kubernetes with supporting non-functional components (i.e. API gateways, SSO / IAM, logging / monitoring, load balancers, firewalls, etc.), and deployments in on-premise environments using modern, cloud-portable approaches.
- Minimum 2+ years of relevant experience primarily in DevOps and cloud computing.
- Prior experience working with AWS (EKS, Lambda) or other cloud platforms like GCP and Azure.
- Intermediate knowledge of containers, Docker, and orchestration.
- Hands-on experience with Kubernetes.
- Experience with CI/CD platforms like GitHub Actions, Jenkins, Travis CI, etc.
- Experienced in logging and monitoring of cloud resources with EFK, Prometheus, and Grafana.
- Good command of OS (Linux) fundamentals and networking skills.
Learn about our Culture:
Wigzo is a culture-driven company powered by its employees, their vision, and their inspiration. All the employees live by the culture and values that define us. We value people for their talent, personality, competency, and ability to learn and grow.
We create a work environment that allows people to thrive and show their best performance. We believe in meritocracy. We take pride in our diversity and strive to embrace diverse voices and create an inclusive workplace.
To know more please visit: https://www.wigzo.com/employee-spotlight/
About Wigzo Technologies by Shiprocket:
Wigzo is an e-commerce marketing automation platform in which Shiprocket has acquired a majority stake. Together, we help businesses of all sizes delve deeper into data to unleash possibilities to enhance sales and income. Wigzo enables e-commerce firms to personalize each customer interaction, resulting in increased engagement, retention, loyalty, and lifetime value.
An Omnichannel marketing automation suite, Wigzo enables you to understand your customers/visitors more intelligently so you market to them what they want and not what you have. It works on real-time customer insights with real-time communication and a personalization engine that helps marketers manage basic communication and also provides dynamic email, personalized notifications, user retention, real-time engagement, real-time content, and much more.
Wigzo serves 1,000+ customers globally and has been in the industry for 6+ years, with 100+ e-commerce brands working with us, resulting in 15x business growth. For more information please visit our website: https://www.wigzo.com/
The role requires you to design development pipelines from the ground up, create Dockerfiles, and design and operate highly available systems in AWS cloud environments. It also involves configuration management, web services architectures, DevOps implementation, database management, backups, and monitoring.
Key responsibility areas:
- Ensure reliable operation of CI/CD pipelines
- Orchestrate the provisioning, load balancing, configuration, monitoring and billing of resources in the cloud environment in a highly automated manner
- Logging, metrics and alerting management.
- Creation of Bash/Python scripts for automation
- Performing root cause analysis for production errors.
- 2 years of experience as a team lead.
- Good command of Kubernetes.
- Proficient with the Linux command line and troubleshooting.
- Proficient with AWS services; deploying, monitoring, and troubleshooting applications in AWS.
- Hands-on experience with CI tooling preferably with Jenkins.
- Proficient in deployment using Ansible.
- Knowledge of infrastructure management tools (Infrastructure as Code) such as Terraform, AWS CloudFormation, etc.
- Proficient in deploying applications behind load balancers and proxy servers such as Nginx and Apache.
- Scripting languages: Bash, Python, Groovy.
- Experience with logging, monitoring, and alerting tools like ELK (Elasticsearch, Logstash, Kibana), Nagios, Graylog, or Splunk; Prometheus and Grafana are a plus.
Linux, CI/CD (Jenkins), AWS, scripting (Bash/shell, Python, Go), Nginx, Docker.
Good to have
Configuration management (Ansible or a similar tool), logging (ELK or similar), monitoring (Nagios or similar), IaC (Terraform, CloudFormation).
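The IaC line above (Terraform, CloudFormation) can be illustrated with a minimal Terraform sketch — the region, AMI ID, and resource names are placeholders, not a recommended configuration:

```hcl
# Minimal illustrative Terraform; region, AMI, and names are placeholders.
provider "aws" {
  region = "ap-south-1"
}

resource "aws_instance" "app" {
  ami           = "ami-00000000000000000"  # placeholder AMI id
  instance_type = "t3.micro"

  tags = {
    Name = "app-server"
  }
}
```

Declaring the instance this way means `terraform plan` can show the change before anything is provisioned, which is the core of the IaC workflow these postings ask for.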
DevOps (Maestro, GitLab, Jenkins, Linux scripting, Oracle DB)
Assigned to a multi-skilled Agile team, the main activities are packaging and deploying deliverables, setting up and maintaining non-production environments, and managing the change process through direct interaction with infra teams, dev teams, and architects in an international environment. 4 years of application development experience plus 2 years of ops activity are required.
Knowledge of Agile methodology (Scrum) is a nice-to-have.
Job Description: (8-12 years)
○ Develop best practices for the team and take responsibility for architecture solutions and documentation operations, in order to meet the engineering department’s quality standards
○ Participate in production outages, handle complex issues, and work toward resolution
○ Develop custom tools and integrations with existing tools to increase engineering productivity
Required Experience and Expertise
○ Deep understanding of Kernel, Networking and OS fundamentals
○ Strong experience writing Helm charts.
○ Deep understanding of K8s.
○ Good knowledge of service meshes.
○ Good database understanding
Notice Period: 30 days max
• Develop and maintain CI/CD tools to build and deploy scalable web and responsive applications in production environment
• Design and implement monitoring solutions that identify both system bottlenecks and production issues
• Design and implement workflows for continuous integration, including provisioning, deployment, testing, and version control of the software.
• Develop self-service solutions for the engineering team in order to deliver sites/software with great speed and quality
o Automating infra creation
o Providing easy-to-use solutions to the engineering team
• Conduct research on, test, and implement new metrics collection systems that can be reused and applied as engineering best practices
o Update our processes and design new processes as needed.
o Establish DevOps Engineer team best practices.
o Stay current with industry trends and source new ways for our business to improve.
• Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
• Manage timely resolution of all critical and/or complex problems
• Maintain, monitor, and establish best practices for containerized environments.
• Mentor new DevOps engineers
What you will bring
• The desire to work in a fast-paced environment.
• 5+ years’ experience building, maintaining, and deploying production infrastructures in AWS or other cloud providers
• Containerization experience with applications deployed on Docker and Kubernetes
• Understanding of NoSQL and relational databases with respect to deployment and horizontal scalability
• Demonstrated knowledge of distributed and scalable systems
• Experience maintaining and deploying critical infrastructure components through Infrastructure-as-Code and configuration management tooling across multiple environments (Ansible, Terraform, etc.)
• Strong knowledge of DevOps and CI/CD pipeline (GitHub, BitBucket, Artifactory etc)
• Strong understanding of cloud and infrastructure components (server, storage, network, data, and applications), in order to deliver end-to-end cloud infrastructure architectures, designs, and recommendations
o AWS services like S3, CloudFront, Kubernetes (EKS), RDS, and data warehouses, to inform architecture suggestions for new use cases.
• Test our system integrity, implemented designs, application developments and other processes related to infrastructure, making improvements as needed
Good to have
• Experience with code quality tools and static or dynamic code analysis, including undertaking and resolving issues identified by vulnerability and compliance scans of our infrastructure
• Good knowledge of REST/SOAP/JSON web service API implementation